Download the source from GitHub

You can get the CAS source from git easily:

git clone https://github.com/apereo/cas.git
git fetch
git checkout 4.2.x

The master branch does not always compile. I got the following failure when I ran the compile commands on the master branch:

$./dev-build-no-tests.sh

FAILURE: Build failed with an exception.

* Where:
Build file '/private/tmp/cas/cas-management-webapp/build.gradle' line: 28

* What went wrong:
A problem occurred evaluating project ':cas-management-webapp'.
> No such property: java for class: org.gradle.api.java.archives.internal.DefaultManifest

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 11.67 secs

The stable version on the CAS releases page is 4.2.2, so just check out the stable version.

Compile

Get Gradle

CAS is compiled with Gradle. When you first run gradlew (the Gradle wrapper), the project will download gradle-XXX-bin.zip. If you already have it, or you have trouble downloading it (because of the GFW, for example), you can simply edit `{CAS_PWD}/gradle/wrapper/gradle-wrapper.properties` like below:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-2.13-all.zip

Replace distributionUrl with your own URL.

If you have a local Gradle zip file, use the file name directly and put the zip file into {CAS_PWD}/gradle/wrapper. For example, if my Gradle zip file is gradle-2.10-bin.zip, set the distributionUrl parameter like this:

...
distributionUrl=gradle-2.10-bin.zip

Compile the CAS project

There are two bootstrap shell scripts in the CAS directory, dev-build.sh and dev-build-no-tests.sh. The difference between them is obvious from their names: dev-build-no-tests.sh sets the flag -DskipAspectJ=true.

After running dev-build.sh or dev-build-no-tests.sh for a couple of minutes, you will probably get this version-conflict failure:

:cas-server-support-pac4j:compileAspect
Download ...
> :cas-server-support-pac4j:compileAspect FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':cas-server-support-pac4j:compileAspect'.
> Could not resolve all dependencies for configuration ':cas-server-support-pac4j:compile'.
> A conflict was found between the following modules:
- commons-io:commons-io:2.4
- commons-io:commons-io:2.5

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 10 mins 48.643 secs

It is caused by the conflict between commons-io:2.4 and commons-io:2.5 in cas-server-support-pac4j; we need to add -DskipVersionConflict=true to the bootstrap scripts.

If you build it in IntelliJ IDEA, set this parameter in Preferences → Build, Execution, Deployment → Build Tools → Gradle → Gradle VM options.

Also, if you want to use an HTTP proxy with Gradle, you need to set these parameters:

-DproxySet=true -DproxyHost=127.0.0.1 -DproxyPort=8123

For IDE

We need to execute an extra command for each IDE.

IDEA

./gradlew idea

Eclipse

./gradlew eclipse

To Be Continued…

From fasouto's GitHub

Awesome dataviz

A curated list of awesome data visualization frameworks, libraries and software. Inspired by awesome-python.

Table of contents

JavaScript tools

Charting libraries

  • C3 - a D3-based reusable chart library.
  • Chart.js - Charts with the canvas tag.
  • Chartist.js - Responsive charts with great browser compatibility.
  • Dimple - An object-oriented API for business analytics.
  • Dygraphs - Interactive line charts library that works with huge datasets.
  • Echarts - Highly customizable and interactive charts ready for big datasets.
  • Epoch - Perfect to create real-time charts.
  • Highcharts - A charting library based on SVG and VML rendering. Free (CC BY-NC) for non-profit projects.
  • MetricsGraphics.js - Optimized for time-series data.
  • Morris.js - Pretty time-series line graphs.
  • NVD3 - A reusable charting library written in d3.js.
  • Peity - Create small inline svg charts.
  • Plotly.js - Powerful declarative library with support for 20 chart types.
  • TechanJS - Stock and financial charts.

Charting libraries for graphs

  • Cola.js - A tool to create diagrams using constraint-based optimization techniques. Works with d3 and svg.js.
  • Cytoscape.js - JavaScript library for graph drawing maintained by Cytoscape core developers.
  • Linkurious - A toolkit to speed up the development of graph visualization and interaction applications. Based on Sigma.js.
  • Sigma.js - JavaScript library dedicated to graph drawing.
  • VivaGraph - Graph drawing library for JavaScript.

Maps

  • CartoDB - CartoDB is an open source tool that allows for the storage and visualization of geospatial data on the web.
  • Cesium - WebGL virtual globe and map engine.
  • Leaflet - JavaScript library for mobile-friendly interactive maps.
  • Leaflet Data Visualization Framework - A framework designed to simplify data visualization and thematic mapping using Leaflet.
  • Mapael - jQuery plugin based on raphael.js to display vector maps.
  • Mapsense.js - Combines d3.js with tile maps.
  • Modest Maps - BSD-licensed display and interaction library for tile-based maps in Javascript.

d3

dc.js

dc.js is a multi-dimensional charting library built to work natively with crossfilter.

Misc

  • Chroma.js - A small library for color manipulation.
  • Piecon - Pie charts in your favicon.
  • Recline.js - Simple but powerful library for building data applications in pure JavaScript and HTML.
  • Textures.js - A library to create SVG patterns.
  • Timeline.js - Create interactive timelines.
  • Vega - Vega is a visualization grammar, a declarative format for creating, saving, and sharing interactive visualization designs.
  • Vis.js - A dynamic visualization library including timeline, networks and graphs (2D and 3D).

Android tools

  • HelloCharts - Charting library for Android compatible with API 8+.
  • MPAndroidChart - A powerful & easy to use chart library.

C++ tools

Golang tools

  • Charts for Go - Basic charts in Go. Can render to ASCII, SVG and images.
  • svgo - Go Language Library for SVG generation.

iOS tools

  • JBChartView - Charting library for both line and bar graphs.
  • PNChart - A simple and beautiful chart lib used in Piner and CoinsMan.
  • ios-charts - iOS port of MPAndroidChart. You can create charts for both platforms with very similar code.

Python tools

  • bokeh - Interactive Web Plotting for Python.
  • ggplot - Same API as ggplot2 for R.
  • glumpy - OpenGL scientific visualizations library.
  • matplotlib - 2D plotting library.
  • pygal - A dynamic SVG charting library.
  • PyQtGraph - Interactive and realtime 2D/3D/Image plotting and science/engineering widgets.
  • seaborn - A library for making attractive and informative statistical graphics.
  • toyplot - The kid-sized plotting toolkit for Python with grownup-sized goals.
  • Vincent - A Python to Vega translator.
  • VisPy - High-performance scientific visualization based on OpenGL.
  • mpld3 - D3 Renderings of Matplotlib Graphics

R tools

  • ggplot2 - A plotting system based on the grammar of graphics.
  • lattice - trellis graphics for R
  • plotly - Interactive charts (including adding interactivity to ggplot2 output), cartograms and simple network diagrams
  • rbokeh - R Interface to Bokeh.
  • rgl - 3D Visualization Using OpenGL
  • shiny - Framework for creating interactive applications/visualisations
  • visNetwork - Interactive network visualisations

Ruby tools

  • Chartkick - Create charts with one line of Ruby.

Other tools

Tools that are not tied to a particular platform or language.

  • Charted - A charting tool that produces automatic, shareable charts from any data file.
  • Gephi - An open-source platform for visualizing and manipulating large graphs
  • Lightning - A data-visualization server providing API-based access to reproducible, web-based, interactive visualizations.
  • RAW - Create web visualizations from CSV or Excel files.
  • Spark - Sparklines for the shell. It has several implementations in different languages.
  • Periscope - Create charts directly from SQL queries.

Resources

Books

Twitter accounts

Websites

Contributing

  • Please check for duplicates first.
  • Keep descriptions short, simple and unbiased.
  • Please make an individual commit for each suggestion
  • Add a new category if needed.

Thanks for your suggestions!

License

CC0

To the extent possible under law, Fabio Souto has waived all copyright and related or neighboring rights to this work.

‘@’ in Crontab

The ‘@’ symbol is available in crontab and is very useful, especially @reboot, which makes it possible to run a script when the machine starts.

@reboot     -   This runs the Cron job when the machine is started up or if the Cron daemon is restarted

@midnight - This runs the Cron job once a day at midnight, it's the equivalent of 0 0 * * *

@daily - Does exactly the same as @midnight

@weekly - This runs a Cron job once a week on a Sunday, the equivalent of 0 0 * * 0

@monthly - This runs a Cron job once a month on the first day of every month at midnight and is the same as 0 0 1 * *

@annually - Runs a Cron job once a year at midnight on the first day of the first month and is the equivalent of 0 0 1 1 *

@yearly - The same as @annually

An example that runs a command when the machine starts:

@reboot date >> /tmp/date.log

Intro

Python is a very powerful and useful tool in our daily work, especially for DevOps. Sending mail is a very common and simple need. I have written several mail-sending scripts because I could never find the one I wrote before. This time I want to get it done properly and make it open source on GitHub under GPLv3.

Code

Project address is here: python-sendmail on Github.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.

# -*- coding:utf-8 -*-

import argparse
import smtplib, sys
from email.MIMEText import MIMEText
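# note: email.MIMEText is the Python 2 import path; on Python 3 use email.mime.text instead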

# you can change these constants or turn them into parameters of sendmail()
USR_FROM = 'me@yaorenjie.com'
USR = 'me'
PASSWD = 'HELLO_WORLD'

SMTP_SERVER = 'email.yaorenjie.com'
SMTP_PORT = 587

def sendmail(usr_to, subject, msg, subtype='html'):
    usr_from = USR_FROM
    msg = MIMEText(msg, subtype, _charset='utf-8')
    msg['Subject'] = subject
    msg['To'] = ';'.join(usr_to)
    msg['From'] = usr_from
    server = smtplib.SMTP()
    server.connect(SMTP_SERVER, SMTP_PORT)
    # ONLY for debugging
    # server.set_debuglevel(1)
    # if your Exchange server uses STARTTLS auth, uncomment the next line
    # server.starttls()
    server.login(USR, PASSWD)
    server.sendmail(usr_from, usr_to, msg.as_string())
    server.quit()


if __name__ == '__main__':
"""
Example:
python sendmail.py --usr_to mike@mail.com peter@mail.com --subject 'TEST_SUBJECT' --msg 'TEST_MSG'

Remark:
1. msg supports HTML and Chinese
"""

parser = argparse.ArgumentParser(description='parser for senemail')
parser.add_argument('--usr_to', dest='usr_to', nargs='+', required=True)
parser.add_argument('--subject', dest='subject', required=True)
parser.add_argument('--msg', dest='msg', required=True)
args = parser.parse_args()
sendmail(usr_to=args.usr_to, subject=args.subject, msg=args.msg)

We usually use ShadowsocksX on Mac to build a SOCKS5 tunnel. But as a developer, I need an HTTP proxy in many places, such as npm or some IDE tools, so I need a tool that provides an HTTP proxy and forwards its traffic into the SOCKS5 tunnel.

I found a solution using a tool called polipo, which can be installed with Homebrew.

brew install polipo

After installing polipo, we can start it with a simple command.

polipo socksParentProxy=localhost:1080

localhost:1080 is your local SOCKS5 address. By default, the HTTP proxy created by polipo listens on localhost:8123.
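To quickly verify the chain works, here is a minimal sketch using Python's requests library; it assumes the default localhost:8123 address above, and the target URL is just an example:

import requests

# route HTTP(S) traffic through polipo, which forwards it to the SOCKS5 tunnel
proxies = {
    'http': 'http://localhost:8123',
    'https': 'http://localhost:8123',
}
resp = requests.get('https://www.google.com', proxies=proxies)
print resp.status_code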

Collections

collections is a very useful module of container data types in Python. In this post, I will introduce some of them.

collections.Counter()

Counter counts the elements of an iterable such as a list or a tuple. Let's see the example below:

from collections import Counter
name_list = ['frank', 'frank', 'tony', 'jack', 'judy', 'jack', 'jack']
name_counter = Counter(name_list)
print name_counter  # print the counter of name_list
# Counter({'jack': 3, 'frank': 2, 'tony': 1, 'judy': 1})
print name_counter.most_common(1)  # print the top 1 count
# [('jack', 3)]
print name_counter.most_common(2)  # print the top 2 counts
# [('jack', 3), ('frank', 2)]

Now I am going to update a Counter:

from collections import Counter
name_list_one = ['frank', 'frank', 'tony']
name_counter = Counter(name_list_one)
# name_counter: Counter({'frank': 2, 'tony': 1})
name_list_two = ['tony', 'jack']
name_counter.update(name_list_two)
# name_counter: Counter({'tony': 2, 'frank': 2, 'jack': 1})
name_counter.subtract(['frank']) # subtract is the reverse operation of update
# name_counter: Counter({'tony': 2, 'frank': 1, 'jack': 1})
name_counter.subtract(['jack', 'jack']) # value can be zero and negative counts
# name_counter: Counter({'tony': 2, 'frank': 1, 'jack': -1})

collections.defaultdict()

defaultdict behaves almost the same as a plain dict, which can be created like this:

d = {'name': 'frank'}

defaultdict takes a function that is invoked when we access a key that does not exist in the defaultdict. With a plain dict, looking up a missing key raises a KeyError; we can use d.get(KEY, DEFAULT_VALUE) to define the value returned when the key does not exist, which works but is not a convenient work-around. For example, say I have a dict named writer_book_dict whose keys are writers' names and whose values are book names, and getting the value for a writer that does not exist should return 'NO BOOKS'. Let's see how to handle this with dict and with defaultdict.

writer_book_dict = {'frank': 'book_1', 'tony': 'book_2'}
book_by_jack = writer_book_dict.get('jack', 'NO BOOKS')
book_by_judy = writer_book_dict.get('judy', 'NO BOOKS')

from collections import defaultdict
writer_book_defaultdict = defaultdict(lambda: 'NO BOOKS')
writer_book_defaultdict['jack']  # returns 'NO BOOKS' (note: .get() would bypass the default factory and return None)

We can use a more complex function to initialize the defaultdict. Furthermore, we can also define the data structure of the values in the defaultdict. Let's see an example: we need to group the words of a short paragraph by their first letter. That is to say, 'I am a student and I like sports' -> {'a': ['am', 'a', 'and'], 'i': ['I', 'I'], 'l': ['like'], 's': ['student', 'sports']}. If we use a plain dict, we must check whether the key exists and, if it does not, initialize it with a list, like below:

paragraph = 'I am a student and I like sports'
word_dict = {}
for word in paragraph.split():
    first_letter = word[0].lower()
    if first_letter not in word_dict:
        word_dict[first_letter] = [word]
    else:
        word_dict[first_letter].append(word)

Let’s see how defaultdict works:

from collections import defaultdict
paragraph = 'I am a student and I like sports'
word_defaultdict = defaultdict(list)
for word in paragraph.split():
    word_defaultdict[word[0].lower()].append(word)

If we want the elements of word_defaultdict to be unique, we can simply initialize it with defaultdict(set) and use add instead of append, as in the sketch below.
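A minimal sketch of the set-based variant (the word_set_defaultdict name is just for illustration):

from collections import defaultdict
paragraph = 'I am a student and I like sports'
word_set_defaultdict = defaultdict(set)
for word in paragraph.split():
    word_set_defaultdict[word[0].lower()].add(word)  # sets deduplicate automatically
# word_set_defaultdict['i'] -> set(['I'])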

bisect

bisect is used to insert elements into a sorted list while keeping it sorted. The bisect module has six main functions:

  1. bisect
  2. bisect_left
  3. bisect_right
  4. insort
  5. insort_left
  6. insort_right

Functions starting with bisect return the index at which the element should be inserted. Those starting with insort modify the list in place and return nothing. The difference between the left and right variants only matters when the element you want to insert already exists in the list: left returns the position to the left of the existing element, right returns the position to its right. Note that bisect is the same as bisect_right, and insort is the same as insort_right. Let's see some examples.

from bisect import bisect, bisect_left, bisect_right
sorted_list = [1, 10, 100, 1000]
bisect(sorted_list, 20)        # returns 2
bisect(sorted_list, 10)        # returns 2
bisect_left(sorted_list, 10)   # returns 1
bisect_right(sorted_list, 10)  # returns 2
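The insort variants modify the list in place; a small sketch using the same sorted_list as above:

from bisect import insort, insort_left
sorted_list = [1, 10, 100, 1000]
insort(sorted_list, 20)       # sorted_list is now [1, 10, 20, 100, 1000]
insort_left(sorted_list, 10)  # sorted_list is now [1, 10, 10, 20, 100, 1000]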

Solution

from bottle import Bottle, request, response, run
app = Bottle()

@app.hook('after_request')
def enable_cors():
    """
    You need to add some headers to each request.
    Don't use the wildcard '*' for Access-Control-Allow-Origin in production.
    """
    response.headers['Access-Control-Allow-Origin'] = '*'
    response.headers['Access-Control-Allow-Methods'] = 'PUT, GET, POST, DELETE, OPTIONS'
    response.headers['Access-Control-Allow-Headers'] = 'Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token'
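For completeness, here is a minimal usage sketch; the /hello route, host and port are arbitrary examples, not part of the original snippet:

@app.route('/hello')
def hello():
    # the CORS headers are attached by the after_request hook above
    return {'message': 'hello'}

if __name__ == '__main__':
    run(app, host='localhost', port=8080)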

Kafka Compression Performance Tests

Background

Kafka uses an end-to-end compression model, which means the Producer and the Consumer do the compression and decompression work. This reduces network costs on the wire, while the Broker's CPU load increases.

Environment

Hardware Box

| CPU | Memory | Disk |
| --- | --- | --- |
| 2.5 GHz Intel Core i7 | 16GB | 512GB SSD |

Software Box

| Kafka | JDK | Scala | Broker | Producer | JVM |
| --- | --- | --- | --- | --- | --- |
| 0.8.2.1 | 1.7.0u75 | 2.11 | 1 | 1 | -Xms4G -Xmx4G -Xmn2G |

Kafka Configuration

| Replica | Partition |
| --- | --- |
| 1 | 1 |

Messages Content

The content I sent to Kafka is an nginx log with 607,781 lines (about 200MB). Each line looks like this:

127.0.0.1 - - [24/Mar/2015:15:57:09 +0800] "GET /login?gotype=2 HTTP/1.1" "0.002" 200 3177 "http://abc.com/URLhtml" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET4.0C; .NET4.0E)" 127.0.0.1 foo.com foo-hostname
$wc -l passport.access.log
607781 passport.access.log
~/Downloads
$du -sh passport.access.log
200M passport.access.log

Baseline Test

Kafka Producer Configuration

| compression.type | buffer.memory | acks | linger.ms |
| --- | --- | --- | --- |
| None | 32MB | 1 | 0 |

Test Result

The first test shows that the value we set in batch.size is not exactly the batch size the Producer will use. I got the real batch size of the Producer from its metrics API. Tests were done with batch.size values of 200, 500, 1000, 1200, 1500 and 2000. The chart below shows the result:

[chart: batch.size vs. actual batch-size-avg]

We can see clearly that batch.size is smaller than batch-size-avg (the actual batch size the Producer used). With batch.size of 200 and 500, batch-size-avg is almost the same, around 370. That is to say, batch-size-avg is not determined solely by the batch.size parameter; it is also affected by other factors.

All following tests were done with batch.size values of 500, 1000, 1200, 1500 and 2000. I did not use a batch.size of 200 because of the phenomenon described in the paragraph above.

Let's take a look at the Producer baseline:

[chart: Producer baseline throughput and latency by batch.size]

The throughput rose as batch.size increased, and the latency rose as well. Let me do a simple calculation: using batch.size 500 as the baseline, I computed the percentage increase of throughput and latency.

| batch.size | 500 | 1000 | 1200 | 1500 | 2000 |
| --- | --- | --- | --- | --- | --- |
| throughput | 100% | 183.25% | 230.74% | 289.59% | 387.81% |
| latency | 100% | 117.47% | 121.40% | 123.76% | 128.31% |

[chart: throughput and latency increase relative to batch.size 500]

Conclusion

The Producer's throughput gets much higher as batch.size increases, while latency increases only slightly.

Producer Performance with Compression

Kafka supports three different compression types:

  • gzip
  • snappy
  • lz4

On the box I used for testing, I got an OOM exception when testing lz4. I think the cause is that the Java code reads all lines of passport.access.log and sends them to Kafka in a very short time. So I used batch.size values of 2000, 2500, 3000, 3500, 4000, 4500 and 5000; a bigger batch.size increases latency, which gives the JVM some time for GC.
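As a rough reference for how these settings map onto producer configuration, here is a minimal sketch using the kafka-python client; the broker address, topic name and parameter values are assumptions for illustration, not the Java test harness used in these tests:

from kafka import KafkaProducer

# compression_type can be 'gzip', 'snappy' or 'lz4'
producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         acks=1,
                         linger_ms=0,
                         batch_size=5000,          # in bytes, mirroring the batch.size values above
                         compression_type='lz4')

with open('passport.access.log') as f:
    for line in f:
        producer.send('nginx-access-log', line.encode('utf-8'))
producer.flush()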

Below are the results.

Compression Rate

Smaller is better.

| codec | none | gzip | snappy | lz4 |
| --- | --- | --- | --- | --- |
| Average Compression Rate | 100% | 19.21% | 78.19% | 31.37% |

[chart: average compression rate by codec]

Throughput

When I enabled compression, throughput decreased for gzip and snappy, while lz4 actually increased it. Let's see the result.

| codec | none | gzip | snappy | lz4 |
| --- | --- | --- | --- | --- |
| Average Throughput | 151901.1038 | 39346.00017 | 119707.5266 | 191469.8994 |

[chart: average throughput by codec]

Latency

Result:

| codec | none | gzip | snappy | lz4 |
| --- | --- | --- | --- | --- |
| Average Latency (ms) | 0.25 | 4.97 | 0.41 | 0.66 |
| Ratio | 100.00% | 1937.61% | 159.99% | 258.62% |

[chart: average latency by codec]

Consumer Performance with Compression

Because Kafka 0.8.2.1 does not yet ship the new Consumer, I cannot get the detailed metrics I got from the Producer. So in this test, I benchmark using the total time the Consumer needs to consume a fixed number of messages.

The messages are test.access.log, doubled in size. Below is the result:

| codec | none | gzip | snappy | lz4 |
| --- | --- | --- | --- | --- |
| Time Cost (ms) | 3218 | 5374 | 5216 | 4507 |
| Time Cost Increase Rate | 100% | 167% | 162.09% | 140.06% |

[chart: Consumer time cost by codec]

We can see lz4 is the fastest.

Conclusion

lz4 is the best overall choice considering compression rate and both Producer and Consumer performance. Below is a comparison of lz4 performance against the baseline; to keep the chart simple, it only shows the result with batch.size of 5000.

[chart: lz4 vs. baseline performance at batch.size 5000]

Intro

history and trends are the two main tables in the Zabbix database for storing data. There are also tables derived from history that store different types of data: history_uint stores unsigned integer data, and the others are history_str, history_log and history_text. Please note that only one table is derived from trends, and its name is trends_uint.

Entrance

The whole process starts in the function DCsync_all() in src/libs/zbxdbcache/dbcache.c. dbcache is a feature that lets Zabbix keep data in memory first and flush it to the database in batches.

To make it clearer, I added inline comments.

static void	DCsync_all()
{

zabbix_log(LOG_LEVEL_DEBUG, "In DCsync_all()");

// call `DCsync_history` to sync history data from cache to database
DCsync_history(ZBX_SYNC_FULL);
// only when running as Zabbix Server; a Zabbix Proxy skips the trends sync below
if (0 != (daemon_type & ZBX_DAEMON_TYPE_SERVER))
// sync trends data.
DCsync_trends();
zabbix_log(LOG_LEVEL_DEBUG, "End of DCsync_all()");
}

DCsync_history()

Since we are talking about Zabbix Server, we will only take a look at DCsync_history(). In this section, I have simplified the code of DCsync_history() to help readers understand the core process of flushing data to the database.

DBbegin();

// while in Zabbix Server mode
if (0 != (daemon_type & ZBX_DAEMON_TYPE_SERVER))
{
...
// add data to history
DCmass_add_history(history, history_num);
// update trends
DCmass_update_trends(history, history_num);
...
}
else
{
DCmass_proxy_add_history(history, history_num);
...
}
DBcommit();

DCmass_add_history()

Let’s take a look into DCmass_add_history().

It first counts the number of values of each item type.

switch (history[i].value_type)
{
case ITEM_VALUE_TYPE_FLOAT:
h_num++;
break;
case ITEM_VALUE_TYPE_UINT64:
huint_num++;
break;
case ITEM_VALUE_TYPE_STR:
hstr_num++;
break;
case ITEM_VALUE_TYPE_TEXT:
htext_num++;
break;
case ITEM_VALUE_TYPE_LOG:
hlog_num++;
break;
}
}

Then it calls the corresponding functions to write the data to the database.

/* history */
if (0 != h_num)
dc_add_history_dbl(history, history_num);
/* history_uint */
if (0 != huint_num)
dc_add_history_uint(history, history_num);
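// analogous calls follow for hstr_num, htext_num and hlog_num (history_str, history_text, history_log)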

Finally, we reach the SQL in dc_add_history_dbl().

static void	dc_add_history_dbl(ZBX_DC_HISTORY *history, int history_num)
{

...
for (i = 0; i < history_num; i++)
{
...
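// queue one row (itemid, ts.sec, ts.ns, value.dbl) for the batched INSERT into the history table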
zbx_db_insert_add_values(&db_insert, history[i].itemid, history[i].ts.sec, history[i].ts.ns,
history[i].value.dbl);
}
...
}

DCmass_update_trends

Let’s see the code first.

static void	DCmass_update_trends(ZBX_DC_HISTORY *history, int history_num)
{

for (i = 0; i < history_num; i++)
{
...
DCadd_trend(&history[i], &trends, &trends_alloc, &trends_num);
}
...
while (0 < trends_num)
// flush trends while we actually HAVE trends data to flush
DCflush_trends(trends, &trends_num, 1);
...
}

We can see that the main work happens in DCadd_trend(). Let's move on to it.

static void	DCadd_trend(ZBX_DC_HISTORY *history, ZBX_DC_TREND **trends, int *trends_alloc, int *trends_num)
{

...
// get trend data from database by itemid
trend = DCget_trend(history->itemid);
...
switch (trend->value_type)
{
case ITEM_VALUE_TYPE_FLOAT:
...
// calculate the new data
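// incremental running average: avg_new = (num * avg_old + new_value) / (num + 1)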
trend->value_avg.dbl = (trend->num * trend->value_avg.dbl
+ history->value.dbl) / (trend->num + 1);
break;
...
}

With all of this, we should now be clear on how Zabbix updates history and trends.

Below is the process:
[diagram: Zabbix history/trends flush process]

Background

configure succeeds, and the next step is make install. But make install exits with an error saying it cannot find aclocal-1.14, like below:

[baniuyao@YaoRenjies-CentOS zabbix-2.4.4]$ sudo make install
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh /home/baniuyao/apps/zabbix-2.4.4/missing aclocal-1.14 -I m4
/home/baniuyao/apps/zabbix-2.4.4/missing: line 81: aclocal-1.14: command not found
WARNING: 'aclocal-1.14' is missing on your system.
You should only need it if you modified 'acinclude.m4' or
'configure.ac' or m4 files included by 'configure.ac'.
The 'aclocal' program is part of the GNU Automake package:
<http://www.gnu.org/software/automake>
It also requires GNU Autoconf, GNU m4 and Perl in order to run:
<http://www.gnu.org/software/autoconf>
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [aclocal.m4] Error 127

But the system has aclocal-1.14 already:

[baniuyao@YaoRenjies-CentOS ~]$ which aclocal-1.14
/usr/local/bin/aclocal-1.14

Solution

This happens because the timestamps of the autoconf (.ac) and automake (.am) files are out of date, so make tries to regenerate them with aclocal-1.14. We can simply touch these files to update their timestamps:

touch configure.ac aclocal.m4 configure Makefile.am Makefile.in

Or we can use autoreconf to regenerate them:

sudo autoreconf -ivf

Let’s figure out what -ivf means.

-v, --verbose            verbosely report processing
-f, --force consider all files obsolete
-i, --install copy missing auxiliary files