Add documentation bits.
Provide sensible defaults for configuration values
This reorders arguments a bit:
- The query is expected to be the block argument
- The type instance is inferred from the query if unsupplied
- The type will default to gauge if not supplied
use consistent naming for arguments
rename command to query
add support for custom commands in redis plugin
Now that the redis plugin has moved to hiredis, it could
be worthwhile to add support for custom commands.
This diff implements a mechanism for executing commands which
allows for setting the type and type-instance. It does not
support hash or array returns, but these could be added later
on if deemed necessary.
The canonical use case for this is people using redis
as a queue (for instance, through solutions such as rq or
sidekiq) who want a simple way to ensure the work queue
size is not growing. To address this you would use:
```
<Plugin redis>
  <Node local>
    <Command "queue_length">
      Exec "LLEN myqueue"
      Instance "myqueue"
    </Command>
  </Node>
</Plugin>
```
This would then produce a redis-local/queue_length-myqueue value.
If the idea has traction I'll add the doc bits.
Support the switch from credis to hiredis
Conflicts:
contrib/redhat/collectd.spec
Merge pull request #759 from mschenck/add-linux-io-time
Add linux I/O time
Merge pull request #783 from mfournier/varnish4
Add support for varnish 4.x
Merge pull request #799 from mfournier/hiredis-switch
Switch redis & write_redis plugins from credis to hiredis
Avoid reintroducing #610, updates the fix to #804
We only avoid freeing the req pointer when failures occur;
otherwise we perform as before.
Merge pull request #814 from mfournier/upstart-systemd-examples
upstart and systemd doc & examples
Merge pull request #802 from ccin2p3/faxm0dem/cpu-ticks-percentage
allow for 'ReportByCpu false' and 'ValuesPercentage false'
Let snmp_synch_response deal with PDU freeing
When reading from tables, upon errors the PDUs sent are already
freed by snmp_synch_response, since they are freed right after
snmp_send is called.
This commit syncs collectd's approach with other occurrences of
snmp_synch_response calls.
There might be a few corner cases where we leak PDUs, but it
is unclear how to check for those, since we would need an
indication that snmp_send was never called, which as far as
I can tell is not possible.
The potential for failure in snmp_send is rather low and would
be easily spotted though: with invalid PDUs, snmp_send will
fail constantly, and valid configurations can never leak memory.
This fixes #804
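For reference, a minimal sketch of the ownership rule this relies on, using
the Net-SNMP synchronous API; the function name and OID below are
illustrative and not taken from src/snmp.c:
```
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>

static int query_sysdescr (netsnmp_session *sess)
{
  netsnmp_pdu *req = snmp_pdu_create (SNMP_MSG_GET);
  netsnmp_pdu *res = NULL;
  oid name[MAX_OID_LEN];
  size_t name_len = MAX_OID_LEN;
  int status;

  read_objid (".1.3.6.1.2.1.1.1.0", name, &name_len);
  snmp_add_null_var (req, name, name_len);

  status = snmp_synch_response (sess, req, &res);
  /* `req` must not be freed here: snmp_synch_response() disposes of it on
   * both the success and the error paths, so only the response is ours. */
  if (res != NULL)
    snmp_free_pdu (res);

  return (status == STAT_SUCCESS) ? 0 : -1;
}
```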
collectd(1): mention upstart & systemd support
amend comments in upstart config file + disable "console log"
Hopefully avoid some confusion for RHEL6 users, who have an old upstart
version.
add example systemd service file
Making use of systemd socket notification feature added in ff270e6d5.
Merge pull request #811 from mfournier/systemd-upstart-build-issue
prevent going through systemd/upstart code, except on Linux
Merge pull request #810 from njh/mac_battery_read_return
Added return (0) to the Mac/IOKit variant of battery_read()
prevent going through systemd/upstart code, except on Linux
Fixes #809 (build issue on MacOSX)
NB: in case one day upstart is used on non-Linux platforms, this could
be relaxed to only skip systemd.
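A sketch of the kind of guard this implies; KERNEL_LINUX is the macro
collectd's configure defines on Linux, and the function below is a
hypothetical placeholder for the notification code:
```
#if defined(KERNEL_LINUX)
static int notify_supervisor (void)
{
  /* ... check for upstart / systemd and signal readiness ... */
  return 0;
}
#else /* everything else, e.g. Mac OS X (see #809) */
static int notify_supervisor (void)
{
  return 0; /* no supervisor integration outside Linux */
}
#endif
```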
Added return(0) to the Mac/IOKit variant of battery_read()
Merge pull request #806 from vincentbernat/fix/libstatgrab2
libstatgrab: only use one configure test for 0.90 API change
Merge pull request #808 from landryb/openbsd_build_fixes_2
Openbsd build fixes 2
Detect sys/vmmeter.h and include it if available.
Needed on OpenBSD for struct vmtotal definition.
libstatgrab: only use one configure test for 0.90 API change
Previously, each API change was tested in configure.ac. Some of the
tests rely on signature checks and would need the -Werror flag
enabled to make them work. This is quite fragile.
Instead, we assume that if `sg_init()` requires an argument, we must use
the 0.90 API.
Fixes: #795
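The single test essentially boils down to a compile check along these lines
(a sketch of the idea, not the literal configure.ac fragment): if this
compiles, sg_init() accepts an argument and the 0.90 API is in use.
```
#include <statgrab.h>

int main (void)
{
  /* pre-0.90 declares sg_init(void), so passing an argument fails to compile */
  return sg_init (1);
}
```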
RPM specfile: add support for smart & openldap plugins
add credits for new plugins
smart: mention support lib in README
adapt contrib script
Merge pull request #719 from mfournier/openldap-improvements-rebased
Openldap plugin
Merge pull request #795 from vincentbernat/fix/libstatgrab
libstatgrab: fix sg_init() invocation for libstatgrab >= 0.9
Merge pull request #797 from vincentbernat/feature/libatasmart
smart: add a SMART plugin
Merge pull request #800 from pyr/feature/riemann-batch
Add a batching mechanism for riemann TCP writes
Merge pull request #798 from pyr/feature/upstart-job
Support both systemd and upstart.
write_redis: avoid passing a float/double to redisCommand()
... as it seems not to be well supported by hiredis 0.10.1 on Debian
7.0, leading to a segfault. Storing the string representation in a
variable instead is the compromise I found to make the plugin work on
this system.
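A sketch of the workaround under the assumptions above: render the numbers
to strings first and hand hiredis only "%s" arguments. The helper name and
key layout are illustrative.
```
#include <stdio.h>
#include <hiredis/hiredis.h>

static int store_value (redisContext *c, const char *key,
                        double tm, const char *value)
{
  char score[64], member[128];

  /* format the timestamp and member ourselves instead of passing a double */
  snprintf (score, sizeof (score), "%.9f", tm);
  snprintf (member, sizeof (member), "%s:%s", score, value);

  redisReply *reply = redisCommand (c, "ZADD %s %s %s", key, score, member);
  if (reply == NULL)
    return -1;
  freeReplyObject (reply);
  return 0;
}
```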
libstatgrab: fix sg_get_disk_io_stats() invocation for libstatgrab >= 0.9
In those versions, `sg_get_disk_io_stats()` needs to be invoked with a pointer
to size_t instead of a pointer to int. Such a requirement is detected at
configure-time.
Fixes: #445
openldap: add mention in README
openldap: relicense to MIT
... with Kimo's agreement. Also add myself to copyright holders.
smart: when threshold is valid, also test for "less or equal"
When the threshold is 0, a value of 0 should hit the threshold.
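The comparison described above, expressed against libatasmart's parsed
attribute structure (a sketch; the real plugin dispatches a collectd
notification rather than printing):
```
#include <atasmart.h>
#include <stdio.h>

static void check_attribute (const SkSmartAttributeParsedData *attr)
{
  /* "<=" rather than "<": a threshold of 0 must still trip on a value of 0 */
  if (attr->threshold_valid && (attr->current_value <= attr->threshold))
    printf ("attribute %s has hit its failure threshold\n", attr->name);
}
```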
libstatgrab: fix sg_get_user_stats() invocation for libstatgrab >= 0.9
In those versions, `sg_get_user_stats()` needs to be invoked with an
additional argument. The need for such an argument is detected at
configure-time.
Fixes: #445
libstatgrab: fix sg_init() invocation for libstatgrab >= 0.9
In those versions, `sg_init()` needs to be invoked with an additional
argument. The need for such an argument is detected at configure-time.
Fixes: #445
Merge pull request #803 from bnordbo/aggregation-libm
Link aggregation.so to libm.so
Link aggregation.so to libm.so
indentation
Change-Id: I0201ac6e3c6e3c9bfcf55b74df6f13b9d961a90e
allow for 'ReportByCpu false' and 'ValuesPercentage false'
this will allow for aggregating total cpu values while keeping derives
(ticks)
Change-Id: Ic22a1b52a5897c18398fa25095a0f3ebcc403ee1
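A minimal configuration sketch of the combination this enables (aggregated
ticks instead of per-CPU percentage values):
```
<Plugin cpu>
  ReportByCpu false
  ValuesPercentage false
</Plugin>
```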
Add a batching mechanism for riemann TCP writes
This does not batch notifications.
Merge pull request #793 from vincentbernat/fix/gcrypt-deprecated
network: don't enable gcrypt thread callbacks when gcrypt recent enough
Merge pull request #792 from vincentbernat/fix/out-of-tree-build
build: fix out-of-tree build
Support both systemd and upstart.
This checks appropriate environment variables. When supervised
by either upstart or systemd, collectd will not daemonize but
will signal readiness with the appropriate method.
This allows collectd to be configured either with `expect stop`
in upstart or with `Type=notify` in systemd.
The rationale for this is detailed here: http://spootnik.org/entries/2014/11/09_pid-tracking-in-modern-init-systems.html
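A sketch of the environment-variable detection described above; UPSTART_JOB
and NOTIFY_SOCKET are the variables conventionally set by upstart and
systemd, and the exact logic in collectd may differ:
```
#include <stdlib.h>

static int detect_supervisor (void)
{
  if (getenv ("UPSTART_JOB") != NULL)
    return 1; /* upstart: stay in foreground, signal readiness via SIGSTOP */
  if (getenv ("NOTIFY_SOCKET") != NULL)
    return 2; /* systemd: stay in foreground, send READY=1 on the socket */
  return 0;   /* unsupervised: daemonize as before */
}
```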
smart: add notifications when a value is below the threshold
smart: add a SMART plugin
This plugin uses libatasmart:
http://0pointer.de/blog/projects/being-smart.html
As libatasmart is Linux-only, the plugin is therefore Linux-only
too. The disks are discovered through libudev.
Each SMART attribute is extracted. The current value, worst value and
threshold value (if any) are recorded. Those are normalized
values (between 0 and 255, higher is better). For some values, it makes
more sense to record the raw value. libatasmart converts this raw
value to something sensible; we record that form. Sometimes, this is
just the raw value, but sometimes it is converted to another scale (for
example, the temperature). People should know what each attribute means
before using those values. Otherwise, the normalized values are better.
Four values (power-on time, power cycle count, bad sectors and
temperature) are also recorded on their own. Those are usually the
values that the user cares about the most.
Here is an excerpt of the plugin output with the CSV plugin (the SSD
disk on my laptop doesn't provide a temperature sensor):
.
└── zoro.exoscale.ch
└── smart-sda
├── smart_attribute-attribute-173-2014-11-10
├── smart_attribute-attribute-174-2014-11-10
├── smart_attribute-available-reserved-space-2014-11-10
├── smart_attribute-end-to-end-error-2014-11-10
├── smart_attribute-erase-fail-count-2014-11-10
├── smart_attribute-power-cycle-count-2014-11-10
├── smart_attribute-power-on-hours-2014-11-10
├── smart_attribute-power-on-seconds-2-2014-11-10
├── smart_attribute-program-fail-count-2014-11-10
├── smart_attribute-reallocated-sector-count-2014-11-10
├── smart_attribute-reported-uncorrect-2014-11-10
├── smart_attribute-total-lbas-read-2014-11-10
├── smart_attribute-total-lbas-written-2014-11-10
├── smart_attribute-udma-crc-error-count-2014-11-10
├── smart_attribute-unused-reserved-blocks-2014-11-10
├── smart_attribute-used-reserved-blocks-chip-2014-11-10
├── smart_badsectors-2014-11-10
├── smart_powercycles-2014-11-10
└── smart_poweron-2014-11-10
$ cat zoro.exoscale.ch/smart-sda/smart_attribute-total-lbas-read-2014-11-10
epoch,current,worst,threshold,pretty
1415613266.376,100.000000,100.000000,0.000000,281018.000000
1415613276.395,100.000000,100.000000,0.000000,281018.000000
1415613286.384,100.000000,100.000000,0.000000,281051.000000
1415613296.383,100.000000,100.000000,0.000000,281051.000000
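For illustration, a minimal sketch of how attributes can be walked with
libatasmart; the device path is hard-coded here, whereas the plugin
discovers disks via libudev and dispatches collectd values instead of
printing:
```
#include <atasmart.h>
#include <stdio.h>

static void print_attribute (SkDisk *d, const SkSmartAttributeParsedData *a,
                             void *userdata)
{
  (void) d;
  (void) userdata;
  printf ("%s: current=%u worst=%u threshold=%u pretty=%llu\n",
          a->name, (unsigned) a->current_value, (unsigned) a->worst_value,
          (unsigned) a->threshold, (unsigned long long) a->pretty_value);
}

int main (void)
{
  SkDisk *d = NULL;

  if (sk_disk_open ("/dev/sda", &d) < 0)
    return 1;
  if (sk_disk_smart_read_data (d) >= 0)
    sk_disk_smart_parse_attributes (d, print_attribute, NULL);
  sk_disk_free (d);
  return 0;
}
```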
write_redis: fix format of commands sent to redis
The commands getting submitted to redis now look like this:
"ZADD" "collectd/hostname/entropy/entropy" "1415602051.335973024" "1415602051.335973024:823"
"SADD" "collectd/values" "hostname/entropy/entropy"
... which is the same as in the initial implementation, except for the
added decimals in the timestamp (the plugin was developed before
high-precision timestamp support was added to collectd).
redis: add missing constant
write_redis: re-add colon dropped in b7984797
When running f3706b0b87, the following command gets sent to redis:
"ZADD" "collectd/hostname/entropy/entropy" "1415487432.000000" "1415487432:932"
Meaning the value actually stored, and later returned by redis, is:
"<timestamp>:<value>".
b7984797 accidentally dropped the colon separating the timestamp and the
value, which leads the plugin to store a somewhat confusing value in
redis:
"ZADD" "collectd/hostname/entropy/entropy" "1415487432.000000" "1415487432932"
Fix memory leak in redis.c
Add memory_lua data type
Set the right order to parse the redis info.
remove all credis leftovers. Migrate write_redis too.
Conflicts:
README
Revert types for redis.c plugin.
use timeval. keep timeout in milliseconds for backwards compatibility.
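A sketch of the milliseconds-to-timeval conversion this describes, together
with the hiredis connection call it feeds; the helper name is illustrative:
```
#include <sys/time.h>
#include <hiredis/hiredis.h>

static redisContext *connect_node (const char *host, int port, long timeout_ms)
{
  /* configuration stays in milliseconds, hiredis wants a struct timeval */
  struct timeval tv = {
    .tv_sec  = timeout_ms / 1000,
    .tv_usec = (timeout_ms % 1000) * 1000,
  };

  redisContext *ctx = redisConnectWithTimeout (host, port, tv);
  if (ctx == NULL || ctx->err)
  {
    if (ctx != NULL)
      redisFree (ctx);
    return NULL;
  }
  return ctx;
}
```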
Switch redis.c plugin from credis to hiredis.
Change the entire redis.c plugin to use libhiredis (tested with
libhiredis 0.10) instead of credis. libhiredis is available in a number
of distributions like Debian or Ubuntu.
This patch keeps the same functionality as the old redis.c.
Conflicts:
src/redis.c
src/types.db
ignore new dirs
Define _DEFAULT_SOURCE in addition to _BSD_SOURCE
This enables forward compatibility with the ongoing
deprecation of _BSD_SOURCE.
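In practice the change amounts to defining both feature-test macros before
the first system header, for example:
```
/* Older glibc honours _BSD_SOURCE, glibc >= 2.19 honours _DEFAULT_SOURCE;
 * defining both keeps the code building on either. */
#define _BSD_SOURCE
#define _DEFAULT_SOURCE

#include <stdlib.h>
```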
Merge pull request #794 from vincentbernat/fix/kafka-warning
kafka: fix compilation for older versions of librdkafka
kafka: fix compilation for older versions of librdkafka
Since commit f505691270f2317291c372fd5f004a4ffbce9f9a, the kafka module was
broken. Enable the definition of `kafka_log()` when using the kafka logger
callback as well.
network: don't enable gcrypt thread callbacks when gcrypt recent enough
From `gcrypt.h`:
> NOTE: Since Libgcrypt 1.6 the thread callbacks are not anymore used.
> However we keep it to allow for some source code compatibility if used
> in the standard way.
Otherwise, we get a deprecation warning which is turned into an error:
```
CC libcollectdclient_la-network_buffer.lo
../../../src/libcollectdclient/network_buffer.c:58:15: error: 'gcry_thread_cbs' is deprecated (declared at /usr/include/gcrypt.h:213) [-Werror=deprecated-declarations]
GCRY_THREAD_OPTION_PTHREAD_IMPL;
```
Fixes: #632
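A sketch of the version guard this suggests; GCRYPT_VERSION_NUMBER comes
from gcrypt.h, and the init function below is a hypothetical stand-in for
the plugin's setup code:
```
#include <errno.h>
#include <pthread.h>
#include <gcrypt.h>

#if GCRYPT_VERSION_NUMBER < 0x010600
/* Only needed (and only non-deprecated) before libgcrypt 1.6 */
GCRY_THREAD_OPTION_PTHREAD_IMPL;
#endif

static void network_crypto_init (void)
{
#if GCRYPT_VERSION_NUMBER < 0x010600
  gcry_control (GCRYCTL_SET_THREAD_CBS, &gcry_threads_pthread);
#endif
  gcry_check_version (NULL);
  gcry_control (GCRYCTL_INITIALIZATION_FINISHED, 0);
}
```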
build: fix out-of-tree build
When building collectd out of tree, `srcdir` and `builddir` are
different. We add `$(top_srcdir)/src` to the search path since this is
needed to find `liboconfig/config.h`. Also fix the search path for
libcollectdclient, where only one header is in `builddir` while the
remaining ones are in `srcdir`.
Merge pull request #791 from mfournier/fix-lvm-merge-conflict-mistake
lvm: remove duplicate call to lvm_submit()
lvm: remove duplicate call to lvm_submit()
This got added by accident when solving the merge conflict in 103f05e0.
It led to the plugin triggering the classical "uc_update: Value too old"
error message.
Merge pull request #772 from mschenck/write_tsdb-type-type_instance-differentiate
Including vl->type, even when vl->type_instance is available to avoid ov...
Merge pull request #778 from landryb/openbsd_fix_processes_plugin
Fix the processes plugin on OpenBSD (#776)
Merge pull request #779 from landryb/openbsd_build_fixes
Openbsd build fixes
varnish: add some detail & references to manpage
varnish: manpage & example cleanup
Consistently ordered options alphabetically and ensured all version
limitations are mentioned.
varnish: correct an erroneous type
Affects Varnish 3 only.
varnish: monitor VSM usage stats, made available in V4
varnish: monitor 2 purge-related metrics added in V4
varnish: add a couple of metrics available both in V3 & V4
varnish: adapt to metrics renames & deprecations in varnish4
Summary of changes:
- connections: "accepted" & "dropped" are now found in the session section
- dirdns: doesn't exist in varnish 4 anymore
- objects: "n_obj*" were removed from varnish 4
- ban: "obj", "req" & "completed" bans were added. As a lot of new
"tested" metrics are available, so we only collectd the overall total
now
- session: metrics from "connections" and "threads" categories were
moved here
- struct: "n_sess*" were removed
- totals: new, more detailed *bytes metrics were made available
- threads: new metrics were made available. "queued" was moved to the
session section.
varnish: bare minimum changes to support varnish4
configure: add varnish4 presence detection
Recent NetBSD versions also use a TAILQ.
Use cpu_stage() where expected in the CAN_USE_SYSCTL, HAVE_LIBSTATGRAB and HAVE_SYSCTLBYNAME codepaths.
cpu_state() isn't a function.
inpt_queue is a TAILQ on OpenBSD
Link collectd-tg with -lpthread if available
otherwise linking fails with undefined refs to pthread_mutex_* functions
Fix swapctl() argument count detection on OpenBSD.
On OpenBSD, swapctl() takes three arguments, but it is declared in unistd.h
and also needs sys/param.h.
http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man2/swapctl.2
We need to add those headers to both detections to make sure the test
for 'swapctl takes two arguments' correctly fails.
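A sketch of the compile test this implies: with these headers in place the
three-argument OpenBSD prototype is visible, so the competing two-argument
check fails as it should.
```
#include <sys/types.h>
#include <sys/param.h>
#include <unistd.h>

int main (void)
{
  /* OpenBSD: int swapctl(int cmd, const void *arg, int misc) */
  return swapctl (0, NULL, 0);
}
```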
Include pthread.h in plugin.h to get pthread_t definition.
Fix the processes plugin on OpenBSD (#776)
Properly check for struct kinfo_proc members in configure.ac
Use kvm_getprocs() like it was done on FreeBSD
Merge remote-tracking branch 'origin/pr/488'
Let the config parser accept unquoted IPv6 addresses.
The parser supports raw IPv6 addresses, optional address and port (as
[<addr>]:<port>), and embedded IPv4 addresses.
Based on "Common Patterns" found in the flex manual.
Merge pull request #774 from trenkel/master
Adding get_dataset() to python
python: Add get_dataset() to the man page.
Add get_dataset() as a way to get the definition of a dataset from python.
https://github.com/collectd/collectd/issues/771
Including vl->type, even when vl->type_instance is available, to avoid overwriting values (e.g. with the 'df' plugin)
mysql: correct 2 data types in innodb counters
Thanks to @ekilby for spotting this mistake! Fixes #757
missed the output_name
Added support for the last two additional columns in add-linux-io-time