[hobbit] More Granular data than 300 second samples, duh!

Scott Walters scott at PacketPushers.com
Sun Feb 3 05:27:40 CET 2008


A wee bit of lag on this response.

On Oct 15, 2007 6:38 AM, Henrik Stoerner <henrik at hswn.dk> wrote:

>
> I haven't had a lot of requests for more granular data to begin with;
> most of the requests have been for the fine-grained (5-minute) data to
> be maintained for a longer period of time than the current 48 hours.
> In the next version (or the current snapshot), you can define RRA's
> individually for each type of RRD files. So you can configure the vmstat
> RRD's to maintain the fine-grained data for a longer time. That should
> take care of this issue.


Adding the ability to define RRA's for each RRD is a very nice feature.  I
definitely believe the stock RRA's should be adjusted to meet user
requests.  As I've mentioned before, the original RRA definitions were very
arbitrary.

Thinking out loud:  I believe we could redefine the stock RRAs for each RRD
to handle both the need to keep data longer and the ability to keep more
granular data.  This *should* be a simple change that is completely
backwards compatible and would not mandate a more granular sample rate.
But by doing so, Hobbit would from this point forward allow for both,
without any RRA/RRD manipulations.  "Migrating old RRDs" would then be an
independent task.
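
(As an aside on that migration task: rrdtool can already grow an individual
RRA in place with its resize command, though restructuring the RRA set
wholesale would mean a dump/restore.  A sketch, with a made-up filename:)

    # add 5760 rows to RRA number 0; the result is written to resize.rrd
    rrdtool resize example.rrd 0 GROW 5760
    mv resize.rrd example.rrd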

For example, create a new RRA structure which allows for (rough draft):

5-second samples for 48 hours (the granular RRA we don't have today)
1-hour samples for 400 days (this gives the ability to run business
"reports" over at least the last year)
24-hour samples for 9600 days (24*400 = 9600; I think I understand what the
Y2K bug creators went through.  I can't imagine this code running 9600 days
from now!  But then, think of how many times I've watched data flow off the
576-day chart!)
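
A rough sketch of that structure in rrdtool terms (the DS name, heartbeat,
and consolidation function here are placeholders, not a final proposal):

    # RRA 1: 5-second samples for 48 hours   (48*3600/5 = 34560 rows)
    # RRA 2: 1-hour samples for 400 days     (400*24    =  9600 rows)
    # RRA 3: 24-hour samples for 9600 days   (9600 rows of 17280 steps)
    rrdtool create example.rrd --step 5 \
        DS:value:GAUGE:600:U:U \
        RRA:AVERAGE:0.5:1:34560 \
        RRA:AVERAGE:0.5:720:9600 \
        RRA:AVERAGE:0.5:17280:9600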


>
> If I understand your suggestion correctly, you would change the client
> to run "vmstat 5 61" (for instance), collect all 60 samples, and then
> send them off to Hobbit every 5 minutes. So we would essentially be
> caching data for 5 minutes on the client, then send it off to the Hobbit
> server and do a single multi-update of the RRD data when it arrives.


Exactly.  I don't know if rrdtool has the ability to handle "batch" inputs,
but it would be nice for this.  Since file IO is such an issue anyway, I'd
hate to aggravate the condition.
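
For what it's worth, rrdtool's update command does accept several
timestamp:value pairs in one invocation (in ascending time order), so a
5-minute batch could be flushed with a single call, and a single open of
the RRD file.  A sketch with made-up values:

    rrdtool update vmstat.rrd \
        1201234505:42 1201234510:40 1201234515:57 1201234520:44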


> way off. I guess this could be done by having the client timestamp the
> data, but then use these as relative timestamps (so we can see sample 10
> was done 236 seconds before the last sample) and then work out the exact
> timestamps over on the Hobbit server, like we do today.
>

I'd say keep the time "interpretation" exactly the same as before, since it
has worked.  The additional samples are merely offsets from the current
single input.  I don't see any reason to change the logic.
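
A minimal sketch of that interpretation on the server side (assuming each
sample carries an offset meaning "seconds before the report arrived"; the
names and numbers are made up):

    # anchor the batch at the time the client report arrived
    NOW=$(date +%s)
    for OFFSET in 295 290 285 280; do
        # reconstruct the absolute timestamp from the relative offset
        echo "sample taken at $((NOW - OFFSET))"
    done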


> This could be done - it would require a bit of change to the clients,
> but I'm not really happy with the current way the vmstat data collection
> works (it usually leaves a vmstat process hanging around when the client
> is stopped), so I wouldn't mind having to do some code for this. I'd
> probably write a small tool to run "vmstat 60" so it runs forever, and
> then the tool would pick up the data, timestamp it and then regularly
> feed it into the client report.


Heh, it's annoying how such an ugly shell hack can work so well.  I think
going to a "vmstat 60" running all the time would only move where the
ugliness happens (from the collector to the parser; plus you'd still have
to keep track of the PID and kill it on Hobbit shutdowns).  The only
"clean" way I can think of is to make the collectors run once per sample,
e.g. "vmstat 5 2".  We are also bumping into statistical problems, because
vmstat info is a "rate" (something per second) while other data is a gauge
(a plain something).  I think of it as a scalar vs. vector issue.  RRD can
of course deal with input streams of GAUGE or COUNTER (DERIVE was used as a
"poor man's" way to kill spikes), but not all metrics are available as
counters (e.g. load average), though you can get a counter for system calls.
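
To illustrate (file and DS names are made up): a load average can only be
stored as a GAUGE, while an ever-increasing count like total system calls
fits DERIVE, where a minimum of 0 turns the negative rate from a counter
reset into "unknown" instead of a giant spike:

    # load average: an instantaneous value, so GAUGE
    rrdtool create la.rrd --step 300 \
        DS:la:GAUGE:600:0:U \
        RRA:AVERAGE:0.5:1:576

    # total system calls: an ever-increasing count, so DERIVE with min 0
    rrdtool create syscalls.rrd --step 300 \
        DS:syscalls:DERIVE:600:0:U \
        RRA:AVERAGE:0.5:1:576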

The current collection of the metrics might be ugly, but it works.  I'd be
really amazed if a cleaner way could be developed.  The *stat commands give
a nice abstraction layer over kernel metrics that would otherwise, I think,
be a nightmare to normalize across platforms.


> And of course the server-end would need changing to accommodate the new
> data format and the multiple updates.  It's certainly doable, without a
> whole lot of re-designing.
>
> But I think we should consider which datasets one might want to have
> these frequent updates for. vmstat is obvious; but what about memory
> utilisation? Disk utilisation rarely changes rapidly - or perhaps it
> does ? Process counts? Network test response times ? Once we start doing
> it for vmstat, I'd expect everyone to come forward and ask for it for
> lots of other datasets - so instead of doing a quick hack just for
> vmstat, we should consider what would be the "right" way of doing it for
> all/most of the data.


This is a symptom of one of the bigger issues I ran into with larrd
development:  the chicken and egg dilemma regarding collecting, parsing, and
reporting metrics.  You need to define all three, but you don't necessarily
know how you'd like to see data until you see it, which of course affects
how you collect it.

Over the years, I've been drawn towards "industrial strength", "one size
fits all" architectures vs. "super custom elite" configurations.  Curtis
Preston, in his backup book, has a saying: "special is bad."  I agree
wholeheartedly.

This means that even though I can't think of one good reason why disk usage
should be kept at five-second intervals, customizing each RRD for its data
would be a pain.  If the heartbeats were general enough, we could define a
"stock RRA structure" that could handle data fed in at either 5- or
300-second intervals.
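
A minimal sketch of that generality (the DS name is a placeholder): with
--step 5 and a 600-second heartbeat, the same definition accepts updates
arriving every 5 seconds or every 300 seconds without recording "unknown"
gaps in between:

    rrdtool create stock.rrd --step 5 \
        DS:value:GAUGE:600:U:U \
        RRA:AVERAGE:0.5:60:576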

Since you've already got the code to handle custom RRA configurations for
each RRD, this may be a moot point.

> $ vmstat 1 301 would definitely be a bad idea.
>
> Agreed - but I don't think that should be something Hobbit decides. I
> can easily imagine a scenario where you would do that for some
> troubleshooting situation, and if that is what is needed then Hobbit
> should let you do it. No reason to setup arbitrary restrictions.
> (This is in line with Unix thinking - "if you insist on shooting your
> foot off, it's your decision to do so". Just as "rm -rf /" is not
> recommended, but still possible).


As long as the stock configuration is not brain-dead, I won't lose sleep
over giving administrators enough rope to hang themselves.  Although,
generally speaking, I prefer systems that protect users against themselves.

So the long and short of all of this is the request that, with the 4.3.0
release, the standard RRAs within newly created RRDs can handle the
requirement of 5-second samples, 1-hour samples kept for at least one
year, and "keep one-day samples longer than you think you could possibly
need them."

-Scott