Re: [hobbit] More Granular data than 300 second samples, duh!
- To: hobbit (at) hswn.dk
- Subject: Re: [hobbit] More Granular data than 300 second samples, duh!
- From: Stef Coene <stef.coene (at) docum.org>
- Date: Mon, 15 Oct 2007 09:37:53 +0200
- References: <dadd3a320710141919t65818260h1d5bf3d3e7485874 (at) mail.gmail.com> <200710150847.15397.stef.coene (at) docum.org>
- User-agent: KMail/1.9.6
 
On Monday 15 October 2007, Stef Coene wrote:
> On Monday 15 October 2007, Scott Walters wrote:
> > One of the most common requests about the trending of data is "How do I
> > make the charts graph data samples smaller than 300 seconds?"  And the
> > answer has been: you have the source, have fun.
> >
> > The original design decision that Henrik inherited was that larrd should
> > only be used for capacity planning and NOT real-time performance analysis.
> > Do one thing and do it well.
> >
> > I had a thought the other day, and I think we could possibly get the
> > "best of both worlds."
I once had the plan (but not the time) to change how the info is collected.
For vmstat, you can run
vmstat 1 300
(one sample per second, 300 samples).  Process the data on the client, find 
the max, min, and average, and report the result to the hobbit server.  So you 
still have the 5 minute updates (the average is the same number that is 
reported now), but you also get the maximum and minimum over the 5 minutes.
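A rough sketch of what that client-side reduction could look like (this is 
only an illustration, not an existing hobbit client script; it assumes Linux 
vmstat output where field 15 is the CPU idle column, which differs between 
platforms):

#!/bin/sh
# Sketch: sample every second for 5 minutes, reduce on the client,
# and report a single summarized line.  Field 15 ($15) is assumed to
# be the CPU idle column of Linux vmstat; adjust for other platforms.
vmstat 1 300 | awk '
NR <= 3 { next }              # skip the headers and the since-boot line
{
    v = $15
    sum += v; n++
    if (n == 1 || v < min) min = v
    if (n == 1 || v > max) max = v
}
END {
    if (n > 0)
        printf "cpu_idle avg=%.1f min=%d max=%d samples=%d\n",
            sum / n, min, max, n
}'

The one summarized line is all that goes into the normal 5 minute client 
report, so the server-side update interval and RRD layout stay as they are.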
Stef