
Re: [hobbit] Re: hobbit_rrd stops working after about 1 hour



Naeem.Maqsud wrote:
>The hobbitd_rrd process was in "uninterruptible sleep" state most of
>the time, with high iowait on the CPU it was running on.  I suspected
>the problem might be due to disk I/O while updating the RRDs for the
>2000 hosts.  I created a tmpfs filesystem and copied the rrd directory
>into it.
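
As a minimal sketch of that kind of tmpfs setup - the RRD path, the
on-disk copy and the tmpfs size below are assumptions, not taken from
the original setup - it could look like this on Linux:

    # Assumed RRD location; adjust to your Hobbit installation.
    RRD_DIR=/var/lib/hobbit/rrd
    DISK_COPY=/var/lib/hobbit/rrd.disk

    # Keep the on-disk copy, mount a RAM-backed filesystem in its
    # place, and seed it with the existing RRD files.
    mv "$RRD_DIR" "$DISK_COPY"
    mkdir "$RRD_DIR"
    mount -t tmpfs -o size=512m tmpfs "$RRD_DIR"
    cp -a "$DISK_COPY"/. "$RRD_DIR"/

    # tmpfs is volatile: copy the files back to disk regularly
    # (e.g. from cron) and again before any shutdown.
    rsync -a "$RRD_DIR"/ "$DISK_COPY"/

The obvious trade-off is that a crash loses whatever changed since the
last copy back to disk.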

You might want to look at your disk hardware and the software setup. 
I have 2000 hosts myself, with a total of just over 18000 RRD-files that
are updated every 5 minutes. vmstat tells me this system spends about
15% of its time in I/O wait.
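
If you want to check the same thing, the figure comes straight out of
vmstat (shown here only as an example of what to look at):

    # Sample every 5 seconds; the "wa" column under "cpu" is the
    # percentage of time spent waiting on I/O, and "b" counts processes
    # in uninterruptible sleep - what Naeem saw for hobbitd_rrd.
    vmstat 5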

This is a Debian/Linux system on Sun hardware - two SCSI disks in a
raid-1 config (Linux software raid mirror) with a reiserfs filesystem.


On Mon, Aug 22, 2005 at 10:30:52PM +0200, Olivier Beau wrote:
> I have the feeling that hobbitd_rrd could cause performance issues
> for large sites and may not be fully optimized... Henrik?

It might become a problem - I agree with that. 

The solutions are probably going to be the same ones you would use
with any application that has a high I/O load - mail and news servers
face similar problems. So your choice of filesystem and mount options
becomes important.

For Linux systems you'd definitely want one of the better-performing
filesystems, e.g. reiserfs or JFS. In my experience ext2/3 is slower,
although there are tons of benchmarks pointing in whatever direction
you like. Using the "noatime,nodiratime" mount options is also
recommended, as is the reiserfs "notail" option.
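
As a sketch only - the device name and mount point are assumptions -
a matching /etc/fstab entry for a reiserfs filesystem holding the RRD
files would be:

    # No atime/diratime updates, no tail-packing on the RRD volume.
    /dev/md0  /var/lib/hobbit/rrd  reiserfs  noatime,nodiratime,notail  0 0

    # The atime options can also be applied on the fly:
    mount -o remount,noatime,nodiratime /var/lib/hobbit/rrd

Note that "notail" only affects files written after the option takes
effect.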

For Solaris ufs filesystems, I am told the "logging" mount option
(UFS journaling) will boost performance significantly, although I have
never tried it myself.
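
Purely as an illustration - the device and mount point are again
assumptions - enabling UFS logging looks like this:

    # One-off mount with logging enabled:
    mount -F ufs -o logging /dev/dsk/c0t0d0s5 /var/hobbit

    # Or permanently, via the mount-options field in /etc/vfstab:
    /dev/dsk/c0t0d0s5  /dev/rdsk/c0t0d0s5  /var/hobbit  ufs  2  yes  logging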


Since all of the hobbitd_rrd disk activity is done by the rrdtool
library, there's not a whole lot Hobbit can do to boost throughput -
at least not as long as we stick with rrdtool as the back-end for
graphs. And I have no intention of changing that.


Regards,
Henrik