Re: [hobbit] Graph/Devmon Help Needed
- To: hobbit (at) hswn.dk
- Subject: Re: [hobbit] Graph/Devmon Help Needed
- From: Brian Daly <brian.daly (at) criticalpath.net>
- Date: Thu, 08 Jan 2009 10:25:00 +0000
- References: <495B967B.60308 (at) makelofine.org> <20090107133115.GB24945 (at) osiris.hswn.dk> <4964B147.4040206 (at) criticalpath.net> <200901071623.19845.bgmilne (at) staff.telkomsa.net>
- User-agent: Thunderbird 2.0.0.19 (Windows/20081209)
Buchan Milne wrote:
On Wednesday 07 January 2009 15:42:31 Brian Daly wrote:
Hi,
I have configured Hobbit to monitor CPU and disk space on a Cisco Call
Manager device using devmon/SNMP.
I assume you created a new template for this device? I can help more if you
post the template (though this is more appropriate for the devmon list). Also,
what OS does it run? Windows Server?
(While we have Cisco Call Managers in the company, I don't monitor or have any
access to them, but I've been meaning to work on templates for Windows servers
for devmon, and those I can test myself.)
Also, I would prefer to let everyone benefit from the investment users make in
creating templates, so once it is working, please consider sending it to me to
include (or file a bug on the devmon SF tracker and attach the templates to
it).
The CPU test returns the load of each processor as a percentage
(INTEGER), and Hobbit displays these values; however, no graph is created
automatically.
Most of the current Cisco templates shipped with devmon should result in a
working CPU graph; I have them for cisco-6509, cisco-7207, cisco-asa, etc.
E.g.:
http://devmon.svn.sourceforge.net/viewvc/devmon/trunk/templates/cisco-6509/cpu/message?revision=28&view=markup
Creating a custom graph is my last option, but I would
hope that, as with other Cisco devices, a graph could be created
automatically using the values returned by devmon.
For disk usage I have worked out the percentage used with some simple
MATH in the transforms file, and this is also displayed properly on
the disk page for this device; however, the graph is not displaying these
values correctly. Before I started using the transforms to get the
percentage of disk space used, devmon was configured to simply get the
number of bytes used on one volume. This created a graph (although it
was constantly at 0%). After tidying up the test to list all four
volumes and the percentage of disk space used, the graph is no longer
reporting any values.
The linux-openwrt template works nicely for me (the formatting isn't quite the
same as what the Hobbit client sends, but graphing works fine) for disks on
Linux (on a WRT54GL and a normal Linux host) monitored via SNMP:
http://devmon.svn.sourceforge.net/viewvc/devmon/trunk/templates/linux-openwrt/disk/message?revision=38&view=markup
(the "plain" option for TABLE allows one to try and get something closer to
what the Hobbit rrd modules expect).
Can somebody help me get the graph to display the percentage of disk
space used for the four volumes I am monitoring, and to start graphing the
percentage of CPU usage for the two CPUs?
If you can't come right with the examples above, it would help to see some of
the data you get via SNMP, and your existing template (at least the 'message'
files).
Note, these are the two devmon tests that can create graphs without the devmon
collector for Hobbit.
Regards,
Buchan
I created a new template for this device, a Cisco-7825. The latest
Call Manager software runs a Linux-based OS, but stripped down (so you
cannot install the Hobbit client or anything else). However, the hardware is
actually HP, not Cisco, so the usual Cisco MIBs do not work for CPU load
average etc.; I have to use the Host Resources MIB, such as found at
http://www.oidview.com/mibs/0/HOST-RESOURCES-V2-MIB.html. When I get
this working I will post the template. I am fairly new to Hobbit, so
apologies if I am leaving out any obvious information. Here is the disk
portion of the template I have just tried, but the disk test has now gone
purple and is no longer reporting in. Attached is the template that does
work, though without the graph.
transforms -
DiskSize = {hrStorageAllocationUnits} * {hrStorageSize} / 1024
DiskBlocks = {hrStorageAllocationUnits}
DiskBlockSize = {hrStorageSize}
DiskSizeUsed = {hrStorageUsed} * {hrStorageSize} / 1024
DiskAvail = {hrStorageSize} - {hrStorageUsed}
DiskPerUse = {DiskSizeUsed} * 100 / {DiskSize}
thresholds -
DiskPerUse : yellow : 90 : Disk utilization is high
DiskPerUse : red : 95 : Disk utilization is critical
message -
TABLE:plain,noalarmsmsg
Filesystem 1024-blocks Used Available Capacity Mounted on
{hrStorageDescr} {DiskSize} {DiskSizeUsed} {DiskAvail} {DiskPerUse}% {hrStorageDescr} {DiskPerUse.color}
oids - (tried these as both leaf and branch)
hrStorageDescr : 1.3.6.1.2.1.25.2.3.1.3 : leaf
hrStorageSize : 1.3.6.1.2.1.25.2.3.1.5 : leaf
hrStorageUsed : 1.3.6.1.2.1.25.2.3.1.6 : leaf
hrStorageAllocationUnits : 1.3.6.1.2.1.25.2.3.1.4 : leaf
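A quick arithmetic check (my own rough sketch, plugging in the values reported
for / in the snmpwalk output below) suggests the DiskSizeUsed transform
multiplies {hrStorageUsed} by {hrStorageSize}, whereas converting blocks to
kilobytes, as DiskSize does, would multiply by {hrStorageAllocationUnits}. I
also notice the working template further down writes its transforms as
"name : MATH : expression" while these use "="; I don't know whether both
forms are accepted. In plain Python:

# Rough sanity check of the transforms above, using the values for "/"
# (hrStorage index 3) from the snmpwalk output below. Unit assumptions:
# hrStorageAllocationUnits is bytes per block, hrStorageSize/Used are in
# blocks, so dividing by 1024 gives kilobytes.
alloc_units = 4096     # hrStorageAllocationUnits.3
size_blocks = 3079486  # hrStorageSize.3
used_blocks = 2740423  # hrStorageUsed.3

disk_size = alloc_units * size_blocks / 1024        # DiskSize as posted
used_as_posted = used_blocks * size_blocks / 1024   # DiskSizeUsed as posted
used_with_units = used_blocks * alloc_units / 1024  # with AllocationUnits instead

print("DiskSize        = %d KB" % disk_size)
print("Used (posted)   = %d KB -> %.0f%% of DiskSize" % (
    used_as_posted, used_as_posted * 100.0 / disk_size))
print("Used (w/ units) = %d KB -> %.2f%% of DiskSize" % (
    used_with_units, used_with_units * 100.0 / disk_size))
# The second variant gives roughly 89% used for /, in line with df-style
# output; the posted formula gives a figure far above 100%.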
The OID 1.3.6.1.2.1.25.2.3.1 gives the following values:
HOST-RESOURCES-MIB::hrStorageIndex.1 = INTEGER: 1
HOST-RESOURCES-MIB::hrStorageIndex.2 = INTEGER: 2
HOST-RESOURCES-MIB::hrStorageIndex.3 = INTEGER: 3
HOST-RESOURCES-MIB::hrStorageIndex.4 = INTEGER: 4
HOST-RESOURCES-MIB::hrStorageIndex.5 = INTEGER: 5
HOST-RESOURCES-MIB::hrStorageIndex.6 = INTEGER: 6
HOST-RESOURCES-MIB::hrStorageIndex.7 = INTEGER: 7
HOST-RESOURCES-MIB::hrStorageIndex.8 = INTEGER: 8
HOST-RESOURCES-MIB::hrStorageIndex.9 = INTEGER: 9
HOST-RESOURCES-MIB::hrStorageIndex.10 = INTEGER: 10
HOST-RESOURCES-MIB::hrStorageType.1 = OID: HOST-RESOURCES-TYPES::hrStorageRam
HOST-RESOURCES-MIB::hrStorageType.2 = OID: HOST-RESOURCES-TYPES::hrStorageVirtualMemory
HOST-RESOURCES-MIB::hrStorageType.3 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
HOST-RESOURCES-MIB::hrStorageType.4 = OID: HOST-RESOURCES-TYPES::hrStorageOther
HOST-RESOURCES-MIB::hrStorageType.5 = OID: HOST-RESOURCES-TYPES::hrStorageOther
HOST-RESOURCES-MIB::hrStorageType.6 = OID: HOST-RESOURCES-TYPES::hrStorageOther
HOST-RESOURCES-MIB::hrStorageType.7 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
HOST-RESOURCES-MIB::hrStorageType.8 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
HOST-RESOURCES-MIB::hrStorageType.9 = OID: HOST-RESOURCES-TYPES::hrStorageFixedDisk
HOST-RESOURCES-MIB::hrStorageType.10 = OID: HOST-RESOURCES-TYPES::hrStorageOther
HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: Physical RAM
HOST-RESOURCES-MIB::hrStorageDescr.2 = STRING: Virtual Memory
HOST-RESOURCES-MIB::hrStorageDescr.3 = STRING: /
HOST-RESOURCES-MIB::hrStorageDescr.4 = STRING: /proc
HOST-RESOURCES-MIB::hrStorageDescr.5 = STRING: /dev/pts
HOST-RESOURCES-MIB::hrStorageDescr.6 = STRING: /proc/bus/usb
HOST-RESOURCES-MIB::hrStorageDescr.7 = STRING: /partB
HOST-RESOURCES-MIB::hrStorageDescr.8 = STRING: /common
HOST-RESOURCES-MIB::hrStorageDescr.9 = STRING: /grub
HOST-RESOURCES-MIB::hrStorageDescr.10 = STRING: /dev/shm
HOST-RESOURCES-MIB::hrStorageAllocationUnits.1 = INTEGER: 4096 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.2 = INTEGER: 4096 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.3 = INTEGER: 4096 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.4 = INTEGER: 1024 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.5 = INTEGER: 1024 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.6 = INTEGER: 1024 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.7 = INTEGER: 4096 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.8 = INTEGER: 4096 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.9 = INTEGER: 1024 Bytes
HOST-RESOURCES-MIB::hrStorageAllocationUnits.10 = INTEGER: 4096 Bytes
HOST-RESOURCES-MIB::hrStorageSize.1 = INTEGER: 513865
HOST-RESOURCES-MIB::hrStorageSize.2 = INTEGER: 512062
HOST-RESOURCES-MIB::hrStorageSize.3 = INTEGER: 3079486
HOST-RESOURCES-MIB::hrStorageSize.4 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageSize.5 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageSize.6 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageSize.7 = INTEGER: 3079478
HOST-RESOURCES-MIB::hrStorageSize.8 = INTEGER: 38458713
HOST-RESOURCES-MIB::hrStorageSize.9 = INTEGER: 256665
HOST-RESOURCES-MIB::hrStorageSize.10 = INTEGER: 256932
HOST-RESOURCES-MIB::hrStorageUsed.1 = INTEGER: 262387
HOST-RESOURCES-MIB::hrStorageUsed.2 = INTEGER: 210653
HOST-RESOURCES-MIB::hrStorageUsed.3 = INTEGER: 2740423
HOST-RESOURCES-MIB::hrStorageUsed.4 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageUsed.5 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageUsed.6 = INTEGER: 0
HOST-RESOURCES-MIB::hrStorageUsed.7 = INTEGER: 2750063
HOST-RESOURCES-MIB::hrStorageUsed.8 = INTEGER: 7848248
HOST-RESOURCES-MIB::hrStorageUsed.9 = INTEGER: 8415
HOST-RESOURCES-MIB::hrStorageUsed.10 = INTEGER: 12216
HOST-RESOURCES-MIB::hrStorageAllocationFailures.1 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.2 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.3 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.4 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.5 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.6 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.7 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.8 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.9 = Counter32: 0
HOST-RESOURCES-MIB::hrStorageAllocationFailures.10 = Counter32: 0
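Plugging the four fixed-disk rows from that walk into a plain used/size
percentage (the same arithmetic as the DiskUsageN MATH transforms in the
attached template below) gives roughly the numbers I would expect the graph
to show:

# Percentage used for the four monitored volumes, computed directly from the
# snmpwalk values above (hrStorageUsed * 100 / hrStorageSize), i.e. the same
# calculation as the DiskUsageN transforms below.
volumes = [
    ("/",       3079486,  2740423),  # hrStorageSize.3, hrStorageUsed.3
    ("/partB",  3079478,  2750063),  # index 7
    ("/common", 38458713, 7848248),  # index 8
    ("/grub",   256665,   8415),     # index 9
]
for mount, size, used in volumes:
    print("%-8s %6.2f%% used" % (mount, used * 100.0 / size))
# Expected output: roughly 89%, 89%, 20% and 3% respectively.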
message -
{DiskUsage3.color} Volume {hrStorageDescr3} {DiskUsage3}% full
{DiskUsage7.color} Volume {hrStorageDescr7} {DiskUsage7}% full
{DiskUsage8.color} Volume {hrStorageDescr8} {DiskUsage8}% full
{DiskUsage9.color} Volume {hrStorageDescr9} {DiskUsage9}% full
transforms -
DiskUsage3percent : MATH : {hrStorageSize3} / 100
DiskUsage3 : MATH : {hrStorageUsed3} / {DiskUsage3percent}
DiskUsage7percent : MATH : {hrStorageSize7} / 100
DiskUsage7 : MATH : {hrStorageUsed7} / {DiskUsage7percent}
DiskUsage8percent : MATH : {hrStorageSize8} / 100
DiskUsage8 : MATH : {hrStorageUsed8} / {DiskUsage8percent}
DiskUsage9percent : MATH : {hrStorageSize9} / 100
DiskUsage9 : MATH : {hrStorageUsed9} / {DiskUsage9percent}
thresholds -
DiskUsage3 : red : >91 : Disk utilisation for {hrStorageDescr3} is critical: {DiskUsage3}%
DiskUsage3 : yellow : >90 : Disk utilisation for {hrStorageDescr3} is high: {DiskUsage3}%
DiskUsage3 : green : : Disk utilisation for {hrStorageDescr3} is nominal: {DiskUsage3}%
DiskUsage7 : red : >91 : Disk utilisation for {hrStorageDescr7} is critical: {DiskUsage7}%
DiskUsage7 : yellow : >90 : Disk utilisation for {hrStorageDescr7} is high: {DiskUsage7}%
DiskUsage7 : green : : Disk utilisation for {hrStorageDescr7} is nominal: {DiskUsage7}%
DiskUsage8 : red : >91 : Disk utilisation for {hrStorageDescr8} is critical: {DiskUsage8}%
DiskUsage8 : yellow : >90 : Disk utilisation for {hrStorageDescr8} is high: {DiskUsage8}%
DiskUsage8 : green : : Disk utilisation for {hrStorageDescr8} is nominal: {DiskUsage8}%
DiskUsage9 : red : >91 : Disk utilisation for {hrStorageDescr9} is critical: {DiskUsage9}%
DiskUsage9 : yellow : >90 : Disk utilisation for {hrStorageDescr9} is high: {DiskUsage9}%
DiskUsage9 : green : : Disk utilisation for {hrStorageDescr9} is nominal: {DiskUsage9}%
oids -
hrStorageDescr3 : 1.3.6.1.2.1.25.2.3.1.3.3 : leaf
hrStorageSize3 : 1.3.6.1.2.1.25.2.3.1.5.3 : leaf
hrStorageUsed3 : 1.3.6.1.2.1.25.2.3.1.6.3 : leaf
hrStorageDescr7 : 1.3.6.1.2.1.25.2.3.1.3.7 : leaf
hrStorageSize7 : 1.3.6.1.2.1.25.2.3.1.5.7 : leaf
hrStorageUsed7 : 1.3.6.1.2.1.25.2.3.1.6.7 : leaf
hrStorageDescr8 : 1.3.6.1.2.1.25.2.3.1.3.8 : leaf
hrStorageSize8 : 1.3.6.1.2.1.25.2.3.1.5.8 : leaf
hrStorageUsed8 : 1.3.6.1.2.1.25.2.3.1.6.8 : leaf
hrStorageDescr9 : 1.3.6.1.2.1.25.2.3.1.3.9 : leaf
hrStorageSize9 : 1.3.6.1.2.1.25.2.3.1.5.9 : leaf
hrStorageUsed9 : 1.3.6.1.2.1.25.2.3.1.6.9 : leaf
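Finally, since the template above hard-codes storage indices 3, 7, 8 and 9,
here is a rough helper (my own sketch, not part of devmon or the template; it
assumes net-snmp's snmpwalk is on the PATH, SNMP v2c with a read community,
and that the HOST-RESOURCES MIB files are installed so the output uses
symbolic names, as in the walk above) to list which hrStorage entries are
fixed disks and to show the per-processor load values (hrProcessorLoad) that
a CPU test could graph:

#!/usr/bin/env python
# Hypothetical helper; host and community below are placeholders.
import subprocess
import sys

host = sys.argv[1] if len(sys.argv) > 1 else "callmanager.example.com"
community = "public"

def walk(oid):
    out = subprocess.check_output(
        ["snmpwalk", "-v2c", "-c", community, host, oid])
    return out.decode("ascii", "replace").splitlines()

# hrStorageType (1.3.6.1.2.1.25.2.3.1.2): pick out the fixed-disk rows, e.g.
# "HOST-RESOURCES-MIB::hrStorageType.7 = OID: ...::hrStorageFixedDisk"
fixed = []
for line in walk("1.3.6.1.2.1.25.2.3.1.2"):
    if "hrStorageFixedDisk" in line:
        index = line.split("=")[0].strip().rsplit(".", 1)[1]
        fixed.append(index)
print("Fixed-disk hrStorage indices: " + ", ".join(fixed))

# hrProcessorLoad (1.3.6.1.2.1.25.3.3.1.2): average load of each processor
# over the last minute, as a percentage.
for line in walk("1.3.6.1.2.1.25.3.3.1.2"):
    print(line)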