[hobbit] Re: netapp.pl & xymon 4.2.3 & (no) disk graphs
Peter Welter
peter.welter at gmail.com
Tue Apr 7 15:38:20 CEST 2009
Thank you very much for the quick reply!
I can hardly wait for the next release of the client.
regards
Peter
2009/4/7 Francesco Duranti <fduranti at q8.it>:
> Change the FNPATTERN to:
> FNPATTERN ^disk(.*).rrd
> This will match only the files starting with "disk", instead of every file
> containing the word "disk" (which would also include the xstatdisk rrd files).
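>
> A quick way to see the difference (the two filenames below are just made-up
> examples of the "disk,..." and "xstatdisk,..." naming; check your own rrd
> directory for the real names):
>
> $ printf '%s\n' disk,vol,vol0.rrd xstatdisk,0a.16.rrd | grep -E 'disk(.*).rrd'
> disk,vol,vol0.rrd
> xstatdisk,0a.16.rrd
> $ printf '%s\n' disk,vol,vol0.rrd xstatdisk,0a.16.rrd | grep -E '^disk(.*).rrd'
> disk,vol,vol0.rrd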
>
> To get the xstat statistics graphed you need to put these lines in hobbitgraph.cfg:
>
> [xstatdisk]
> FNPATTERN ^xstatdisk,(.+).rrd
> TITLE Disk busy %
> YAXIS %busy
> -r
> -u 100
> DEF:busy@RRDIDX@=@RRDFN@:disk_busy:AVERAGE
> LINE1:busy@RRDIDX@#@COLOR@:@RRDPARAM@ Disk busy %
> GPRINT:busy@RRDIDX@:LAST: \: %5.1lf (cur)
> GPRINT:busy@RRDIDX@:MAX: \: %5.1lf (max)
> GPRINT:busy@RRDIDX@:MIN: \: %5.1lf (min)
> GPRINT:busy@RRDIDX@:AVERAGE: \: %5.1lf (avg)\n
>
> [xstatqtree]
> FNPATTERN ^xstatqtree,(.+).rrd
> TITLE Qtree Operations (Total)
> YAXIS ops/second
> DEF:nfs_ops@RRDIDX@=@RRDFN@:nfs_ops:AVERAGE
> DEF:cifs_ops@RRDIDX@=@RRDFN@:cifs_ops:AVERAGE
> CDEF:tot_ops@RRDIDX@=nfs_ops@RRDIDX@,cifs_ops@RRDIDX@,+
> LINE2:tot_ops@RRDIDX@#@COLOR@:@RRDPARAM@ ops
> GPRINT:tot_ops@RRDIDX@:LAST: \: %8.1lf%s (cur)
> GPRINT:tot_ops@RRDIDX@:MAX: \: %8.1lf%s (max)
> GPRINT:tot_ops@RRDIDX@:MIN: \: %8.1lf%s (min)
> GPRINT:tot_ops@RRDIDX@:AVERAGE: \: %8.1lf%s (avg)\n
>
> [xstatvolume]
> FNPATTERN ^xstatvolume,(.+).rrd
> TITLE Volume Data Statistics
> YAXIS Bytes/second
> -b 1024
> -r
> DEF:in@RRDIDX@=@RRDFN@:read_data:AVERAGE
> DEF:out@RRDIDX@=@RRDFN@:write_data:AVERAGE
> LINE1:in@RRDIDX@#@COLOR@:@RRDPARAM@ read
> GPRINT:in@RRDIDX@:LAST: \: %8.1lf%s (cur)
> GPRINT:in@RRDIDX@:MAX: \: %8.1lf%s (max)
> GPRINT:in@RRDIDX@:MIN: \: %8.1lf%s (min)
> GPRINT:in@RRDIDX@:AVERAGE: \: %8.1lf%s (avg)\n
> LINE1:out@RRDIDX@#@COLOR@:@RRDPARAM@ write
> GPRINT:out@RRDIDX@:LAST: \: %8.1lf%s (cur)
> GPRINT:out@RRDIDX@:MAX: \: %8.1lf%s (max)
> GPRINT:out@RRDIDX@:MIN: \: %8.1lf%s (min)
> GPRINT:out@RRDIDX@:AVERAGE: \: %8.1lf%s (avg)\n
>
> [xstatlun]
> FNPATTERN ^xstatlun,(.+).rrd
> TITLE Lun Data Statistics
> YAXIS Bytes/second
> -b 1024
> -r
> DEF:in@RRDIDX@=@RRDFN@:read_data:AVERAGE
> DEF:out@RRDIDX@=@RRDFN@:write_data:AVERAGE
> LINE1:in@RRDIDX@#@COLOR@:@RRDPARAM@ read
> GPRINT:in@RRDIDX@:LAST: \: %8.1lf%s (cur)
> GPRINT:in@RRDIDX@:MAX: \: %8.1lf%s (max)
> GPRINT:in@RRDIDX@:MIN: \: %8.1lf%s (min)
> GPRINT:in@RRDIDX@:AVERAGE: \: %8.1lf%s (avg)\n
> LINE1:out@RRDIDX@#@COLOR@:@RRDPARAM@ write
> GPRINT:out@RRDIDX@:LAST: \: %8.1lf%s (cur)
> GPRINT:out@RRDIDX@:MAX: \: %8.1lf%s (max)
> GPRINT:out@RRDIDX@:MIN: \: %8.1lf%s (min)
> GPRINT:out@RRDIDX@:AVERAGE: \: %8.1lf%s (avg)\n
>
> The client is a bit old because in the last few months I haven't had much
> time to update it. I hope to put out a new version in the next few weeks
> with some patches I've received and some things that didn't make it into
> the last release of the client.
>
> Francesco
>
>
> -----Original Message-----
> From: Peter Welter [mailto:peter.welter at gmail.com]
> Sent: Tuesday, April 07, 2009 2:15 PM
> To: hobbit at hswn.dk
> Subject: Re: [hobbit] Re: netapp.pl & xymon 4.2.3 & (no) disk graphs
>
> Hi Francesco,
>
> Thanks for replying. Here is what I have in hobbitgraph.cfg:
>
> [disk]
> FNPATTERN disk(.*).rrd
> TITLE Disk Utilization
> YAXIS % Full
> DEF:p@RRDIDX@=@RRDFN@:pct:AVERAGE
> LINE2:p@RRDIDX@#@COLOR@:@RRDPARAM@
> -u 100
> -l 0
> GPRINT:p@RRDIDX@:LAST: \: %5.1lf (cur)
> GPRINT:p@RRDIDX@:MAX: \: %5.1lf (max)
> GPRINT:p@RRDIDX@:MIN: \: %5.1lf (min)
> GPRINT:p@RRDIDX@:AVERAGE: \: %5.1lf (avg)\n
>
> [disk1]
> FNPATTERN disk(.*).rrd
> TITLE Disk Utilization
> YAXIS Used
> DEF:p@RRDIDX@=@RRDFN@:used:AVERAGE
> CDEF:p@RRDIDX@t=p@RRDIDX@,1024,*
> LINE2:p@RRDIDX@t#@COLOR@:@RRDPARAM@
> -l 0
> GPRINT:p@RRDIDX@:LAST: \: %5.1lf KB (cur)
> GPRINT:p@RRDIDX@:MAX: \: %5.1lf KB (max)
> GPRINT:p@RRDIDX@:MIN: \: %5.1lf KB (min)
> GPRINT:p@RRDIDX@:AVERAGE: \: %5.1lf KB (avg)\n
>
> There is no [xstatdisk] configuration in hobbitgraph.cfg, or in any other
> Hobbit config file.
>
> Regards,
> Peter
>
> 2009/4/7 Francesco Duranti <fduranti at q8.it>:
>> Hi Peter,
>> can you check the hobbitgraph.cfg file and copy/paste the [disk], [disk1]
>> and [xstatdisk] configurations? Those are responsible for the filename
>> patterns that select the rrd files the data is extracted from.
>>
>> To see the graphs of the xstat data you need to add them to the trends
>> page by adding "xstatifnet,xstatdisk,xstatqtree,xstatvolume,xstatlun" to
>> GRAPHS in hobbitserver.cfg (by default this is disabled because you would
>> get a really high number of graphs).
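>>
>> In hobbitserver.cfg that looks roughly like this; the first part of the
>> GRAPHS list is only a placeholder for whatever you already have there, the
>> xstat entries at the end are the ones to append:
>>
>> GRAPHS="la,disk,inode,memory,...,xstatifnet,xstatdisk,xstatqtree,xstatvolume,xstatlun"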
>>
>>
>> Francesco
>>
>>
>> -----Original Message-----
>> From: Peter Welter [mailto:peter.welter at gmail.com]
>> Sent: Tuesday, April 07, 2009 11:37 AM
>> To: hobbit at hswn.dk
>> Subject: [hobbit] Re: netapp.pl & xymon 4.2.3 & (no) disk graphs
>>
>> I just found out by looking at the source (see below) that these xstat
>> files are new in Xymon. They potentially deliver useful extra information
>> about specific hardware disks and so on. But when I simply move all the
>> xstat* files to a sub-directory, all my disk graphs reappear out of the
>> mist... so I guess I won't have to go back to the previous Hobbit
>> release ;-)
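>>
>> For the record, the "workaround" was nothing more than this (the path is
>> only an example, adjust it to wherever your rrd files live, and the name
>> of the directory the files are parked in is arbitrary):
>>
>> cd $BBVAR/rrd/camelot      # assuming the usual $BBVAR/rrd/<hostname> layout
>> mkdir xstat-parked
>> mv xstat*.rrd xstat-parked/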
>>
>> I guess I still need some help here, but I now think Xymon gets confused
>> by all the disk files in the rrd directory: we have a lot of real disks
>> in these NetApps, and those should not be confused with the logical disks
>> that are created on them.
>>
>>
>> myhost:/usr/src/packages/BUILD/xymon-4.2.3/hobbitd/rrd # grep xstat *
>> do_netapp.c: do_netapp_extratest_rrd(hostname,"xstatifnet",ifnetstr,tstamp,netapp_ifnet_params,ifnet_test);
>> do_netapp.c: do_netapp_extratest_rrd(hostname,"xstatqtree",qtreestr,tstamp,netapp_qtree_params,qtree_test);
>> do_netapp.c: do_netapp_extratest_rrd(hostname,"xstataggregate",aggregatestr,tstamp,netapp_aggregate_params,aggregate_test);
>> do_netapp.c: do_netapp_extratest_rrd(hostname,"xstatvolume",volumestr,tstamp,netapp_volume_params,volume_test);
>> do_netapp.c: do_netapp_extratest_rrd(hostname,"xstatlun",lunstr,tstamp,netapp_lun_params,lun_test);
>> do_netapp.c: do_netapp_extratest_rrd(hostname,"xstatdisk",diskstr,tstamp,netapp_disk_params,disk_test);
>>
>>
>> 2009/4/7 Peter Welter <peter.welter at gmail.com>:
>>> Hello all,
>>>
>>> In our environment we also monitor 5 NetApps using (Francesco Duranti's)
>>> netapp.pl, and everything has worked fine for over 1.5 years or so. Last
>>> night I upgraded our environment from Hobbit 4.2.0 to Xymon 4.2.3.
>>>
>>> From that moment on, two of our main filers (camelot & excalibur) do
>>> NOT report any disk graphs anymore; all other graphs work just fine.
>>>
>>> Our SnapVault backup filer (noah), however, is mixed up. It shows 8 disk
>>> graphs, but most of the graphs (48...!) cannot be seen...
>>>
>>> I checked the rrd files (with rrdtool dump ...) and the disk rrd files
>>> are still being updated; they are just not shown anymore in either the
>>> disk or the trends column!?
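>>>
>>> (To repeat that check: something like
>>>
>>> cd $BBVAR/rrd/camelot      # example path, assuming the $BBVAR/rrd/<hostname> layout
>>> rrdtool lastupdate disk,vol,vol0.rrd
>>>
>>> prints the timestamp and values of the most recent update. The path and
>>> the rrd filename here are only illustrations.)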
>>>
>>> While checking the rrd files, I discovered there are many xstat*.rrd
>>> files, like xstatdisk*, xstataggregate*, xstatqtree*, xstatvolume* and
>>> xstatifnet*. I don't think they have anything to do with the above; I
>>> just thought I should mention it.
>>>
>>> Fortunately I have two other filers that are not really used anymore, so
>>> I think I can use those two to pinpoint the main problem. But I really do
>>> need your help in this case.
>>>
>>> -- Thank you, Peter
>>>
>>>
>>> - NO GRAPHS: CAMELOT -
>>>
>>> Filesystem Total Used Available %Used Mounted On
>>> /VUW 9274283368 8314448264 959835104 90% /VUW
>>> /vol/vol0 16777216 696860 16080356 4% /vol/vol0
>>> /vol/VUW_APPLICATIONS 251658240 159270672 92387568 63% /vol/VUW_APPLICATIONS
>>> /vol/VUW_DEPARTMENTS 1417339208 1135870820 281468388 80% /vol/VUW_DEPARTMENTS
>>> /vol/VUW_HOMES 3022918780 2901801520 121117260 96% /vol/VUW_HOMES
>>> /vol/VUW_WORKGROUPS 1202087528 1121291188 80796340 93% /vol/VUW_WORKGROUPS
>>> /vol/VUW_TEMPORARY 8388608 12684 8375924 0% /vol/VUW_TEMPORARY
>>> /vol/VUW_PROFILES 112459776 91835428 20624348 82% /vol/VUW_PROFILES
>>> /vol/VUW_DEPLOYMENT 83886080 704944 83181136 1% /vol/VUW_DEPLOYMENT
>>> /vol/VUW_SOFTWARE 150994944 65930108 85064836 44% /vol/VUW_SOFTWARE
>>> /vol/VUW_HOMES2 419430400 297936168 121494232 71% /vol/VUW_HOMES2
>>> /vol/VUW_HOMES3 125829120 5290488 120538632 4% /vol/VUW_HOMES3
>>> /vol/VUW_PROFILES2 125829120 85202468 40626652 68% /vol/VUW_PROFILES2
>>> /vol/VUW_PROFILES3 41943040 872628 41070412 2% /vol/VUW_PROFILES3
>>>
>>> netapp.pl version 1.10 - column disk lifetime 60, tested in ~ 00:00:00 (max 00:02:00)
>>>
>>> - NO GRAPHS: EXCALIBUR -
>>>
>>> Filesystem Total Used Available %Used Mounted On
>>> /UB 1309580492 1066842844 242737648 81% /UB
>>> /SAP 2976319296 2575401928 400917368 87% /SAP
>>> /ULCN_en_VMware 5238716824 1943827540 3294889284 37% /ULCN_en_VMware
>>> /vol/ub_test_aleph2 188743680 177386228 11357452 94% /vol/ub_test_aleph2
>>> /vol/vol0 16777216 3751760 13025456 22% /vol/vol0
>>> /vol/ub_prod_arc2 150994944 93639628 57355316 62% /vol/ub_prod_arc2
>>> /vol/sra 125829120 92536612 33292508 74% /vol/sra
>>> /vol/ub_prod_aleph2 330301440 230768244 99533196 70% /vol/ub_prod_aleph2
>>> /vol/sld 52428800 31220604 21208196 60% /vol/sld
>>> /vol/eco 314572800 226719852 87852948 72% /vol/eco
>>> /vol/eca 419430400 304858736 114571664 73% /vol/eca
>>> /vol/ecp 524288000 393638904 130649096 75% /vol/ecp
>>> /vol/srp 136314880 81797720 54517160 60% /vol/srp
>>> /vol/sro 104857600 70797956 34059644 68% /vol/sro
>>> /vol/epo 52428800 33606396 18822404 64% /vol/epo
>>> /vol/epa 52428800 35038276 17390524 67% /vol/epa
>>> /vol/epp 52428800 33691756 18737044 64% /vol/epp
>>> /vol/bwo 167772160 136329488 31442672 81% /vol/bwo
>>> /vol/tro 20971520 3338920 17632600 16% /vol/tro
>>> /vol/xio 94371840 81723648 12648192 87% /vol/xio
>>> /vol/bwp 335544320 242048196 93496124 72% /vol/bwp
>>> /vol/solaris 115343360 27299736 88043624 24% /vol/solaris
>>> /vol/tra 20971520 3041028 17930492 15% /vol/tra
>>> /vol/trp 20971520 3999036 16972484 19% /vol/trp
>>> /vol/xip 78643200 53005168 25638032 67% /vol/xip
>>> /vol/dba 78643200 23371816 55271384 30% /vol/dba
>>> /vol/ub_test_aleph3 335544320 154981876 180562444 46% /vol/ub_test_aleph3
>>> /vol/vmware_test_marc_hooy 234881024 47024396 187856628 20% /vol/vmware_test_marc_hooy
>>> /vol/ulcn_mailstore 1089260752 821614700 267646052 75% /vol/ulcn_mailstore
>>> /vol/ulcn_surf 52428800 2432 52426368 0% /vol/ulcn_surf
>>> /vol/sapbwa 157286400 135781296 21505104 86% /vol/sapbwa
>>> /vol/sapxia 136314880 94309300 42005580 69% /vol/sapxia
>>> /vol/ulcn_ota_mailstore 20971520 104 20971416 0% /vol/ulcn_ota_mailstore
>>> /vol/saptra 16777216 3039008 13738208 18% /vol/saptra
>>>
>>> netapp.pl version 1.10 - column disk lifetime 60, tested in ~ 00:00:00 (max 00:02:00)
>>>
>>> - SOME GRAPHS: NOAH (the volumes that still have graphs are marked with ' ***' at the end) -
>>>
>>> Filesystem Total Used Available %Used Mounted On
>>> /backupaggr_2_kog 8162374688 1851811324 6310563364 23% /backupaggr_2_kog
>>> /backupaggr_3_kog 8162374688 315328000 7847046688 4% /backupaggr_3_kog
>>> /backupaggr_kog 12885604304 11654758296 1230846008 90% /backupaggr_kog
>>> /vol/vol0 16777216 797048 15980168 5% /vol/vol0
>>> /vol/ULCN_MAILSTORE 1717567488 1343487832 374079656 78% /vol/ULCN_MAILSTORE
>>> /vol/SAP_PENSITO 104857600 28819648 76037952 27% /vol/SAP_PENSITO
>>> /vol/VUWNAS02_APPLICATIONS 262144000 161875620 100268380 62% /vol/VUWNAS02_APPLICATIONS
>>> /vol/SAP_SAPBWA 419430400 272891352 146539048 65% /vol/SAP_SAPBWA
>>> /vol/SAP_SAPXIA 120586240 114195504 6390736 95% /vol/SAP_SAPXIA
>>> /vol/UFB_UFBSVR01 104857600 14564464 90293136 14% /vol/UFB_UFBSVR01
>>> /vol/IBS_NIHIL 52428800 2303108 50125692 4% /vol/IBS_NIHIL ***
>>> /vol/IGRSFS101 471859200 141582412 330276788 30% /vol/IGRSFS101
>>> /vol/IBS_ORWELL 104857600 26580588 78277012 25% /vol/IBS_ORWELL
>>> /vol/IBS_NEXUS 52428800 12342608 40086192 24% /vol/IBS_NEXUS ***
>>> /vol/IBS_NUNQUAM 52428800 120 52428680 0% /vol/IBS_NUNQUAM ***
>>> /vol/VUW_EXCHANGE 157286400 295364 156991036 0% /vol/VUW_EXCHANGE
>>> /vol/IBS_NUSQUAM 52428800 1892756 50536044 4% /vol/IBS_NUSQUAM ***
>>> /vol/IBS_NAGGER 31457280 1378740 30078540 4% /vol/IBS_NAGGER ***
>>> /vol/VUW_VUWSDC02 524288000 491685452 32602548 94% /vol/VUW_VUWSDC02
>>> /vol/VUWNAS02_DEPARTMENTS 1287651328 1213016980 74634348 94% /vol/VUWNAS02_DEPARTMENTS
>>> /vol/VUWNAS02_HOMES 3328180224 3211611536 116568688 96% /vol/VUWNAS02_HOMES
>>> /vol/GAMMA 68157440 218848 67938592 0% /vol/GAMMA ***
>>> /vol/VUWNAS02_WORKGROUPS 1258291200 1184439936 73851264 94% /vol/VUWNAS02_WORKGROUPS
>>> /vol/VUWNAS02_PROFILES 314572800 296196216 18376584 94% /vol/VUWNAS02_PROFILES
>>> /vol/VUWNAS02_DEPLOYMENT 419430400 15745112 403685288 4% /vol/VUWNAS02_DEPLOYMENT
>>> /vol/VUWNAS02_SOFTWARE 125829120 66229432 59599688 53% /vol/VUWNAS02_SOFTWARE
>>> /vol/UB_PROD_ALEPH2 838860800 516906036 321954764 62% /vol/UB_PROD_ALEPH2
>>> /vol/VUW_VUWSDC01 545259520 507335816 37923704 93% /vol/VUW_VUWSDC01
>>> /vol/VUW_VUWSDC04 73400320 62864376 10535944 86% /vol/VUW_VUWSDC04
>>> /vol/VUW_VUWSMC02 62914560 50989600 11924960 81% /vol/VUW_VUWSMC02
>>> /vol/VUW_VUWSOM01 15728640 12395368 3333272 79% /vol/VUW_VUWSOM01
>>> /vol/UB_PROD_ARC2 503316480 454875292 48441188 90% /vol/UB_PROD_ARC2
>>> /vol/VUW_VUWSWP02 47185920 35898380 11287540 76% /vol/VUW_VUWSWP02
>>> /vol/VUW_VUWSWP03 314572800 266760000 47812800 85% /vol/VUW_VUWSWP03
>>> /vol/VUW_VUWSDS03 104857600 24002548 80855052 23% /vol/VUW_VUWSDS03
>>> /vol/VUW_VUWSDC03 676331520 628669724 47661796 93% /vol/VUW_VUWSDC03
>>> /vol/VUW_VUWSDC05 104857600 45930952 58926648 44% /vol/VUW_VUWSDC05
>>> /vol/VUW_VUWSMC01 157286400 84099620 73186780 53% /vol/VUW_VUWSMC01
>>> /vol/IBS_INSULA 104857600 16690432 88167168 16% /vol/IBS_INSULA ***
>>> /vol/VUW_VUWSOM02 524288000 361859348 162428652 69% /vol/VUW_VUWSOM02
>>> /vol/VUW_VUWSOM03 20971520 10001972 10969548 48% /vol/VUW_VUWSOM03
>>> /vol/UB_DELOS 104857600 47352240 57505360 45% /vol/UB_DELOS
>>> /vol/ICLON_ICLAPP01 73400320 53232408 20167912 73% /vol/ICLON_ICLAPP01
>>> /vol/ICLON_ICLWEB01 62914560 46468948 16445612 74% /vol/ICLON_ICLWEB01
>>> /vol/VMWARE_VIRTUALCENTER 104857600 19512 104838088 0% /vol/VMWARE_VIRTUALCENTER
>>> /vol/VUWNAS02_HOMES2 576716800 355405576 221311224 62% /vol/VUWNAS02_HOMES2
>>> /vol/VUWNAS02_HOMES3 209715200 6830260 202884940 3% /vol/VUWNAS02_HOMES3
>>> /vol/VUWNAS02_PROFILES2 157286400 140940584 16345816 90% /vol/VUWNAS02_PROFILES2
>>> /vol/VUWNAS02_PROFILES3 52428800 3244172 49184628 6% /vol/VUWNAS02_PROFILES3
>>> /vol/SAP_FERA 419430400 86547324 332883076 21% /vol/SAP_FERA
>>> /vol/SAP_PECUNIA 146800640 39624548 107176092 27% /vol/SAP_PECUNIA
>>> /vol/VOIP_DORIS 104857600 68756404 36101196 66% /vol/VOIP_DORIS
>>> /vol/VOIP_DIONYSUS 83886080 33812768 50073312 40% /vol/VOIP_DIONYSUS
>>> /vol/VOIP_TELESTO 104857600 65484616 39372984 62% /vol/VOIP_TELESTO
>>> /vol/TNB_HESTIA 209715200 94268 209620932 0% /vol/TNB_HESTIA
>>> /vol/UB_TEST_ALEPH 419430400 195480256 223950144 47% /vol/UB_TEST_ALEPH
>>> /vol/WEB_KERESIS 104857600 4617936 100239664 4% /vol/WEB_KERESIS
>>> /vol/ERBIS_THOR 157286400 23998732 133287668 15% /vol/ERBIS_THOR
>>> /vol/UB_ARCHIMEDES 52428800 1580596 50848204 3% /vol/UB_ARCHIMEDES
>>> /vol/SAP_SAPTRA 31457280 2939920 28517360 9% /vol/SAP_SAPTRA
>>> /vol/WEB_MIRAMAR 52428800 1484252 50944548 3% /vol/WEB_MIRAMAR
>>> /vol/WEB_PENTHARIAN 52428800 1472288 50956512 3% /vol/WEB_PENTHARIAN
>>> /vol/WEB_APOLLYON 52428800 2899836 49528964 6% /vol/WEB_APOLLYON
>>> /vol/IBS_NIRVANA 62914560 1231284 61683276 2% /vol/IBS_NIRVANA ***
>>> /vol/IGRSFS001_ORG 367001600 136379928 230621672 37% /vol/IGRSFS001_ORG
>>> /vol/IGRSFS001 142606336 116948784 25657552 82% /vol/IGRSFS001
>>> /vol/koala 104857600 14221836 90635764 14% /vol/koala
>>>
>>> netapp.pl version 1.10 - column disk lifetime 60, tested in ~ 00:00:00 (max 00:02:00)
>>>
>>>
>>> Normal behaviour:
>>> nas-node1:
>>> Filesystem Total Used Available %Used Mounted On
>>> /vol/vuwanas01 228589568 190858164 37731404 83% /vol/vuwanas01
>>> /vol/vol0 30198992 1039268 29159724 3% /vol/vol0
>>>
>>> netapp.pl version 1.10 - column disk lifetime 60, tested in ~ 00:00:00 (max 00:02:00)
>>> nas-node2:
>>> Filesystem Total Used Available %Used Mounted On
>>> /vol/vol0 30198992 548304 29650688 2% /vol/vol0
>>>
>>> netapp.pl version 1.10 - column disk lifetime 60, tested in ~ 00:00:00 (max 00:02:00)
>>>