[hobbit] Re: afs_fsmon ext-script

Martin Flemming martin.flemming at desy.de
Mon Nov 17 08:53:09 CET 2008


Hi !

Problem solved :-)


I had to remove all entries that were empty in my output

freepackets,calls,sentbusy,sentabort,sentackall,sentchallenge,sentresponse

from my afss graph definition in hobbitgraph.cfg

  Mon Nov 17 08:02:52 2008


freepackets:
callswaiting: 0
threadsidle: 123
serverconnections: 9911
clientconnections: 2011
peerstructs: 1603
callstructs: 16914
freecalls: 5122
packetallocation failures: 0
calls:
allocs: 1895951541
readdata: 94890489
readack: 249968598
readdup: 69775
readspurious: 18528
readbusy: 451
readabort: 10155061
readackall: 6586
readchallenge: 573171
readresponse: 1980478
sentdata: 1956795228
sentresent: 8579980
sentack: 834420997
sentbusy:
sentabort:
sentackall:
sentchallenge:
sentresponse:



and after that, I got the pretty graph!

I solved the issue with the following debug command from the mailing list:

QUERY_STRING="host=afs-node-1&service=ncv:afss&graph=hourly&action=view" REQUEST_METHOD=GET SCRIPT_NAME="" REQUEST_URI="" ~hobbit/cgi-bin/hobbitgraph.sh

and the output was

> Expires: Mon, 17 Nov 2008 07:16:46 GMT
>
> Content-type: text/html
>
> <html><head><title>Invalid request</title></head>
> <body>No DS called 'freepackets' in 'afss.rrd'</body></html>
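The error message names only the first DS that the graph definition asks for but the RRD lacks. For what it's worth, a quick way to find all such mismatches is to compare the DS names referenced by the DEF lines against those the RRD really contains. This is a hypothetical cross-check, not something from the thread; the config and `rrdtool info` output are simulated with here-docs so the pipeline runs anywhere (with a real file you would feed it `rrdtool info afss.rrd` instead). Run it with bash:

```shell
#!/usr/bin/env bash
# DS names the graph definition references (sample DEF lines stand in
# for the real hobbitgraph.cfg [afss] section):
wanted=$(grep -o 'afss\.rrd:[a-z]*' <<'EOF' | cut -d: -f2 | sort -u
DEF:callswaiting=afss.rrd:callswaiting:AVERAGE
DEF:freepackets=afss.rrd:freepackets:AVERAGE
EOF
)
# DS names actually present (sample line stands in for `rrdtool info` output):
present=$(grep -o 'ds\[[a-z]*\]' <<'EOF' | sed 's/ds\[\(.*\)\]/\1/' | sort -u
ds[callswaiting].type = "GAUGE"
EOF
)
# Names the graph wants but the RRD does not have:
comm -23 <(printf '%s\n' "$wanted") <(printf '%s\n' "$present")
# prints: freepackets
```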



.. good to know now that hobbitgraph.cgi/hobbitgraph.cfg are very sensitive to empty values ...
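The behaviour is easy to reproduce with a tiny sketch of NCV-style "name: value" parsing (hypothetical, not Hobbit's actual code): a line with an empty value yields no data source at all, so the corresponding DS never gets created in the RRD, and any DEF referencing it fails later:

```shell
# Only lines with a non-empty value survive as DS names, which matches
# the template seen in rrd-status.log (no freepackets, no sentbusy, ...).
awk -F': *' 'NF == 2 && $2 != "" { print $1 }' <<'EOF'
freepackets:
callswaiting: 0
threadsidle: 123
sentbusy:
EOF
# prints only:
# callswaiting
# threadsidle
```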


cheers,
 	martin




On Sun, 16 Nov 2008, Martin Flemming wrote:

>
> Hi !
>
>
> Grmmph,
>
> .. something is missing in my environment ...
>
> my afss.rrd looks good, but I get no graph for it ...
>
>
> I can see with
>
> bbcmd hobbitd_channel --debug --channel=status hobbitd_capture --hosts=afs-node-1
>
> ## @@status#33428/afs-node-1 1226767038.360131 X.X.X.X  afs-node-1 afss 
> 1226768838 green  green 1226683362 0  0  1226766743 0 sunos AFS
> status afs-node-1.afss green Sat Nov 15 17:37:18 2008
>
>
> freepackets:
> callswaiting: 0
> threadsidle: 123
> serverconnections: 9786
> clientconnections: 1981
> peerstructs: 1590
> callstructs: 16914
> freecalls: 5237
> packetallocation failures: 0
> calls:
> allocs: 1810578738
> readdata: 66997530
> readack: 213812807
> readdup: 69511
> readspurious: 18444
> readbusy: 451
> readabort: 10128366
> readackall: 6553
> readchallenge: 567192
> readresponse: 1963434
> sentdata: 1844522702
> sentresent: 8569645
> sentack: 827892863
> sentbusy:
> sentabort:
> sentackall:
> sentchallenge:
> sentresponse:
>
>
> .. everything should be ok ...
>
> in my hobbitserver.cfg
>
> TEST2RRD="...,afss=ncv,
>
> GRAPHS="..,afss,
>
> NCV_afss="freepackets:GAUGE,callswaiting:GAUGE,threadsidle:GAUGE,serverconnections:GAUGE,clientconnections:GAUGE,peerstructs:GAUGE,callstructs:GAUGE,
> freecalls:GAUGE,calls:GAUGE,allocs:GAUGE,readdata:GAUGE,readack:GAUGE,readdup:GAUGE,readspurious:GAUGE,readbusy:GAUGE,readabort:GAUGE,readackall:GAUGE,
> readchallenge:GAUGE,readresponse:GAUGE,sentdata:GAUGE,sentresent:GAUGE,sentack:GAUGE,sentbusy:GAUGE,sentabort:GAUGE,sentackall:GAUGE,sentchallenge:GAUGE,
> sentresponse:GAUGE"
>
>
>
>
> In my rrd-status.log, I can see that afss.rrd is being updated properly:
>
> 2008-11-16 15:00:56 Flushing '/afs-node-1/afss.rrd' with 4 updates pending, 
> template 
> 'callswaiting:threadsidle:serverconnections:clientconnections:peerstructs:callstructs:freecalls:packetallocationfai:allocs:readdata:readack:readdup:readspurious:readbusy:readabort:readackall:readchallenge:readresponse:sentdata:sentresent:sentack'
>
>
> rrdtool info /usr/lib/hobbit/server/data/rrd/afs-node-1/afss.rrd| grep type
>
> ds[callswaiting].type = "GAUGE"
> ds[threadsidle].type = "GAUGE"
> ds[serverconnections].type = "GAUGE"
> ds[clientconnections].type = "GAUGE"
> ds[peerstructs].type = "GAUGE"
> ds[callstructs].type = "GAUGE"
> ds[freecalls].type = "GAUGE"
> ds[packetallocationfai].type = "DERIVE"
> ds[allocs].type = "GAUGE"
> ds[readdata].type = "GAUGE"
> ds[readack].type = "GAUGE"
> ds[readdup].type = "GAUGE"
> ds[readspurious].type = "GAUGE"
> ds[readbusy].type = "GAUGE"
> ds[readabort].type = "GAUGE"
> ds[readackall].type = "GAUGE"
> ds[readchallenge].type = "GAUGE"
> ds[readresponse].type = "GAUGE"
> ds[sentdata].type = "GAUGE"
> ds[sentresent].type = "GAUGE"
> ds[sentack].type = "GAUGE"
>
>
>
> and my hobbitgraph.cfg
>
> [afss]
>         TITLE AFS fileservers calls waiting for thread
>         YAXIS calls waiting for thread
>        DEF: callswaiting=afss.rrd:callswaiting:AVERAGE
>        DEF: freePackets=afss.rrd:freepackets:AVERAGE
>        DEF: threadsidle=afss.rrd:threadsidle:AVERAGE
>        DEF: serverConnections=afss.rrd:serverconnections:AVERAGE
>        DEF: clientConnections=afss.rrd:clientconnections:AVERAGE
>        DEF: peerStructs=afss.rrd:peerstructs:AVERAGE
>        DEF: callStructs=afss.rrd:callstructs:AVERAGE
>        DEF: freeCall=afss.rrd:freecalls:AVERAGE
>        DEF: calls=afss.rrd:calls:AVERAGE
>        DEF: allocs=afss.rrd:allocs:AVERAGE
>        DEF: readdata=afss.rrd:readdata:AVERAGE
>        DEF: readack=afss.rrd:readack:AVERAGE
>        DEF: readdup=afss.rrd:readdup:AVERAGE
>        DEF: readspurious=afss.rrd:readspurious:AVERAGE
>        DEF: readbusy=afss.rrd:readbusy:AVERAGE
>        DEF: readabort=afss.rrd:readabort:AVERAGE
>        DEF: readackall=afss.rrd:readackall:AVERAGE
>        DEF: readchallenge=afss.rrd:readchallenge:AVERAGE
>        DEF: readresponse=afss.rrd:readresponse:AVERAGE
>        DEF: sentdata=afss.rrd:sentdata:AVERAGE
>        DEF: sentresent=afss.rrd:sentresent:AVERAGE
>        DEF: sentack=afss.rrd:sentack:AVERAGE
>        DEF: sentbusy=afss.rrd:sentbusy:AVERAGE
>        DEF: sentabort=afss.rrd:sentabort:AVERAGE
>        DEF: sentackall=afss.rrd:sentackall:AVERAGE
>        DEF: sentchallenge=afss.rrd:sentchallenge:AVERAGE
>        DEF: sentresponse=afss.rrd:sentresponse:AVERAGE
>        LINE2: callswaiting#000000:Calls waiting
>        LINE2: freePackets#0000FF:Free Packets
>        LINE2: threadsidle#008000:Idle Threads
>        LINE2: serverConnections#0080FF:Server connections
>        LINE2: clientConnections#00FF00:Client connections
>        LINE2: peerStructs#00FFFF:Peer Structs
>        LINE2: callStructs#800000:Call Structs
>        LINE2: freeCall#8000FF:Free Calls
>        LINE2: calls#8080FF:Calls
>        LINE2: allocs#80FF00:All Locs
>        LINE2: readdata#80FFFF:Read Data
>        LINE2: readack#FF0000:Read Ack
>        LINE2: readdup#FF00FF:Read Duplicates
>        LINE2: readspurious#FF8000:Read Spurious
>        LINE2: readbusy#808000:Read Busy
>        LINE2: readabort#8080FF:Read Abort
>        LINE2: readackall#80FF00:Read Ack All
>        LINE2: readchallenge#80FFFF:Read Challenge
>        LINE2: readresponse#FF0000:Read Response
>        LINE2: sentdata#80FFFF:Sent Data
>        LINE2: sentresent#FF00FF:Read Resent
>        LINE2: sentack#FF0060:Sent Ack
>        LINE2: sentbusy#FF8090:Sent Busy
>        LINE2: sentabort#8030FF:Sent Abort
>        LINE2: sentackall#802F00:Sent Ack All
>        LINE2: sentchallenge#83eFFF:Sent Challenge
>        LINE2: sentresponse#eeee00:Sent Response
>        GPRINT:callswaiting: LAST:callswaiting \: %5.1lf (cur)
>        GPRINT:callswaiting: MAX:callswaiting \: %5.1lf (cur)
>        GPRINT:callswaiting: MIN:callswaiting \: %5.1lf (cur)
>        GPRINT:callswaiting: AVERAGE:callswaiting \: %5.1lf (cur)
>        GPRINT:freePackets: LAST:freePackets \: %5.1lf (cur)
>        GPRINT:freePackets: MAX:freePackets \: %5.1lf (cur)
>        GPRINT:freePackets: MIN:freePackets \: %5.1lf (cur)
>        GPRINT:freePackets: AVERAGE:freePackets \: %5.1lf (cur)
>        GPRINT:threadsidle: LAST:threadsidle \: %5.1lf (cur)
>        GPRINT:threadsidle: MAX:threadsidle \: %5.1lf (cur)
>        GPRINT:threadsidle: MIN:threadsidle \: %5.1lf (cur)
>        GPRINT:threadsidle: AVERAGE:threadsidle \: %5.1lf (cur)
>        GPRINT:serverConnections: LAST:serverConnections \: %5.1lf (cur)
>        GPRINT:serverConnections: MAX:serverConnections \: %5.1lf (cur)
>        GPRINT:serverConnections: MIN:serverConnections \: %5.1lf (cur)
>        GPRINT:serverConnections: AVERAGE:serverConnections \: %5.1lf (cur)
>        GPRINT:clientConnections: LAST:clientConnections \: %5.1lf (cur)
>        GPRINT:clientConnections: MAX:clientConnections \: %5.1lf (cur)
>        GPRINT:clientConnections: MIN:clientConnections \: %5.1lf (cur)
>        GPRINT:clientConnections: AVERAGE:clientConnections \: %5.1lf (cur)
>        GPRINT:peerStructs: LAST:peerStructs \: %5.1lf (cur)
>        GPRINT:peerStructs: MAX:peerStructs \: %5.1lf (cur)
>       GPRINT:peerStructs:MIN:peerStructs \: %5.1lf (cur)
>         GPRINT:peerStructs:AVERAGE:peerStructs \: %5.1lf (cur)
>        ... and so on ...
>
>
> What's going on, and how can I debug this further?
>
>
> Thanks in Advance
>
> 	 cheers,
> 		 martin
>
>
> On Fri, 14 Nov 2008, Henrik Størner wrote:
>
>>  In <4202.1226587233 at satai.its.iastate.edu> "Tracy J. Di Marco White"
>>  <gendalia at iastate.edu> writes:
>> 
>> 
>> >  In message <Pine.LNX.4.64.0811130823530.4753 at titan.desy.de>, Martin 
>> >  Flemming writes:
>> > } 
>> > } Hi, Tracy et all ...
>> 
>> >  Hi Martin!
>> 
>> > } Nice that you are on the list again,
> > } didn't you disappear from it for one or two years ....
>> > } 
>> > } So i take another chance to ask you,
>> > } for your configuration of the various graphs of your afs_fsmon-script 
>> > } ...
>> 
>> >  It is the most useless graph ever. I should break it out into multiple
>> >  RRDs, but then I don't know how to display them.
>> >  [afs]
>> >         TITLE AFS fileservers calls waiting for thread
>> >         YAXIS calls waiting for thread
>> >         DEF:callswaiting=afs.rrd:callswaiting:AVERAGE
>>
>>  [lots of DEF and LINE settings deleted]
>>
>>  You don't have to break it out into multiple RRD files. You can just
>>  do several graph definitions - you don't have to include every item
>>  you monitor in a graph. Just pick the ones that you want on one
>>  graph, and leave out the others.
>>
>>  E.g. there are multiple vmstat graphs, each with their own set of
>>  data pulled from the same "vmstat.rrd" file.
>>
>>  You can then also use the TRENDS setting in bb-hosts to choose
>>  which of the many graphs you want to show up on the "trends" page.
>>  E.g. if you have an [afs1], [afs2] and [afs3] graph, then you
>>  can have (in bb-hosts):
>>
>>   0.0.0.0 myafsbox # TRENDS:*,afs:afs1,afs3
>>
>>  and it will then show the afs1 and afs3 graphs on the trends page.
>> 
>>
>>  Regards,
>>  Henrik
>> 
>>
>>  To unsubscribe from the hobbit list, send an e-mail to
>>  hobbit-unsubscribe at hswn.dk
>> 
>> 
>> 
>
> Gruss
>
>       Martin Flemming
>
>
> ______________________________________________________
> Martin Flemming
> DESY / IT          office : Building 2b / 008a
> Notkestr. 85       phone  : 040 - 8998 - 4667
> 22603 Hamburg      mail   : martin.flemming at desy.de
> ______________________________________________________
>

Gruss

        Martin Flemming


______________________________________________________
Martin Flemming
DESY / IT          office : Building 2b / 008a
Notkestr. 85       phone  : 040 - 8998 - 4667
22603 Hamburg      mail   : martin.flemming at desy.de
______________________________________________________

