I ran into the same problem monitoring my HP-UX servers. The fix for me was to add these lines to the hobbitserver.cfg file:

MAXLINE="32768"
MAXMSG_STATUS="2048"
MAXMSG_CLIENT="2048"
MAXMSG_DATA="2048"
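For reference, this is roughly how I applied it on my box. The ~hobbit/server layout and the hobbit.sh control script below are from my own install, so adjust the paths to wherever your hobbitserver.cfg lives:

# Back up the config, then append the larger message-size limits
cp ~hobbit/server/etc/hobbitserver.cfg ~hobbit/server/etc/hobbitserver.cfg.bak
cat >> ~hobbit/server/etc/hobbitserver.cfg <<'EOF'
MAXLINE="32768"
MAXMSG_STATUS="2048"
MAXMSG_CLIENT="2048"
MAXMSG_DATA="2048"
EOF

# Restart so hobbitd re-reads the config and picks up the new limits
~hobbit/server/hobbit.sh restart

My understanding is that the MAXMSG_* settings are per-channel message size limits in hobbitd, so the point of raising them is to keep a large HP-UX client report from being truncated or mangled on its way into the histlogs.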
On Thu, Mar 25, 2010 at 12:23 AM, Harald Villemoes <hvillemoes@csc.com> wrote:

Hi,

We have a problem with distorted data after upgrading to Xymon
4.3.0-0.beta2.

The data from HP-UX is received correctly at the server, but is shown as
being in error.

Data in hostdata is correct, while data in histlogs is distorted.
The problem is intermittent, occurring roughly 3 times per week.

Our server is running CentOS release 5.2 (Final) - Linux
fulda.scandihealth.com 2.6.18-92.1.18.el5PAE #1 SMP Wed Nov 12 10:02:30 EST
2008 i686 athlon i386 GNU/Linux

Please see the file dumps below.

Can you provide a fix or workaround, or otherwise guide us to avoiding these
false alarms?

Best regards

HARALD VILLEMOES
CSC

ScandiHealth | +45 36147236 | +45 29236962 | hvillemoes@csc.com |
www.csc.dk

[root@fulda ceres]# ls -la 1269499546
-rw-rw-r-- 1 hobbit hobbit 41873 Mar 25 07:45 1269499546
[root@fulda ceres]# pwd
/hobbit/data/hostdata/ceres
[root@fulda ceres]# head -30 1269499546
client ceres.hp-ux hp-ux
[date]
Thu Mar 25 07:45:33 MET 2010
[uname]
HP-UX ceres B.11.00 A 9000/800 138981577 two-user license
[uptime]
7:45am up 935 days, 17:14, 0 users, load average: 0.23, 0.18, 0.15
[who]
[df]]
Filesystem 1024-blocks Used Available Capacity Mounted on.
/dev/vg01/lvol9 61908 1617 60291 3% /usr/SAG/rsp
/dev/vg01/lvol6 99259 38147 61112 39% /appl
/dev/vg01/lvol5 3032126 1238051 1794075 41% /export
/dev/vg01/lvol4 616384 31327 585057 6% /home/editrade
/dev/vg01/lvol7 124232 2660 121572 3% /home/mms/tmp
/dev/vg01/lvol3 1010120 375392 634728 38% /home/mms
/dev/vg00/lvol4 1222476 1044716 177760 86% /home
/dev/vg01/lvol2 61584 1191 60393 2% /mmswork1
/dev/vg00/lvol7 870539 411565 458974 48% /opt
/dev/vg01/test 8085494 4921377 3164117 61% /test
/dev/vg00/lvol5 246682 70162 176520 29% /tmp
/dev/vg01/lvol1 497161 85698 411463 18% /usr/SAG
/dev/vg00/lvol8 1209193 914194 294999 76% /usr
/dev/vg00/lvol6 2027244 934114 1093130 47% /var
/dev/vg00/lvol1 75359 35165 40194 47% /stand
/dev/vg00/lvol3 135982 25256 110726 19% /
[mount]
/ on /dev/vg00/lvol3 log on Sat Sep 1 15:31:47 2007%
/stand on /dev/vg00/lvol1 defaults on Sat Sep 1 15:31:48 2007
/var on /dev/vg00/lvol6 delaylog on Sat Sep 1 15:32:00 2007
[root@fulda ceres]#
[root@fulda disk]# pwd
/hobbit/data/histlogs/ceres/disk
[root@fulda disk]# ls -la Thu_Mar_25_07:45:46_2010
-rw-rw-r-- 1 hobbit hobbit 2324 Mar 25 07:45 Thu_Mar_25_07:45:46_2010
[root@fulda disk]# cat Thu_Mar_25_07:45:46_2010
red Thu Mar]25 07:45:33 MET 2010 - Filesystems NOT ok
&red 3% /usr/SAG/rsp (60291% used) has reached the PANIC level (95%)
&red 39% /appl (61112% used) has reached the PANIC level (95%)
&red 41% /export (1794075% used) has reached the PANIC level (95%)%)
&red 6% /home/editrade (585057% used) has reached the PANIC level (95%)
&red 3% /home/mms/tmp (121572% used) has reached the PANIC level (95%))
&red 38% /home/mms (634728% used) has reached the PANIC level (95%)
&red 86% /home (177760% used) has reached the PANIC level (95%)
&red 2% /mmswork1 (60393% used) has reached the PANIC level (95%)%)
&red 48% /opt (458974% used) has reached the PANIC level (95%)
&red 61% /test (3164117% used) has reached the PANIC level (95%))%)
&red 29% /tmp (176520% used) has reached the PANIC level (95%)%))%)
&red 18% /usr/SAG (411463% used) has reached the PANIC level (95%))
&red 76% /usr (294999% used) has reached the PANIC level (95%)
&red 47% /var (1093130% used) has reached the PANIC level (95%)
&red 47% /stand (40194% used) has reached the PANIC level (95%)
&red 19% / (110726% used) has reached the PANIC level (95%)

Filesystem 10
4-bl]cks Used Available Capacity Mounted on
/dev/vg01/lvol9 61908 1617 60291 3% /usr/SAG/rsp))
/dev/vg01/lvol6 99259 38147 61112 39% /appl
/dev/vg01/lvol5 3032126 1238051 1794075 41% /export
/dev/vg01/lvol4 616384 31327 585057 6% /home/editrade%))
/dev/vg01/lvol7 124232 2660 121572 3% /home/mms/tmp
/dev/vg01/lvol3 1010120 375392 634728 38% /home/mms
/dev/vg00/lvol4 1222476 1044716 177760 86% /home
/dev/vg01/lvol2 61584 1191 60393 2% /mmswork1
/dev/vg00/lvol7 870539 411565 458974 48% /opt
/dev/vg01/test 8085494 4921377 3164117 61% /test
/dev/vg00/lvol5 246682 70162 176520 29% /tmp
/dev/vg01/lvol1 497161 85698 411463 18% /usr/SAG
/dev/vg00/lvol8 1209193 914194 294999 76% /usr
/dev/vg00/lvol6 2027244 934114 1093130 47% /var
/dev/vg00/lvol1 75359 35165 40194 47% /stand
/dev/vg00/lvol3 135982 25256 110726 19% /
Status unchanged in 0.00 minutes
Message received from 195.50.45.230
Client data ID 1269499546.
[root@fulda disk]#

--
Chris Naude