[Xymon] Big Environment

Gautier Begin gbegin at csc.com
Wed Oct 5 09:22:49 CEST 2011


Hello,


Thanks for sharing your experiences.

Here I can't use SSD disks, only SAN storage with a cache in front of it. What 
do you think about that setup?

Cordialement, Regards, Mit freundlichen Grüßen,

Gautier BEGIN

Admin and Tools Team
CSC Computer Sciences Luxembourg S.A.
12D Impasse Drosbach
L-1882 Luxembourg

Global Outsourcing Service | p:+352 24 834 276 | m:+352 621 229 172 | 
gbegin at csc.com | www.csc.com


CSC • This is a PRIVATE message. If you are not the intended recipient, 
please delete without copying and kindly advise us by e-mail of the 
mistake in delivery.  NOTE: Regardless of content, this e-mail shall not 
operate to bind CSC to any order or other contract unless pursuant to 
explicit written agreement or government initiative expressly permitting 
the use of e-mail for such purpose
 • 
CSC Computer Sciences SAS • Registered Office: Immeuble Le Balzac, 10 
Place des Vosges, 92072 Paris La Défense Cedex, France • Registered in 
France: RCS Nanterre B 315 268 664



From: Nicolas LIENARD <nicolas at lienard.name>
To: <xymon at xymon.com>
Date: 10/05/2011 09:03 AM
Subject: Re: [Xymon] Big Environment



Hello
In our company we have ~10,000 devices spread across 10 countries:
19 hobbit clusters in total (one hobbit cluster in each datacenter).
We use bbproxy to report everything to one single central hobbit cluster:
Wed Oct 5 07:37:25 2011
Statistics:
 Hosts total            :    10866
Incoming messages/sec  :        478 (average last 300 seconds)
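
For reference, the forwarding is done with a bbproxy task in hobbitlaunch.cfg 
on each local cluster. A minimal sketch is below; 10.0.0.1 stands in for the 
central cluster's VIP, the paths are the usual defaults, and the exact option 
names should be checked against the bbproxy man page for your version:

  [bbproxy]
          ENVFILE /etc/hobbit/hobbitserver.cfg
          NEEDS hobbitd
          # forward a copy of everything to the central hobbitd as well as the local one
          CMD $BBHOME/bin/bbproxy --hobbitd --bbdisplay=$BBDISP,10.0.0.1 --report=$MACHINE.bbproxy --no-daemon --pidfile=$BBSERVERLOGS/bbproxy.pid
          LOGFILE $BBSERVERLOGS/bbproxy.log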

The largest local hobbit cluster has 3655 hosts.
 
Here are the stats for this local cluster:
Statistics:
 Hosts total           :     3655
 Hosts with no tests   :       60
 Total test count      :     9809
 Status messages       :    10151
 Alert status msgs     :        0
 Transmissions         :      326

TCP test statistics:
 # TCP tests total     :     6214
 # HTTP tests          :     2342
 # Simple TCP tests    :     3872
 # Connection attempts :     6214
 # bytes written       :   396441
 # bytes read          : 24395876

TCP tests completed                          134011.876173 30.657820 
PING test completed (3650 hosts)             134012.458682 0.582508 
TIME TOTAL 44.107030 

Incoming messages/sec  :        173 (average last 300 seconds)

The hardware is a cluster of two HP DL360 G7 servers with 4 SSD drives in RAID5 
for /opt/xymon/data.

We chose SSDs for their I/O performance.


 load average: 0.74, 0.84, 0.82

We use heartbeat for VIP and DRBD for replication.
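
Roughly, that kind of two-node setup looks like the sketch below: a DRBD 8.x 
resource for the data area plus a heartbeat v1 haresources line that moves the 
VIP, the DRBD mount and the hobbit service together. All node names, devices 
and addresses here are placeholders, not the actual values of this installation:

  # /etc/drbd.conf (or a drbd.d/*.res file)
  resource r_xymondata {
    protocol C;
    on nodeA {
      device    /dev/drbd0;
      disk      /dev/sda5;
      address   192.168.1.1:7788;
      meta-disk internal;
    }
    on nodeB {
      device    /dev/drbd0;
      disk      /dev/sda5;
      address   192.168.1.2:7788;
      meta-disk internal;
    }
  }

  # /etc/ha.d/haresources - nodeA is the preferred primary
  nodeA IPaddr::192.168.1.10/24 drbddisk::r_xymondata Filesystem::/dev/drbd0::/opt/xymon/data::ext3 hobbit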

We didn't disable hostdata, but maybe we should (we have a couple of scripts 
to clean up the hostdata, histfiles, etc).
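
Such a cleanup can be as small as a couple of cron entries that expire files 
by age under the data directory. A sketch, assuming the usual hostdata/ and 
histlogs/ subdirectories under /opt/xymon/data and a purely illustrative 
60-day retention:

  # root crontab: purge old hostdata snapshots and histlog files nightly
  30 3 * * *  find /opt/xymon/data/hostdata -type f -mtime +60 -delete
  35 3 * * *  find /opt/xymon/data/histlogs -type f -mtime +60 -delete
  # then drop per-host directories that have become empty
  40 3 * * *  find /opt/xymon/data/hostdata /opt/xymon/data/histlogs -type d -empty -delete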

On the local cluster
/dev/drbd0           34078720 11545129 22533591   34% /opt/xymon/data
/dev/drbd0            122G   69G   47G  60% /opt/xymon/data

-> about 11 million files


On the central cluster
/dev/drbd0           89522176 77497570 12024606   87% /opt/hobbit/data
/dev/drbd0            673G  458G  181G  72% /opt/hobbit/data

-> about 77 million files

That is getting quite high.

Sometimes we experience ASR (Automatic Server Recovery) events on the hobbit 
servers :/

Cheers
Nico



 
 
On Tue, 4 Oct 2011 10:12:57 -0400, Brand, Thomas R. wrote:
I have 7363 hosts being monitored in my environment.
Tue Oct 4 10:05:58 2011
Statistics:
 Hosts                      :  7363
 
OS: SuSE Linux Enterprise v11.0
 
I found network I/O a challenge, and had to set up a bbproxy server and split 
the ping test over two Xymon servers, each handling about half the load. Time 
spent on ping tests (on each server) is about 170 seconds.  
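 
The usual way to split network tests like that is the NET:location mechanism: 
tag each host in bb-hosts with a location, and set BBLOCATION in 
hobbitserver.cfg on each test server so its bbtest-net run only polls the 
matching hosts. A sketch with made-up host names and locations:

  # bb-hosts
  10.1.1.10   web01.example.com   # conn http://web01.example.com/ NET:siteA
  10.2.1.10   web02.example.com   # conn http://web02.example.com/ NET:siteB

  # hobbitserver.cfg on network-test server 1
  BBLOCATION="siteA"

  # hobbitserver.cfg on network-test server 2
  BBLOCATION="siteB"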
 
 
Proxy statistics
Incoming messages        :    3332629 (48 msgs/second)
Outbound messages        :    1778658
 
 
The disk I/O with this many hosts is also difficult, at least on my hardware 
(IBM x225 server). 
I also disabled [hostdata] in hobbitlaunch.cfg; my systems could not keep up 
with the I/O.
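 
Disabling it is just a matter of switching off the [hostdata] task in 
hobbitlaunch.cfg, for instance with the DISABLED keyword (commenting out the 
whole section works too); the other lines of the section stay as shipped:

  [hostdata]
          # ...ENVFILE/NEEDS/CMD lines as shipped with your version...
          DISABLED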
 
I cut down on message traffic by using the 'ignore' and 'trigger' capabilities 
of the logfile checker in client-local.cfg, thereby filtering at the client; e.g.: 
log:/var/log/daemon.log:4096
   ignore (upsd.*: Set variable|apcsmart.*enum |dhcpd: |xinetd.*: |squid.*: |ntpd.*: |Connection from |logged out)
   trigger shutdown
 
 
 
 
From: xymon-bounces at xymon.com [mailto:xymon-bounces at xymon.com] On Behalf 
Of Josh Luthman
Sent: Tuesday, October 04, 2011 8:29 AM
To: Gautier Begin
Cc: xymon at xymon.com
Subject: Re: [Xymon] Big Environment
 
I saw a ~7250-host installation while quickly scrolling through it:

http://en.wikibooks.org/wiki/System_Monitoring_with_Xymon/User_Guide/The_Xymon_Users_list


Josh Luthman
Office: 937-552-2340
Direct: 937-552-2343
1100 Wayne St
Suite 1337
Troy, OH 45373

On Tue, Oct 4, 2011 at 8:24 AM, Gautier Begin <gbegin at csc.com> wrote:
Hello, 

I'm planning to use Xymon with a large number of clients: 1000 at first and 
then 4000. 

Is it possible? 
Are there any tips or limitations I should know about? 

Cordialement, Regards, Mit freundlichen Grüßen,

Gautier BEGIN

Admin and Tools Team
CSC Computer Sciences Luxembourg S.A.
12D Impasse Drosbach
L-1882 Luxembourg

Global Outsourcing Service | p:+352 24 834 276 | m:+352 621 229 172 | 
gbegin at csc.com | www.csc.com


_______________________________________________
Xymon mailing list
Xymon at xymon.com
http://lists.xymon.com/mailman/listinfo/xymon
 
 


