[Xymon] Scaling

cleaver at terabithia.org
Thu Apr 11 22:12:40 CEST 2013


> On Thursday, 11 April 2013 at 20:40 +0200, Olivier AUDRY wrote:

> hello
>
> as I understand it, I should run xymon on a single node to improve
> memory access latency. Right?
>
--snip--
>> numactl --hardware
>> available: 2 nodes (0-1)
>> node 0 size: 12097 MB
>> node 0 free: 594 MB
>> node 1 size: 12120 MB
>> node 1 free: 12 MB
>> node distances:
>> node   0   1
>>   0:  10  20
>>
>>
>> even though I've got 24 CPUs, multi-core and hyperthreading. Is that correct?

That seems odd; almost like hyperthreading is disabled? You should see
"node 0 cpus: ..." above each size. I'm running RHEL 6.4; it's possible
things have changed in that output over time if you're on a different
system.
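For reference, here's roughly what it looks like when the cpus lines are
present, on a 2-node box with HT enabled (CPU numbering is illustrative,
not from your machine):

```shell
numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 12097 MB
node 0 free: 594 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 12120 MB
node 1 free: 12 MB
```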


>>
>> As I can see, my two nodes are full. Not good at all, I guess.
>>
>> My policy is the default one. Perhaps you can advise a specific policy
>> for a xymon setup?
>>
>>  numactl --show
>> policy: default
>> preferred node: current
>> physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
>> cpubind: 0 1
>> nodebind: 0 1
>> membind: 0 1

Generally speaking, yeah, use numactl in front of xymonlaunch to ensure
the entire process tree gets assigned to a single node. But it really
depends on your workload (can everything fit in that node?) and what else
is going on on the box. If you have something which analyzes xymond data
in a large dump, then does heavy munging on it and sends it back, it
might be better to have that on a different node than (say) the xymond_*
worker modules.
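A minimal sketch of that wrapper, assuming node 0 is the one you want and
a typical install path for xymonlaunch (both are guesses; adjust for your
setup):

```shell
# Bind xymonlaunch -- and every child process it spawns -- to node 0,
# for both CPU scheduling (--cpunodebind) and memory allocation (--membind):
numactl --cpunodebind=0 --membind=0 \
    /usr/lib/xymon/server/bin/xymonlaunch --config=/etc/xymon/tasks.cfg
```

Put that in your init script in place of the bare xymonlaunch invocation
and the whole tree inherits the binding.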

'numastat -s -z -p xymon' is your friend

The RH Performance Tuning and Resource Management guides are definitely
useful reading as well. I'm sure there's plenty of cgroup stuff that could
be helpful if/when the time came, but there are only so many hours in the
day and there's other low-hanging fruit at the moment :)

I'd definitely start with running the 'numad' service and seeing what it
does over time; it really could be all that you need.
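On RHEL 6 that's just the stock service dance, as root (package and
service names here are the standard RHEL 6 ones; check yours):

```shell
yum install numad     # if it isn't there already
chkconfig numad on    # enable at boot
service numad start   # start it now
```

Then give it a while and compare 'numastat -s -z -p xymon' before and
after to see whether it has consolidated things.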

HTH,

-jc

