[Xymon] [Xymon-developer] Xymon swarm proposal

J.C. Cleaver cleaver at terabithia.org
Sat Nov 28 19:04:53 CET 2015


On Sat, November 28, 2015 3:37 am, Henrik Størner wrote:
> Hi,
>
> the recent talk on xymon-developer about rewriting xymonproxy to support
> TLS, IPv6 and other good stuff made me think about other ways of scaling
> Xymon across large installations.
>
> Which led me to the idea of having multiple independent Xymon servers -
> a swarm, because no one Xymon server depends on the others, but they can
> cooperate.
>
> Simply put, you have a number of independent Xymon installations. Each
> of them handles a group of servers - it could be one in each of your
> datacentres, one for each organisational unit, one in each network
> segment, or just because you have such a large installation that a
> single Xymon server cannot cope with the load (and that would be a
> really big installation, judging by the numbers I hear). This all works
> just like the Xymon you have today.
>
> The only thing needed to have all of these independent Xymon
> servers show up as a single (virtual) Xymon installation is to have the
> Xymon webpages - generated by xymongen - display the combined status of
> all of the Xymon servers in the swarm. When you
> click on the detailed status log, you are transparently sent to the
> Xymon server that holds the data about that server (the URL points to
> the Xymon server handling the particular server you want to check on).
>
> The nice thing about this is that I think it can be implemented fairly
> easily, i.e. without having to change anything fundamental in the way
> the various Xymon programs work. Which means it will also be easy to
> adapt into an existing Xymon installation, and with a good chance of not
> introducing difficult-to-troubleshoot bugs (difficult because bugs
> involving remote systems are always a headache to reproduce).
>
> There are of course a few nitty-gritty details, e.g. "Find host" really
> should be able to search across all of the servers in the swarm. But
> those cases are few enough and isolated enough that they should not be
> too much of a headache.
>
>
>         Multiple independent Xymon servers
>
>
>   * Each site runs just like today.
>   * A new sites.cfg file lists the other sites (just a site ID and how
>     to contact xymond there - see the sketch after this list)
>   * Each site UI (the static webpages from xymongen) merges data from
>     all sites
>
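> A minimal sketch of what such a sites.cfg might contain - the format is
> purely hypothetical at this point, just a site ID plus the address/port
> of the xymond at that site:
>
>     # site-ID     xymond-address            port
>     copenhagen    xymon.cph.example.com     1984
>     london        xymon.lon.example.com     1984
>     newyork       xymon.nyc.example.com     1984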
>
>         Advantages
>
>   * More resilient - if one site dies, the others will remain operational
>   * Less cross-site traffic (local data remain local except when needed)
>   * Less load on each site (updates only go to one Xymon server)
>   * Horizontally scalable
>
>
>         Limitations
>
>
>   * Hostnames must be unique globally. Probably not a significant problem.
>   * Functions that fetch data directly from disk-files cannot be
>     cross-site (rrd-files, history-logs), unless you can retrieve the
>     data via a network request. In a standard Xymon installation that
>     would be:
>       o Availability reports
>       o Event log reports (but see below)
>       o Multi-host graphs, unless all of the hosts are local
>   * Alerts are always handled locally
>
>
>         xymongen
>
>
>   * hosts.cfg file for the page layout must be merged from all sites.
>     Can be a simple append-one-after-the-other (built-in) or perhaps
>     allow for an externally generated hosts.cfg - if you want to have
>     servers from multiple locations on one page.
>   * How do we handle non-unique pagenames? Transparently prefix them
>     with the remote site-ID?
>   * xymondboard data is fetched from multiple sites and combined
>     (appended) - handled in sendmessage()
>   * cgi-URL's are generated with a prefix of /SITE/ - no change
>     otherwise. The local webserver then proxies /SITE/ requests to the
>     remote site (see the sketch after this list).
>   * Should there be both a local and a global "all non-green" page?
>     Maybe even a full set of local and global webpages? That would be
>     easy by running xymongen twice - once for the local and once for the
>     global set of pages.
>
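> One way to do the /SITE/ proxying on the local webserver would be plain
> Apache reverse-proxy directives (mod_proxy); the site names and URLs
> here are just examples:
>
>     ProxyPass        "/london/"  "http://xymon.lon.example.com/"
>     ProxyPassReverse "/london/"  "http://xymon.lon.example.com/"
>     ProxyPass        "/newyork/" "http://xymon.nyc.example.com/"
>     ProxyPassReverse "/newyork/" "http://xymon.nyc.example.com/"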
>
>         sendmessage() function
>
>
>   * No changes for sending status- or data-updates (status, combo,
>     extcombo, client, data, modify)
>   * Option to fetch data from multiple sites. This is already in place
>     for sending to multiple Xymon servers, so we just need to combine
>     the output response from multiple sites.
>   * When processing host-related requests, we learn where the host is
>     located. Cache this for use by various tools. Must be disk-based
>     (e.g. an SQLite file) so it can be shared - see the sketch below.
>
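> The cache itself could be very simple - something along these lines,
> with the file location and column layout obviously up for discussion:
>
>     sqlite3 $XYMONTMP/hostlocation.db '
>       CREATE TABLE IF NOT EXISTS hostlocation (
>         hostname TEXT PRIMARY KEY,
>         siteid   TEXT NOT NULL,
>         updated  INTEGER NOT NULL );'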
>
>         xymond
>
>   * hostinfo requests should only answer for the local hosts. No need to
>     consult the SQLite cache - no changes.
>
>
>         CGI programs
>
>   * "Find host" must be cross-site
>   * Ack-alert: Suggest making it local-only. Since alerts are only
>     generated locally, it makes sense to also only ack the local alerts.
>   * Enable/disable only on the local site? Use the "info" page
>     enable/disable (automatically local). Global enable/disable needs
>     some more looking into.
>   * Critical systems - would probably be nice to be able to do both a
>     local and a global version.
>   * Eventlog - would be nice to have both local and global, even though
>     that means fetching a (large) remote logfile. Will probably require
>     a new "eventlog" CGI interface for retrieving a remote logfile. It
>     is probably not something we want to do on every
>     critical-systems/all-nongreen webpage update. So those could keep
>     the local eventlog display (as-is), and then the eventlog CGI could
>     have the option of combining logs from all sites (or maybe a
>     selection of sites).
>
>
>         xymon commands
>
> _Commands re. specific hosts_
> First check via the hostinfo cache (see below) whether we know where the
> host is (a performance optimization). If not, simply broadcast the message
> to all sites and combine any data that is returned - only one server will
> have data. A rough illustration follows the command list below.
>
>   * notify
>   * disable
>   * enable
>   * query
>   * xymondlog, xymondxlog, clientlog
>   * hostinfo - sendmessage() will fetch the data for us, whether from
>     the local xymond or from the SQLite cache.
>
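> Rough illustration of the broadcast fallback for a host-specific command,
> using the existing xymon command-line client (the site list would of
> course come from sites.cfg, not be hard-coded):
>
>     # Ask every site; only the one that owns the host returns anything
>     for SITE in xymon.cph.example.com xymon.lon.example.com
>     do
>         xymon $SITE "xymondlog www.example.com.conn"
>     done
>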
> _Commands that collect data on multiple hosts_
>
>   * xymondboard, xymondxboard - option from user whether to fetch local
>     or global info. Handled in sendmessage()
>
> _Commands that only work locally_
>
>   * ghostlist
>   * drop
>   * rename
>   * schedule. If done via the web i/f it becomes automatically
>     transparent, but not for scripts. It is probably only used for
>     disable/enable/drop/rename, so it makes most sense to do it locally.
>     Doing it globally would require parsing the message to detect which
>     host it is about.
>
>
> Comments are very welcome.
>
> Regards,
> Henrik
>



Hi,

I think the proposal has a lot of merit.

There are a few bits that I feel could be solved rather easily with the
xymond_locator tech you put in a few years ago, but it also brings up some
interesting philosophical questions about how much metadata gets
distributed throughout, and by what mechanism.


Some orgs might want almost the exact inverse of swarming (sharding),
whereby xymond is fully replicated for HA and reporting purposes, a la
MySQL. For others, sharding solves network and performance issues, and a
unified query for CGIs only, or CLI tools only, makes perfect sense. Still
others might want all xymond's to have a full hosts.cfg reference and
perhaps some basic state data (e.g. status metadata, including line1), but
wouldn't want or need the full status going over the wire... or could make
do with stachg updates only, with other metadata resynced at intervals.



For xymond_locator, IIRC histlogs, hostdata, and RRD are working now. The
only things that weren't are per-hostsvc event history (still done with
file reads) and historical hostdata snapshots. With the sites/swarm
proposal, we could simply be pre-specifying what's happening where, instead
of having the various services check in for assignment. (Can they check in
with multiple locators now?) You could probably do that now by simply
writing out a static locator.hosts.chk file with all of the 'sticky' fields
set and -- presto... a unified dispatch server!

(Of course, given that xymongen is interval-based, you could almost read
that in directly... It does seem like a global hosts.cfg and a global
locator.hosts would make things easier.)


If the incoming reports never make it to the distinct xymond's, that's
fine, but xymond_locators communicating with each other could agree on the
"swarm state" and give you some de facto cluster management without
grabbing hostinfo from the xymond's (or really making xymond do anything
else that relies on hitting the disk or network).


Speaking of eventlog.cgi, this feels like an opportunity to consider
separating the CGI from the query method as well. As you say, the current
storage method really forces an "everyone respond to this query and I'll
re-assemble the results into a response for the user" sort of approach.

For larger sites, event reporting is really important; so much so that
sending it off to a central DB makes a lot of sense, and that only takes
writing a pretty trivial stachg channel listener. The problem has been
that there's no easy CGI for querying that data unless you write your own
too. (Also missing has been a pure clientlog snapshot browser, distinct
from specific status-changing events.)
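
Hooking such a listener in is just a tasks.cfg entry along these lines --
the stachg2db worker script itself is the part you'd have to write, and
the paths here are only examples:

    [stachg2db]
        ENVFILE /etc/xymon/xymonserver.cfg
        NEEDS xymond
        CMD xymond_channel --channel=stachg /usr/local/bin/stachg2db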


When you start thinking in terms of a "reporting server" where the slow
stuff happens in response to arbitrary queries, it almost makes sense to
make all of these new message types something that *any* xymond server can
handle -- but shouldn't!

(That is, "Once you start scaling, make a replication/query server and
point general queries to that instead so that live processing remains
unimpeded.")


As for what it would take to make this happen: one way to encourage
experimentation would be to provide an arbitrary two-way message mechanism
for xymond. The 'usermsg ID' channel works but is one-way, of course...
Perhaps we could create an 'extmsg ID' format, with a locally-configured
ID->TCP:port dispatch in xymonserver.cfg that xymond proxies the
communication off to (non-blockingly, of course), sending anything received
back to the original sender. People could create all sorts of custom data
backends while still going through a single xymon query mechanism.
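
As a strawman for the dispatch config (the variable names are entirely
made up), xymonserver.cfg could simply map each extmsg ID to a local
listener, so that 'extmsg EVENTDB ...' gets proxied to whatever answers
on that port:

    # hypothetical extmsg ID -> backend mapping
    EXTMSG_EVENTDB="127.0.0.1:19840"
    EXTMSG_CLIENTARCHIVE="127.0.0.1:19841"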



As for everything else, my only other concern was with integrating the
gathering at the sendmessage() level vs. the application and/or server
level.

It puts the logic for handling issues where one or more of the servers is
down, unreachable, or slow at a very low level, and there might be cause
for having more finely-tuned (or administrator-set) control there over
retries, timeouts, any-vs-all semantics, etc. for xymongen vs. svcstatus
vs. a xymonproxy-type thing.



Regards,

-jc




