[Xymon] Encrypted Xymon reporting over SSL using stunnel
jeremy at laidman.org
Thu Mar 14 10:56:38 CET 2019
> I think there's too much going on here, but not sure where. It seems to me
> that there's already an authentication process going on using asymmetric
> keys via the ssh authentication mechanism. It would seem that an extra key
> to authenticate the client is redundant, because the client has already
> authenticated itself.
> Actually, I was suggesting the SSH style encryption which would be used
> for encryption only, not signing, in my mind that was a separate process
> which works with the contents. In my thought process, the SSH encryption is
> kind of like HTTPS, where any client can connect to the server (without
> authentication) but still ensure that the content is encrypted/protected.
Yes, that's one way to approach it. However, encryption without signing is
open to MITM attacks. So this approach would be suitable for low-risk
situations, such as detecting data corruption or truncation rather than
detecting malicious interference, I would think.
HTTPS is a weird case. It's designed to allow mutual authentication, and
does that really well. But the cost of getting a client-side authentication
artefact that works along with the HTTPS system is too high, and so people
eschew the client authentication and overlay a username/password
system, probably because it's the way things have always been done, and the
way users are accustomed to doing it even before encryption was common for
Internet protocols (telnet, rlogin, etc).
> Within this, there would be another signature to sign the content which is
> used to confirm it came from client abc and not random unknown, nor from
> known client xyz.
My problem with this approach is that we already have a system that
naturally handles private keys very well, but we would be discarding the
client half of the authentication, and then jamming in a completely
different authentication scheme, requiring two different types of secrets to be
managed. This is fine when the key management burden is massively
asymmetrical as per general purpose HTTPS, but when we control both ends of
the transaction, the asymmetry is rebalanced, and so I think we should make
use of HTTPS to authenticate both nodes.
> Also, could the shared key be a problem if it gets out? Let's say I have
> 50 devices all using certs, but still with the shared key still in place,
> just in case of certificate corruption requiring a new bootstrapping
> procedure. Now, let's say someone hacks into one of the devices and
> retrieves the shared key. That key can now be used to bootstrap any new
> device to trust a rogue certificate authority, or at least to sign reports
> that are rejected by Xymon for being signed by the wrong certificate.
> Perhaps this could be used as part of a man-in-the-middle attack?
> I think the shared key is only enough to "add" a client, and only if that
> client is already listed in the xymon-hosts file. So, the value of this
> shared key is pretty low. Worst case, it could allow a rogue client to get
> a valid certificate for a host that is listed but doesn't use
> encryption/certificates, either because it is an old client that doesn't
> support it, or it is a platform that isn't supported, etc...
I agree the value is low. Which is why I think we can do without it
altogether. The window of opportunity to abuse this key is very small, and
so I think the risk isn't that much worse if there's no key at all. For
situations where the risk has to be mitigated, there would be no problem
having an administrator copying the certificate into place out-of-band or
via a trusted path such as scp.
>> Or... we could configure a shared secret in the xymon-hosts file for
>> each client (different "password" for each client) and the client simply
>> encrypts all data it returns with this shared secret. Advantage is
>> "simplicity" disadvantage is much less secure.
> Why less secure? I'm not disagreeing, just want to understand why.
> From my (limited) understanding, a lot of crypto attacks are based on
> dictionary attacks. Since pretty much 100% of our content is plain text,
> and a lot of that will be "recent dates, repeated a lot throughout the
> content", ie, log files... then it would make it easier to find out the
> "key" if you know (or can guess) the plain text that was encrypted. My
> understanding is that this is less relevant when you use some of the more
> modern encryption processes along with PKI. I guess everyone can know the
> content of the index.html that a user will get over https, so I assume that
> https encryption processes are able to overcome this issue.
Yes, it very much depends on how the encryption is done. For some systems,
knowing the plaintext and the ciphertext gives you back your key. But in
other systems, such as when the key is as long as the plaintext, this is
not the case, because you can choose a key to decrypt to any possible
original plaintext you wanted. So you can brute-force this, but there are a
bazillion possible keys that appear to give a valid plaintext, and you don't
know which is the right one.
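That one-time-pad property can be sketched in a few lines of Python (a toy
illustration of the argument, not a suggestion for Xymon; the message
strings are made up):

```python
import os

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Encrypt with a random key as long as the message (a one-time pad).
plaintext = b"conn xymon ok"
key = os.urandom(len(plaintext))
ciphertext = xor_bytes(plaintext, key)

# An attacker who guesses *any* candidate plaintext can derive a key that
# "decrypts" the ciphertext to it -- so the ciphertext alone cannot tell
# the right guess from a wrong one.
guess = b"conn xymon NO"
bogus_key = xor_bytes(ciphertext, guess)
assert xor_bytes(ciphertext, bogus_key) == guess
assert xor_bytes(ciphertext, key) == plaintext
```

Every candidate key is equally consistent with the ciphertext, which is why
brute force gains nothing here.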
Even with crypto that allows the cracker to know when she's found the right
key, a long enough key means that the resources and time required to
crack the code are prohibitive. It's easy to choose badly, but it's not too
difficult to choose a solid algorithm for this. You're wise to be
cautious, but this problem has been solved.
I'm not a crypto dude, but I believe the standard PBKDF functions are
intended to be robust against brute-force attacks, by key stretching as
well as by imposing a computational burden on the attacker (CPU-hard), by
requiring thousands of hash iterations for each attempt. Alternatives such
as bcrypt and scrypt require a chain of computations plus one at the very
end (my paraphrasing may be inaccurate), meaning memory can't be released
until the hashing is complete. These are memory-hard functions, and they
are intended to make things hard for ASICs and GPUs, which can crunch
through CPU-hard functions quickly.
Software such as LastPass uses PBKDF2 with 5000 iterations (user
configurable) to encrypt a user's vault, making it take a really long time
for an attacker to find the matching key.
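That kind of key stretching is easy to demo with Python's stdlib PBKDF2
(the secret, salt, and iteration count below are illustrative values, not
a recommendation):

```python
import hashlib
import os

# Derive a strong key from a low-entropy shared secret. The iteration
# count (100,000 here; LastPass historically defaulted to 5,000) makes
# each brute-force guess proportionally more expensive.
password = b"shared-secret"
salt = os.urandom(16)  # stored alongside the ciphertext; need not be secret
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

# The derivation is deterministic for the same inputs...
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
# ...but a different salt yields an unrelated key, defeating
# precomputed dictionary/rainbow-table attacks.
assert key != hashlib.pbkdf2_hmac("sha256", password, os.urandom(16),
                                  100_000, dklen=32)
```

The salt forces the attacker to redo all the work per target, and the
iteration count sets how expensive each guess is.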
> I mostly understand, and think I agree with most of the above. At the end
> of the day, the solution needs to be:
> a) backwards compatible, so you can still accept plain, unsigned,
> unencrypted connections from the coffee maker in the corner
Yep. Perhaps tagged as "insecure" in Xymon, to remind the sysadmin to go
do the needful for that client, and add the keys.
What keeps popping into my head is "STARTTLS", which is used by several
protocols (eg SMTP) to attempt opportunistic encryption, falling back to
plaintext on the basis that some encryption is better than none.
> b) simple to install and maintain, and robust. If it's too hard to install
> or maintain, then people won't want to use it.
Yep, and I think backwards compatibility and opportunistic encryption are
probably the key to this. If a sysadmin can roll out encryption to just a
single node, without breaking all of the others, then she's more likely to
try it out, then phase it in. It also allows temporary disabling of the
encryption layer, to help troubleshoot with a tcpdump.
> Possibly we could run the existing 1984 listener in a wrapper which
> handles all the "encryption", and then within xymon itself, there are a
> small number of additional "types" that can be sent from client, or
> returned to the client to add the additional certificate/signing stuff.
> So, if we can easily add a command line flag that says enable "STARTSSL"
> command, that in itself would be a huge step forward. IMAP and POP and I
> think SMTP can all do this, so it should be a pretty well defined protocol,
> and should be well supported in common libraries.
Ah, I read your mind it seems. And I was also thinking about the likelihood
that we can borrow code from postfix or dovecot or whatever, which all
accept STARTTLS and then hand off to an encryption layer.
Looks like modern versions of the openssl client have a "-starttls"
option. Worst case, I reckon the Xymon client could do some file-handle
manipulation magic on execution of the openssl binary. Server side would
probably need C coding.
A separate TCP port for BB-over-TLS (ie not 1984 multiplexed with STARTTLS)
is probably easier to prototype, because we can do this with stunnel.
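A prototype along those lines might look like the following stunnel
configuration (the TLS port 1985 is an arbitrary, hypothetical choice, and
the certificate paths and hostname are placeholders):

```ini
; /etc/stunnel/xymon-server.conf -- on the Xymon server:
; terminate TLS on a new port and forward plaintext to xymond on 1984.
cert = /etc/stunnel/xymon-server.pem

[xymon-tls]
accept  = 1985
connect = 127.0.0.1:1984

; /etc/stunnel/xymon-client.conf -- on each client (separate file):
; the local plaintext port 1984 becomes TLS to the server.
; client  = yes
;
; [xymon-tls]
; accept  = 127.0.0.1:1984
; connect = xymonserver.example.com:1985
```

Existing clients then just point XYMSRV at 127.0.0.1 and need no other
changes, which keeps the experiment reversible.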
> True, there are many Meraki customers requesting IPv6, I'm just
> acknowledging that it is a much slower process than it seemed to be
> advertised. It originally seemed like we would all be forced to IPv6 within
> a couple of years.
Yep, that's for sure, and it really hasn't happened.
Although I do think that the crunch has come, people are all of a
sudden realising that they can turn on NAT and free up oodles of address
space in the process. Or use IPv6 in their management networks and free up
address space for their data path. I think it's a bit like the Y2K crisis
that never happened - we just didn't feel it because of all the people
working behind the scenes to fix the bugs before the planes fell out of
the sky. Slightly off topic now.
On thinking about this, it occurs to me that perhaps the simplest course of
action is to provide an alternative client data reporting path to go via
https. This is already supported by the xymoncgimsg.cgi script which simply
runs within the server web software. On the client side, a wget/curl
command can be used to create a POST message containing the client data.
The implementer rolling this out can choose to do authentication with
certs, either server-side or client-side or both, or neither. Or just use
http rather than https. Or to use username and password. Curl and wget
also support proxying, which is a bonus.
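The client side of that reporting path can be sketched in Python's stdlib
(the URL and hostname are hypothetical; where xymoncgimsg.cgi actually
lives depends on the server's web configuration):

```python
import ssl
import urllib.request

# Hypothetical URL -- wherever the server's web software exposes
# xymoncgimsg.cgi. Adjust for the real installation.
XYMON_URL = "https://xymon.example.com/xymon-cgi/xymoncgimsg.cgi"

def build_report(url, client_data, cafile=None, client_cert=None):
    """Prepare one client report as an HTTPS POST.

    client_cert, if given, is a (certfile, keyfile) pair enabling
    mutual-TLS authentication of the client; cafile pins the CA that
    signed the server's certificate.
    """
    ctx = ssl.create_default_context(cafile=cafile)
    if client_cert:
        ctx.load_cert_chain(*client_cert)
    req = urllib.request.Request(url, data=client_data, method="POST")
    req.add_header("Content-Type", "text/plain")
    return req, ctx

req, ctx = build_report(XYMON_URL, b"client myhost.linux linux\n")
# urllib.request.urlopen(req, context=ctx)  # actual send; needs a live server
```

Dropping the cert/auth arguments gives the plain opportunistic case, so the
same client code covers every point on the security spectrum mentioned
above.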
What's missing from this is key (client cert) management. This could be
handled by ssh/scp, or by Xymon's built-in file deployment system, or via
the client message response, or by the client node periodically fetching
its cert from a URL based on its FQDN that's restricted by client cert or
username/password or ACL or whatever, and encrypted by TLS.
I've never used xymoncgimsg.cgi for accepting client messages. The man page
recommends using xymonproxy instead, but doesn't say why. Could this be a
performance bottleneck? I know others on this list have used it, and
perhaps can comment.