ARGON cryptography

Ok, so we now have some security rules about which nodes may mirror which entities onto which disks, and which encryption algorithm will be needed for any given communication across the network. And although every entity, every network-accessible interface, and every individual request has its own individually specified protection requirements, general advances in brute-forcing hardware, weaknesses being found in individual cryptosystems, and the arrival of new cryptosystems can be handled on a cluster-wide basis by a security administrator.

How do we actually implement the encryption for network communications? And how do we authenticate clients and servers to each other?

Within clusters, nodes can authenticate to each other using shared secrets. In order to prevent lightly-trustworthy nodes with low MT(N) from masquerading as highly trusted nodes, there should be a shared secret for each unique value of MT(N) used within the cluster; every node gets a mirror of the shared secret for its MT(N) level and all those beneath it. Only the most trusted nodes in the cluster hold all the shared secrets. The most trusted node in the system should be responsible for generating new shared secrets and distributing them periodically, or on demand when a shared secret is considered potentially breached. Applied Cryptography details algorithms by which two mutually untrusting nodes, connected by an untrusted network, can prove to each other that they have a secret in common. These same shared secrets can also seed session key generation algorithms, which create keys for the actual cryptographic algorithms required by the invocations.
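A minimal sketch of how such a challenge-response proof and session key seeding might look, assuming HMAC-SHA256 as the proof function (the actual algorithms would be chosen per the security requirements above; all names here are illustrative, not part of ARGON):

```python
import hashlib
import hmac
import os

def prove(secret: bytes, challenge: bytes) -> bytes:
    """Respond to a challenge by MACing it with the shared secret,
    proving possession of the secret without revealing it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def mutual_auth(secret_a: bytes, secret_b: bytes) -> bool:
    """Each node challenges the other with a fresh nonce; both proofs
    must verify for authentication to succeed."""
    nonce_a = os.urandom(16)
    proof_b = prove(secret_b, nonce_a)   # computed by node B
    ok_b = hmac.compare_digest(proof_b, prove(secret_a, nonce_a))
    nonce_b = os.urandom(16)
    proof_a = prove(secret_a, nonce_b)   # computed by node A
    ok_a = hmac.compare_digest(proof_a, prove(secret_b, nonce_b))
    return ok_a and ok_b

def session_key(secret: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Seed a session key from the shared secret plus both nonces,
    so each session gets a fresh key."""
    return hmac.new(secret, b"session" + nonce_a + nonce_b,
                    hashlib.sha256).digest()
```

Because the nonces are fresh per session, a captured proof cannot be replayed later, and both sides derive the same session key without it ever crossing the wire.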

Between clusters, it's more interesting. There can be no shared secrets on a scalable Internet. As such, we have to turn either to public-key cryptography, or to trusted third parties.

I don't like trusted third parties. There is no third party that everyone on the Internet would agree to trust. TTP schemes like Kerberos have their place on LANs.

So it has to be public key. But then how do we find the public key of the remote party in a communication, without falling back on yet another trusted third party - either a certificate authority, as in the SSL model, or a key server, as in the PGP model?

I decided to go for a rather novel approach: make the public key part of the identity of a cluster. Cluster IDs are already not required to be human-readable; they contain a list of one or more physical network addresses (e.g., IP addresses) of nodes in the cluster, and are automatically updated when the cluster gains or loses nodes. So little harm is done by putting a few large integers in there too.

Every node in the cluster has a mirror of the cluster's private key, along with a history of past public and private key pairs; so if a client contacts the cluster with an outdated cluster ID, the latest public key and node list can be returned to the client, signed with the private key matching the public key that client has.

As you can immediately see, there is then no concept of the maximum trust level of a node when communicating between clusters; I didn't see a reasonable way of enforcing it. If you are worried about your cluster's external interfaces being handled by nodes within your cluster that have been exploited, consider that only nodes configured to accept requests from outside the cluster (so neither your desktop PCs nor your backend servers) will be listed in the cluster ID as points of contact. And if you do need a broad range of different trust levels in your publicly accessible nodes, you can just split them into two separate clusters, each with its own private key, so they cannot pretend to be each other.

A MERCURY handle contains within it the cluster ID, the serial number of the handling entity within the cluster, the serial number of the handler within the entity, the optional persona field described on the MERCURY pages, and a copy of the server's minimum security requirements. This copy may be out of date, which is handled as described below.
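For illustration, the handle's contents might be modelled like this (field names and types are my guesses for exposition, not a wire format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SecurityRequirement:
    min_protection_level: int  # minimum protection level; 0 = no encryption needed
    require_signed: bool       # server insists requests be signed by the
                               # client's cluster, with cluster ID and entity
                               # serial number attached

@dataclass(frozen=True)
class MercuryHandle:
    cluster_id: bytes                  # node address list plus public key(s)
    entity_serial: int                 # the handling entity within the cluster
    handler_serial: int                # the handler within the entity
    persona: Optional[bytes]           # optional persona field (see MERCURY pages)
    requirement: SecurityRequirement   # possibly-stale copy of the server's
                                       # minimum security requirements
```

The `requirement` field is the copy that may be out of date, which is what drives the reject-and-retry behaviour described below.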

The security requirement consists of the minimum security protection level required, as mentioned above, as well as a flag that is set if the server will only accept requests which are signed by the client's cluster and have the client's cluster ID and entity serial number attached, to prove the identity of the client. The client's cluster is considered trusted not to lie about which entity within it is responsible for the communication, since if the cluster is violated, any entity within it could then trivially be violated anyway.

If the security protection level is non-zero, then the server ends up having to authenticate itself to the client, since session key generation is done by the client choosing a session key and encrypting it with the server's cluster's public key, so only the correct server can decode it.
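A toy sketch of that exchange, using textbook RSA with laughably small parameters purely to show the shape of the protocol (a real deployment would use properly padded 2048-bit keys; nothing here is secure as written):

```python
import random

# Toy textbook-RSA keypair for the server's cluster (far too small for real use!)
p, q = 61, 53
n = p * q   # public modulus, part of the cluster ID
e = 17      # public exponent, part of the cluster ID
d = 2753    # private exponent, mirrored only within the cluster

def client_choose_and_wrap_session_key():
    """Client picks a session key and encrypts it with the server's
    cluster public key from the cluster ID."""
    session_key = random.randrange(2, n - 1)
    wrapped = pow(session_key, e, n)
    return session_key, wrapped

def server_unwrap(wrapped):
    """Only a holder of the cluster private key can recover the session key,
    which is what authenticates the server implicitly."""
    return pow(wrapped, d, n)
```

Note the implicit authentication: an impostor without the private key cannot recover the session key, so it cannot decrypt any of the subsequent traffic.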

If a client has an outdated copy of the handle, the security requirement will have either increased or decreased. If it has decreased, the client will merely use a stronger encryption algorithm than necessary, or unnecessarily sign the request, which is not a problem. If the levels required by the server have increased, the server may reject the request with a message detailing its current security requirements for that handle. The client can then update its local copy of the handle with the new requirements, and try again.
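The reject-and-retry dance, folding in the client's own policy of using the stricter of R(C) and R(I), might be sketched like so (the server and message shapes are stand-ins for exposition, not the real MERCURY protocol):

```python
from dataclasses import dataclass

@dataclass
class Response:
    kind: str
    current_requirement: int = 0
    body: str = ""

class FakeServer:
    """Stands in for the remote cluster; rejects under-protected requests
    with its current requirements attached."""
    def __init__(self, required_level):
        self.required_level = required_level
    def send(self, offered_level, request):
        if offered_level < self.required_level:
            return Response("rejected_requirements", self.required_level)
        return Response("ok", body="done")

def invoke(cached_level, client_level, request, server):
    """Use the stricter of the client's own policy R(C) and the cached
    handle's R(I); on rejection, refresh the cached copy and retry once."""
    resp = server.send(max(cached_level, client_level), request)
    if resp.kind == "rejected_requirements":
        cached_level = resp.current_requirement  # update local copy of handle
        resp = server.send(max(cached_level, client_level), request)
    return cached_level, resp
```

The server never rejects a request for being too heavily protected, so a stale-but-too-strict handle costs nothing; only a stale-and-too-weak one costs a round trip.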

Of course, if the client itself feels it needs a higher security level than that requested in the handle issued by the server - R(C) > R(I) - then it will use the security level it desires, rather than what the server expects. The server will not reject a request for being too heavily encrypted.

It has also occurred to me that, to help prevent DoS attacks and spamming, it would be interesting to allow servers to require hashcash attached to requests. A handle may also specify a minimum number of bits of hashcash required for a request to be honoured; if insufficient bits are provided, the server again responds with a rejection message detailing the problem, so the client can stand corrected and try again.
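Checking hashcash is cheap for the server - one hash - while minting it costs the client a brute-force search that grows exponentially with the required bits. A sketch, assuming SHA-256 and counting leading zero bits (the actual stamp format would differ):

```python
import hashlib
import os

def zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a digest."""
    return len(digest) * 8 - int.from_bytes(digest, "big").bit_length()

def mint(request: bytes, bits: int) -> bytes:
    """Client: brute-force a nonce whose hash over the request has
    at least `bits` leading zero bits (expected cost ~2**bits hashes)."""
    while True:
        nonce = os.urandom(8)
        if zero_bits(hashlib.sha256(request + nonce).digest()) >= bits:
            return nonce

def check(request: bytes, nonce: bytes, bits: int) -> bool:
    """Server: a single hash to verify; on failure, reject with the
    current requirement so the client can stand corrected."""
    return zero_bits(hashlib.sha256(request + nonce).digest()) >= bits
```

Binding the stamp to the request body stops an attacker from minting one stamp and replaying it across many requests.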


6 Comments

  • By Ben, Mon 9th Aug 2004 @ 10:01 am

    "... if a client contacts with an outdated cluster ID, then the latest public key and node list can be returned to the client, signed with the private key matching the public key that client has."

    As you state it, there is no way to recover gracefully from a key compromise. You have to update all cluster IDs on every potential client. In addition, key revocation would be difficult -- you can't just let the server say "key revoked, use another".

    In effect, private keys need to be kept secret forever, otherwise you subject yourself to man-in-the-middle attacks. This is tricky if you want to keep the private keys on servers connected to the outside world, which you need to do.

    Maybe you could consider a hierarchy of keys, with a well-known key-signing key. This is only used to sign cluster keys, and is kept offline and very secure. This isn't too bad, as you only need to use it when you update cluster keys. So you would distribute the master key to all clients, and let the server update them with the latest cluster key whenever required. Perhaps the master key should be the one in the cluster ID?

    An interesting question to ask is how would the scheme cope if you wanted to update the keys every month?

  • By alaric, Mon 9th Aug 2004 @ 11:07 am

    Good point, I must admit I rather skimmed over the public key change management in my head while thinking about it.

    One needs the cluster public key - that nodes have the private counterpart of - represented in the cluster ID, since nodes need to actively sign things with it to prove their identity; but since that key is, as you say, compromisable, I think there's no choice but to have at least a hash of the top secret cluster master key present in the ID as well. Then to send a client a new cluster public key, the master public key could be included along with the signed cluster public key, and the master public key can be checked against the hash.
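    A hedged sketch of that check, as I picture it - the hash algorithm and the signature verification function are stand-ins, since neither is pinned down yet:

```python
import hashlib

def accept_new_cluster_key(id_master_hash: bytes,
                           offered_master_key: bytes,
                           signed_cluster_key: bytes,
                           verify_signature) -> bool:
    """Accept a new cluster public key only if (a) the offered master
    public key matches the hash pinned in the cluster ID, and (b) the
    new cluster key's signature by that master key checks out.
    `verify_signature` is a placeholder for a real signature check."""
    if hashlib.sha256(offered_master_key).digest() != id_master_hash:
        return False  # master key doesn't match the hash baked into the ID
    return verify_signature(offered_master_key, signed_cluster_key)
```

    The point is that the hash in the cluster ID acts as a trust anchor: even though the full master public key travels with the update, a forged one can't match the pinned hash.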

    How does that sound?

  • By Ben, Mon 9th Aug 2004 @ 2:05 pm

    It sounds unnecessarily complex, but would work. But you now have two elements, the keys and the hash.

    Why not just have the master public key, and get the server to return a key which is signed by that key? You can always cache it for efficiency, in your scheme you'll need to do that kind of thing anyway.

    I feel slightly dubious about using hashes of keys, as it reduces the effective size quite considerably, and a lot rests on the security of the hash algorithm. These haven't been studied as much as crypto algorithms.

    But having said all that, you've still got the revocation and expiry problems. I can see that adding a key to the ID neatens up a lot of things, but it's flying in the face of established key management principles.

  • By alaric, Mon 9th Aug 2004 @ 2:37 pm

    The problem is, established key management principles have yet to be successfully scaled to the Internet. SSL uses a horrible system of trusted third parties, while PGP uses a network of introducers that would prevent anyone from accessing a resource that hadn't been personally referred to them by somebody. Established key management principles tend to have to deal with the current structure of the Internet, where resources are identified by short human-readable strings such as URLs or email addresses, with no convenient way of attaching a public key.

    There is a mention in Applied Cryptography somewhere of systems that use the address of a resource as a key, or embed a key in the address, but I can't find it now :-/ It's little more than a footnote anyway.

    "Why not just have the master public key, and get the server to return a key which is signed by that key?"

    As in, before communicating with any server, you first do a quick key-getting exchange with it? If that is what you mean, then that's sort of what I'm proposing anyway - the client tries, using the key already given to it in the address. If it's the wrong key, the server notes this and issues it a new public key, signed by the master key. The client updates its copy of the address. Indeed, should a client (somehow) come across a cluster ID with no public key in it, it could just ping the cluster with a null key, and then be given the most up-to-date key. The lazy update protocol used for this, as for when the list of IPs of nodes in a cluster changes, works by the cluster ID having a serial number that is incremented when the node list or the public key changes. This serial number is sent in every request from the client, and the server responds with up-to-date information if it needs to.
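    The lazy update might look like this on the server side (data shapes are illustrative only):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClusterInfo:
    serial: int          # bumped whenever the node list or public key changes
    node_ips: tuple      # current publicly contactable nodes
    public_key: bytes    # current cluster public key

def handle_request(client_serial: int, current: ClusterInfo, payload):
    """Process the request; if the client's cluster-ID serial is stale,
    piggyback the fresh node list and public key on the reply (signed,
    not shown here, with the private key matching the key the client holds)."""
    reply = {"result": f"handled {payload!r}"}
    if client_serial < current.serial:
        reply["update"] = current
    return reply
```

    Because the update rides along on an ordinary reply, no extra round trip or session state is needed; clients with current IDs pay nothing.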

    Storing a hash of the master public key rather than the master public key itself is just intended as a measure to keep cluster IDs down in size! 2048 bits = 256 bytes, and UDP packets can rarely be larger than 1500 bytes. I'd like an ARGON directory lookup protocol to be able to return an interface handle (cluster node IP list + public key + serial number + entity number + interface number + persona field + master public key or hash thereof) in a single UDP packet, like DNS does, to avoid it having to maintain costly session state for such a lightweight service.
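    Back-of-envelope arithmetic for that size budget (all field sizes except the 2048-bit keys = 256 bytes are my guesses, just to show the shape of the saving):

```python
# Rough size budget for an interface handle in one UDP datagram
fields = {
    "node_ip_list (4 x IPv4)": 16,
    "cluster_public_key": 256,   # 2048-bit key
    "serial_number": 4,
    "entity_number": 4,
    "interface_number": 4,
    "persona": 64,
    "master_public_key": 256,    # full 2048-bit master key...
}
full = sum(fields.values())
hashed = full - 256 + 32         # ...versus a 256-bit hash of it
print(full, hashed)              # 604 and 380 - both fit in a 1500-byte
                                 # datagram, but hashing leaves more headroom
```

    So with these guesses either form fits, but the hash leaves a couple of hundred extra bytes for longer node lists or bigger keys later.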

    Revocation isn't really doable without some kind of trusted place to store revocations... I'm dead set against trusted third parties for this stuff; humanity needs a way to do server authentication without them. Maybe a key expiry time needs to live in the cluster ID, too, and the cost of an exploited key will be that people who can spoof your nodes on the Internet can pretend to be them until the key expires.

    If they could steal the private key from your nodes, though, maybe they could just directly infiltrate them anyway; perhaps effort is better spent making the nodes hard to infiltrate than making horrible compromises to shut the door once the horse has bolted 😉

  • By Ben, Mon 9th Aug 2004 @ 6:44 pm

    Crypto is about planning for the worst case. Having no revocation seems a bit troublesome.

    Hashes... yes, it will reduce the size of the UDP packet, but at the expense of a potential weakness. You don't quite know what will happen, and message digests are almost an unexplored frontier of crypto.

    You can attach a key to a URL by putting a record in the DNS. As long as you trust DNS, that is. There are no easy answers to key management. It's often said that in crypto, the algorithms are easy, but it's the key management on which your scheme rests.

  • By alaric, Mon 9th Aug 2004 @ 7:15 pm

    "Crypto is about planning for the worst case. Having no revocation seems a bit troublesome."

    What supports revocation these days? I know that PGP keyservers can contain revocation certs, but you'll only see them if you check the keyserver - and most people only seem to contact the keyserver to fetch a key when they first want to communicate with somebody they've not got a key for.

    I don't think TLS does revocation in practice, either; I've seen mention of revocation certificates in openssl, but I've never seen a mention of how they are broadcast to the world when a TLS identity cert is compromised!

    How could we do revocation? By specifying, in the cluster ID, the contact details of a third party that should be checked with to see if the key you currently hold is outdated? How do we authenticate the third party? We still have the same problem...

    "Hashes... yes, it will reduce the size of the UDP packet, but at the expense of a potential weakness. You don't quite know what will happen, and message digests are almost an unexplored frontier of crypto."

    One can base hash functions on well-studied crypto: use a trusted block cypher as your hash function. Run it in a feedback mode across the entire message, then take the final output block as your hash - and there are many other such tricks.

    Bear in mind that all digital signing systems I've seen used in anger have relied on a hash function! Are you sure they're that untrustworthy?

    "You can attach a key to a URL by putting a record in the DNS. As long as you trust DNS, that is."

    If you can trust DNS, then you might as well trust everything anyway 🙂

    "There are no easy answers to key management. It's often said that in crypto, the algorithms are easy, but it's the key management on which your scheme rests."

    Yep. I don't think there's a perfect solution anywhere - it's all tradeoffs. I plan to get rid of having to rely on third parties (which are a centralised high-risk attractive target for the forces of evil, and are performance bottlenecks, and there isn't one everyone will trust), at the cost of revocation not happening.

    For a start, it won't be all that easy to leak the cluster private key. The system can guard it very jealously; there's no need for even an administrator to be allowed to read it. Perhaps nodes below a certain level of physical-security trust might only keep it in RAM, and have to ask more trusted nodes for a copy of the key every time they boot up. The key only needs to be issued at all to nodes that are actually publicly visible. The page of RAM containing it can be locked so it's never swapped out. Indeed, the private key and the code that does encryption and signing with it might be put into a separate address space from the rest of the system, communicating via UNIX sockets.

    Sites where security really matters are welcome to attach military-style separate crypto hardware to the server nodes, with the key buried inside in tamper-proof storage, with a self-destruct handle on the front and an armed guard.

    Secondly, is stealing the cluster private key likely to be the weakest point? Most of the things you'd have to do to steal it are harder than just taking control of one or more nodes. The only times I've heard of TLS certificates needing revocation came down to social engineering attacks ("Hi, Verisign, it's Bill Gates here... yeah, we need a new key pair, CN=*.microsoft.com... bill to the usual address, OK? Oh, can you email it to my, uh, home email address? billg20392@hotmail.com if you could?").

    The way I plan to handle it, the only interaction the security administrator would have with the key management system is to provide the disk with the master private key on it (or, rather, the disk containing a string of bits the size of the master private key which, when XORed with another such string of bits kept in the cluster, produces the master private key - or one of those any-N-of-the-M-keys-will-do thingies) to cause a new cluster key to be generated. One would have to try hard to extract the actual cluster private key, by using low-level debugging tools, thus reducing the set of people who know how to do it that could be socially engineered, and reducing the chance of accidentally leaving a copy of it lying around anywhere (no matter how hard you try).
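    That XOR splitting is simple to sketch - each share on its own is statistically independent of the key, so a single stolen disk reveals nothing, and all shares must be combined to recover it:

```python
import os

def split(secret: bytes, shares: int = 2):
    """Split a key into `shares` pieces; every piece must be XORed
    together to recover it. Any subset short of all of them is just
    uniform random noise."""
    parts = [os.urandom(len(secret)) for _ in range(shares - 1)]
    last = secret
    for p in parts:
        last = bytes(a ^ b for a, b in zip(last, p))
    return parts + [last]

def recombine(parts):
    """XOR all the shares back together to recover the key."""
    out = bytes(len(parts[0]))
    for p in parts:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out
```

    (The any-N-of-M variant mentioned above would need a threshold scheme such as Shamir's secret sharing rather than plain XOR, which requires all shares.)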

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales