In a better world, every kernel would have a /dev/urandom interface that would block if it hadn't been seeded enough. I hear that some operating systems do this already.
Unfortunately, the world is what it is, and a typical /dev/urandom implementation treats the case where its internal entropy estimator is low exactly the same as the case where it has never gotten high at all.
So we should try to protect ourselves from cases where we start up on systems with limited entropy and /dev/urandom refuses to tell us so. Here's a design sketch:
1. If we're generating an identity key when we haven't generated one before, or if we are starting Tor for the first time with a given DataDirectory, we should first try to read a single byte from /dev/random, and block until we can. This will ensure that the kernel RNG has (by its own lights) reached full entropy at least once, which guarantees cryptographic quality of the rest of the /dev/urandom stream.
2. Optionally, we can keep some RNG output in a file on disk in our data directory, and use it as an extra seed on subsequent Tor boots, regenerating it each time we start Tor. Combined with 1 above, this would protect us -- at least as well as most operating systems protect us -- from ever running our RNG in a low-entropy environment. (Steps 1 and 2 are sketched after this list.)
3. Optionally, we could do the trick in 1 above every time we start Tor.
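A minimal sketch of steps 1 and 2, using plain POSIX open/read/write. The seed-file path, the 32-byte seed length, and the PRNG-mixing step are illustrative placeholders, not Tor's actual implementation:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

#define SEED_LEN 32  /* arbitrary seed size for this sketch */

/* Step 1: block until the kernel RNG has (by its own lights) been
 * fully seeded at least once, by reading one byte from /dev/random. */
int
wait_for_kernel_entropy(void)
{
  unsigned char b;
  ssize_t n;
  int fd = open("/dev/random", O_RDONLY);
  if (fd < 0)
    return -1;
  do {
    n = read(fd, &b, 1);  /* blocks until the pool is initialized */
  } while (n < 0 && errno == EINTR);
  close(fd);
  return n == 1 ? 0 : -1;
}

/* Step 2: carry entropy across boots.  Read any seed saved by the
 * previous run, mix it into the application's PRNG (left abstract
 * here), then write a fresh seed for the next boot. */
int
refresh_seed_file(const char *seed_path)
{
  unsigned char old_seed[SEED_LEN], new_seed[SEED_LEN];
  ssize_t written;
  int fd = open(seed_path, O_RDONLY);
  if (fd >= 0) {
    if (read(fd, old_seed, SEED_LEN) == SEED_LEN) {
      /* ... mix old_seed into the application's PRNG state here ... */
    }
    close(fd);
  }
  /* Draw fresh bytes from /dev/urandom for next boot's seed. */
  fd = open("/dev/urandom", O_RDONLY);
  if (fd < 0)
    return -1;
  if (read(fd, new_seed, SEED_LEN) != SEED_LEN) {
    close(fd);
    return -1;
  }
  close(fd);
  fd = open(seed_path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
  if (fd < 0)
    return -1;
  written = write(fd, new_seed, SEED_LEN);
  close(fd);
  return written == SEED_LEN ? 0 : -1;
}
```

refresh_seed_file() would be called once at startup with a path inside the DataDirectory (the exact filename is unspecified here).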
Doing the blocking thing, with a log message beforehand, in the case where we're generating a long-term secret (relay identity key, hidden service identity key, especially anytime tor-gencert runs) sounds good to me.
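A rough sketch of that behavior, reusing wait_for_kernel_entropy() from the sketch above; log_notice and log_err here are stand-ins for whatever logging interface is available, not a specific API:

```c
/* Before generating a long-term secret (relay identity key, hidden
 * service key, or anything tor-gencert produces), tell the operator
 * what is happening, then block on the kernel RNG. */
log_notice("Reading a byte from /dev/random before generating keys; "
           "this may block for a while on an entropy-starved system.");
if (wait_for_kernel_entropy() < 0)
  log_err("Could not read from /dev/random.");
```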
I would be a bit nervous doing it to clients, since I don't have a good handle on what weird edge cases would result in long waits. (I guess we could argue that long waits are better than silently bad entropy, but I'd hope there's a third even better option there.)
Note that doing it on a fresh datadirectory will mean not doing it for any TBB users, since they come with a datadirectory already. Probably that's the case for many other package / bundle users too.
Keeping a bit of randomness in the datadirectory is also fine with me if we actually think there are platforms out there with crummy entropy.
I have an implementation of (1) in my branch "feature_10676". It needs review.
I'm hoping to do (2) as well, since the "whenever we create a datadir" thing won't actually work.
Keeping a bit of randomness in the datadirectory is also fine with me if we actually think there are platforms out there with crummy entropy.
Historically, the issue isn't likely to be crummy platforms, but crummy platform/installation combinations. Mainline Linux distributions on regular servers will probably not be too bad, for example... but Linuxes running on small flash-only devices will need all the help they can get.
Putting in needs_review. I still want to implement the "carry some entropy forward" part. I'd also like this to be turned on for everybody until the "carry some entropy forward" part is implemented, since 0.2.5 is in alpha and running with an uninitialized RNG is indeed worse than hanging.
Looks like before we can merge this we'll need to go on a quest through different operating systems to see how their /dev/*random works. The blocking trick might turn out to be Linux-only.
It appears that in recent FreeBSD at least the strategy in this patch won't hurt, since all /dev/*random access blocks if the RNG is not seeded. We'd better dig through old manpages to see whether there was a time when this wasn't so.
It appears that on (some?) OpenBSD, we need to use /dev/srandom rather than /dev/random for our blocking implementation.
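If more platforms turn out to differ, the device name could be chosen at compile time; a sketch covering only the difference noted above:

```c
/* Choose the blocking entropy device per platform: OpenBSD's
 * /dev/random has historically not blocked, but /dev/srandom does;
 * elsewhere we fall back to /dev/random. */
#ifdef __OpenBSD__
#define BLOCKING_RANDOM_DEVICE "/dev/srandom"
#else
#define BLOCKING_RANDOM_DEVICE "/dev/random"
#endif
```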
It appears that in recent FreeBSD at least the strategy in this patch won't hurt, since all /dev/*random access blocks if the RNG is not seeded. We'd better dig through old manpages to see whether there was a time when this wasn't so.
Apparently FreeBSD started doing this in version 5.0.
This paper/project is relevant, especially the sections "Weak entropy and the Linux RNG" and "Defenses and Lessons":
"Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices"
Nadia Heninger, Zakir Durumeric, Eric Wustrow, J. Alex Halderman
To appear in Proceedings of the 21st USENIX Security Symposium, August 2012.
I think this relates to cypherpunks' comment above, and I don't have any references of my own handy, but it is not sufficient for /dev/random to think it has (or has had) entropy. It has been shown that the "entropy" generated at bootup by many small, diskless devices, such as consumer-grade wireless routers, tends to be similar between identical units, likely contributing to the problems noted in the factorable.net link in cypherpunks' post.
This is related to, but not identical with, the problem noted in the Linux man page for /dev/random that leads to the recommendation to carry entropy over across boots. So on these limited-entropy devices, you somehow need to wait long enough for real entropy to be generated -- entropy sufficiently different from that generated on other identical devices -- and /dev/random will think it has entropy long before this.
I think this relates to cypherpunks' comment above, and I don't have any references of my own handy, but it is not sufficient for /dev/random to think it has (or has had) entropy.
Right; the point of this patch series is not to try for a user-space solution to all possible or historical kernel breakage. Instead, I'm trying to improve our behavior in the presence of the kinds of kernel breakage where /dev/urandom does not yet have sufficient entropy, and the kernel knows it doesn't, but returns data anyway.
Still, this isn't on-deadline for 0.2.5.
Trac: Milestone: Tor: 0.2.5.x-final to Tor: 0.2.6.x-final; Status: needs_review to needs_revision