Opened 7 years ago

Last modified 22 months ago

#5968 new enhancement

Improve onion key and TLS management

Reported by: mikeperry
Owned by:
Priority: High
Milestone: Tor: unspecified
Component: Core Tor/Tor
Version:
Severity: Normal
Keywords: tor-relay, path-bias, mike-0.2.5, key-theft
Cc: nickm, arma, rransom, dfc@…, isis
Actual Points:
Parent ID: #5456
Points: 5
Reviewer:
Sponsor:

Description (last modified by mikeperry)

As a best practice behavior, a relay should check that the onion key it tried to publish is actually the one it sees in the consensus in which it appears.
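The self-check described here could be sketched as follows. This is an illustrative helper, not tor's actual code: the function name and the choice of SHA-256 are assumptions, and a real implementation would parse the relay's entry out of a signed consensus document.

```python
import hashlib

def onion_key_matches_consensus(published_key_pem: bytes,
                                consensus_key_pem: bytes) -> bool:
    """Return True iff the onion key this relay tried to publish is
    byte-for-byte the one that appears for it in the consensus.
    (Hypothetical helper; real tor would compare digests taken from
    parsed, signature-checked directory documents.)"""
    return (hashlib.sha256(published_key_pem).digest()
            == hashlib.sha256(consensus_key_pem).digest())
```

A relay would run this check each time a new consensus appears in which it is listed, and alarm/republish on a mismatch.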

The onion key should also be what authenticates the TLS key (rather than the identity key, as it is now).

This would prevent some vectors for exploiting identity key theft, where a non-targeted upstream MITM uses a relay's identity to impersonate it in order to execute a tagging attack (#5456).

Child Tickets

Change History (26)

comment:1 Changed 7 years ago by mikeperry

Description: modified (diff)
Parent ID: #5563 → #5456

Wrong parent.

comment:2 Changed 7 years ago by nickm

Milestone: Tor: unspecified

comment:3 Changed 7 years ago by nickm

The background assumption here is apparently an attacker who can steal identity keys, but who can't/won't mess with running servers otherwise, or who is likelier to get caught if they do.

This part makes good sense, and requires no spec change:

As a best practice behavior, a relay should check that the onion key it tried to publish is actually the one it sees in the consensus in which it appears.

This part is probably not feasible:

The onion key should also be what authenticates the TLS key (rather than the identity key, as it is now).

(because onion keys are not signing keys)

comment:4 Changed 6 years ago by nickm

Keywords: tor-relay added

comment:5 Changed 6 years ago by nickm

Component: Tor Relay → Tor

comment:6 Changed 6 years ago by mikeperry

Milestone: Tor: unspecified → Tor: 0.2.5.x-final
Summary: Improve onion key management → Improve onion key and TLS management

What if we put a hash of the TLS cert we're using in the current microdescriptor? Clients could then check that hash during/after TLS handshake, and simply close+log any mismatches. It seems like we can check the hash after establishment without issue, so long as it is done before we try to use the connection for circuits.
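The client-side check proposed here could look like the sketch below. The helper name, the use of SHA-256 over the DER-encoded certificate, and the hex encoding are all illustrative assumptions, not an existing tor API:

```python
import hashlib
import logging

def check_link_cert(der_cert: bytes, expected_sha256_hex: str) -> bool:
    """Compare the certificate seen in the TLS handshake against the
    hash published in the directory document.  On mismatch, log and
    return False; the caller closes the connection before using it
    for any circuits.  (Sketch under assumed field/encoding choices.)"""
    actual = hashlib.sha256(der_cert).hexdigest()
    if actual != expected_sha256_hex:
        logging.warning("TLS cert hash mismatch: got %s, expected %s",
                        actual, expected_sha256_hex)
        return False
    return True
```

As the comment notes, running this after handshake completion is fine so long as it happens before the connection carries circuit traffic.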

Then, so long as relays verify that what they attempt to publish is what gets signed by the authorities in the consensus, we should have effectively removed the ability for identity key theft to allow TLS compromise without the additional theft of the consensus keys.

I am putting this to 0.2.5.x because it seems simple enough, and would be a huge improvement if we can authenticate TLS in this way. If no one else is going to take it, I suppose I could try.

Would we be opposed to placing this hash in the microdescriptor? Is there a better place for it that clients can still somehow see/use?

comment:7 Changed 6 years ago by nickm

If the attacker can steal the identity key for a server, they could publish their own descriptors to the authorities, containing their own TLS cert hash and IP. The server operator might notice, but the users wouldn't.

Sticking this into the _micro_descriptors seems pretty heavyweight. Maybe I don't understand the attack, though. If the attacker can steal the identity key for a server, it seems to me that they could also steal the onion key, replace the server's Tor software, trojan the server in some other way, publish descriptors with a different onion key, and so on. I don't think that "identity key compromise" is something that a server can really recover from in our design.

comment:8 Changed 6 years ago by nickm

(Incidentally, if the attacker steals the identity key but doesn't have the right TLS cert, it will fail at MITMing any connection that uses AUTHENTICATE cells from the client. So you can detect whether somebody's doing this today by making connections to a bunch of servers from an IP that isn't recognized as a server, and then trying to AUTHENTICATE to them.)

comment:9 in reply to:  7 Changed 6 years ago by mikeperry

Replying to nickm:

If the attacker can steal the identity key for a server, they could publish their own descriptors to the authorities, containing their own TLS cert hash and IP. The server operator might notice, but the users wouldn't.

What I want to remove is the ability for someone to demand a guard's identity key and then target specific users with interception hardware operating independently of that relay. The reason I want to prevent this specific case is that it's an adversary that can make money subverting Tor's security through the sale of such devices, which means the arms race escalates faster. Such adversaries will be incentivized to subvert/weaken legal systems to support their existence. We should be sure to head them off before they begin to exist for this use case, I think.

If TLS certs are also validated through the consensus somehow, we could prevent such devices from being possible without persistent compromise, which I do think is a different beast. See below.

Sticking this into the _micro_descriptors seems pretty heavyweight. Maybe I don't understand the attack, though. If the attacker can steal the identity key for a server, it seems to me that they could also steal the onion key, replace the server's Tor software, trojan the server in some other way, publish descriptors with a different onion key, and so on. I don't think that "identity key compromise" is something that a server can really recover from in our design.

In the future, secure boot and/or a read-only runtime could protect tor relays against many classes of such adversaries, except one: some guy with a gun who demands you hand over your key.

With a frequent enough TLS key rotation, we can make such requests impractical, or at least bounded by what minimal restrictions on such repeated requests may be in place.

comment:10 Changed 6 years ago by mikeperry

Aha! You only have 3 guards, and Directory Guards means you now only need to make exactly that many TLS connections as a client.

This means we could include the TLS hash only in the full descriptor, and clients could then simply fetch the full descriptor for their guards.

That seems much less overhead, right?

comment:11 in reply to:  10 ; Changed 6 years ago by nickm

Replying to mikeperry:

Aha! You only have 3 guards, and Directory Guards means you now only need to make exactly that many TLS connections as a client.

This means we could include the TLS hash only in the full descriptor, and clients could then simply fetch the full descriptor for their guards.

Fetch from whom? If they get the descriptor from the party they assume is their guard, it could be a fake one signed by the adversary (if the adversary has compromised the guard's identity key). If they get it directly from some other party, they will be leaking who their guards are, *AND* that party could give them a one-off fake one, or an old one, or whatever. (The defense against getting an old/weird descriptor is checking its digest against the one listed in the consensus. But the microdescriptor consensus doesn't list descriptor digests.)

comment:12 in reply to:  11 Changed 6 years ago by mikeperry

Replying to nickm:

Replying to mikeperry:

Aha! You only have 3 guards, and Directory Guards means you now only need to make exactly that many TLS connections as a client.

This means we could include the TLS hash only in the full descriptor, and clients could then simply fetch the full descriptor for their guards.

Fetch from whom? If they get the descriptor from the party they assume is their guard, it could be a fake one signed by the adversary (if the adversary has compromised the guard's identity key). If they get it directly from some other party, they will be leaking who their guards are, *AND* that party could give them a one-off fake one, or an old one, or whatever. (The defense against getting an old/weird descriptor is checking its digest against the one listed in the consensus. But the microdescriptor consensus doesn't list descriptor digests.)

Oh damnit. I did not realize that clients stopped downloading the full consensus in favor of the microdesc-only one. Moreover, I also didn't know the microdescriptors in no way authenticated full descriptors. That sucks.

I guess we're back to deciding if the overhead is worth it for this or not? Looks like it comes out to about 3% overhead for including a 256bit hash for each relay in the current cached-microdescs file.
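The ~3% figure can be sanity-checked with back-of-the-envelope arithmetic. The relay count and file size below are illustrative assumptions, not measurements:

```python
import base64

# A 256-bit hash is 32 raw bytes; base64 (the encoding directory
# documents use for binary fields) pads that to 44 characters.
encoded_len = len(base64.b64encode(b"\x00" * 32))  # 44

# Illustrative figures: on the order of 7000 relays, and a
# cached-microdescs file of roughly 10 MB.
num_relays = 7000
cached_microdescs_bytes = 10 * 1024 * 1024

# One hash line (44 chars + newline) per relay.
overhead = num_relays * (encoded_len + 1) / cached_microdescs_bytes
# -> about 0.03, i.e. roughly the 3% quoted above
```

Note this estimates *uncompressed* overhead; as the next comments show, the compressed picture is much worse.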

comment:13 in reply to:  8 Changed 6 years ago by mikeperry

Replying to nickm:

(Incidentally, if the attacker steals the identity key but doesn't have the right TLS cert, it will fail at MITMing any connection that uses AUTHENTICATE cells from the client. So you can detect whether somebody's doing this today by making connections to a bunch of servers from an IP that isn't recognized as a server, and then trying to AUTHENTICATE to them.)

I missed this comment initially, and I'm still a little confused here. Can you explain in more detail how this would fail? Is the idea to test, as a relay, for MITM of your outgoing TLS connections?

comment:14 Changed 6 years ago by mikeperry

I think it would be wise to include a hash of the full descriptor in the microdesc or the microdesc consensus, and include a hash of the extrainfo descriptor in the descriptor.

TLS authentication aside, it may prove very useful in the future to be able to have an authentication chain from the microdesc consensus to a specific node's full descriptor set, that way we can more easily deploy this mechanism or any other mechanism that requires efficient access to extended information about a small set of nodes (like your Guards, or in the case of a relay: yourself).

Preserving that ability for the general case is definitely worth the 3% overhead, IMO.
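The authentication chain comment 14 describes could be sketched as below. The `full-desc-digest` field name is a hypothetical placeholder (no such microdescriptor field exists in this proposal's state), and the document bodies are stand-ins:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Build the chain bottom-up: each document embeds the digest of the
# one below it, so a signed microdesc consensus (not shown) would
# transitively authenticate every lower layer.
extrainfo  = b"extra-info somerelay ...\n"
descriptor = (b"router somerelay ...\n"
              b"extra-info-digest " + sha256_hex(extrainfo).encode() + b"\n")
microdesc  = (b"onion-key ...\n"
              b"full-desc-digest " + sha256_hex(descriptor).encode() + b"\n")

# A client verifies top-down: recompute each digest and check that it
# matches what the higher-level document committed to.
assert sha256_hex(descriptor).encode() in microdesc
assert sha256_hex(extrainfo).encode() in descriptor
```

This is exactly the "efficient access to extended information about a small set of nodes" property: fetching a guard's full descriptor becomes safe because its digest is pinned from above.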

comment:15 Changed 6 years ago by nickm

I'm not sure where you're getting 3% overhead. I just wrote a script to add an extra digest for every router to the microdesc consensus; it made the compressed consensus 27% longer.

comment:16 Changed 6 years ago by mikeperry

I measured it pre-compression. I guess hashes don't compress well :/.

Even still, I really think this is a wise move.
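The gap between the ~3% raw figure and the 27% compressed figure follows from digests being effectively random, hence incompressible, while the rest of the consensus compresses very well. A rough demonstration (the line format and counts are illustrative, not real consensus data):

```python
import os
import zlib

# Consensus-like text is highly repetitive and compresses very well.
text = b"r relaynickname ABCDEF 10.0.0.1 9001 0\n" * 1000
# Digests look like random bytes and barely compress at all.
digests = os.urandom(32 * 1000)  # one 256-bit digest per "router"

plain_size = len(zlib.compress(text))
with_digests_size = len(zlib.compress(text + digests))
# The compressed output grows by nearly the full 32 KB of digest
# data, so digests dominate the *compressed* size even when they are
# a small fraction of the raw size.
```

So any digest added per-router inflates the compressed consensus by close to the digest's full encoded length.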

comment:17 Changed 6 years ago by mikeperry

Keywords: path-bias added

comment:18 Changed 6 years ago by dfc

Cc: dfc@… added

comment:19 Changed 6 years ago by mikeperry

Keywords: mike-0.2.5 added

comment:20 Changed 6 years ago by mikeperry

Keywords: key-theft added

comment:21 Changed 5 years ago by nickm

Milestone: Tor: 0.2.5.x-final → Tor: 0.2.???

comment:22 Changed 5 years ago by isis

Cc: isis added

comment:23 Changed 2 years ago by teor

Milestone: Tor: 0.2.??? → Tor: 0.3.???

Milestone renamed

comment:24 Changed 2 years ago by nickm

Keywords: tor-03-unspecified-201612 added
Milestone: Tor: 0.3.??? → Tor: unspecified

Finally admitting that 0.3.??? was a euphemism for Tor: unspecified all along.

comment:25 Changed 22 months ago by nickm

Keywords: tor-03-unspecified-201612 removed

Remove an old triaging keyword.

comment:26 Changed 22 months ago by nickm

Points: 5
Severity: Normal