This flaw in the Tor protocol makes it possible to re-sign any Hidden Service descriptor with one's own private key. An adversary who does so can upload the re-signed descriptor to the HS Directory and act as a frontend to the hidden services whose Introduction Point data has been re-signed. They can spread the .onion address of their frontend Hidden Service as the real one over the Internet (phishing) and then perform a DoS attack on the chosen Hidden Services, or redirect traffic to replicas they control and perform a Man-in-the-Middle attack.
This is just a brief explanation; for more details, see the attached paper.
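To make the mechanics concrete, here is a rough sketch of the re-signing step. The helpers parse_v2_descriptor(), build_v2_descriptor(), encode_signature() and upload_to_hsdirs() are hypothetical stand-ins for real descriptor handling, and the field names only loosely mirror the v2 rend-spec; the point is which parts of the victim's descriptor are reused unchanged and which are replaced.

```python
# Conceptual sketch only -- not the exact wire format.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

victim_desc = parse_v2_descriptor(open("victim.desc").read())   # hypothetical

attacker_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
attacker_pub_der = attacker_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.PKCS1
)

evil_desc = {
    # Reused verbatim from the victim: nothing in this block is bound to the
    # permanent key, which is exactly the flaw.
    "introduction-points": victim_desc["introduction-points"],
    # Replaced, so the descriptor now belongs to the attacker's .onion address.
    "permanent-key": attacker_pub_der,
    "version": 2,
}

body = build_v2_descriptor(evil_desc)              # hypothetical, returns bytes
signature = attacker_key.sign(body, padding.PKCS1v15(), hashes.SHA1())
upload_to_hsdirs(body + encode_signature(signature))             # hypothetical
```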
So, while this should be fixed, I don't think this is major because fixing it doesn't solve the fundamental problem of "users clicking the bad".
The basic (and IMO superior) version looks something like this:
0. Figure out which HS you want to mount an attack on (e.g. examplehsabcdefg.onion).
1. Throw CUDA cores at getting a look-alike HS address (e.g. examplehsbcdefgh.onion); see the sketch after this list.
2. Run your HS.
3. Spread your address as the real one.
4. Optionally DDoS the original, depending on what you are after and how many people fall for step 3.
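For concreteness, step 1 is just a brute-force search over RSA keys. A deliberately naive, CPU-only Python sketch follows, assuming the usual v2 derivation (the address is the base32 encoding of the first 80 bits of the SHA-1 digest of the DER-encoded RSA public key); real attacks use GPU tools, and a prefix this long is hopeless for a loop like this. The target prefix is just the example from the list.

```python
# Naive CPU-only sketch of the look-alike address search from step 1.
import base64
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

TARGET_PREFIX = "examplehs"   # prefix of the address being impersonated


def onion_address(pub_key) -> str:
    # base32( first 10 bytes of SHA-1( DER(PKCS#1) public key ) )
    der = pub_key.public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.PKCS1
    )
    return base64.b32encode(hashlib.sha1(der).digest()[:10]).decode().lower()


while True:
    key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
    addr = onion_address(key.public_key())
    if addr.startswith(TARGET_PREFIX):
        print(addr + ".onion")
        break
```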
This will work without using any protocol level trickery, and fixing the protocol level trickery doesn't prevent this. In both the "attack" presented in the ticket and the one I illustrated, users falling for the impersonation is the root problem.
As far as I am aware, there aren't good solutions to "users click on the bad" that don't involve things like the CA mafia (which is what "facebookcorewwwi.onion" does for example).
My inclination here would be to make sure that 224 actually does fix this, and then lower the priority from "major", but I will defer to nickm et al on this.
Yes, "users clicking the bad" is not going to be solved here. The problem is that attacker doesn't need to "3. Run your HS". And this "protocol trickery" is even simpler than running your own HS and reflect data to and from the original HS. A "Normal MitM" is going to be 14+1 hops from a client to the legitimate HS that introduce a huge delay that may look suspicious (especially for HS admins). The point is that we need to force attackers to use the method that you described ("normal mitm") and not the trickery.
It should be emphasized that all an attacker needs to do is upload an HSDesc from time to time.
I wasn't aware of the cross-certifications in 224 before; thanks, Nick, for this proposal. It really fixes the problem and does almost the same thing as my fix ("service-key" certification).
Maybe it's a good idea to replace all public keys enclosed in [ENCRYPTED-DATA] with their certificates in 224?
Yes, "users clicking the bad" is not going to be solved here. The problem is that attacker doesn't need to "3. Run your HS". And this "protocol trickery" is even simpler than running your own HS and reflect data to and from the original HS. A "Normal MitM" is going to be 14+1 hops from a client to the legitimate HS that introduce a huge delay that may look suspicious (especially for HS admins). The point is that we need to force attackers to use the method that you described ("normal mitm") and not the trickery. It should be emphasized that all you need to do as an attacker is just to upload a HSDesc from time to time.
I'm unconvinced:
At some point, the adversary will need to run their own HS to do anything actually harmful.
An attacker can host their HS on a pwned box or something, and use 1 hop circuits to the RP and the victim HS's RP to cut out most of the latency.
Mitigation exists in the form of a self-signed SSL cert, if HS operators currently care about this. The lack of a trust root is irrelevant: as long as the user doesn't compound "clicking on the bad" with "accepting an SSL cert with an incorrect DN", the adversary at that point has to mount a full MITM.
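A minimal sketch of that mitigation, assuming the operator serves HTTPS on the onion address with a long-lived self-signed certificate that returning users can pin (the CN below reuses the placeholder onion name from the example above):

```python
# Generate a long-lived self-signed certificate for an HS that serves HTTPS.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "examplehsabcdefg.onion")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                        # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .sign(key, hashes.SHA256())
)

with open("onion-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("onion-key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```

An impersonator at a look-alike address cannot present this certificate without the private key, so a pinned-cert mismatch forces them back to a full MITM.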
I stand by my assessment, but will still defer to nickm on this.
Roger (arma) has another idea for how to fix it. Roger, please describe it here.
I think the other idea was for the INTRO2 cell to specify what onion address the user thought she was going to. Then hidden services can notice when clients are visiting them but aren't using the right address.
That approach provides more defense-in-depth against future variations on this issue. I think it's complementary to Nick's cross-certification plan.
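A rough service-side sketch of that check, assuming a hypothetical extension field in the INTRODUCE2 payload that carries the address the client intended to visit (the field name and the handle_introduce2() hook are invented for illustration only):

```python
# Hypothetical check: the INTRODUCE2 payload carries the onion address the
# client *thinks* it is visiting, and the service compares it to its own.
import logging

MY_ONION_ADDRESS = "examplehsabcdefg.onion"  # the service's real address


def handle_introduce2(intro2_payload: dict) -> bool:
    claimed = intro2_payload.get("intended_onion_address")
    if claimed is None:
        # Old clients that do not send the field: accept, but we learn nothing.
        return True
    if claimed != MY_ONION_ADDRESS:
        # The client reached us through a descriptor published under some other
        # address -- strong evidence of the re-signing/phishing attack.
        logging.warning("INTRODUCE2 for %s, but we are %s (possible spoofing)",
                        claimed, MY_ONION_ADDRESS)
        return False
    return True
```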
I also agree with Yawning that fixing this particular variant of the issue isn't super-urgent, since ultimately it requires tricking the user into visiting the wrong address, which is going to be bad news for the user in plenty of other ways too.
> At some point, the adversary will need to run their own HS to do anything actually harmful.
Yes, sure. I look at this as the first stage of the attack, in which an attacker could imperceptibly turn the majority of the original HS's users toward the evil address (and descriptors). Using trawling, the attacker can determine (approximately) the portion of users that are using the evil address. If it's more than, e.g., 96% (too good for spoofing, just an example), the attacker performs an "active MitM" by running the evil HS and can then do any evil thing, because they are already the "legitimate HS".
> An attacker can host their HS on a pwned box or something, and use 1 hop circuits to the RP and the victim HS's RP to cut out most of the latency.
I hadn't thought of this scenario before, thanks for the tip! Now it doesn't seem to be conspicuously slow.
> Mitigation exists in the form of a self-signed SSL cert, if HS operators currently care about this. The lack of a trust root is irrelevant: as long as the user doesn't compound "clicking on the bad" with "accepting an SSL cert with an incorrect DN", the adversary at that point has to mount a full MITM.
I don't consider self-signed certificates here, because they provide almost zero additional security for HSes. Anyone can create one, and they have to be stored in the browser in order to validate anything (a problem with Tails).
> I think the other idea was for the INTRO2 cell to specify what onion address the user thought she was going to. Then hidden services can notice when clients are visiting them but aren't using the right address.
> That approach provides more defense-in-depth against future variations on this issue. I think it's complementary to Nick's cross-certification plan.
I agree. There is also another reason to implement the cross-certification: it cuts off deceived requests at the descriptor-verification step. There is no need for the client to build any circuits to the HS (and slow down the network).
However, your "Host:"-like verification certainly provides the freedom to defend against spoofing (it doesn't force users to do the defending). At this point it becomes almost equivalent to an optional cross-certificate.
An HS operator can also track spoofing attacks on the HS with that verification.
It's more about who wants to avoid this issue more: if it's the HS operator, check how clients are reaching you; if it's the client, check the descriptor carefully before making any request.
Good HSes should use both, of course.
> I also agree with Yawning that fixing this particular variant of the issue isn't super-urgent, since ultimately it requires tricking the user into visiting the wrong address, which is going to be bad news for the user in plenty of other ways too.
Yes, same here.
Resolving this as wontfix due to the upcoming prop224, which solves this issue by introducing "IP-enc-key"<->"descriptor-signing-key" cross-certifications. I see no reason to implement effectively the same structure on top of legacy TOS (the pre-prop224 era). Until prop224 is implemented, this should not cause any problems.
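For reference, a conceptual sketch of why the prop224 cross-certification closes this hole: every key in the descriptor chains back to the identity behind the .onion address, so Introduction Point material lifted from someone else's descriptor no longer verifies. The certificate layout and field names below are simplified inventions; only the Ed25519 calls are real, and prop224 defines the actual format.

```python
# Simplified prop224-style verification chain (conceptual, not the wire format).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def verify_descriptor(desc: dict, blinded_pubkey_bytes: bytes) -> bool:
    """desc uses a hypothetical parsed layout: body, signature, certs."""
    blinded_key = ed25519.Ed25519PublicKey.from_public_bytes(blinded_pubkey_bytes)
    try:
        # 1. The blinded identity key (derived from the onion address)
        #    certifies the short-term descriptor-signing key.
        blinded_key.verify(desc["signing_key_cert"]["sig"],
                           desc["signing_key_cert"]["body"])
        signing_key = ed25519.Ed25519PublicKey.from_public_bytes(
            desc["signing_key_cert"]["certified_key"])

        # 2. The descriptor-signing key signs the descriptor body itself.
        signing_key.verify(desc["signature"], desc["body"])

        # 3. The same signing key cross-certifies each intro point's enc-key,
        #    so intro points copied from another service's descriptor fail here.
        for ip in desc["intro_points"]:
            signing_key.verify(ip["enc_key_cert"]["sig"],
                               ip["enc_key_cert"]["body"])
    except InvalidSignature:
        return False
    return True
```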
Trac:
Severity: N/A to Blocker
Resolution: N/A to wontfix
Status: new to closed
Reviewer: N/A to N/A