Opened 4 years ago

Closed 3 years ago

Last modified 3 years ago

#15951 closed defect (wontfix)

FairPretender: Pretend as any hidden service in passive mode

Reported by: twim Owned by: twim
Priority: Medium Milestone:
Component: Core Tor/Tor Version:
Severity: Blocker Keywords: tor, hs, descriptor, tor-hs
Cc: desnacked@…, donncha@…, arma, dgoulet Actual Points:
Parent ID: Points:
Reviewer: Sponsor:

Description

This flaw in the Tor protocol makes it possible to re-sign any hidden service descriptor with one's own private key. An adversary who does so can upload the re-signed descriptor to the HS directory and act as a frontend to the hidden service whose introduction point data was copied. They can spread the .onion address of their frontend hidden service across the Internet as if it were the real one (phishing), and then mount a DoS attack on the chosen hidden service, or redirect traffic to replicas they control and perform a man-in-the-middle attack.

This is just a brief explanation. For more info see attached paper.
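The core of the issue can be sketched with a toy model (all crypto below is fake, simulated with hashes; names like `make_descriptor` are illustrative, not real tor code): nothing in the v2 descriptor binds the introduction points to the key that determines the onion address, so a verifier happily accepts a copied, re-signed descriptor.

```python
import hashlib

# Toy stand-in for asymmetric signatures: a registry maps "public" keys to
# secrets so the verifier can recompute the digest. Real descriptors use RSA.
_KEYRING = {}

def keygen(seed):
    sk = hashlib.sha256(b"sk" + seed).digest()
    pk = hashlib.sha256(b"pk" + sk).digest()
    _KEYRING[pk] = sk
    return sk, pk

def sign(sk, msg):
    return hashlib.sha256(sk + msg).digest()

def check(pk, msg, sig):
    return sig == hashlib.sha256(_KEYRING[pk] + msg).digest()

def onion_address(pk):
    # v2-style: the address is derived from the permanent key alone.
    return hashlib.sha1(pk).hexdigest()[:16] + ".onion"

def make_descriptor(sk, pk, intro_points):
    body = pk + ",".join(intro_points).encode()
    return {"permanent-key": pk, "intro-points": intro_points,
            "signature": sign(sk, body)}

def verify(desc):
    # The flaw: the signature is checked only against the key embedded in the
    # descriptor itself; nothing binds the intro points to any particular key.
    body = desc["permanent-key"] + ",".join(desc["intro-points"]).encode()
    return check(desc["permanent-key"], body, desc["signature"])

victim_sk, victim_pk = keygen(b"victim")
attacker_sk, attacker_pk = keygen(b"attacker")

real = make_descriptor(victim_sk, victim_pk, ["intro1", "intro2"])
# The attacker copies the victim's intro points and re-signs with their own key:
forged = make_descriptor(attacker_sk, attacker_pk, real["intro-points"])

assert verify(forged)  # accepted, yet published under the attacker's address
assert onion_address(attacker_pk) != onion_address(victim_pk)
```

The forged descriptor verifies and names the victim's introduction points, but corresponds to the attacker's .onion address, which is exactly the passive frontend described above.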

I have an idea for how to fix this by introducing a "backward permanent key signature":
https://github.com/mark-in/tor/tree/backward-permkey-signature
https://github.com/mark-in/torspec/tree/backward-permkey-signature

It would be great to hear more ideas from you on how to fix this better.

Child Tickets

Attachments (1)

fairpretender.pdf (66.1 KB) - added by twim 4 years ago.
explanation paper


Change History (13)

Changed 4 years ago by twim

Attachment: fairpretender.pdf added

explanation paper

comment:1 Changed 4 years ago by twim

Roger (arma) has another idea for how to fix it. Roger, please describe it here.

comment:2 Changed 4 years ago by nickm

For what it's worth, the cross-certifications in proposal 224 should fix this for next-gen services.

comment:3 Changed 4 years ago by yawning

Keywords: tor-hs added

So, while this should be fixed, I don't think it's major, because fixing it doesn't solve the fundamental problem of "users clicking the bad".

The basic (and IMO superior) version looks something like this:

  1. Figure out which HS you want to mount an attack on. (E.g. examplehsabcdefg.onion)
  2. Throw CUDA cores at getting a look-alike HS address. (E.g. examplehsbcdefgh.onion)
  3. Run your HS.
  4. Spread your address as the real one.
  5. Optionally DDoS the original; depends on what you are after, and how many people fall for step 4.
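Step 2 can be sketched as follows. This is a toy brute-force where random byte strings stand in for real RSA keys; it only relies on the fact that a v2 address is the base32 encoding of the first 80 bits of the SHA-1 of the DER-encoded public key, so a look-alike prefix costs about 32^n tries for n matching characters.

```python
import base64
import hashlib
import os

def onion_v2(pub_der):
    # v2 address: first 80 bits (10 bytes) of SHA-1 of the public key, base32.
    digest = hashlib.sha1(pub_der).digest()[:10]
    return base64.b32encode(digest).decode().lower() + ".onion"

def grind(prefix):
    # Keep generating (toy) keys until the address starts with the prefix.
    # A real attacker would generate actual RSA keypairs here.
    while True:
        fake_key = os.urandom(128)  # stand-in for a DER-encoded public key
        addr = onion_v2(fake_key)
        if addr.startswith(prefix):
            return fake_key, addr

key, addr = grind("ab")  # 2 base32 chars: ~32^2 = 1024 tries on average
```

Each extra matching character multiplies the cost by 32, which is why real look-alikes only match a handful of leading characters (or use GPUs, as in step 2).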

This will work without using any protocol level trickery, and fixing the protocol level trickery doesn't prevent this. In both the "attack" presented in the ticket and the one I illustrated, users falling for the impersonation is the root problem.

As far as I am aware, there aren't good solutions to "users click on the bad" that don't involve things like the CA mafia (which is what "facebookcorewwwi.onion" does for example).

My inclination here would be to make sure that 224 actually does fix this, and then lower the priority from "major", but I will defer to nickm et al on this.

comment:4 Changed 4 years ago by twim

Yes, "users clicking the bad" is not going to be solved here. The problem is that the attacker doesn't need step 3 ("Run your HS"). And this "protocol trickery" is even simpler than running your own HS and relaying data to and from the original HS. A "normal MitM" is 14+1 hops from the client to the legitimate HS, which introduces a huge delay that may look suspicious (especially to HS admins). The point is that we need to force attackers to use the method you described (the "normal MitM") rather than the trickery.
It should be emphasized that all an attacker needs to do is upload an HS descriptor from time to time.

I wasn't aware of the cross-certifications in 224 before; thanks, Nick, for this proposal. It really fixes the problem and does almost the same thing my fix does ("service-key" certification).
Maybe it's a good idea to replace all public keys enclosed in [ENCRYPTED-DATA] with their certificates in 224?

comment:5 in reply to:  4 ; Changed 4 years ago by yawning

Replying to twim:

Yes, "users clicking the bad" is not going to be solved here. The problem is that the attacker doesn't need step 3 ("Run your HS"). And this "protocol trickery" is even simpler than running your own HS and relaying data to and from the original HS. A "normal MitM" is 14+1 hops from the client to the legitimate HS, which introduces a huge delay that may look suspicious (especially to HS admins). The point is that we need to force attackers to use the method you described (the "normal MitM") rather than the trickery. It should be emphasized that all an attacker needs to do is upload an HS descriptor from time to time.

I'm unconvinced:

  • At some point, the adversary will need to run their own HS to do anything actually harmful.
  • An attacker can host their HS on a pwned box or something, and use 1 hop circuits to the RP and the victim HS's RP to cut out most of the latency.
  • Mitigation exists in the form of a self-signed SSL cert, if HS operators currently care about this. The lack of a trust root is irrelevant: as long as the user doesn't compound "clicking on the bad" with "accepting an SSL cert with an incorrect DN", the adversary has to mount a full MITM.

I stand by my assessment, but will still defer to nickm on this.

comment:6 in reply to:  1 ; Changed 4 years ago by arma

Replying to twim:

Roger (arma) have another idea how to fix it. Roger, please describe it here.

I think the other idea was for the INTRO2 cell to specify what onion address the user thought she was going to. Then hidden services can notice when clients are visiting them but aren't using the right address.
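A minimal sketch of that service-side check (the cell field and function names here are hypothetical, not from any proposal or real tor code): the service compares the address the client claims against its own set and flags mismatches.

```python
def handle_intro2(cell, my_addresses):
    """Accept an INTRO2 cell only if the client names one of our addresses.

    `cell` is a dict standing in for a parsed INTRO2 cell;
    `target-onion-address` is a hypothetical field carrying the address
    the user thought she was visiting.
    """
    claimed = cell.get("target-onion-address")
    if claimed is None:
        return True  # legacy client that didn't include the field
    if claimed not in my_addresses:
        # A mismatch means the client reached us through a spoofed descriptor;
        # the operator can log this to detect ongoing spoofing attacks.
        print("warning: introduction via spoofed address", claimed)
        return False
    return True

mine = {"examplehsabcdefg.onion"}
handle_intro2({"target-onion-address": "examplehsabcdefg.onion"}, mine)   # ok
handle_intro2({"target-onion-address": "examplehsbcdefgh.onion"}, mine)  # flagged
```

Note the check only helps once the connection reaches the service; unlike cross-certification it cannot stop the forged descriptor from being published, but it lets operators detect the attack.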

That approach provides more defense-in-depth against future variations on this issue. I think it's complementary to Nick's cross-certification plan.

I also agree with Yawning that fixing this particular variant of the issue isn't super-urgent, since ultimately it requires tricking the user into visiting the wrong address, which is going to be bad news for the user in plenty of other ways too.

comment:7 in reply to:  5 Changed 4 years ago by twim

Replying to yawning:

I'm unconvinced:

  • At some point, the adversary will need to run their own HS to do anything actually harmful.

Yes, sure. I see this as the first stage of the attack, in which an attacker could imperceptibly divert a majority of the original HS's users to the evil address (and descriptors). Using trawling, the attacker can estimate the portion of users that are using the evil address. If it's more than, e.g., 96% (too good for spoofing; just an example), the attacker performs an "active MitM" by running the evil HS, and can then do any evil thing, because they are already the "legitimate HS".

  • An attacker can host their HS on a pwned box or something, and use 1 hop circuits to the RP and the victim HS's RP to cut out most of the latency.

I hadn't thought of this scenario before; thanks for the tip! Now it doesn't seem conspicuously slow.

  • Mitigation exists in the form of a self signed SSL cert if HS operators currently care about this. The lack of a trust root is irrelevant, as long as the user doesn't compound "clicking on the bad" with "accepted a SSL cert with an incorrect DN", the adversary at that point has to mount a full MITM.

I don't consider self-signed certificates here, because they provide almost zero additional security for HSes. Anyone can create one, and the certificates would have to be stored in the browser in order to validate anything (a problem with Tails).

comment:8 in reply to:  6 Changed 4 years ago by twim

Priority: major → normal

Replying to arma:

I think the other idea was for the INTRO2 cell to specify what onion address the user thought she was going to. Then hidden services can notice when clients are visiting them but aren't using the right address.

That approach provides more defense-in-depth against future variations on this issue. I think it's complementary to Nick's cross-certification plan.

I agree. There is also another reason to implement the cross-certification: it cuts off spoofed requests at the descriptor-verification step. The client doesn't need to build any circuits to the HS (and slow down the network).

However, your "Host:"-like verification certainly gives services the freedom to defend against spoofing (without forcing users to defend themselves). At that point it becomes almost equivalent to an optional cross-certificate.
An HS operator can also use that verification to track spoofing attacks on the HS.

It's a question of who wants to avoid this issue more: if it's the HS operator, check how clients are reaching you; if it's the client, check the descriptor carefully before performing any request.

Good HSes should, of course, use both.

I also agree with Yawning that fixing this particular variant of the issue isn't super-urgent, since ultimately it requires tricking the user into visiting the wrong address, which is going to be bad news for the user in plenty of other ways too.

Yes, same here.

comment:9 Changed 4 years ago by teor

Milestone: Tor: 0.2.???

comment:10 Changed 3 years ago by twim

Resolution: wontfix
Severity: Blocker
Status: new → closed

Resolving this as wontfix due to the upcoming prop224, which solves this issue by introducing "IP-enc-key" <-> "descriptor-signing-key" cross-certifications. I see no reason to implement effectively the same structure on top of the legacy (pre-prop224) hidden service protocol. Until prop224 gets implemented, this should not cause any problems.

comment:11 Changed 3 years ago by teor

Milestone: Tor: 0.2.??? → Tor: 0.3.???

Milestone renamed

comment:12 Changed 3 years ago by nickm

Milestone: Tor: 0.3.???

Milestone deleted
