Opened 7 months ago

Closed 6 months ago

#21357 closed defect (fixed)

potential bug: Some IPv6Exits do not add the ipv6-policy line to their descriptor

Reported by: cypherpunks Owned by:
Priority: Medium Milestone: Tor: 0.2.9.x-final
Component: Core Tor/Tor Version: Tor: 0.2.4.7-alpha
Severity: Major Keywords: ipv6 029-backport
Cc: phoul, rejo@… Actual Points: 1
Parent ID: Points: 2
Reviewer: Sponsor:

Description

An exit relay is expected to add an ipv6-policy line to its descriptor when:

{{{ IPv6Exit 1 }}}

is set, and its exit policy allows at least one IPv6 destination.
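For illustration, a minimal hypothetical torrc fragment that satisfies both conditions (the ports chosen here are placeholders):

```
IPv6Exit 1
ExitPolicy accept6 *:80
ExitPolicy accept6 *:443
ExitPolicy reject *:*
```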

There are 10 known cases where the exit didn't generate an ipv6-policy line.

#21355 should help with debugging.

references:
list of exit relays with an IPv6 ORPort and no ipv6-policy (if the IPv6Exit setting is known, it is shown at the end of the line)
https://gist.githubusercontent.com/nusenu/1534d210049fcb04919ae5a4529ea894/raw/4a5611aedb81c5bc01630433c85e8fc818c01a1d/IPv6Exit%25201%253F

https://lists.torproject.org/pipermail/tor-dev/2017-January/011860.html
https://lists.torproject.org/pipermail/tor-relays/2017-January/011806.html

Child Tickets

Attachments (3)

cowcat.torrc (4.0 KB) - added by phoul 7 months ago.
cowcat torrc
rejozenger.torrc (3.3 KB) - added by rejozenger 7 months ago.
rejozenger.torrc
snowfall.log (20.7 KB) - added by phoul 7 months ago.


Change History (20)

comment:1 Changed 7 months ago by teor

I can't seem to reproduce this issue on any exits or test exits I have.

So here are the log entries I'd like to see from relays with this bug:

Any warnings containing the word "bug"

warnings:
Exit policy '%s' and all following policies are redundant
Weird family when summarizing address policy
policy_dump_to_string ran out of room

info:
Unrecognized policy summary keyword
Impossibly long policy summary
Found bad entry in policy summary
Found no port-range entries in summary

debug:
Adding new entry
Ignored policy
Adding a reject ExitPolicy
Removing exit policy

To activate debug logging, add "Log debug /path/to/file" to your torrc.
Please grep for the relevant log messages; I don't want them all, as they are security sensitive.
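For operators preparing logs to share, here is a quick sketch (not part of Tor; the message substrings are copied from the list above) of filtering a debug log down to just the relevant lines:

```python
import re

# Substrings of the log messages listed above. "bug" is matched as a
# whole word so that the "[debug]" severity tag does not trigger it.
MESSAGES = [
    "and all following policies are redundant",
    "Weird family when summarizing address policy",
    "policy_dump_to_string ran out of room",
    "Unrecognized policy summary keyword",
    "Impossibly long policy summary",
    "Found bad entry in policy summary",
    "Found no port-range entries in summary",
    "Adding new entry",
    "Ignored policy",
    "Adding a reject ExitPolicy",
    "Removing exit policy",
]
BUG_RE = re.compile(r"\bbug\b", re.IGNORECASE)

def relevant_lines(lines):
    """Keep only log lines matching one of the requested messages."""
    return [line for line in lines
            if BUG_RE.search(line) or any(m in line for m in MESSAGES)]
```

Read the debug log file and pass its lines through `relevant_lines()` before attaching anything to the ticket.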

Changed 7 months ago by phoul

Attachment: cowcat.torrc added

cowcat torrc

comment:2 Changed 7 months ago by phoul

Cc: phoul added

comment:3 Changed 7 months ago by dgoulet

Milestone: Tor: unspecified

Milestone set to Unspecified. Feel free to change it if this bug is ever figured out. Thanks!

Changed 7 months ago by rejozenger

Attachment: rejozenger.torrc added

rejozenger.torrc

comment:4 Changed 7 months ago by rejozenger

Cc: rejo@… added

In addition to the attached file, I can reach maatuska, tor26, gabelmoo and longclaw over IPv6 (verified by using nmap with -6 -p443). I do not get any error messages when tor is reloaded.

Last edited 7 months ago by rejozenger (previous) (diff)

comment:5 in reply to:  4 Changed 7 months ago by teor

Replying to rejozenger:

In addition to the attached file, I can reach maatuska, tor26, gabelmoo and longclaw over IPv6 (verified by using nmap with -6 -p443). I do not get any error messages when tor is reloaded.

Are there any messages in your log about creating your descriptor?
(Or about accounting limits?)

I can't get a descriptor via http://94.142.242.84/tor/server/authority

Changed 7 months ago by phoul

Attachment: snowfall.log added

comment:6 Changed 7 months ago by phoul

I have attached "snowfall.log" which includes debug output mentioned by Teor. 

comment:7 Changed 7 months ago by nickm

Milestone: Tor: unspecified → Tor: 0.3.0.x-final
Severity: Normal → Major

This seems like the kind of bug we should be sure to fix in 0.3.0 if we can; teor and I have been trying to figure out on IRC tonight what causes it. I think we have an idea now.

comment:8 Changed 7 months ago by teor

Actual Points: 1
Keywords: ipv6 added
Points: 2
Status: new → needs_review
Version: Tor: 0.2.4.7-alpha

See my branch bug21357-v2 on https://github.com/teor2345 for a fix to this bug.

Diagnosis

nickm found the bug in policy_summary_reject(), which is called with AF_INET when creating microdescriptors and consensuses, and AF_INET6 when creating relay descriptors. The code was never written for IPv6.

This bug was triggered when we started blocking a relay's own IPv6 address by default as part of #17027 in 0.2.8.1-alpha. I honestly don't know how any IPv6 relay works right now - the one that I operate only works because I block a larger IPv6 netblock which includes the relay's address.

Fix

The patch effectively works the same as nickm's, with a few numeric adjustments:
https://paste.debian.net/912059/

Do we think that an IPv6 /16 is a large enough block to justify a reject?
(Most providers seem to be allocated a /23, and a /16 is about the same proportion of the allocated IPv6 space as a /8 is of all IPv4 space.)

Is the scaling and saturating arithmetic code sensible?

Do we need a new consensus method for this?
I tried very hard to leave the IPv4 behaviour intact, and used non-fatal asserts for any errors.
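On the "scaling and saturating arithmetic" question: a toy Python illustration of what saturating addition means in this context (this is not Tor's code; the cap value is a placeholder). Counts are clamped at a maximum instead of being allowed to overflow:

```python
MAX_COUNT = 2**64 - 1  # placeholder cap, not Tor's actual constant

def saturating_add(a, b, cap=MAX_COUNT):
    """Add two non-negative counts, clamping the result at cap."""
    total = a + b
    return cap if total > cap else total

def scale_count(count, shift):
    """Scale a large (e.g. IPv6) address count down by 2**shift."""
    return count >> shift
```

The point is that sums of per-range address counts can exceed the counter's width for IPv6, so the code must clamp (and/or pre-scale) rather than wrap around.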

Workarounds

Turning off ExitPolicyRejectPrivate should resolve this issue (when enabled, it automatically rejects the relay's own IPv6 address), but turning it off has security implications. Blocking the IPv6 /32 containing your relay's address also seems to work; my Exit blocks three /32s and functions fine.
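A sketch of the second workaround in torrc form, using the 2001:db8::/32 documentation prefix as a stand-in for the /32 that contains your relay's own IPv6 address:

```
IPv6Exit 1
ExitPolicy reject6 [2001:db8::]/32:*
ExitPolicy accept6 *:*
ExitPolicy reject *:*
```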

Last edited 7 months ago by teor (previous) (diff)

comment:9 in reply to:  7 ; Changed 7 months ago by cypherpunks

Thanks for the fast bug squashing.

Replying to nickm:

This seems like the kind of bug we should be sure to fix in 0.3.0 if we can;

Please consider also backporting it to 0.2.9.x.

list of potentially affected IPv6 exits by major tor version (their IPv6Exit setting is unknown to me):

+-------------+------------------+--------+
| tor_version | exit_probability | relays |
+-------------+------------------+--------+
| 0.2.9       |             14.5 |     50 |
| 0.3.0       |              4.4 |     18 |
| 0.2.7       |              3.0 |      8 |
| 0.2.8       |              2.1 |     17 |
| 0.2.5       |              0.4 |      7 |
| 0.2.4       |              0.1 |      1 |
+-------------+------------------+--------+

(the actual numbers can also be higher, since I'm only looking at relays with an IPv6 ORPort, but you can do IPv6 exiting without an IPv6 ORPort)

Last edited 7 months ago by cypherpunks (previous) (diff)

comment:10 Changed 7 months ago by nickm

Keywords: 029-backport added

(I also think this has 0.2.9.x potential.)

comment:11 Changed 7 months ago by nickm

In a branch called teor_bug21357-v2_029, I've rebased this onto maint-0.2.9. Testing in chutney on a mixed network now, just to be sure.

comment:12 Changed 7 months ago by nickm

Milestone: Tor: 0.3.0.x-final → Tor: 0.2.9.x-final

Okay, that works. Merged to master and marking for possible backport.

comment:13 in reply to:  9 Changed 7 months ago by teor

Replying to cypherpunks:

Thanks for the fast bug squashing.

Replying to nickm:

This seems like the kind of bug we should be sure to fix in 0.3.0 if we can;

Please consider also backporting it to 0.2.9.x.

list of potentially affected IPv6 exits by major tor version: (table quoted from comment:9 above, omitted here)

It is likely that 0.2.8 and later are affected, possible that 0.2.7 is affected, and unlikely that earlier versions are affected.

comment:14 Changed 7 months ago by teor

Status: needs_review → needs_information

The relay operator who originally reported this bug has upgraded to a nightly including this patch, and reports that it works:
https://lists.torproject.org/pipermail/tor-relays/2017-February/011856.html

Their relay now has an IPv6 exit policy:
https://atlas.torproject.org/#details/5E762A58B1F7FF92E791A1EA4F18695CAC6677CE

It is likely that 0.2.8 and later are affected, possible that 0.2.7 is affected, and unlikely that earlier versions are affected.

I'll clarify: earlier versions may be affected if they explicitly block networks smaller than an IPv6 /32 or larger than an IPv6 /7. The first behaviour is unintentional; the second is intentional, but uses the wrong number of addresses for IPv6 (both are fixed in this patch).

Later versions automatically block their own IPv6 ORPort's address, so IPv6 Exits with an IPv6 ORPort are almost always affected (unless their Exit policies start by blocking a /32 to /7 containing their IPv6 address, which ends up removing the individual address as redundant).
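The "removed as redundant" step is essentially a containment check: a reject of a single address inside an already-rejected network adds nothing. A sketch using Python's ipaddress module (not Tor's code; the addresses are documentation placeholders):

```python
import ipaddress

# An exit policy that already rejects a whole /32 makes a later reject
# of any single address inside it redundant.
blocked = ipaddress.ip_network("2001:db8::/32")      # placeholder /32
own = ipaddress.ip_network("2001:db8::1/128")        # placeholder relay address

is_redundant = own.subnet_of(blocked)  # the /128 reject can be dropped
```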

I suggest we give it at least another week of testing before a backport.

comment:15 Changed 6 months ago by toralf

IPv6 now works like a charm at my fast exit with 0.3.0.3-alpha (at least I now see about 10 GB/hour of incoming traffic over IPv6 and 20 GB/hour over IPv4; the outgoing traffic, however, is mostly IPv4).

Last edited 6 months ago by toralf (previous) (diff)

comment:16 in reply to:  15 Changed 6 months ago by teor

Status: needs_information → merge_ready

Replying to toralf:

IPv6 now works like a charm at my fast exit with 0.3.0.3-alpha (at least I now see about 10 GB/hour of incoming traffic over IPv6 and 20 GB/hour over IPv4; the outgoing traffic, however, is mostly IPv4).

From your email, when you say "in-traffic", it looks like you mean "Exit traffic".
And when you say "outgoing traffic", you mean "relay to relay traffic".
https://lists.torproject.org/pipermail/tor-relays/2017-February/011865.html

We now have 2 reports that this works, and 0 reports of breakage.
Flipping to merge_ready so we can backport to 0.2.9 some time later this week.

comment:17 Changed 6 months ago by nickm

Resolution: fixed
Status: merge_ready → closed

Merged back to 0.2.9. Not planning to backport any further.
