Opened 13 years ago

Last modified 7 years ago

#378 closed defect (Won't implement)

Should tor use multiple connections at once?

Reported by: keybounce Owned by:
Priority: Low Milestone: post 0.2.0.x
Component: Core Tor/Tor Version: 0.1.1.26
Severity: Keywords:
Cc: keybounce, nickm Actual Points:
Parent ID: Points:
Reviewer: Sponsor:

Description

Right now, although tor maintains 2 or 3 open connections, only one is used for
all new outgoing connections.

This means that one connection will be overloaded with lots of traffic. If one
packet is dropped, everything backs up until that packet is retransmitted. This is
the opposite of the whole IP design (multiple channels, non-blocking traffic).

It also means that when a dropped packet causes things to be delayed, everything
closes and reopens at once, further adding to congestion and apparent slowdown.

Proposal: Have a configurable number of connections active at once, probably
defaulting to 2-4. Incoming requests are assigned round-robin (unless TrackHostExits
applies), so that traffic is spread over several TCP tunnels.

Benefits:

  1. Each channel carries less traffic, so less congestion.
  2. If a channel drops a packet, less of the traffic has to be restarted. And
when it does, it is spread round-robin again, reducing the congestion level
even more.
  3. Potentially, Tor can track how heavily a circuit can be loaded before it is
"full", and automatically open new circuits.

Potential disadvantages:

  1. Does it make it easier for an attacker to see some of your traffic? Before,
an attacker either saw none or (rarely) all; now an attacker sees none or
(more commonly than before) some.
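The round-robin assignment proposed above can be sketched in a few lines. This is an illustrative sketch only, not Tor's actual internals; the `CircuitPool` class and the circuit names are hypothetical, and the host-pinning flag stands in for the TrackHostExits behavior mentioned in the proposal.

```python
from itertools import cycle

class CircuitPool:
    """Spread new streams round-robin over a fixed set of open circuits.
    Hypothetical sketch; names do not correspond to Tor's real code."""

    def __init__(self, circuits):
        self._rr = cycle(circuits)   # endless round-robin iterator
        self._pinned = {}            # host -> circuit, stand-in for TrackHostExits

    def pick(self, host, track_host_exits=False):
        # Pinned hosts keep their previous circuit; everything else rotates.
        if track_host_exits and host in self._pinned:
            return self._pinned[host]
        circ = next(self._rr)
        if track_host_exits:
            self._pinned[host] = circ
        return circ

pool = CircuitPool(["circ-A", "circ-B", "circ-C"])
print([pool.pick(h) for h in ["w", "x", "y", "z"]])
# → ['circ-A', 'circ-B', 'circ-C', 'circ-A']
```

The point of the sketch is only that the selection policy is trivially cheap; the hard questions (security impact, when to grow or shrink the pool) are the ones discussed in the comments below.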

[Automatically added by flyspray2trac: Operating System: All]

Child Tickets

Change History (7)

comment:1 Changed 13 years ago by keybounce

Additional notes:

http://sites.inka.de/~W1011/devel/tcp-tcp.html has an explanation of why TCP
over TCP is a bad idea -- in a nutshell, the two streams can wind up with
different timeout/retransmission timers, and then any dropped packet generates
massive retry traffic.

I think this is part of the problem -- once a tor channel slows down its
retransmission rate and then loses a packet, things slow to a crawl. When
a tor channel carries a LOT of upper-layer TCP connections (such as a lot of web
page requests), this makes it even worse. Then, when a stream does "die", all of
those requests -- which overloaded one channel -- pile into the same new channel.

comment:2 Changed 13 years ago by nickm

Tor isn't TCP-over-TCP: we tunnel the data inside of TCP streams, not the raw IP packets.
Tor streams don't have timeout _or_ retransmission timers: a dropped packet in a TLS link
stalls a bunch of connections, but only the TLS link needs to retry.

I agree that multiplexing streams over more circuits would be a good idea, as would switching
to a UDP transport. The former is kinda easy if we figure out the right thing to do; the latter
would involve quite a lot of design work, but is eventually the right way to go.

comment:3 Changed 13 years ago by keybounce

So what would be involved in doing some programming work to try to fix this?

The documentation on tor seemed to indicate that more important than
volunteering time to do the coding was worrying about the security
concerns. That part is well past me -- I don't have enough knowledge
and background in security research. However, a "simple" case of
"Maintain N open channels, spread outgoing requests over these in
sequential order" is easy enough to code.

Should the load balancing be straight round robin? Or does tracking
total (recent) volume of each circuit make sense? And how should the
"N" be changed with time and volume? My first thought is that when a
circuit fails, it's a sign of overload, and two new circuits should be
added. Equally, when traffic slows down, and circuits go idle, they
close (there's already MaxCircuitDirtiness -- is reusing this sufficient?)
How else should N increase with load?
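The sizing policy floated in the questions above (treat a circuit failure as a congestion signal and open two replacements; close circuits that sit idle past MaxCircuitDirtiness) could be sketched as follows. Everything here is hypothetical: the `AdaptivePool` class, the circuit names, and the constants are illustrations of the policy being discussed, not Tor's actual behavior.

```python
import time

class AdaptivePool:
    """Sketch of the proposed policy: grow on circuit failure, shrink on idle.
    Hypothetical names and constants; not Tor's real implementation."""

    GROW_ON_FAILURE = 2    # open two new circuits when one fails
    MAX_DIRTINESS = 600.0  # seconds; stand-in for Tor's MaxCircuitDirtiness

    def __init__(self, n_initial=2):
        now = time.monotonic()
        # circuit name -> last time a stream used it
        self.last_used = {f"circ-{i}": now for i in range(n_initial)}

    def on_failure(self, circ):
        # A failed circuit is read as an overload signal: drop it and
        # add extra capacity beyond the simple replacement.
        self.last_used.pop(circ, None)
        base = len(self.last_used)
        for i in range(self.GROW_ON_FAILURE):
            self.last_used[f"circ-new-{base + i}"] = time.monotonic()

    def reap_idle(self):
        # Close circuits that have been idle longer than MAX_DIRTINESS.
        cutoff = time.monotonic() - self.MAX_DIRTINESS
        self.last_used = {c: t for c, t in self.last_used.items() if t >= cutoff}
        return len(self.last_used)
```

Starting from two circuits, one failure leaves three open (one removed, two added), which matches the "failure means overload, so add capacity" intuition; whether that intuition is sound is exactly the open question in this comment.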

Should any effort be made to keep the end node the same (aside from TrackHostExits)?
I've noticed that when the "end node" is constrained, the circuit
path goes to length 4 (although I haven't checked the latest version
of Tor on this).

comment:4 Changed 12 years ago by nickm

Interesting issues (sorry about the delay; I wasn't on the notification list for this bug).

To get the design work done, check out the proposal process in

http://tor.eff.org/svn/trunk/doc/spec/proposals/001-process.txt

comment:5 Changed 11 years ago by nickm

18 months later, it doesn't look like we're going to see a forthcoming design here. There is, however, some pretty
promising work on switching to UDP, including two competing implementations and a PhD thesis. I think that's the
direction we're likelier to move towards in the long term. Closing this bug as "won't implement", since any
internal effort from the Tor people to solve these problems will probably go towards UDP transport instead. If
anybody wants to advance this kind of approach instead, please start a thread on the or-dev mailing list.

comment:6 Changed 11 years ago by nickm

flyspray2trac: bug closed.

comment:7 Changed 7 years ago by nickm

Component: Tor Client → Tor
Note: See TracTickets for help on using tickets.