Opened 8 years ago

Closed 8 years ago

Last modified 6 years ago

#3564 closed enhancement (implemented)

Implement proposal 181 (optimistic data, client side)

Reported by: nickm Owned by:
Priority: High Milestone: Tor: 0.2.3.x-final
Component: Core Tor/Tor Version:
Severity: Keywords: performance roundtrip tor-client
Cc: iang Actual Points:
Parent ID: #1849 Points:
Reviewer: Sponsor:

Description

See proposal 181. See also discussions from June 2011 on tor-dev list, subject line "Proposal: Optimistic Data for Tor: Client Side".

Ian has code for this, though we might still need to implement the "retry as needed" version if it turns out failures are common.

Child Tickets

Attachments (2)

optimistic-client.diff (1.8 KB) - added by nickm 8 years ago.
webfetch-4b-timing.diff (15.9 KB) - added by iang 8 years ago.
Patch for webfetch 5.4.3 to support optimistic data


Change History (21)

comment:1 Changed 8 years ago by nickm

Cc: iang added

Ian, I'm afraid I can't find the code I saw before for this. Could I ask you for a pointer to it?

Changed 8 years ago by nickm

Attachment: optimistic-client.diff added

comment:2 Changed 8 years ago by nickm

I've attached the last version of the patch for this from Ian. At least two changes are needed: it needs to check the version of the exit, and it needs to cache what it's sent for possible replay if the connection fails (see #3565 for that).

comment:3 Changed 8 years ago by nickm

Status: new → needs_review

Needs_review now: see my branch "optimistic-client" in my public repository.

Still also needs a changes file, and a little testing, and maybe a configuration option to turn it off.

comment:4 Changed 8 years ago by nickm

Oh; also it probably wants to have a limit on the amount of optimistic data.

comment:5 in reply to:  4 Changed 8 years ago by iang

Replying to nickm:

Oh; also it probably wants to have a limit on the amount of optimistic data.

You mean smaller than one stream window?

comment:6 Changed 8 years ago by nickm

You mean smaller than one stream window?

I think so, maybe. Does that not seem reasonable to you?

comment:7 in reply to:  6 ; Changed 8 years ago by iang

Replying to nickm:

You mean smaller than one stream window?

I think so, maybe. Does that not seem reasonable to you?

I guess the ideal number is "the amount of data you can send in one RTT", unless you don't care if large uploads stall for a bit (but no worse than they do today).

What you're trading off is the probability the optimism is warranted (the stream does open) against the wasted Tor bandwidth otherwise.

So you can make a conservative choice, picking a limit that will handle any reasonable HTTP GET request, for example, or HTTPS ClientHello, etc., but not large POSTs. ISTR that Google has a public data set of HTTP request sizes. (Or maybe it's HTTP object sizes?)

comment:8 Changed 8 years ago by nickm

Well, large uploads will already stall as soon as they hit the stream window even if we don't limit them, and will stall immediately without optimistic data, so it's no big loss if we have them stall somewhere in the middle instead.

The big win for optimistic data is in protocols where the client opens a connection and makes a request immediately, so that optimistic data saves a round trip. If we choose a maximum size that's larger than any "reasonable" protocol's "typical" request size, it'll still be a win.

My hunch says something like 16K will be big enough to get nearly all the benefit from optimistic data in nearly all cases. Moreover, if it's a client-side parameter, we can tune it later without ill effect.
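The cap being discussed amounts to buffering at most a fixed number of bytes before the stream is confirmed open, and deferring the rest. A minimal sketch of that idea (all names here are illustrative, not Tor's actual identifiers, and 16K is just the hunch above):

```python
# Hypothetical sketch of capping client-side optimistic data.
# OPTIMISTIC_DATA_LIMIT and queue_optimistic_data are made-up names.
OPTIMISTIC_DATA_LIMIT = 16 * 1024  # 16K, per the hunch above

def queue_optimistic_data(pending, payload, limit=OPTIMISTIC_DATA_LIMIT):
    """Buffer at most `limit` bytes to send before the stream opens;
    return the remainder, which waits until the stream is confirmed open."""
    room = max(0, limit - len(pending))
    pending.extend(payload[:room])
    return payload[room:]  # bytes deferred until the stream is open
```

A small GET request fits entirely under the cap, while a large POST stalls at 16K instead of at the stream window, which (as noted above) is no worse than stalling immediately without optimistic data.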

comment:9 in reply to:  7 Changed 8 years ago by iang

Replying to iang:

Replying to nickm:

You mean smaller than one stream window?

I think so, maybe. Does that not seem reasonable to you?

I guess the ideal number is "the amount of data you can send in one RTT", unless you don't care if large uploads stall for a bit (but no worse than they do today).

What you're trading off is the probability the optimism is warranted (the stream does open) against the wasted Tor bandwidth otherwise.

So you can make a conservative choice, picking a limit that will handle any reasonable HTTP GET request, for example, or HTTPS ClientHello, etc., but not large POSTs. ISTR that Google has a public data set of HTTP request sizes. (Or maybe it's HTTP object sizes?)

It's indeed a histogram of the web page sizes (and related features), not GET sizes: https://code.google.com/speed/articles/web-metrics.html

So not as useful for this particular purpose.

comment:10 Changed 8 years ago by nickm

BTW, do you have a good and easy test setup for this code? If so, can you verify whether my hacked-up version still works right for you?

comment:11 Changed 8 years ago by iang

I use a patched webfetch that supports optimistic data. Get webfetch 5.4.3 here:
http://tony.aiu.to/sa/webfetch/

Then apply the patch I'll upload in a minute.

Run it with:

webfetch -S localhost:9060/4b -o /dev/null http://what.ev.er/ -T 3 3>timings.out

Change "4b" to "4a" to use regular SOCKS 4a, and not optimistic data. Be sure you've locked your exit node to one that supports optimistic data, of course.

Changed 8 years ago by iang

Attachment: webfetch-4b-timing.diff added

Patch for webfetch 5.4.3 to support optimistic data

comment:12 Changed 8 years ago by iang

I tested it with my setup, first using CCN3 as an exit node, then using whatever random exit node Tor picked. When CCN3 (running 0.2.3.1-alpha) was the exit node, using "SOCKS 4b" (optimistic data) decreased the time-to-first-byte by 22% as compared to using SOCKS 4a. (K-S p value was less than 10^-9.) When an arbitrary exit node was picked, using SOCKS 4b and SOCKS 4a did not show a noticeable difference in time-to-first-byte. (K-S p value > 0.36.)

This is all as expected. Good stuff.
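For anyone reproducing this comparison: the K-S figures above come from a two-sample Kolmogorov-Smirnov test on the two sets of time-to-first-byte measurements. A minimal pure-Python sketch of the D statistic (the maximum gap between the two empirical CDFs; the p-value computation is omitted, and in practice something like scipy's `ks_2samp` would be used instead):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov D statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

Feeding it the 4a and 4b timing samples (e.g. from the `timings.out` files produced by the patched webfetch) gives the D statistic from which the p value is derived.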

comment:13 Changed 8 years ago by Sebastian

Great, thanks for testing this! Together with the unit test that just got added to nickm's optimistic-client branch, I'm happy with the branch.

comment:14 Changed 8 years ago by iang

It definitely still needs the on/off/obey-consensus trit in the torrc file (default to obey-consensus), right?

comment:15 Changed 8 years ago by nickm

Resolution: implemented
Status: needs_review → closed

Merged to master; adding a new ticket for configuration stuff.

comment:16 Changed 7 years ago by arma

Keywords: performance added

comment:17 Changed 7 years ago by arma

Keywords: roundtrip added

comment:18 Changed 6 years ago by nickm

Keywords: tor-client added

comment:19 Changed 6 years ago by nickm

Component: Tor Client → Tor