Opened 6 years ago

Last modified 3 years ago

#12428 new enhancement

Make it possible to have multiple requests and responses in flight

Reported by: dcf
Owned by: dcf
Priority: Medium
Milestone:
Component: Circumvention/meek
Version:
Severity: Normal
Keywords:
Cc:
Actual Points:
Parent ID:
Points:
Reviewer:
Sponsor:


meek segments a data stream into multiple HTTP request–response pairs. In order to keep the segments in order, meek-client strictly serializes requests: it won't issue a second request until after it receives the response to its first request, even if there is buffered data waiting to be sent.
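As a rough illustration of that strict serialization (names here are hypothetical, not meek's actual API — `send_request` stands in for one HTTP round trip):

```python
import queue

def serialized_poll_loop(send_request, upstream):
    """Illustrative sketch: only one HTTP request-response pair is
    ever in flight, as described above."""
    while True:
        # Gather whatever upstream data is buffered; an empty payload
        # is just a poll for downstream data.
        try:
            payload = upstream.get(timeout=0.1)
        except queue.Empty:
            payload = b""
        # Block until the response arrives before issuing the next
        # request, so segments stay in order.
        response_body = send_request(payload)
        yield response_body
```

The blocking call to `send_request` is exactly the bottleneck discussed next: no second request can start until the first response returns.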

The limit of one outstanding request–response pair restricts possible throughput. For instance, if a user is located 200 ms from App Engine, and receives up to 64 KB per request, then their downstream throughput can be no greater than 64 KB / 200 ms = 320 KB/s, even if everything behind App Engine were instantaneous. Longer delays lead to even lower throughput.
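The back-of-the-envelope arithmetic above, using the same numbers:

```python
# Throughput ceiling with one request-response pair in flight:
# at most one max-sized response per round trip.
rtt_s = 0.200                  # 200 ms to App Engine
max_chunk_bytes = 64 * 1024    # up to 64 KB received per request
ceiling = max_chunk_bytes / rtt_s   # bytes per second
print(ceiling / 1024)          # → 320.0 (KB/s)
```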

The problem is how to deal with out-of-order arrivals, and with retransmissions when an HTTP transaction fails. My plan is to add sequence numbers and acknowledgements to upstream and downstream HTTP headers, similar to what we did in OSS (section 4). The seq number is the index of the first byte of a payload within the overall stream. The ack number is the index of the next byte we're expecting from the other side. We can implement this idea in a backward-compatible way, by having the server infer the seq and ack fields when they are missing; old clients that serialize will continue to work.
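A minimal sketch of byte-indexed seq/ack reassembly in the style described above (the class and method names are illustrative, not meek's):

```python
class Reassembler:
    """Buffer out-of-order segments; deliver bytes in stream order.
    seq is the byte index of a segment's first byte; ack is the index
    of the next byte we expect."""

    def __init__(self):
        self.next_byte = 0   # the ack we would send
        self.pending = {}    # seq -> payload, held until the gap fills

    def receive(self, seq, payload):
        """Accept one segment; return whatever is now deliverable."""
        if seq > self.next_byte:
            self.pending[seq] = payload   # out of order: hold it
            return b""
        # Drop any already-delivered prefix (duplicate or retransmit).
        payload = payload[self.next_byte - seq:]
        out = payload
        self.next_byte += len(payload)
        # Flush buffered segments that have become contiguous.
        while self.next_byte in self.pending:
            chunk = self.pending.pop(self.next_byte)
            out += chunk
            self.next_byte += len(chunk)
        return out

    @property
    def ack(self):
        return self.next_byte
```

A retransmitted segment whose bytes were already delivered simply yields nothing, which is what makes resending after a failed HTTP transaction safe.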

There's a complication related to the protocol's polling nature. During a big download, we want multiple downstream responses to be in flight. To get that, we need to speculatively send a bunch of requests and see whether they get responses that carry data. My thinking is to do something like TCP congestion avoidance: increment the number of speculative probes we send by 1 every time we get a response back with data (maybe only when the response is full-sized), and reset the number to 1 when there is a loss event.
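The probe-count rule sketched above could look something like this (a hypothetical sketch; the function name and the exact growth/reset policy are assumptions, not a settled design):

```python
def adjust_probes(probes, got_data, full_sized, lost):
    """Congestion-avoidance-style control of speculative probes:
    grow by one per full-sized data response, reset to 1 on loss."""
    if lost:
        return 1                 # loss event: back off completely
    if got_data and full_sized:
        return probes + 1        # additive increase
    return probes                # empty or short response: hold steady
```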

Child Tickets

Change History (2)

comment:1 Changed 6 years ago by dcf

It occurs to me that we could probably very easily implement a poor man's version of this. Just send requests whenever, and rely on what we know about how our HTTPS clients like to use a single persistent TCP connection (breaking the abstraction slightly) to keep the chunks in order.

comment:2 Changed 3 years ago by teor

Severity: Normal

Set all open tickets without a severity to "Normal"
