Ticket #2536: 0001-Add-Token-Bucket-proposal-idea.patch

File 0001-Add-Token-Bucket-proposal-idea.patch, 8.3 KB (added by karsten, 9 years ago)

[PATCH] Add Token Bucket proposal idea.

  • new file proposals/ideas/xxx-tokenbucket.txt

    From 96e3caaa9a4a757c73677d179518724455340247 Mon Sep 17 00:00:00 2001
    From: Karsten Loesing <karsten.loesing@gmx.net>
    Date: Fri, 25 Mar 2011 10:58:51 +0100
    Subject: [PATCH] Add Token Bucket proposal idea.
     proposals/ideas/xxx-tokenbucket.txt |  136 +++++++++++++++++++++++++++++++++++
     1 files changed, 136 insertions(+), 0 deletions(-)
     create mode 100644 proposals/ideas/xxx-tokenbucket.txt
    diff --git a/proposals/ideas/xxx-tokenbucket.txt b/proposals/ideas/xxx-tokenbucket.txt
    new file mode 100644
    index 0000000..9128c9b
Filename: xxx-tokenbucket.txt
Title: Token Bucket
Author: Florian Tschorsch and Björn Scheuermann
Created: 03-Dec-2010
Status: Draft / Open
Overview:

  The following proposal targets the reduction of queuing times in onion
  routers. In particular, we focus on the token bucket algorithm in Tor and
  point out that its current usage unnecessarily locks cells for long time
  spans. We propose a non-intrusive change to Tor's design that overcomes
  these deficiencies.
Motivation and Background:
  Cell statistics from the Tor network [1] reveal that cells reside in
  individual onion routers' cell queues for up to several seconds. These
  queuing times increase the end-to-end delay significantly and are
  apparently the largest contributor to overall cell latency in Tor.

  In Tor there exist multiple token buckets on different logical levels.
  They all work independently and are used to limit an onion router's up-
  and downstream. All token buckets are refilled every second with a
  constant amount of tokens that depends on the configured bandwidth
  limits. For example, the so-called RelayedTokenBucket limits relayed
  traffic only. All data read from incoming connections is bound to a
  dedicated read token bucket; an analogous mechanism exists for written
  data leaving the onion router. We identified the specific usage and
  implementation of the token bucket algorithm as one cause of the very
  high (and unnecessary) queuing times in an onion router.
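  The once-per-second refill described above can be sketched as follows.
  This is an illustrative model only, not Tor's actual code; the parameter
  names are hypothetical and only loosely echo Tor's bandwidth options.

```python
class TokenBucket:
    """Minimal token bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec  # tokens added per one-second refill
        self.burst = burst_bytes        # maximum fill level
        self.level = burst_bytes        # current fill level, in bytes

    def refill(self):
        """Called once per second: add one second's worth of tokens."""
        self.level = min(self.level + self.rate, self.burst)

    def try_consume(self, nbytes):
        """Spend tokens for nbytes of traffic if enough are available."""
        if self.level >= nbytes:
            self.level -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_sec=50_000, burst_bytes=100_000)
assert bucket.try_consume(80_000)      # allowed: bucket starts full
assert not bucket.try_consume(80_000)  # only 20,000 tokens remain
bucket.refill()                        # one refill interval later
assert bucket.try_consume(70_000)      # 70,000 <= 20,000 + 50,000
```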
  First, we observe that the token buckets in Tor are (surprisingly, at
  first glance) allowed to take on negative fill levels. This is justified
  by the TLS connections between onion routers, where whole TLS records
  need to be processed. In particular, the token bucket on the incoming
  side (i.e., the one which determines at which rate the router is allowed
  to read from incoming TCP connections) often runs into non-negligible
  negative fill levels. As a consequence, sometimes slightly more data is
  read than would be admissible under a strict interpretation of the token
  bucket concept.
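  The read-side overdraw can be illustrated with a small variant of the
  bucket above. This is a hypothetical sketch of the behavior described
  here, not Tor's implementation: reading is permitted while the level is
  positive, but a whole record is consumed even if that overdraws the
  bucket.

```python
class ReadTokenBucket:
    """Read-side bucket that may go negative when whole TLS records
    must be processed (hypothetical sketch, not Tor's actual code)."""

    def __init__(self, rate):
        self.rate = rate
        self.level = rate  # start with one interval's worth of tokens

    def refill(self):
        # A negative level eats into the newly added tokens first.
        self.level = min(self.level + self.rate, self.rate)

    def read_record(self, record_len):
        """Read one whole record if any tokens remain; the full record
        is consumed even when fewer tokens are left than its length."""
        if self.level <= 0:
            return False
        self.level -= record_len  # may become negative
        return True

b = ReadTokenBucket(rate=10_000)
assert b.read_record(9_000)      # level drops to 1,000
assert b.read_record(4_000)      # level was still positive: drops to -3,000
assert b.level < 0               # more data read than strictly admissible
assert not b.read_record(1_000)  # now blocked until the next refill
```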
  The token bucket limiting the outgoing rate, however, does not take on
  negative fill levels equally often. Consequently, it regularly happens
  that somewhat more data is read on the incoming side than the outgoing
  token bucket allows to be written during the same cycle, even if the
  configured data rates on both sides are the same. The respective cells
  will thus not be allowed to leave the onion router immediately; they
  necessarily remain queued at least until the token bucket on the
  outgoing side is refilled again. The refill interval currently is, as
  mentioned before, one second -- so these cells are delayed for a very
  substantial time. In summary, one could say that the two buckets, on the
  incoming and outgoing side, work like a double door system and
  frequently lock cells for a full token bucket refill interval.
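  The double door effect can be traced through one refill interval with a
  toy calculation. All numbers are made up for illustration; the point is
  only that a slight read-side overdraw leaves cells stuck behind the
  strict write-side limit until the next refill.

```python
# Sketch of the "double door" effect: the read bucket overdraws slightly,
# so more cells enter than the write bucket lets out in the same interval,
# and the surplus waits a full one-second refill in the circuit queue.
CELL = 512        # bytes per Tor cell
RATE = 10 * CELL  # both buckets refilled with 10 cells' worth per second

read_level = RATE
write_level = RATE
queued = 0        # cells sitting in the circuit queue

# Interval 1: the read side overdraws by two cells (e.g. because a whole
# TLS record had to be processed); the write side stops at its limit.
cells_read = 12
read_level -= cells_read * CELL                    # goes negative
cells_written = min(cells_read, write_level // CELL)
write_level -= cells_written * CELL
queued += cells_read - cells_written

assert read_level < 0  # the read bucket was allowed to overdraw
assert queued == 2     # two cells are locked until the next refill
```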
  Apart from the effects described above, it should be noted that the very
  coarse-grained refill interval of one second also has other detrimental
  effects. First, consider an onion router with multiple TLS connections
  over which cells arrive. If there is high activity (i.e., many incoming
  cells in total), then the coarse refill interval will cause unfairness.
  Assume (just for simplicity) a circuit C that doesn't share its TLS
  connection with any other circuit. Moreover, assume that C hasn't
  transmitted any data for some time (e.g., due to a typical bursty HTTP
  traffic pattern). Consequently, there are no cells from this circuit in
  the incoming socket buffers. When the buckets are refilled, the incoming
  token bucket will immediately spend all its tokens on other incoming
  connections. Now assume that cells from C arrive soon after. For
  fairness' sake, these cells should be serviced in a timely manner --
  circuit C hasn't received any bandwidth for a significant time. However,
  it will take a very long time (up to one refill interval) before the
  current implementation fetches these cells from the incoming TLS
  connection, because the token bucket will remain empty for a long time.
  Just because the cells happened to arrive at the "wrong" point in time,
  they must wait. Such situations may occur even though the configured
  admissible incoming data rate is not exceeded by the incoming cells: the
  long refill intervals often lead to an operational state where all the
  cells that were admissible during a given one-second period are queued
  until the end of this second before the onion router even starts
  processing them. This results in unnecessary, long queuing delays in the
  incoming socket buffers. These delays come in *addition* to the queuing
  delays in the circuit buffers discussed above. Because they occur in a
  different buffer, the socket buffer queuing times are not visible in the
  Tor circuit queue delay statistics [1].
  Finally, the coarse-grained refill intervals result in a very bursty
  outgoing traffic pattern at the onion routers (one large chunk of data
  once per second, instead of smooth transmission progress). This is
  undesirable, since such a traffic pattern can interfere with TCP's
  congestion control mechanisms and can be the source of suboptimal TCP
  performance on the TLS links between onion routers.
Design:

  In order to overcome the described problems, we propose two changes
  related to the token bucket algorithm.
  First, we observe that the token bucket for relayed traffic on the
  outgoing connections is unnecessary: since no new relayed traffic is
  generated in an onion router, the rate of this traffic is already
  limited by the read bucket on the incoming side (cf. the
  RelayedTokenBucket). We therefore propose to remove the rate limiting
  mechanism on the outgoing side. This eliminates the "double door effect"
  discussed above, since all cells are allowed to flow freely out of the
  router once they have passed the incoming rate limiter.
  Second, the refill interval of the buckets should be shortened. The
  remaining token buckets should be refilled more often, with a
  correspondingly smaller amount of tokens. For instance, the buckets
  might be refilled every 10 milliseconds with one-hundredth of the amount
  of data admissible per second. This will help to overcome the problem of
  unfairness when reading from the incoming socket buffers. At the same
  time it smooths the traffic leaving the onion routers. We are aware that
  this latter change has apparently been discussed before [2]; we are not
  sure why it has not been implemented yet.
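  The arithmetic of the finer-grained refill can be sketched as follows.
  The rate value is an arbitrary example; the sketch only shows that
  spreading the refill over 100 ticks leaves the total admissible data
  per second unchanged while replacing the once-per-second burst with
  many small chunks.

```python
# Proposed finer-grained refill: same configured rate, but spread over
# 100 refills per second (one every 10 ms) instead of one per second.
RATE_PER_SEC = 100_000  # bytes per second (example value)
REFILLS_PER_SEC = 100   # refill every 10 milliseconds
TOKENS_PER_REFILL = RATE_PER_SEC // REFILLS_PER_SEC

level = 0
sent_per_tick = []
for tick in range(REFILLS_PER_SEC):       # simulate one second
    level += TOKENS_PER_REFILL
    sent = min(level, TOKENS_PER_REFILL)  # steady demand drains each refill
    level -= sent
    sent_per_tick.append(sent)

# The total admissible data per second is unchanged ...
assert sum(sent_per_tick) == RATE_PER_SEC
# ... but it leaves in 100 small, smooth chunks rather than one burst.
assert max(sent_per_tick) == TOKENS_PER_REFILL
```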
  The proposed measures are very simple to implement, but a significant
  reduction of cell queuing times can nevertheless be expected.
  Experiments which we performed with a patched onion router revealed
  that the CPU utilization of an onion router is not significantly
  impacted by the reduction of the refill interval length, and that cell
  queuing times are indeed significantly shorter.

  The presented design proposal is minimally intrusive and does not
  fundamentally change the current Tor design; it can therefore be
  integrated easily into the existing architecture. Onion routers can be
  updated independently. As more onion routers run a changed version,
  gradual performance improvements can be expected. We believe that our
  contribution can improve Tor's performance substantially.
  Feedback is highly appreciated.
References:

  [1] Karsten Loesing. Analysis of Circuit Queues in Tor. August 25, 2009.
  [2] https://trac.torproject.org/projects/tor/wiki/sponsors/SponsorD/June2011