Ticket #5263: 0001-Fix-busy-Libevent-loops-infinite-loops-in-Shadow.patch

File 0001-Fix-busy-Libevent-loops-infinite-loops-in-Shadow.patch, 3.3 KB (added by robgjansen, 8 years ago)
  • src/or/main.c

    From 5c649b45978625c1e323c424b8fc32c39427d6c4 Mon Sep 17 00:00:00 2001
    From: "Rob G. Jansen" <jansen@cs.umn.edu>
    Date: Tue, 28 Feb 2012 18:19:49 -0500
    Subject: [PATCH] Fix busy Libevent loops (infinite loops in Shadow)
    
    There is a bug causing busy loops in Libevent and infinite loops in the Shadow simulator. The bug is triggered by a connection that is marked for close, wants to flush, and is held open to flush, but is rate limited (its token bucket is empty).
    
    This commit fixes the bug. Details are below.
    
    The busy loop currently happens in the read and write callbacks when the active socket is marked for close. In that case, Tor does not actually try to complete the read or write (those methods return early when the connection is marked), but instead tries to clear the connection with conn_close_if_marked(). Tor will not close a marked connection that still contains data: it must be flushed first. The bug occurs when this flush cannot happen because the connection is rate-limited (its write token bucket is empty).
    
    The fix is to detect when rate limiting is preventing a marked connection from flushing properly. In that case, the connection is flagged as read_blocked_on_bw/write_blocked_on_bw and its read/write events are de-registered from Libevent. When the token bucket gets refilled, the refill code checks those flags and adds the read/write events back to Libevent, which causes them to fire. This time, the connection is properly flushed and closed.
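
    For reference, the refill side looks roughly like the sketch below. The helper name connection_consider_unblocking() and its token-count parameters are illustrative only (they are not from this patch); the read_blocked_on_bw/write_blocked_on_bw flags and connection_start_reading()/connection_start_writing() are the real identifiers the fix relies on.

      /* Sketch only: a simplified picture of what the bucket-refill path in
       * src/or/main.c does for a connection that was blocked on bandwidth.
       * The function name and token-count arguments are hypothetical; the
       * flags and the start_reading/start_writing calls are real. */
      static void
      connection_consider_unblocking(connection_t *conn,
                                     ssize_t read_tokens, ssize_t write_tokens)
      {
        if (conn->read_blocked_on_bw && read_tokens > 0) {
          conn->read_blocked_on_bw = 0;
          connection_start_reading(conn);  /* re-adds the read event to Libevent */
        }
        if (conn->write_blocked_on_bw && write_tokens > 0) {
          conn->write_blocked_on_bw = 0;
          connection_start_writing(conn);  /* re-adds the write event; when it
                                            * fires, the marked connection gets
                                            * flushed and closed */
        }
      }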
    
    The reason that both the read and write events are de-registered when the marked connection cannot flush is that both result in the same behavior. Read/write events on marked connections will never again do any actual reads or writes; they are only useful to trigger the flush and close the connection. By setting the associated read_blocked_on_bw/write_blocked_on_bw flags, we ensure that the events will be added back to Libevent and that the connection will be properly flushed and closed.
    
    Why is this important? Every Shadow event occurs at a discrete time instant. If Tor does not properly de-register Libevent events that fire but result in Tor essentially doing nothing, Libevent will repeatedly fire the event. In Shadow this means an infinite loop; outside of Shadow it means wasted CPU cycles.
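
    To see why a do-nothing handler busy-loops, here is a small standalone Libevent 2 program (an illustration, not Tor code): a persistent, level-triggered write event on an always-writable socket keeps firing until the callback removes it with event_del(), which is essentially what connection_stop_writing() does for Tor's write events.

      /* Standalone illustration (not Tor code): a level-triggered write event
       * whose callback does no I/O fires on every pass through the loop.
       * Removing the event is what breaks the busy loop. */
      #include <event2/event.h>
      #include <event2/util.h>
      #include <sys/socket.h>
      #include <stdio.h>

      static struct event *write_ev;
      static int fire_count;

      static void
      write_cb(evutil_socket_t fd, short what, void *arg)
      {
        (void)fd; (void)what; (void)arg;
        /* We never write anything, so the socket stays writable and the
         * event would fire forever if we left it registered. */
        if (++fire_count >= 5) {
          printf("write event fired %d times with nothing to do; removing it\n",
                 fire_count);
          event_del(write_ev);  /* analogous to connection_stop_writing() */
        }
      }

      int
      main(void)
      {
        struct event_base *base = event_base_new();
        evutil_socket_t pair[2];

        if (evutil_socketpair(AF_UNIX, SOCK_STREAM, 0, pair) < 0)
          return 1;

        write_ev = event_new(base, pair[0], EV_WRITE | EV_PERSIST, write_cb, NULL);
        event_add(write_ev, NULL);

        event_base_dispatch(base);  /* returns once the last event is removed */

        event_free(write_ev);
        event_base_free(base);
        return 0;
      }
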
    ---
     src/or/main.c |   14 ++++++++++++++
     1 files changed, 14 insertions(+), 0 deletions(-)
    
    diff --git a/src/or/main.c b/src/or/main.c
    index 9022f2e..d3b8d53 100644
    --- a/src/or/main.c
    +++ b/src/or/main.c
    @@ -845,6 +845,20 @@ conn_close_if_marked(int i)
                                "Holding conn (fd %d) open for more flushing.",
                                (int)conn->s));
             conn->timestamp_lastwritten = now; /* reset so we can flush more */
    +      } else if (sz == 0) { /* retval is also 0 */
    +        /* Connection must flush before closing, but its being rate-limited.
    +           Lets remove from Libevent, and mark it as blocked on bandwidth so it
    +           will be re-added on next token bucket refill. Prevents busy Libevent
    +           loops where we keep ending up here and returning 0 until we are no
    +           longer blocked on bandwidth. */
    +        if (connection_is_reading(conn)) {
    +          conn->read_blocked_on_bw = 1;
    +          connection_stop_reading(conn);
    +        }
    +        if (connection_is_writing(conn)) {
    +          conn->write_blocked_on_bw = 1;
    +          connection_stop_writing(conn);
    +        }
           }
           return 0;
         }