Opened 10 months ago

Last modified 5 months ago

#32794 assigned defect

improve OOS (out-of-sockets) handler victim selection and more

Reported by: starlight
Owned by: starlight
Priority: Medium
Milestone: Tor: 0.4.4.x-final
Component: Core Tor/Tor
Version: Tor: 0.4.2.5
Severity: Normal
Keywords: security, 044-can
Cc:
Actual Points:
Parent ID:
Points:
Reviewer: nickm
Sponsor:

Description

I find these revisions a benefit. Will create a branch on GitLab if desired.

Child Tickets

Attachments (3)

tor-0.4.2.5-oos_select-git.patch (10.9 KB) - added by starlight 10 months ago.
patch vs master and 0.4.2.5
branch_improve-oos-master-bk00.patch (34.2 KB) - added by starlight 9 months ago.
branch_improve-oos-master-rebase-bk00.patch (41.8 KB) - added by starlight 8 months ago.


Change History (30)

Changed 10 months ago by starlight

patch vs master and 0.4.2.5

comment:1 Changed 10 months ago by nickm

Milestone: Tor: 0.4.3.x-final
Status: new → needs_review

comment:2 Changed 10 months ago by dgoulet

Reviewer: nickm

comment:3 Changed 10 months ago by nickm

Hi! I have some comments on the code, but before I get to them, we should talk about the approach.

The new algorithm seems to be

  1. Always keep OR-to-OR connections; always keep directory connections. Only inspect client-to-guard and exit connections.
  2. When discarding connections, discard those that were created most recently.

Is that right? If so, I wonder if there is some way that attacker can exploit this by making a bunch of directory connections, if our directory port is open. Maybe we should consider CONN_TYPE_DIR as well.

I also wonder if the attacker can reduce our number of available sockets by simply attempting a socket exhaustion attack. We'll kill off some of their connections, but we won't kill them all. If the attacker preserves the ones that we don't kill, they will always survive instead of any newer connections that we receive in the future. Can we do any better than this?

(Once we're in agreement here, we should describe the algorithm we want to follow in a patch to tor-spec.txt, so that the correct behavior is documented.)
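A minimal sketch of the selection rule as summarized above (illustrative Python, not Tor's actual code; the connection kinds and field names are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Conn:
    kind: str          # "or-or", "dir", "client-or", "exit", "listener"
    created_at: float  # creation timestamp; larger means newer

def pick_victims(conns, n_needed):
    """Exempt OR-to-OR and directory connections, then close the
    most recently created of what remains."""
    exempt = {"or-or", "dir", "listener"}
    candidates = [c for c in conns if c.kind not in exempt]
    candidates.sort(key=lambda c: c.created_at, reverse=True)  # newest first
    return candidates[:n_needed]
```

This is exactly the property nickm's trickle concern targets: an attacker whose connections are old enough always survives the sort.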

comment:4 in reply to:  3 ; Changed 10 months ago by starlight

Replying to nickm:

Is that right? If so, I wonder if there is some way that attacker can exploit this by making a bunch of directory connections, if our directory port is open. Maybe we should consider CONN_TYPE_DIR as well.

Good point. I could rework it to sort OR-OR connections into a low priority band and everything else into a high band, without excluding any category except listeners, sorting by connection age within each band. Open to your thoughts on this.

I also wonder if the attacker can reduce our number of available sockets by simply attempting a socket exhaustion attack. We'll kill off some of their connections, but we won't kill them all. If the attacker preserves the ones that we don't kill, they will always survive instead of any newer connections that we receive in the future. Can we do any better than this?

I have further work in which the underlying circuits are killed rather than the connections. Do you see that as improving this issue? Also dynamic configuration of the limits: threshold min, max, and soft nofile.

(Once we're in agreement here, we should describe the algorithm we want to follow in a patch to tor-spec.txt, so that the correct behavior is documented.)

sure!

comment:5 in reply to:  4 Changed 10 months ago by starlight

Replying to starlight:

...could rework it to sort OR-OR connections to a low priority band...

Hmm. Then someone could create 100000 relays and DOS the network with connections to the good relays.

Perhaps weight the OR-OR connections by consensus weight? The new relays would all start at 20 with days available for mitigation.

comment:6 Changed 10 months ago by starlight

how about

two bands: [OR-OR] and [client-OR, exit, dir]; listeners exempt

but OR-OR connections with a consensus weight lower than a configurable
threshold (set in the consensus and locally), perhaps default 100,
go in the non-OR-OR band, i.e. new/trivial relays do not qualify

OR-OR band, lower kill priority:
sorted by connection age * weight, with lower values at higher kill priority

non-OR-OR band, higher kill priority:
sorted by connection age, with newer connections at higher kill priority

?
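The two-band scheme above might look roughly like this (an illustrative Python sketch of the proposal in this comment; the field names and connection kinds are invented, not Tor identifiers):

```python
from dataclasses import dataclass

@dataclass
class Conn:
    kind: str          # "or-or", "client-or", "exit", "dir", "listener"
    created_at: float
    weight: int = 0    # consensus weight; only meaningful for "or-or"

WEIGHT_THRESHOLD = 100  # proposed default, configurable in consensus/locally

def kill_order(conns, now):
    band_a, band_b = [], []  # a: killed first; b: killed last
    for c in conns:
        if c.kind == "listener":
            continue  # listeners are exempt
        if c.kind == "or-or" and c.weight >= WEIGHT_THRESHOLD:
            band_b.append(c)
        else:
            band_a.append(c)  # new/trivial relays land here too
    band_a.sort(key=lambda c: now - c.created_at)               # newer killed first
    band_b.sort(key=lambda c: (now - c.created_at) * c.weight)  # low age*weight first
    return band_a + band_b
```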

Last edited 9 months ago by starlight

comment:7 Changed 10 months ago by nickm

I'm thinking that this sounds like it's getting closer to plausible, but I'd want to see pseudocode to be sure. I don't understand how killing circuits instead of connections would help with socket exhaustion, though.

I'm still wondering about the attack where the attacker reduces the number of available sockets. Not sure how bad that would actually be.

comment:8 Changed 10 months ago by nickm

Keywords: security 043-can added

comment:9 Changed 9 months ago by teor

Status: needs_review → needs_information

comment:10 Changed 9 months ago by starlight

Instead of writing pseudocode I implemented the algorithm described. What remains is down to neatness-counts items: replacing hard-coded constants with configs, etc. I'm busy at work and it might take a couple more weeks; I can post the current state if desired.

comment:11 Changed 9 months ago by nickm

Keywords: 043-can removed
Milestone: Tor: 0.4.3.x-final → Tor: 0.4.4.x-final

Yeah, I'd love to see work-in-progress stuff here, if you're okay sharing it. :)

Changed 9 months ago by starlight

comment:12 Changed 9 months ago by starlight

my todo items are:

enhancements to connection limit logic
  ?support for random upper threshold in a range
  configurable log level for both "Recomputed OOS thresholds" and OOS event messages
  emit warnings when config thresholds ignored (don't change)
  enhanced sort
    ~core logic [complete]
    report
      configurable faint relay consensus threshold
      ~conn stats on one line [complete]
      ~enhance eligible-to-kill count stats, include all categories [complete]
  possibly redundant commit
    "eliminate OOS kill duplicate circuit mark-closed warnings"
    written before push of oos_victim set via circuit connection iterate
      i.e. "if (c->oos_victim) continue;"
Last edited 8 months ago by starlight

comment:13 Changed 9 months ago by starlight

The git-am branch in the attachment is off ff9313340; rebased last weekend. 0.4.2.5 is slightly different now with an alternate "configurable parameters apply/revert logic for OOS handler".

Last edited 9 months ago by starlight

Changed 8 months ago by starlight

comment:14 Changed 8 months ago by starlight

rebased to 21f45197a

comment:15 Changed 8 months ago by nickm

Status: needs_information → needs_review

comment:16 Changed 8 months ago by starlight

I still have a few minor loose ends to tie up (per the todo above). Rather than a comprehensive pre-merge review at this stage, an examination of the pick_oos_victims() algorithm rewrite would be helpful and appreciated.

Last edited 8 months ago by starlight

comment:17 Changed 8 months ago by nickm

So as I understand it, the proposed new algorithm is:

Consider only edge connections and OR connections with no identity set.

Close the newest N such connections, until we have regained enough sockets.

There are some problems here that we should think about. They all stem from the fact that an attacker is not required to do the kind of DoS attack that we expect: the attacker will know our algorithm, so they can adjust their attack to work around it.

  1. If we have a DirPort open, the attacker can open connections to our DirPort: so we should also consider DirPort connections that have sockets set. (The fix for this one is easy: just check Directory connections too.)
  2. Checking whether a connection's identity_digest is zero will not always do what we want. First, bridges do not set their identity digest, even though a bridge may have circuits from multiple users. Second, any client can pretend to be a relay and provide authentication when it connects to us, thereby setting an identity digest. (This one is harder to fix: we could look for relays that are in the consensus, but a relay that is not in the consensus might just be a new one that we don't know about yet. I don't know a supported way to detect bridges -- there isn't supposed to be one, really. We could look at the number of circuits, perhaps?)
  3. The attacker is not required to flood us with connections: they can send a trickle instead. Instead of opening a whole bunch of connections at once, the attacker can open a new connection every 5 minutes. This will still eat up all of our sockets over time, but when we go to close the newest ones, the attacker will still have a bunch of our capacity. (I do not know the right fix for this. We could randomize the algorithm, I guess?)

comment:18 Changed 8 months ago by nickm

Owner: set to starlight
Status: needs_review → assigned

comment:19 in reply to:  17 ; Changed 8 months ago by teor

Replying to nickm:

So as I understand it, the proposed new algorithm is:

Consider only edge connections and OR connections with no identity set.

Close the newest N such connections, until we have regained enough sockets.

There are some problems here that we should think about. They all stem from the fact that an attacker is not required to do the kind of DoS attack that we expect: the attacker will know our algorithm, so they can adjust their attack to work around it.

  1. If we have a DirPort open, the attacker can open connections to our DirPort: so we should also consider DirPort connections that have sockets set. (The fix for this one is easy: just check Directory connections too.)
  2. Checking whether a connection's identity_digest is zero will not always do what we want. First, bridges do not set their identity digest, even though a bridge may have circuits from multiple users. Second, any client can pretend to be a relay and provide authentication when it connects to us, thereby setting an identity digest. (This one is harder to fix: we could look for relays that are in the consensus, but a relay that is not in the consensus might just be a new one that we don't know about yet. I don't know a supported way to detect bridges -- there isn't supposed to be one, really. We could look at the number of circuits, perhaps?)

It's also worth thinking about onion services and single onion services here. A busy onion service may look similar to a bridge, from the perspective of the upstream hop: both open lots of circuits.

Also, bridges and onion services can experience a socket DoS, too. We should think about how this algorithm might work for them, even if we don't activate it right now.

  3. The attacker is not required to flood us with connections: they can send a trickle instead. Instead of opening a whole bunch of connections at once, the attacker can open a new connection every 5 minutes. This will still eat up all of our sockets over time, but when we go to close the newest ones, the attacker will still have a bunch of our capacity. (I do not know the right fix for this. We could randomize the algorithm, I guess?)

I think randomising the sockets we close is the hardest algorithm to exploit, because the attacker can't know which sockets were going to close next.

We may want to assign a lower probability to sockets that we have recently opened to fetch directory documents, and connections on which we are currently fetching directory documents. (Attackers can occupy these sockets using a slowloris attack, so we should still be prepared to close them, if we have a lot of them open.)

We should also assign a threshold value, so we keep a few directory sockets. (150 seems like a good threshold for relays, because they do approximately 7000 relays / 96 descriptors per request * 2 requests for descriptors, when they don't have any cached descriptors.)

Remember, relays can use remote DirPorts and ORPorts for directory fetches, the code should handle both.

We should also try to think of any other kinds of essential sockets, that we don't want to close.
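The randomised selection teor describes could be sketched as follows (illustrative Python only; the connection kinds and the 0.1 down-weight for directory-fetch sockets are invented for the example, not proposed values):

```python
import random

def pick_random_victims(conns, n_needed, rng=random):
    """Choose victims at random, giving directory-fetch sockets a much
    lower (but nonzero) chance of being picked, per the slowloris note."""
    pool = []
    for c in conns:
        if c["kind"] == "listener":
            continue
        weight = 0.1 if c["kind"] == "dir-fetch" else 1.0
        pool.append((c, weight))
    victims = []
    while pool and len(victims) < n_needed:
        total = sum(w for _, w in pool)
        r = rng.uniform(0.0, total)
        acc = 0.0
        for i, (c, w) in enumerate(pool):
            acc += w
            # the second clause guards against float rounding at the end
            if r <= acc or i == len(pool) - 1:
                victims.append(c)
                del pool[i]
                break
    return victims
```

Because the choice is random, a trickle attacker cannot arrange for their connections to always survive; they can only lower the odds of any one connection being picked.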

comment:20 in reply to:  17 Changed 8 months ago by starlight

Replying to nickm:

  1. If we have a DirPort open, the attacker can open connections to our DirPort: so we should also consider DirPort connections that have sockets set. (The fix for this one is easy: just check Directory connections too.)

hi! This is implemented already. I can add more comments to pick_oos_victims() if desired.

  2. Checking whether a connection's identity_digest is zero will not always do what we want. First, bridges do not set their identity digest, even though a bridge may have circuits from multiple users.

ok, how to tell if it's a bridge?

Second, any client can pretend to be a relay and provide authentication when it connects to us, thereby setting an identity digest. (This one is harder to fix: we could look for relays that are in the consensus, but a relay that is not in the consensus might just be a new one that we don't know about yet. I don't know a supported way to detect bridges -- there isn't supposed to be one, really. We could look at the number of circuits, perhaps?)

but can "any client" set the digest and also appear in the consensus with the Stable flag and a consensus bandwidth of 500 or higher? Perhaps I should add some more comments.

  3. The attacker is not required to flood us with connections: they can send a trickle instead. Instead of opening a whole bunch of connections at once, the attacker can open a new connection every 5 minutes. This will still eat up all of our sockets over time, but when we go to close the newest ones, the attacker will still have a bunch of our capacity. (I do not know the right fix for this. We could randomize the algorithm, I guess?)

Adding randomness while retaining some degree of time priority in band A, age*cbw in band B makes sense to me.

Last edited 8 months ago by starlight

comment:21 Changed 8 months ago by teor

Is there an overview of this design somewhere?

It sounds like this change needs a proposal, or a good description of the algorithm used for choosing sockets.

comment:22 in reply to:  19 Changed 8 months ago by starlight

Replying to teor:

It's also worth thinking about onion services and single onion services here. A busy onion service may look similar to a bridge, from the perspective of the upstream hop: both open lots of circuits.

I would appreciate some big-picture help on how to identify bridges and single onion services; I can drill into the details on my own once I have a general picture.

Also, bridges and onion services can experience a socket DoS, too. We should think about how this algorithm might work for them, even if we don't activate it right now.

ok

I think randomising the sockets we close is the hardest algorithm to exploit, because the attacker can't know which sockets were going to close next.

sure, agree next comment above

We may want to assign a lower probability to sockets that we have recently opened to fetch directory documents, and connections on which we are currently fetching directory documents. (Attackers can occupy these sockets using a slowloris attack, so we should still be prepared to close them, if we have a lot of them open.)

something like a time-decaying rate as a negative priority factor, with a countervailing longer-horizon-and-higher-consumption positive priority factor?

We should also assign a threshold value, so we keep a few directory sockets. (150 seems like a good threshold for relays, because they do approximately 7000 relays / 96 descriptors per request * 2 requests for descriptors, when they don't have any cached descriptors.)

Have some hard data that suggest this may be unnecessary, can share privately.

Remember, relays can use remote DirPorts and ORPorts for directory fetches, the code should handle both.

Again, a few big picture hints on how to figure will help.

We should also try to think of any other kinds of essential sockets, that we don't want to close.

In the current implementation, only OR, DIR and EXIT connection types are considered--all other types are exempt.

Please take fifteen minutes to read the one function pick_oos_victims().

comment:23 in reply to:  21 ; Changed 8 months ago by starlight

Replying to teor:

Is there an overview of this design somewhere?

above in comment:6

It sounds like this change needs a proposal, or a good description of the algorithm used for choosing sockets.

I'm the write-code-that-works-first, then-write-the-RFC kind -- similar to the folks who created the Internet (and Tor).

comment:24 Changed 8 months ago by starlight

A bit of context:

I wrote this as a quick mitigation to an issue, and it works (very well) in combination with some other mitigations. I've thought about it and do not necessarily see it as particularly great, just way, way better than what it replaces, and I don't advocate activating OOS by default. The supporting changes to permit dynamic configuration of limits are nice.

I have a bunch of much better and more important ideas I want to pursue and don't want to spend a whole lot more effort on this one. Please keep this in mind. I'm willing to improve it marginally, but if it turns into a time sink you've lost me.

comment:25 in reply to:  23 Changed 8 months ago by teor

Replying to starlight:

Replying to teor:

It's also worth thinking about onion services and single onion services here. A busy onion service may look similar to a bridge, from the perspective of the upstream hop: both open lots of circuits.

Will appreciate some big picture help on how to figure bridges and single onions services, can drill into the details on my own if I have a general picture.

Bridges and onion services try to look like clients, for anonymity reasons. If you find a reliable distinguisher, we'll try to fix it, because it's a security issue:

Replying to nickm:

  2. Checking whether a connection's identity_digest is zero will not always do what we want. First, bridges do not set their identity digest, even though a bridge may have circuits from multiple users.

However, busy bridges and onion services should only have one connection to your relay. So they shouldn't be taking up very many sockets at all.

I think it's ok to have a bucket that's [client-OR (including onion services, bridges), exit, dir, OR-OR low consensus weight]. We just need to document where the onion services and bridges go, so people don't assume they're protected (like [OR-OR good consensus weight]).

Replying to starlight:

Replying to teor:

Also, bridges and onion services can experience a socket DoS, too. We should think about how this algorithm might work for them, even if we don't activate it right now.

Bridges can use two buckets: [bridge-OR outbound] and [OR-bridge inbound, clients, onion services]. Bridges don't support exiting or DirPorts. OR-bridge inbound connections are reachability circuits, or a DoS via another relay. So they are not important.

Onion services shouldn't have socket issues, because they use guards.

Single onion services could also use two buckets: [long-term intro, directory fetches, HSDir posts] and [rendezvous]. Rendezvous connections are a big DoS risk. Keeping the long-term intro connections, directory fetches, and HSDir posts is important to keep the service online.

You don't have to make these changes, but the code should be designed so it's easy to change the way we filter connections. (You don't have to do a redesign, either - we can do that if we decided to merge.)
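The per-role buckets described above could be captured by a small classifier, which also shows the "easy to change the filtering" property teor asks for (a sketch; every role and kind label here is an invented name, not a Tor identifier):

```python
def protected(kind, role, weight=0):
    """Return True if a connection belongs in the keep-longest bucket
    for the given node role; everything else is closed first."""
    if role == "relay":
        # good-weight OR-OR conns and outbound directory fetches survive
        return (kind == "or-or" and weight >= 100) or kind == "dir-fetch-out"
    if role == "bridge":
        # outbound conns toward relays matter; inbound OR-bridge conns
        # are reachability checks or a DoS via another relay
        return kind == "bridge-or-out"
    if role == "single-onion":
        # long-term intro conns, directory fetches, and HSDir posts
        # keep the service online; rendezvous is the big DoS surface
        return kind in {"intro", "dir-fetch-out", "hsdir-post"}
    return False
```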

We may want to assign a lower probability to sockets that we have recently opened to fetch directory documents, and connections on which we are currently fetching directory documents. (Attackers can occupy these sockets using a slowloris attack, so we should still be prepared to close them, if we have a lot of them open.)

something like a time-decaying rate as a negative priority factor, with a countervailing longer-horizon-and-higher-consumption positive priority factor?

Directory fetches will either be OR-OR, or be an outbound directory fetch.

So we could do:

[OR-OR good consensus weight, outbound directory fetches], and [client-OR (including onion services, bridges), exit, inbound DirPort, OR-OR low consensus weight].

Remember, relays can use remote DirPorts and ORPorts for directory fetches, the code should handle both.

Again, a few big picture hints on how to figure will help.

I think dir_connection_t.dirconn_direct is pretty much what you want here:

https://github.com/torproject/tor/blob/master/src/feature/dircommon/dir_connection_st.h#L31

Again, you don't have to make that change, but we should make it before we merge.

Replying to starlight:

Replying to teor:

Is there an overview of this design somewhere?

above in comment:6

It sounds like this change needs a proposal, or a good description of the algorithm used for choosing sockets.

I'm the write-code-that-works-first, then-write-the-RFC kind -- similar to the folks who created the Internet (and Tor).

Fair enough, but tor does have a proposals process now :-)

You don't have to write the proposal, or the documentation. But someone should at least summarise the design before we merge.

comment:26 Changed 5 months ago by nickm

Keywords: 044-must added

Add 044-must to all security tickets in 0.4.4

comment:27 Changed 5 months ago by nickm

Keywords: 044-can added; 044-must removed