Opened 14 months ago

Closed 5 months ago

#22400 closed defect (fixed)

We bootstrap from different primary guards when we start with a non-live consensus and not enough guards in the state file

Reported by: arma Owned by: nickm
Priority: Medium Milestone: Tor: 0.3.3.x-final
Component: Core Tor/Tor Version: Tor: 0.3.0.7
Severity: Normal Keywords: tor-client, tor-guard, bootstrap, s8-errors
Cc: asn, s7r, arma, catalyst Actual Points:
Parent ID: Points: 3
Reviewer: asn Sponsor: Sponsor8-can

Description (last modified by arma)

I started my Tor client (0.3.1.1-alpha (git-ab9976b7245f05a9)) with these three guards (and only these three guards) in my state file:

Guard in=default rsa_id=BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910 nickname=onslaught sampled_on=2017-04-29T09:28:05 sampled_by=0.3.1.0-alpha-dev listed=1 confirmed_on=2017-05-02T22:48:40 confirmed_idx=1
Guard in=default rsa_id=7369A0E14C59B11D78357EC5AFE573A259264BBD nickname=yono1 sampled_on=2017-05-02T20:44:35 sampled_by=0.3.1.0-alpha-dev listed=1 confirmed_on=2017-04-25T13:14:11 confirmed_idx=0 pb_use_attempts=3.000000 pb_use_successes=3.000000 pb_circ_attempts=105.000000 pb_circ_successes=105.000000 pb_successful_circuits_closed=95.000000 pb_collapsed_circuits=10.000000 pb_timeouts=8.000000
Guard in=default rsa_id=62DA0256BBC28992D41CBAFB549FFD7C9B846A99 nickname=IRejectTorFoundatnN sampled_on=2017-05-21T02:25:50 sampled_by=0.3.1.0-alpha-dev listed=1 confirmed_on=2017-05-23T22:28:05 confirmed_idx=2

I figured that since there were three, and they were all confirmed, they would be my primary guards.

Also, my cached-microdesc-consensus file was about a day old.

When Tor starts, it says

May 25 20:58:06.155 [info] sampled_guards_update_from_consensus(): No live consensus; can't update sampled entry guards.

Ok, great, it's not going to go deleting or modifying any of my guards before it's even tried to load my consensus file from disk. Makes sense.

Then it goes through to update my three guards to say that they're not working:

May 25 20:58:06.155 [debug] entry_guard_set_filtered_flags(): Updated sampled guard onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910): filtered=0; reachable_filtered=0.
May 25 20:58:06.155 [debug] entry_guard_set_filtered_flags(): Updated sampled guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD): filtered=0; reachable_filtered=0.
May 25 20:58:06.156 [debug] entry_guard_set_filtered_flags(): Updated sampled guard IRejectTorFoundatnN ($62DA0256BBC28992D41CBAFB549FFD7C9B846A99): filtered=0; reachable_filtered=0.

I'm guessing that happened because of entry_guards_update_filtered_sets(), which got called from entry_guards_update_all(), but there are several possible paths to that function, so I'm not sure which one it was. Maybe it decided they're all down: since it hasn't loaded a consensus yet, they're all missing from the (nonexistent) consensus?

Then we get to:

May 25 20:58:06.156 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 0 in the USABLE_FILTERED set.
May 25 20:58:06.156 [info] sample_reachable_filtered_entry_guards():   (That isn't enough. Trying to expand the sample.)
May 25 20:58:06.156 [info] entry_guards_expand_sample(): Expanding the sample guard set. We have 3 guards in the sample, and 0 eligible guards to extend it with.
May 25 20:58:06.156 [info] entry_guards_expand_sample(): Not expanding the guard sample any further; just ran out of eligible guards
May 25 20:58:06.156 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 0 guards to consider.)

Ok, great, we refuse to expand the guard list now. That's good because we don't have any consensus loaded yet.

Then I finish loading the state file, and load other stuff from my datadirectory, like the cached-microdesc-consensus file:

May 25 20:58:06.644 [info] A consensus needs 5 good signatures from recognized authorities for us to accept it. This one has 8 (dannenberg tor26 longclaw maatuska moria1 dizum gabelmoo Faravahar).
May 25 20:58:06.797 [info] microdesc_cache_reload(): Reloaded microdescriptor cache. Found 7298 descriptors.
May 25 20:58:06.812 [info] update_consensus_networkstatus_fetch_time_impl(): No live microdesc consensus; we should fetch one immediately.
May 25 20:58:06.812 [info] cell_ewma_set_scale_factor(): Enabled cell_ewma algorithm because of value in CircuitPriorityHalflifeMsec in consensus; scale factor is 0.793701 per 10 seconds
May 25 20:58:07.046 [info] networkstatus_set_current_consensus(): Loaded an expired consensus. Discarding.
May 25 20:58:07.046 [info] router_reload_consensus_networkstatus(): Couldn't load unverified consensus microdesc networkstatus from cache

Ok, fine, it's expired so we discarded it, no problem.

May 25 20:58:07.182 [info] update_consensus_bootstrap_attempt_downloads(): Launching microdesc bootstrap mirror networkstatus consensus download.
May 25 20:58:07.182 [notice] Starting with guard context "default"
May 25 20:58:07.182 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 0 in the USABLE_FILTERED set.
May 25 20:58:07.182 [info] sample_reachable_filtered_entry_guards():   (That isn't enough. Trying to expand the sample.)

Sounds good, I agree it's time to go get a fresh consensus.

May 25 20:58:07.182 [info] entry_guards_expand_sample(): Expanding the sample guard set. We have 3 guards in the sample, and 2220 eligible guards to extend it with.

Wait, what? Where did these 2220 eligible guards come from?

Did they get added to a list of potential eligible guards, when we were examining the old microdesc consensus -- the one that we then discarded because it was too old?

That theory sounds plausible:

$ grep "^s " cached-microdesc-consensus |grep Guard|grep V2Dir|wc -l
2224

That's bug number one.

So we proceed to add an eligible guard, since I have 0 eligible guards in my set:

May 25 20:58:07.182 [debug] compute_weighted_bandwidths(): Generated weighted bandwidths for rule weight as guard based on weights Wg=0.594700 Wm=0.594700 We=0.000000 Wd=0.000000 with total bw 0
May 25 20:58:07.182 [info] entry_guard_add_to_sample(): Adding $9285B22F7953D7874604EEE2B470609AD81C74E9~0x3d005 at 62.138.7.171 as to the entry guard sample set.
May 25 20:58:07.182 [debug] entry_guard_set_filtered_flags(): Updated sampled guard 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9): filtered=1; reachable_filtered=1.

In fact, we add 20 of them like this. And these are filtered=1, reachable_filtered=1, because we just picked them out of, I guess, a consensus that said they were Running.

After we add the 20, we hit:

May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 20 in the USABLE_FILTERED set.
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 20 guards to consider.)
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9).)
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 20 in the USABLE_FILTERED set.
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 19 guards to consider.)
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22).)
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards(): Trying to sample a reachable guard: We know of 20 in the USABLE_FILTERED set.
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [b], we have 18 guards to consider.)
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23).)

Ok, we've picked three. Then we see:

May 25 20:58:07.185 [info] entry_guards_update_primary(): Primary entry guards have changed. New primary guard list is: 
May 25 20:58:07.185 [info] entry_guards_update_primary():   1/3: 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9)
May 25 20:58:07.185 [info] entry_guards_update_primary():   2/3: colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22)
May 25 20:58:07.185 [info] entry_guards_update_primary():   3/3: Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23)

I had three perfectly fine primary guards earlier, but I had marked them down, so I guess we have three new ones.

But then we proceed to ignore those three new ones too!

May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (After filters [7], we have 17 guards to consider.)
May 25 20:58:07.185 [info] sample_reachable_filtered_entry_guards():   (Selected Yuman ($3265FB6279E142BFBB8BE08AA0582011D23ED2EF).)
May 25 20:58:07.185 [info] select_entry_guard_for_circuit(): No primary or confirmed guards available. Selected random guard Yuman ($3265FB6279E142BFBB8BE08AA0582011D23ED2EF) for circuit. Will try other guards before using this circuit.

That's bug number two.

It looks like it launches a consensus download attempt to Yuman smoothly:

May 25 20:58:07.185 [info] directory_send_command(): Downloading consensus from 37.187.177.2:9001 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+D586D1+E8A9C4+ED03BB+EFCBE7.z

And it launches a parallel fetch to an authority too, just like it's supposed to:

May 25 20:58:07.185 [info] directory_send_command(): Downloading consensus from 154.35.175.225:443 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+D586D1+E8A9C4+ED03BB+EFCBE7.z

Faravahar is way more responsive than whoever Yuman is, so by the time I get my one-hop circuit open to Yuman, I'm already partway through fetching the consensus from Faravahar:

May 25 20:58:07.824 [info] internal circ (length 1, last hop Yuman): $3265FB6279E142BFBB8BE08AA0582011D23ED2EF(open)
May 25 20:58:07.824 [debug] command_process_created_cell(): Moving to next skin.
May 25 20:58:07.824 [debug] circuit_send_next_onion_skin(): starting to send subsequent skin.
May 25 20:58:07.824 [info] circuit_send_next_onion_skin(): circuit built!
May 25 20:58:07.824 [info] connection_ap_handshake_attach_circuit(): Closing extra consensus fetch (to 37.187.177.2) since one is already downloading.

But then

May 25 20:58:07.825 [debug] conn_close_if_marked(): Cleaning up connection (fd -1).
May 25 20:58:07.825 [info] entry_guards_note_guard_failure(): Recorded failure for guard Yuman ($3265FB6279E142BFBB8BE08AA0582011D23ED2EF)

That's not very nice of us: the guard didn't actually fail; we just opted not to use that circuit because another circuit was faster.

That's bug number three.

Ok, after a bunch of lines, I get my microdesc consensus from Faravahar:

May 25 20:58:16.296 [debug] Received response from directory server '154.35.175.225:443': 200 "OK" (purpose: 14, response size: 629424, compression: 2)

And we start acting on it:

May 25 20:58:16.764 [info] launch_descriptor_downloads(): Launching 5 requests for 2262 microdescs, 500 at a time
May 25 20:58:16.766 [info] select_entry_guard_for_circuit(): Selected primary guard Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23) for circuit.

and we happily fetch the missing microdescriptors using our 3 (new) primary guards.

There is this interesting tidbit:

May 25 20:58:18.032 [debug] fetch_from_buf_http(): headerlen 197, bodylen 127544.
May 25 20:58:18.032 [debug] Received response from directory server '109.236.90.209:443': 200 "OK" (purpose: 19, response size: 127741, compression: 2)
May 25 20:58:18.032 [info] make_guard_confirmed(): Marking colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22) as a confirmed guard (index 3)
May 25 20:58:18.032 [info] entry_guards_update_primary(): Primary entry guards have changed. New primary guard list is: 
May 25 20:58:18.032 [info] entry_guards_update_primary():   1/3: colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22) (confirmed)
May 25 20:58:18.032 [info] entry_guards_update_primary():   2/3: 0x3d005 ($9285B22F7953D7874604EEE2B470609AD81C74E9)
May 25 20:58:18.032 [info] entry_guards_update_primary():   3/3: Tungurahua ($7CB15FA5CD985B9BBD6AC386C62C206A1F391C23)
May 25 20:58:18.032 [info] entry_guards_note_guard_success(): Recorded success for primary confirmed guard colosimo ($51939625169E2C7E0DC83D38BAE628BDE67E9A22)

We marked our guard confirmed because we used it to fetch directory info? I believe that's what we programmed it to do, but maybe our definition of "confirmed" should be that we attached a "real" (non-dir-fetch) stream to a "real" (3-hop-or-more) circuit.

(We can call that bug number four if we like, but it's really more like feature request number one.)

Eventually we get enough microdescs to convince us we should start making circuits:

May 25 20:58:18.453 [debug] compute_frac_paths_available(): f_guard: 0.86, f_mid: 0.86, f_exit: 0.88
May 25 20:58:18.453 [notice] Bootstrapped 80%: Connecting to the Tor network
May 25 20:58:18.454 [debug] update_guard_selection_choice(): Staying with guard context "default" (no change)
May 25 20:58:18.454 [info] sampled_guards_update_from_consensus(): Updating sampled guard status based on received consensus.
May 25 20:58:18.454 [debug] sampled_guards_update_from_consensus(): Sampled guard onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910) is still listed.
May 25 20:58:18.454 [debug] sampled_guards_update_from_consensus(): Sampled guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD) is still listed.

and we see those sampled_guards_update_from_consensus() lines for each of our 23 guards, followed by 23 of these lines, one for each guard:

May 25 20:58:18.454 [debug] entry_guard_set_filtered_flags(): Updated sampled guard onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910): filtered=1; reachable_filtered=1.
May 25 20:58:18.454 [debug] entry_guard_set_filtered_flags(): Updated sampled guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD): filtered=1; reachable_filtered=1.

And then, oh hey, we get our original primary guards back!

May 25 20:58:18.455 [info] entry_guards_update_primary(): Primary entry guards have changed. New primary guard list is: 
May 25 20:58:18.455 [info] entry_guards_update_primary():   1/3: yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD) (confirmed)
May 25 20:58:18.455 [info] entry_guards_update_primary():   2/3: onslaught ($BB119A5A4D5DA2BBB7B796ECC50E3C0F1D4FD910) (confirmed)
May 25 20:58:18.455 [info] entry_guards_update_primary():   3/3: IRejectTorFoundatnN ($62DA0256BBC28992D41CBAFB549FFD7C9B846A99) (confirmed)

And then we build a 3-hop circuit with yono1, as expected:

May 25 20:58:19.738 [info] exit circ (length 3, last hop digineo3): $7369A0E14C59B11D78357EC5AFE573A259264BBD(open) $0269165034C5586D6551CCCFA8940D2305948754(open) $B21211A1A2C68F2D9E57E3C7AEAF4F04AFC10E7F(open)
May 25 20:58:19.738 [info] entry_guards_note_guard_success(): Recorded success for primary confirmed guard yono1 ($7369A0E14C59B11D78357EC5AFE573A259264BBD)
May 25 20:58:19.738 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.

Child Tickets

Attachments (3)

tord-log.gz (142.7 KB) - added by arma 14 months ago.
the debug level log
cached-microdesc-consensus.gz (616.5 KB) - added by arma 14 months ago.
the cached-microdesc-consensus I started it with
state (2.0 KB) - added by arma 14 months ago.
the state file I started it with


Change History (29)

Changed 14 months ago by arma

Attachment: tord-log.gz added

the debug level log

Changed 14 months ago by arma

Attachment: cached-microdesc-consensus.gz added

the cached-microdesc-consensus I started it with

Changed 14 months ago by arma

Attachment: state added

the state file I started it with

comment:1 Changed 14 months ago by arma

Description: modified (diff)

comment:2 Changed 14 months ago by arma

Description: modified (diff)

comment:3 Changed 14 months ago by arma

Description: modified (diff)

comment:4 in reply to:  description ; Changed 14 months ago by teor

Description: modified (diff)

Replying to arma:

And it launches a parallel fetch to an authority too, just like it's supposed to:

May 25 20:58:07.185 [info] directory_send_command(): Downloading consensus from 154.35.175.225:443 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+D586D1+E8A9C4+ED03BB+EFCBE7.z

No, that's a bug: #20604.

There's no need for every client to contact an authority: they didn't in 0.2.8, and they all worked fine. But when the exponential backoff code was merged, there was no provision for a set initial delay, and we couldn't define the schedules so that we'd get a short delay without blowing out the remaining attempts. So we got this compromise.

comment:5 Changed 14 months ago by teor

(It could be argued, though, that it's better to have more load on the directory authorities rather than clients waiting for 10 seconds when the 3 fallbacks they choose are down. Someone else can work that one out!)

comment:6 in reply to:  4 Changed 14 months ago by teor

Replying to teor:

Replying to arma:

And it launches a parallel fetch to an authority too, just like it's supposed to:

May 25 20:58:07.185 [info] directory_send_command(): Downloading consensus from 154.35.175.225:443 using /tor/status-vote/current/consensus-microdesc/0232AF+14C131+23D15D+49015F+D586D1+E8A9C4+ED03BB+EFCBE7.z

No, that's a bug: #20604.

There's no need for every client to contact an authority: they didn't in 0.2.8, and they all worked fine. But when the exponential backoff code was merged, there was no provision for a set initial delay, and we couldn't define the schedules so that we'd get a short delay without blowing out the remaining attempts. So we got this compromise.

Sorry, I was wrong.

This happened because we never reset download statuses before we use them. See #17750.

comment:7 Changed 14 months ago by arma

Description: modified (diff)

(adding back the change to the ticket body that teor accidentally reverted above)

comment:8 Changed 14 months ago by asn

Cc: asn added

comment:9 Changed 14 months ago by s7r

Cc: s7r added

comment:10 Changed 13 months ago by nickm

Diagnosis: The "Loaded an expired consensus. Discarding" message is a red herring; that is talking about a cached unverified consensus, as evidenced by the next line, "Couldn't load unverified consensus microdesc networkstatus from cache."

What's actually going on here is that we _are_ accepting a slightly old consensus. The expiration check is:

  if (from_cache && !accept_obsolete &&
      c->valid_until < now-OLD_ROUTER_DESC_MAX_AGE) {
    log_info(LD_DIR, "Loaded an expired consensus. Discarding.");
    goto done;
  }

with the relevant constant defined as

#define OLD_ROUTER_DESC_MAX_AGE (60*60*24*5)

So, one bug here is that a really old unverified consensus got left around. We have a ticket for that from 2011 (#4187)!

One other bug is that we're expanding our guard sample even though the consensus is a few days out of date.

comment:11 Changed 13 months ago by nickm

Owner: set to nickm
Status: new → accepted

comment:12 Changed 13 months ago by nickm

Keywords: tor-client gor-guard 030-backport added
Points: 3
Status: accepted → needs_review

So, the main root bug here is fixed in my branch bug22400_030_01. (I'll let #4187 speak for itself.)

I have a corresponding spec change in branch bug22400_01 in torspec.

comment:13 Changed 13 months ago by nickm

Keywords: tor-guard added; gor-guard removed

comment:14 Changed 13 months ago by asn

Reviewer: asn

comment:15 Changed 13 months ago by asn

Patch looks reasonable for what it's trying to do, but I think we now have two very similar pieces of logic: the one we just added, and the one that already exists in sampled_guards_update_from_consensus():

  // It's important to use only a live consensus here; we don't want to
  // make changes based on anything expired or old.
  if (gs->type != GS_TYPE_BRIDGE) {
    networkstatus_t *ns = networkstatus_get_live_consensus(approx_time());

    if (! ns) {
      log_info(LD_GUARD, "No live consensus; can't update "
               "sampled entry guards.");
      return;
    } else {
      log_info(LD_GUARD, "Updating sampled guard status based on received "
               "consensus.");
    }
  }

Do we need both checks? If yes, can we functionify them?

comment:16 Changed 13 months ago by nickm

I think we do need both checks; without a live consensus, we don't want to expand the list _or_ change its members' status.

The checks are slightly different; for expand it was "reasonably live" and for change status it's "live". I think that maybe they should both become "live"; let's try that.

What do you think of the commit I just added to the branch?

comment:17 in reply to:  16 Changed 13 months ago by asn

Status: needs_review → merge_ready

Replying to nickm:

I think we do need both checks; without a live consensus, we don't want to expand the list _or_ change its members' status.

The checks are slightly different; for expand it was "reasonably live" and for change status it's "live". I think that maybe they should both become "live"; let's try that.

What do you think of the commit I just added to the branch?

Makes sense. I like it more this way, and there's less code duplication.

I tested it with an old consensus from collector, and tor correctly declined to add any guards to the sampled set before fetching a new consensus.

Marking this branch as merge_ready.

comment:18 Changed 13 months ago by nickm

Status: merge_ready → accepted

Thank you, asn! Merging to 0.3.0 and forward.

I'm putting this back in "accept" since I believe Roger has more to add.

comment:19 Changed 12 months ago by cypherpunks

Last edited 12 months ago by cypherpunks (previous) (diff)

comment:20 in reply to:  19 Changed 12 months ago by nickm

Replying to cypherpunks:

"reasonably live" (under 1 day old)

No. "Reasonably live" is about valid_after <= now <= valid_until. 3 hours range by default.

Are you quite sure? See networkstatus_valid_until_is_reasonably_live in networkstatus.c

comment:21 Changed 12 months ago by cypherpunks

Last edited 12 months ago by cypherpunks (previous) (diff)

comment:22 Changed 11 months ago by nickm

Cc: arma added
Milestone: Tor: 0.3.1.x-final → Tor: 0.3.3.x-final
Status: accepted → needs_information

Roger, did you want to say more here? If not we should just close this.

comment:23 Changed 8 months ago by catalyst

Cc: catalyst added
Keywords: bootstrap added

comment:24 Changed 8 months ago by catalyst

Keywords: s8-errors added
Sponsor: Sponsor8-can

comment:25 Changed 5 months ago by nickm

Keywords: 030-backport removed

Remove 030-backport from all open tickets that have it: 0.3.0 is now deprecated.

comment:26 Changed 5 months ago by asn

Resolution: fixed
Status: needs_information → closed

Seems like this is done now. Closing.
