Opened 9 months ago

Last modified 5 months ago

#28424 new defect

Refactor hs_service_callback() to no longer need to run once per second?

Reported by: nickm
Owned by:
Priority: Medium
Milestone: Tor: unspecified
Component: Core Tor/Tor
Version:
Severity: Normal
Keywords: 040-deferred-201915
Cc: dgoulet, nickm, grote, asn
Actual Points:
Parent ID:
Points:
Reviewer:
Sponsor:

Description (last modified by nickm)

This is maybe a lot of work, but it would help if we could make all the once-a-second portions of this function into things that we can turn off if we're dormant? I'm hoping for a feature (optional?) where a dormant onion service does the minimum work possible to keep itself online and wait for introductions.

This is a reach task; if we do the rest of the parent ticket, we'll be fine.

Child Tickets

Attachments (4)

ChurnSim.java (1.9 KB) - added by akwizgran 9 months ago.
ChurnSim.2.java (3.5 KB) - added by akwizgran 9 months ago.
Updated simulation code
hsdir-reachability-mean.png (4.9 KB) - added by akwizgran 9 months ago.
Mean lookup success rate
hsdir-reachability-first-percentile.png (5.5 KB) - added by akwizgran 9 months ago.
First percentile lookup success rate


Change History (14)

comment:1 Changed 9 months ago by nickm

Description: modified (diff)

comment:2 Changed 9 months ago by dgoulet

I've thought a bit about this. The big problem is the hashring, which requires a fresh consensus in order to compute each node's HSDir index using the SRV value. The SRV doesn't change often (every 24h), but relay churn also affects the hashring, so we recompute it as often as we can.

The problem doesn't lie in computing those indices, but rather that if we are in a "dormant" state we don't have a fresh consensus, and so a change in the SRV and/or churn will affect reachability, up to the point of _not_ being reachable at all (because the SRV changed).

We could refactor everything so that, instead of being called every second, the service registers a mainloop event for the time at which it next needs to be woken up (for instance, the descriptor rotation time).

But without a fresh consensus (not participating in the network), we'll run into reachability issues pretty fast.
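
To illustrate the event-driven pattern described above (this is only the general pattern, sketched in Java; it is not Tor's actual mainloop API): instead of a callback that polls once per second, schedule a one-shot wakeup for the next time something actually has to happen, and reschedule from inside the callback.

{{{
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the pattern only: wake up when the next piece of work is due,
// rather than polling once per second.
public class WakeupSketch {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void scheduleNextWakeup(long secondsUntilDue) {
        scheduler.schedule(() -> {
            rotateDescriptor();
            // Compute when the next rotation is due and go back to sleep
            // until then.
            scheduleNextWakeup(secondsUntilNextRotation());
        }, secondsUntilDue, TimeUnit.SECONDS);
    }

    void rotateDescriptor() { /* upload fresh descriptors here */ }

    long secondsUntilNextRotation() { return 3600; /* placeholder */ }
}
}}}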

comment:3 Changed 9 months ago by nickm

Parent ID: #28335

comment:4 Changed 9 months ago by akwizgran

If I'm reading the spec right, there are hsdir_n_replicas = 2 replicas. For each replica, the HS uploads the descriptor to hsdir_spread_store = 4 HSDirs at consecutive positions in the hashring. Each client tries to fetch the descriptor from one of the first hsdir_spread_fetch = 3 positions, chosen at random.

A lookup fails when the position chosen by the client is occupied by an HSDir that didn't receive the descriptor, for both replicas. So failure is possible when any of the first 3 positions is occupied by an HSDir that didn't receive the descriptor, for both replicas. Churn can bring this about in two ways: by removing the HSDirs that received the descriptor, and by adding new HSDirs that push the HSDirs that received the descriptor out of the first 3 positions.

How long do we expect it to take before churn makes a lookup failure possible? We could measure this with historical consensus data, but let's try a quick simulation first.

Figure 9 of this paper shows the fraction of relays with the HSDir flag that join or leave between consecutive consensuses. I'd estimate 0.01 by eye, so let's conservatively call it 0.02. The churn rate counts both joins and leaves, so a churn rate of 0.02 means each HSDir from the previous consensus has left with probability 0.01, and new HSDirs have joined at the same rate.

There are about 3,000 relays with the HSDir flag.

My code (attached) simulates each replica by creating 3,000 HSDirs, each at a random position on the hashring, and remembering the first 4 HSDirs on the hashring - these are the ones that receive copies of the descriptor. Churn is simulated an hour at a time. In each hour, each HSDir is removed with probability 0.01 and replaced with a new HSDir at a random position. Then the code checks whether the first 3 HSDirs on the hashring are all ones that received copies of the descriptor. If not, a lookup on this replica could fail.

For simplicity I've simulated the two replicas independently - in reality they'd be based on different permutations of the same HSDirs, but independence seems like a reasonable approximation. The simulation runs until lookups on both replicas could fail.
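
For illustration, here's a minimal sketch of the simulation just described. The attached ChurnSim.java is the authoritative code; the class and method names below are my own, and the constants mirror the parameters discussed above.

{{{
import java.util.Random;
import java.util.TreeMap;

// Illustrative re-implementation of the churn simulation described above;
// not the attached ChurnSim.java.
public class ChurnSketch {

    static final int HSDIR_COUNT = 3000;    // relays with the HSDir flag
    static final int SPREAD_STORE = 4;      // hsdir_spread_store
    static final int SPREAD_FETCH = 3;      // hsdir_spread_fetch
    static final double LEAVE_PROB = 0.01;  // per-HSDir, per-hour leave probability
    static final int RUNS = 10_000;

    public static void main(String[] args) {
        Random random = new Random();
        long totalHours = 0;
        for (int run = 0; run < RUNS; run++) {
            totalHours += hoursUntilBothReplicasCanFail(random);
        }
        System.out.println("Mean hours until both replicas could fail: "
                + (double) totalHours / RUNS);
    }

    // Churns two independently simulated replicas an hour at a time and
    // returns the number of hours until a lookup could fail on both.
    static long hoursUntilBothReplicasCanFail(Random random) {
        Replica a = new Replica(random), b = new Replica(random);
        long hours = 0;
        while (!(a.failurePossible() && b.failurePossible())) {
            a.churnOneHour(random);
            b.churnOneHour(random);
            hours++;
        }
        return hours;
    }

    static class Replica {
        // Hashring position -> whether that HSDir holds the descriptor.
        // The descriptor's own position is taken to be 0.0, so the "first"
        // HSDirs are the ones with the smallest positions.
        final TreeMap<Double, Boolean> ring = new TreeMap<>();

        Replica(Random random) {
            for (int i = 0; i < HSDIR_COUNT; i++) ring.put(random.nextDouble(), false);
            // The first SPREAD_STORE HSDirs receive copies of the descriptor.
            int stored = 0;
            for (Double pos : ring.navigableKeySet()) {
                if (stored == SPREAD_STORE) break;
                ring.put(pos, true);
                stored++;
            }
        }

        // One hour of churn: each HSDir leaves with probability LEAVE_PROB
        // and is replaced by a new HSDir (without the descriptor) at a
        // random position.
        void churnOneHour(Random random) {
            int removed = 0;
            for (Double pos : ring.keySet().toArray(new Double[0])) {
                if (random.nextDouble() < LEAVE_PROB) {
                    ring.remove(pos);
                    removed++;
                }
            }
            for (int i = 0; i < removed; i++) ring.put(random.nextDouble(), false);
        }

        // A lookup on this replica could fail if any of the first
        // SPREAD_FETCH positions lacks the descriptor.
        boolean failurePossible() {
            int checked = 0;
            for (Double pos : ring.navigableKeySet()) {
                if (checked == SPREAD_FETCH) break;
                if (!ring.get(pos)) return true;
                checked++;
            }
            return false;
        }
    }
}
}}}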

The mean time until both replicas could fail is 37 hours, averaged over 10,000 runs.

If this is roughly accurate then we should be able to keep the HS reachable by waking Tor from its dormant state every few hours to fetch a fresh consensus and upload new copies of the descriptor if necessary.

Perhaps I should extend the simulation to consider the probability of lookup failure as a function of time, rather than the mean time until failure becomes possible.

Changed 9 months ago by akwizgran

Attachment: ChurnSim.java added

comment:5 Changed 9 months ago by akwizgran

I updated the simulation to consider the probability of lookup failure as a function of time, and to use the same HSDirs in both hashrings. The new code is attached, along with graphs showing the mean and first percentile of the lookup success rate over 1000 independent runs.

The first percentile gives an idea of the experience of an unlucky hidden service. The success rate for an unlucky service falls below 90% after 5 hours, showing that the assessment in my last comment, based on mean time until possible failure, was too optimistic. Services do need to republish their descriptors frequently or there's a significant risk of lookup failure.
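
For illustration, a rough sketch of how the per-hour success rate and its first percentile might be computed, extending the Replica class from the sketch under comment 4 (still with the independent-replica simplification). The attached ChurnSim.2.java is the authoritative code; missingInFetchRange(), lookupSuccessProbability() and firstPercentile() are hypothetical helpers of my own.

{{{
// Hypothetical helper on Replica: how many of the first SPREAD_FETCH
// positions are occupied by HSDirs that lack the descriptor.
int missingInFetchRange() {
    int checked = 0, missing = 0;
    for (Double pos : ring.navigableKeySet()) {
        if (checked == SPREAD_FETCH) break;
        if (!ring.get(pos)) missing++;
        checked++;
    }
    return missing;
}

// Success probability for a single lookup, assuming the client picks one of
// the first SPREAD_FETCH positions uniformly at random on each replica and
// fails only if both choices miss the descriptor.
static double lookupSuccessProbability(Replica a, Replica b) {
    double missA = a.missingInFetchRange() / (double) SPREAD_FETCH;
    double missB = b.missingInFetchRange() / (double) SPREAD_FETCH;
    return 1.0 - missA * missB;
}

// Rough first percentile over independent runs for a given hour: sort the
// per-run success rates and take the value 1% of the way up.
static double firstPercentile(double[] successRates) {
    double[] sorted = successRates.clone();
    java.util.Arrays.sort(sorted);
    return sorted[sorted.length / 100];
}
}}}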

Changed 9 months ago by akwizgran

Attachment: ChurnSim.2.java added

Updated simulation code

Changed 9 months ago by akwizgran

Attachment: hsdir-reachability-mean.png added

Mean lookup success rate

Changed 9 months ago by akwizgran

Attachment: hsdir-reachability-first-percentile.png added

First percentile lookup success rate

comment:6 Changed 9 months ago by akwizgran

At the risk of going off-topic, I just wanted to mention one other thing I noticed when running the simulations. The lookup success rate can be improved by changing the parameters without storing more copies of descriptors.

Reducing hsdir_spread_fetch, without changing the other parameters, improves the mean lookup success rate. Intuitively, if a replica is stored on dirs d_1...d_n at positions 1...n, churn is less likely to displace d_i from its position than d_{i+1} because there's less hashring distance into which churn could insert a new dir. Thus a client is more likely to find the descriptor at position i than position i+1 after churn. Reducing hsdir_spread_fetch concentrates more of the client's lookups on positions that are more likely to hold the descriptor.

However, while the mean lookup success rate is improved, the effect on the first percentile is more nuanced. The success rate remains at 100% for longer, but then falls faster. I think this is because each lookup chooses from a smaller set of positions to query, so there's a smaller set of possible outcomes. When hsdir_spread_fetch == 1, all lookups query the same positions.

This is relevant in the real world if clients can retry failed lookups. We might accept a lower mean success rate if clients can have another chance at success by retrying.

Another interesting possibility is to trade a larger hsdir_n_replicas for a smaller hsdir_spread_store. In other words, use more replicas and store fewer copies at each replica. This improves the lookup success rate (both mean and first percentile), without increasing the number of copies of the descriptor that are stored.
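
For illustration, here is a hypothetical way to compare such configurations with the same simulation machinery, assuming the Replica class from the earlier sketch were generalized to take spreadStore as a constructor argument and spreadFetch as a method argument; none of this is the attached code.

{{{
// Mean lookup success rate after a fixed number of hours of churn, for a
// configuration storing nReplicas x spreadStore copies in total.
static double meanSuccessAfterHours(int nReplicas, int spreadStore,
        int spreadFetch, int hours, int runs, Random random) {
    double total = 0;
    for (int run = 0; run < runs; run++) {
        Replica[] replicas = new Replica[nReplicas];
        for (int i = 0; i < nReplicas; i++)
            replicas[i] = new Replica(spreadStore, random);
        for (int h = 0; h < hours; h++)
            for (Replica r : replicas) r.churnOneHour(random);
        // A lookup fails only if the chosen position misses the descriptor
        // on every replica.
        double failProb = 1.0;
        for (Replica r : replicas)
            failProb *= r.missingInFetchRange(spreadFetch) / (double) spreadFetch;
        total += 1.0 - failProb;
    }
    return total / runs;
}

// Example: 2 replicas x 4 copies vs. 4 replicas x 2 copies -- 8 stored
// copies either way.
// double base = meanSuccessAfterHours(2, 4, 3, 24, 1000, new Random());
// double alt  = meanSuccessAfterHours(4, 2, 3, 24, 1000, new Random());
}}}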

However, I'm not sure how increasing hsdir_n_replicas would affect query bandwidth. Do clients try replicas in parallel or series? If parallel, increasing hsdir_n_replicas would increase bandwidth for all queries. If series, the worst-case bandwidth would increase (a query would visit more replicas before failing).

comment:7 Changed 9 months ago by grote

Cc: grote added

comment:8 Changed 7 months ago by gaba

Sponsor: Sponsor8-can

comment:9 Changed 7 months ago by nickm

Keywords: 040-deferred-201915 added
Milestone: Tor: 0.4.0.x-final → Tor: unspecified

Deferring some tickets from 0.4.0 without proposing them for later. Please tag with 041-proposed if you want to do them.

comment:10 Changed 5 months ago by asn

Cc: asn added