Opened 4 weeks ago

Last modified 3 weeks ago

#31435 new project

Emulate different Fast/Guard cutoffs in historical consensuses

Reported by:  irl        Owned by:        metrics-team
Priority:     Medium     Milestone:
Component:    Metrics    Version:
Severity:     Normal     Keywords:
Cc:                      Actual Points:
Parent ID:               Points:
Reviewer:                Sponsor:

Description

There are many things that we can tune when producing votes and consensuses that will affect how clients use the network and might result in better load balancing.

We need tools for simulating what happens when we make those changes, using data (either historical or live) for the public Tor network.

We can consider the MVP for this complete once we have a tool that allows us to take server descriptors and simulate vote and consensus generation using alternate Fast/Guard cutoffs.
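
To make the tunables concrete, here is a minimal Java sketch of how such a tool might assign Fast and Guard flags from per-relay bandwidth and uptime summaries. The Relay and Cutoffs types, their field names, and the simplified threshold rules are illustrative assumptions only; a real implementation would take parsed server descriptors (#31434) as input and follow the exact dir-spec criteria.

{{{#!java
import java.util.List;

public class FlagCutoffSimulator {

  /** Hypothetical per-relay summary extracted from a server descriptor. */
  record Relay(String fingerprint, long bandwidthBytesPerSec,
               double weightedFractionalUptime) {}

  /** The cutoffs that a simulation run would vary between executions. */
  record Cutoffs(double fastPercentile,         // e.g. 0.875: top 7/8ths get Fast
                 long fastBandwidthGuarantee,   // e.g. 100 KB/s
                 long guardBandwidthGuarantee,  // e.g. 2 MB/s
                 double guardWfuCutoff) {}      // e.g. 0.98

  static void assignFlags(List<Relay> relays, Cutoffs c) {
    // Derive population-dependent thresholds from the sorted bandwidths.
    long[] bw = relays.stream()
        .mapToLong(Relay::bandwidthBytesPerSec).sorted().toArray();
    long fastCutoff = bw[(int) ((1.0 - c.fastPercentile()) * (bw.length - 1))];
    long medianBandwidth = bw[bw.length / 2];

    for (Relay r : relays) {
      boolean fast = r.bandwidthBytesPerSec() >= fastCutoff
          || r.bandwidthBytesPerSec() >= c.fastBandwidthGuarantee();
      boolean guard = fast
          && r.weightedFractionalUptime() >= c.guardWfuCutoff()
          && (r.bandwidthBytesPerSec() >= medianBandwidth
              || r.bandwidthBytesPerSec() >= c.guardBandwidthGuarantee());
      System.out.printf("%s Fast=%b Guard=%b%n", r.fingerprint(), fast, guard);
    }
  }
}
}}}

Running assignFlags() twice over the same descriptor set with different Cutoffs values would show how many relays gain or lose each flag under the alternate cutoffs.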

Extensions to this would be allowing alternative consensus methods or other tunables.

By reducing the cost of performing these simulations we can allow faster iteration on ideas that will hopefully allow for better user experience.

Child Tickets

Ticket   Status   Owner          Summary                                                         Component
#31434   new      metrics-team   Write ANTLR parsers for dir-spec descriptors and benchmark      Metrics/Library
#31436   new      metrics-team   Provide a tunable Java implementation of vote generation        Metrics/Analysis
#31437   new      metrics-team   Provide a tunable Java implementation of consensus generation   Metrics/Analysis
#31438   new      metrics-team   Provide a Java application to exclude "impossible" paths from OnionPerf results given an alternate consensus   Metrics/Analysis

Change History (1)

comment:1 Changed 3 weeks ago by karsten

Fun stuff! I have given this project some thought and came up with a couple of questions and suggestions:

  1. Is the scope of this project to run simulations using historical data only, or is the plan to also use modified consensuses as input for performing OnionPerf runs with changed Fast/Guard flag assignments? Both are conceivable; the latter is just more work.
  2. What's the purpose of generating modified vote documents in #31436? Is the idea to evaluate how parameter changes produce different flag assignments? If so, couldn't we start with a first version that outputs statistics on flag assignments (and similar characteristics), rather than writing code to generate votes? We could always add a vote exporter at a later point, but if the main result is the simulation, then we might not need the votes.
  3. Same as 2, but for consensuses. If the plan is to just run simulations (and not use consensuses as input for new OnionPerf measurements, cf. 1), then we might just keep relevant consensus information internally. Thinking about the MVP here.
  4. It seems to me that faster parsers (#31434) would be an optimization, but not strictly necessary for the MVP. We might want to put that on the nice-to-have list and try to deliver the MVP without it.
  5. The approach of excluding "impossible" paths from existing OnionPerf measurements was also my initial idea when thinking about this topic a while back. But maybe we can do something better here. Including or excluding a measurement only works if a path becomes impossible or remains possible; it doesn't reflect whether paths become more or less likely. For example, if half of the Guard flags go away and we look at a path including one of the remaining guards, that path would become more likely; and if a Stable flag goes away for a relay in the path, that path would become less likely. I wonder if we could take an approach where we resample OnionPerf measurements by selecting k paths using old/new path selection probabilities as weights (see the sketch after this list). We might want to consult somebody who has done such a thing before.

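To illustrate the resampling idea from point 5, here is a rough Java sketch that re-weights each OnionPerf measurement by the ratio of its path probability under the modified consensus to its probability under the original one, and then draws k measurements with replacement. The Measurement type and its probability fields are hypothetical placeholders; computing the actual path selection probabilities from a consensus is the hard part and is not shown here.

{{{#!java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class OnionPerfResampler {

  /** Hypothetical measurement: one OnionPerf result plus its path probabilities. */
  record Measurement(String pathId, double oldPathProbability,
                     double newPathProbability, double ttfbSeconds) {}

  /** Draw k measurements with replacement, weighted by new/old path probability. */
  static List<Measurement> resample(List<Measurement> measurements, int k, Random rng) {
    // Importance weight: how much more (or less) likely the path becomes.
    // A path that is impossible under the modified consensus gets weight 0.
    double[] weights = new double[measurements.size()];
    double total = 0.0;
    for (int i = 0; i < measurements.size(); i++) {
      Measurement m = measurements.get(i);
      weights[i] = m.oldPathProbability() > 0.0
          ? m.newPathProbability() / m.oldPathProbability()
          : 0.0;
      total += weights[i];
    }

    List<Measurement> sample = new ArrayList<>(k);
    if (total <= 0.0) {
      return sample;  // no measured path remains possible under the new consensus
    }
    for (int draw = 0; draw < k; draw++) {
      double u = rng.nextDouble() * total;
      double cumulative = 0.0;
      for (int i = 0; i < weights.length; i++) {
        cumulative += weights[i];
        if (u < cumulative) {
          sample.add(measurements.get(i));
          break;
        }
      }
    }
    return sample;
  }
}
}}}

The resampled timing distribution could then be compared between the original and modified cutoffs, which seems closer to what we want than a hard include/exclude decision per measurement.
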
Maybe we can discuss this project some more and write down a project plan that starts with the MVP and lists the possible extensions.
