Opened 9 years ago

Closed 8 years ago

#2686 closed task (fixed)

Verify that non-standard Torperfs on ferrinii are working correctly

Reported by: karsten
Owned by: karsten
Priority: High
Milestone:
Component: Metrics/Analysis
Version:
Severity:
Keywords:
Cc:
Actual Points:
Parent ID: #2769
Points: 2
Reviewer:
Sponsor:

Description

This is a follow-up ticket to #2545. We should keep track of the 15 Torperfs on ferrinii to detect whether our recent changes to Torperf broke something.
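A minimal sketch of the kind of automated check this ticket asks for, assuming each Torperf run writes its results into a .data file under a common directory (the directory path, file suffix, and staleness threshold below are hypothetical, not the actual setup on ferrinii):

    #!/usr/bin/env python
    # Hypothetical freshness check: warn about Torperf results files that
    # have not been updated recently.  DATA_DIR and the .data suffix are
    # assumptions, not the actual layout on ferrinii.
    import os
    import sys
    import time

    DATA_DIR = "/srv/torperf"          # hypothetical location of the runs
    MAX_AGE_SECONDS = 6 * 60 * 60      # warn after six hours without new results

    stale = []
    for root, _dirs, files in os.walk(DATA_DIR):
        for name in files:
            if not name.endswith(".data"):
                continue
            path = os.path.join(root, name)
            age = time.time() - os.path.getmtime(path)
            if age > MAX_AGE_SECONDS:
                stale.append((path, int(age // 3600)))

    for path, hours in stale:
        print("stale: %s (no new results for ~%d hours)" % (path, hours))
    sys.exit(1 if stale else 0)

Run from cron, a non-zero exit status would flag runs that stopped producing results.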

Child Tickets

Change History (14)

comment:1 Changed 9 years ago by karsten

Status: new → assigned

I'll take care of this.

comment:2 Changed 9 years ago by karsten

Component: Metrics → Torperf
Points: 2
Priority: normal → major

Setting priority to major, because we're changing Torperf a lot these days. We need to ensure regularly that our #1919 runs don't break.

comment:3 Changed 9 years ago by mikeperry

Parent ID: #2769

comment:4 Changed 8 years ago by karsten

Summary: Verify that #1919 Torperfs are working correctly → Verify that non-standard Torperfs on ferrinii are working correctly

I'm changing the ticket summary to include both the different custom guard node selections and the circuit build timeout variations.

The non-standard Torperfs have looked good since our last change on April 11 at around 11:30 UTC. How many days of data do we need? Is one week sufficient?

The next step, possibly overlapping with this one, will be to make useful graphs of the data we have.

comment:5 Changed 8 years ago by karsten

We now have one week of data that looks sane (April 11–18). I already started making graphs from these data, and updating those graphs for a few more days of data is a PITA, so I'm not going to use whatever data we collect from now on.

Mike, Roger, unless there's a reason for keeping the

  • Torperfs with custom guard node selections and the
  • Torperfs with circuit build timeout cutoffs != 80

running, I'm going to stop them tomorrow. These are 36 Torperfs in total. We can always start new Torperfs in whatever configurations we deem useful.

I'll make the data available on the metrics website. Of course, I'll keep the three Torperfs that we use on the metrics website running (default guard nodes and CBT cutoff of 80).

comment:6 Changed 8 years ago by mikeperry

http://freehaven.net/~karsten/volatile/cbt-cutoff-2011-04-18.png

I think these results strongly suggest that 70 is a better cutoff than 80, with diminishing returns after that. If we are not comfortable with this cutoff, we should redo this experiment with 60, 70, 75, 80, and 99 and see if 75 still provides a significant benefit. But we need to have all of these runs going concurrently, especially since #2704 is messing up the network.
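For reference, a CDF like the one linked above can be produced with a short script. This is only a sketch: it assumes the raw Torperf results have already been reduced to a two-column CSV with the header cbt_cutoff,completion_seconds, a hypothetical intermediate format rather than the raw .data files.

    #!/usr/bin/env python
    # Sketch: plot a completion-time CDF per CBT cutoff.  The input format
    # (cbt_cutoff,completion_seconds CSV) is an assumed preprocessing step.
    import csv
    import sys
    from collections import defaultdict

    import matplotlib
    matplotlib.use("Agg")  # render to a file; no display needed
    import matplotlib.pyplot as plt

    times = defaultdict(list)
    with open(sys.argv[1]) as f:
        for row in csv.DictReader(f):
            times[row["cbt_cutoff"]].append(float(row["completion_seconds"]))

    for cutoff in sorted(times, key=int):
        values = sorted(times[cutoff])
        fractions = [float(i + 1) / len(values) for i in range(len(values))]
        plt.plot(values, fractions, label="CBT cutoff %s" % cutoff)

    plt.xlabel("download completion time (seconds)")
    plt.ylabel("cumulative fraction of requests")
    plt.legend(loc="lower right")
    plt.savefig("cbt-cutoff-cdf.png")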

comment:7 in reply to:  5 Changed 8 years ago by arma

Replying to karsten:

Mike, Roger, unless there's a reason for keeping the

I haven't been following this ticket well enough to know. So if Mike is fine with the action, go for it.

comment:8 Changed 8 years ago by karsten

http://freehaven.net/~karsten/volatile/cbt-cutoff-2011-04-20.png

This graph shows the influence of a) guard nodes and b) CBT cutoffs on worst-case performance. Using a CBT cutoff < 99 is probably useful, but it seems that the influence of slow guards is even larger. Maybe we shouldn't focus so much on changing the cutoff to 75, but rather think harder about excluding slow guards.
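The worst-case comparison can also be summarized numerically rather than visually. A sketch using the same hypothetical cbt_cutoff,completion_seconds CSV as above, printing median, 90th, and 95th percentile completion times per cutoff:

    #!/usr/bin/env python
    # Sketch: per-cutoff completion-time percentiles from the assumed
    # cbt_cutoff,completion_seconds CSV.
    import csv
    import sys
    from collections import defaultdict

    def quantile(sorted_values, q):
        # Nearest-rank quantile; good enough for a rough comparison.
        index = min(len(sorted_values) - 1, int(q * len(sorted_values)))
        return sorted_values[index]

    times = defaultdict(list)
    with open(sys.argv[1]) as f:
        for row in csv.DictReader(f):
            times[row["cbt_cutoff"]].append(float(row["completion_seconds"]))

    for cutoff in sorted(times, key=int):
        values = sorted(times[cutoff])
        print("cutoff %3s  n=%5d  median=%6.1fs  p90=%6.1fs  p95=%6.1fs" % (
            cutoff, len(values), quantile(values, 0.5),
            quantile(values, 0.9), quantile(values, 0.95)))

The same grouping could be done per guard-selection variant to compare the two effects directly.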

comment:9 Changed 8 years ago by mikeperry

What I see in that graph montage (very nice btw :) looks like 70 is a pretty clear improvement in at least two cases: the 50kb default guard (every guard), and the fastest 1MB guard. It also looks like 70 is a marginal win on default 1MB, eliminating a bit of the tail in the last 5% or so of completion times.

I still think we should do it again, this time including 75 :)

I think the guard thing is a separate issue; we just need to crunch some existing data for a while to gain more insight.

comment:10 Changed 8 years ago by karsten

Just FYI, the current Torperfs with CBTs 50, 60, 70, 80, 99 are still running. Should I stop them?

I can reconfigure the Torperfs and restart them tomorrow. If you want me to do that, please let me know which configurations we want.

comment:11 Changed 8 years ago by mikeperry

Hrmm. I still think we should take a tarball of everything to keep as a backup just in case, but only kill 50 and turn it into a 75.

comment:12 Changed 8 years ago by karsten

The raw data of this experiment is available here. Note that some of the Torperf runs were already running before we started the experiment. Also, we need to cut off the last week or so, when you started changing consensus parameters. That leaves us with an interval from April 11, 12:00 UTC to April 21-ish. But we can still clean up the data if we decide we want to use them.
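The cut-off step itself could be as simple as filtering result lines by their start timestamp. A minimal sketch, assuming each results line begins with a Unix timestamp in seconds as its first whitespace-separated field (the interval end and the file names in the usage line are placeholders):

    #!/usr/bin/env python
    # Sketch: keep only result lines whose start timestamp falls inside the
    # experiment interval.  The exact end of the interval is a placeholder.
    import calendar
    import sys
    import time

    START = calendar.timegm(time.strptime("2011-04-11 12:00", "%Y-%m-%d %H:%M"))
    END = calendar.timegm(time.strptime("2011-04-21 00:00", "%Y-%m-%d %H:%M"))

    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        try:
            started = float(fields[0])
        except ValueError:
            continue
        if START <= started < END:
            sys.stdout.write(line)

Usage would be something like: python cut_interval.py < some-run.data > some-run-cut.data (file names hypothetical).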

I also stopped the Torperf runs, changed the CBT-50 runs to CBT-75, and restarted them. Looks good so far. I'll look after them once more tomorrow. Please also have a look in the next few days.

comment:13 Changed 8 years ago by karsten

Component: Torperf → Analysis

comment:14 Changed 8 years ago by karsten

Resolution: fixed
Status: assigned → closed

The non-standard Torperfs on ferrinii haven't been running for months. The data is available here. Closing.
