Compare variance between simulation runs (Shadow and ExperimenTor)
In #4086 we noticed that a given simulation can have quite a bit of variance depending on the topology selected and on which paths clients pick.
How much variance exactly, and what can we do to quantify and reduce it?
The suggestion so far is to do k runs, and then either pool all the samples into a single CDF, or build one CDF per run and measure how much the CDFs differ from one run to the next. A rough sketch of both options follows.
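Here is a minimal sketch of both approaches, assuming each run produces a list of scalar samples (e.g., download times in seconds). The `runs` data and all function names are hypothetical; the run-to-run comparison uses the Kolmogorov-Smirnov (KS) distance, i.e., the maximum gap between two empirical CDFs.

```python
# Sketch: quantify variance across k simulation runs.
# `runs` is hypothetical toy data; in practice each inner list would
# hold the measurements collected from one Shadow/ExperimenTor run.

from bisect import bisect_right

def ecdf(samples):
    """Return a function computing the empirical CDF of `samples`."""
    xs = sorted(samples)
    n = len(xs)
    return lambda x: bisect_right(xs, x) / n

def ks_distance(a, b):
    """Max absolute gap between the empirical CDFs of two runs.

    Both ECDFs are step functions that only jump at sample points,
    so checking the union of sample points finds the supremum.
    """
    fa, fb = ecdf(a), ecdf(b)
    return max(abs(fa(x) - fb(x)) for x in sorted(set(a) | set(b)))

runs = [[1.2, 0.8, 2.5], [1.1, 0.9, 3.0], [1.4, 0.7, 2.2]]  # toy data

# Option 1: pool all k runs into one CDF.
pooled = ecdf([x for run in runs for x in run])

# Option 2: compare the per-run CDFs pairwise; a large worst-case
# KS distance indicates high run-to-run variance.
worst = max(ks_distance(runs[i], runs[j])
            for i in range(len(runs))
            for j in range(i + 1, len(runs)))
print(f"worst pairwise KS distance: {worst:.3f}")
```

If SciPy is available, `scipy.stats.ks_2samp` computes the same pairwise statistic along with a p-value, which would also give a significance test for whether two runs plausibly came from the same distribution.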