Tor Client Measurement

  • web server logs
  • browser feature usage
  • hidden service usage

Tor Relay Measurement

  • onion service usage
  • Tor abuse traffic

-- how can we notice, measure, and respond? (noting the difficulty of defining "abuse")
-- botnet measurement
-- ssh bruteforcing, comment spam, copyright infringement

What is acceptable to measure at exits?

  • latency
  • packet loss
  • relay load

-- link speed
-- CPU
-- bandwidth

  • path length/reuse

-- infer on relays

  • circuit construction latency
  • onion services

-- how many are being used?
-- churn?
-- upload vs download ratios?
-- popularity?
-- data up & down
-- front domains vs unique content?
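
Counts like these can't be published raw without risking deanonymization. One mitigation, roughly what Tor relays already do for their published hidden-service statistics, is to round each counter up to a bin boundary and add Laplace noise before reporting. A minimal Python sketch (bin size and epsilon are illustrative, not Tor's actual parameters):

```python
import math
import random

def obfuscate_count(count, bin_size=8, sensitivity=1, epsilon=0.3):
    """Round a raw counter up to the next bin boundary, then add
    Laplace noise, so the published value leaks little about any
    single client. Parameters are illustrative only."""
    binned = bin_size * math.ceil(count / bin_size)
    # Sample Laplace(0, sensitivity/epsilon) via the inverse CDF.
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
    return binned + round(noise)
```

With a very large epsilon the noise vanishes and only the binning remains, e.g. `obfuscate_count(13, bin_size=8, epsilon=1e9)` yields 16.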

  • exit cache misses vs cache hits
  • which resolvers are being used at exits
  • bridge statistics

-- traffic
-- pluggable transports
-- countries
-- BridgeDB
--- distributors
--- transports
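
For per-country bridge user counts, one way to avoid retaining raw IP addresses is to key-hash each address with an ephemeral secret and discard both the secret and the hash set at the end of the measurement period. An illustrative sketch, not Tor's actual implementation:

```python
import hashlib
import hmac
import os

class UniqueClientCounter:
    """Count distinct client addresses per country without retaining
    raw IPs: each address is keyed-hashed with an ephemeral secret
    that is discarded, along with the hash sets, when the period ends."""

    def __init__(self):
        self._key = os.urandom(32)   # ephemeral; never written to disk
        self._seen = {}              # country code -> set of digests

    def observe(self, ip, country):
        digest = hmac.new(self._key, ip.encode(), hashlib.sha256).digest()
        self._seen.setdefault(country, set()).add(digest)

    def report(self):
        """Emit aggregate counts, then forget the key and all digests."""
        counts = {cc: len(s) for cc, s in self._seen.items()}
        self._key, self._seen = None, {}
        return counts
```

Repeated sightings of the same address within a period collapse to one digest, and after `report()` nothing linkable to any client remains in memory.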


  • mobile vs desktop
  • client versions
  • via transport method

-- bridge, pluggable transport

  • idle vs active

Internal abuses

  • infinite length circuits
  • onion DOS
  • Tor over Tor


  • circuit creation failure rate
  • DNS latency
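
A sketch of what aggregating circuit-build outcomes might look like if only counters, not per-circuit records, are kept in memory (class and field names are hypothetical):

```python
from collections import Counter

class CircuitStats:
    """Aggregate circuit-build outcomes into a failure rate plus a
    per-reason breakdown, holding nothing but counters in memory."""

    def __init__(self):
        self.outcomes = Counter()

    def record(self, ok, reason=None):
        key = "ok" if ok else "fail:" + (reason or "unknown")
        self.outcomes[key] += 1

    def failure_rate(self):
        total = sum(self.outcomes.values())
        failures = total - self.outcomes["ok"]
        return failures / total if total else 0.0
```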

Client failures

  • end-to-end connection latency
  • setup latency
  • preemptive vs new 3-hop circuits
  • clients/browsers vs hidden services
  • cell queue time
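
Latency measurements like these are safer to retain as aggregate quantiles than as raw per-connection samples. A minimal nearest-rank sketch (the quantile points are illustrative):

```python
def percentiles(samples, points=(0.5, 0.9, 0.99)):
    """Summarize latency samples (e.g. circuit setup times in ms) as
    quantiles, using a simple nearest-rank rule, so no per-connection
    values need to be kept once the summary is computed."""
    if not samples:
        return {}
    ordered = sorted(samples)
    return {p: ordered[min(len(ordered) - 1, int(p * len(ordered)))]
            for p in points}
```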

Traceroute between relays

  • milliseconds
  • min/max hops
  • unique 1st, 2nd hops
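
A sketch of how traceroute results between relays could be condensed into the hop and path-diversity statistics listed above (the input format is an assumption: one list of hop IPs per destination relay):

```python
from collections import Counter

def summarize_traces(traces):
    """Given traceroute results as lists of hop IPs (one list per
    destination relay), report the hop-count range and how diverse
    the first and second hops are, a rough proxy for upstream path
    diversity."""
    hop_counts = [len(t) for t in traces]
    firsts = Counter(t[0] for t in traces if t)
    seconds = Counter(t[1] for t in traces if len(t) > 1)
    return {
        "min_hops": min(hop_counts),
        "max_hops": max(hop_counts),
        "unique_first_hops": len(firsts),
        "unique_second_hops": len(seconds),
    }
```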

--- How do we implement this?

Process: no defined project onboarding process

  1. external research
  2. distribution

Internal discussion, implementation

Tor umbrella membership categories: supported, distributed, encouraged, banned

  • client measurements
  • HS measurements
  • DPI
  • IP addresses, classification

How long is it OK to store things in memory?

--- Takeaways:

  • project onboarding
  • client/HS versions or features
  • browser metrics vs feature usage

Shutting things off when they are rare could enable an attack (small anonymity set)
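
One way to quantify that risk: the exponentiated Shannon entropy of a usage distribution gives the "effective" number of indistinguishable users, and a rarely used transport or client version gives its users a very small set. An illustrative sketch:

```python
import math

def effective_set_size(counts):
    """Exponentiated Shannon entropy of a usage distribution (e.g.
    users per transport or client version): the 'effective number'
    of indistinguishable users. A uniform split over n options gives
    n; a distribution dominated by one option approaches 1."""
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values() if c)
    return 2 ** entropy
```

For example, 50/50 use of two transports gives an effective set of 2.0, while a single option gives 1.0, so the few users of a rare option stand out sharply.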

Last modified on Sep 29, 2016, 2:02:15 AM