A lot of what we do in Tor can't be hidden behind an interface.

We have a bias towards thinking of the user as a dissident in
  an oppressive or Western regime.  We should think more about
  victims of domestic and partner abuse, and about other scenarios.

Categories outside our personal experience:

 * Network environment

 * Social context

 * Economic context

 * Technical knowledge
    * Able to research online
    * Able to understand docs
    * Troubleshooting experience
       * Like googling errors
       * Like retrying automatically
    * Understanding privacy; privacy awareness
       * "Is it safe to google this?"

 * Language and jargon

Sometimes users are trained by experience to treat software in certain ways
that don't work for ours.  ("Unconscious user habits")

Relationship with technology -- how do you feel when Tor doesn't work?

Sometimes people want to optimize a number even if they are told that it
doesn't matter.

     (Psychological validation -- does your bandwidth number make you happy to run a relay?)

     (Does the onion path look nice and private to you?)

torcheck served some of the above function for users.
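As a concrete illustration of that validation function, here is a minimal sketch of turning a torcheck-style reply into the kind of reassuring message users look for.  The `{"IsTor": ..., "IP": ...}` JSON shape and the exact wording are assumptions modeled on the public check.torproject.org page, not a definitive implementation:

```python
import json

def torcheck_message(response_body: str) -> str:
    """Turn a check.torproject.org-style JSON reply into a user-facing
    message.  The {"IsTor": bool, "IP": str} shape is an assumption
    based on the service's /api/ip endpoint."""
    reply = json.loads(response_body)
    ip = reply.get("IP", "unknown")
    if reply.get("IsTor"):
        return ("Congratulations. This browser is configured to use Tor. "
                "Your IP address appears to be: %s" % ip)
    return ("Sorry. You are not using Tor. "
            "Your IP address appears to be: %s" % ip)

# Example with a canned reply (no network access needed):
print(torcheck_message('{"IsTor": true, "IP": "203.0.113.7"}'))
```

The point of the sketch is the emotional framing ("Congratulations"), not the plumbing: the message is the psychological validation discussed above.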

For relay ops we should do better.  (Teor assigns self an action item here)

     For technical users -- lack of transparency.  Example: how do we
     communicate cookie lifetimes in Tor Browser?  One possibility: draw
     different "containers" differently.

Social context:

   - How many people do you share your device with?
   - Who else has a login on your device?
   - Who sees you use your computer?
   - How do your peers perceive Tor?
   - "Everybody here uses Windows/Android/Chrome"
   - Am I safe with this app on my device?
   - Can people see my screen when I use my device?

Economic context:

   - How old is your device?
   - What brand is your device?
   - Do you own your device?
   - How many people do you share it with?
   - Is bandwidth a problem for you?


What to do with this information:

* Make common user stories

* Can the UX/Community team help?

   * Teor volunteers: check in with Antonela


Can we generalize principles about how much dysfunction we tolerate in our app?

Can we arrive at general design principles here?

Can we extend our metrics and tests to cover scenarios in which we don't usually use Tor ourselves?
   * one example: CI that fails if bridge code fails
   * Testing with bad network conditions
   * CI that fails if startup with old state fails (make an archive of old states)
   * CI that measures perceived latency
   * Upgrade testing
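A building block shared by several of these CI ideas (old-state startup, bad-network runs, upgrade testing) is deciding pass/fail from tor's own "Bootstrapped N%" notice lines.  A minimal sketch, assuming that standard log format; a real harness would also launch tor and enforce a wall-clock deadline:

```python
import re

BOOTSTRAP_RE = re.compile(r"Bootstrapped (\d+)%")

def bootstrap_progress(log_line):
    """Extract the bootstrap percentage from a tor notice-level log line,
    or return None if the line is not a bootstrap message."""
    m = BOOTSTRAP_RE.search(log_line)
    return int(m.group(1)) if m else None

def ci_passed(log_lines, required=100):
    """A CI run passes if any line reports at least `required` percent
    bootstrapped.  (Deadline enforcement is left to the harness.)"""
    return any((p := bootstrap_progress(line)) is not None and p >= required
               for line in log_lines)

# Example against canned log output:
logs = [
    "May 19 12:00:01.000 [notice] Bootstrapped 0%: Starting",
    "May 19 12:00:04.000 [notice] Bootstrapped 45%: Asking for relay descriptors",
    "May 19 12:00:09.000 [notice] Bootstrapped 100% (done): Done",
]
print(ci_passed(logs))  # True
```

The same check works whether tor was started with a fresh data directory, an archived old state file, or under a degraded-network shim, which is what makes it a plausible common core for the tests listed above.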

Proposal: Open a ticket (or add a comment) when Tor doesn't Just Work for you.
   * when you need to set an option
   * when you need to delete your state file
   * when you need to select new guards
   * when you hit "new circuit"

Proposal: Open tickets (or add a comment) based on all user/relay op requests.
   * after every one of Maggie's reports
   * after handling a user/operator request

Collect postmortems on mistakes we've made with messaging on user/operator safety in the past.
   * example: fallback directory automation debate

Last modified on Jan 30, 2019, 6:53:08 AM