Opened 6 years ago

Last modified 3 years ago

#14310 reopened task

Standard on anonymized browser behaviour

Reported by: cypherpunks Owned by: tbb-team
Priority: Medium Milestone:
Component: Applications/Tor Browser Version:
Severity: Normal Keywords:
Cc: Actual Points:
Parent ID: Points:
Reviewer: Sponsor:


Hello. There is no doubt that browsers are fingerprintable. In my study (ticket #13400; the article is in Russian, an English version will follow shortly), I and other commenters fingerprinted different browsers on different systems (including TBB on Windows 8, Windows XP, some Linux distros, Tails, and Whonix) with different fonts, and the fingerprints differed across systems.

I think that we need a standard of browser behaviour which must

  • define a set of locally or remotely fingerprintable features (including values returned by different APIs, values that can be measured (for example, performance), behavioural patterns such as the set of headers, etc.);
  • define their values for any anonymized browser that complies with the standard.

Also we need a standard on user fingerprinting mitigation, so that it does not conflict with browser fingerprinting prevention.

Child Tickets

Change History (5)

comment:1 Changed 6 years ago by gk

Resolution: implemented
Status: new → closed

You are in luck, as there is already a design document that should standardize these things.

And for #13400, see #13313 as a possible solution we are working on.

comment:2 Changed 6 years ago by cypherpunks

Resolution: implemented
Status: closed → reopened

It is not a standard; it is a set of very fuzzy recommendations. I have already read #13313; it is strongly related to this ticket. As I have said, the standard must define how *exactly* the browser must behave.
For example: "character 'A' of font 'serif' MUST have width ... and height ...", "an anonymized browser MUST return headers ... in the given order", "the following fingerprinter must return ... when executed in an anonymized browser". We need a spec which can be used by the authors of anonymized browsers and which can be converted into a fully automated test. If a browser complies with the standard, it should be hard to derive, from the information observable by an attacker, whether we use anonymized browser A or anonymized browser B, and whether we use anonymized browser A in environment A or in environment B.
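To illustrate what "convertible into a fully automated test" could mean, here is a minimal Python sketch of a conformance check. All spec values below (the header order, the glyph metrics) are invented placeholders, not actual Tor Browser requirements.

```python
# Hypothetical conformance spec: values are placeholders for illustration only.
SPEC = {
    "header_order": ["Host", "User-Agent", "Accept", "Accept-Language",
                     "Accept-Encoding", "Connection"],
    # (font, character) -> (width_px, height_px)
    "font_metrics": {("serif", "A"): (8, 13)},
}

def check_headers(observed_headers):
    """The spec could mandate an exact header name order."""
    names = [name for name, _value in observed_headers]
    return names == SPEC["header_order"]

def check_font_metric(font, char, width, height):
    """The spec could pin exact glyph metrics per font/character."""
    return SPEC["font_metrics"].get((font, char)) == (width, height)
```

A test harness could run such checks against every release of every anonymized browser and flag any deviation.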

Last edited 6 years ago by cypherpunks (previous) (diff)

comment:3 in reply to:  2 Changed 6 years ago by gk

Keywords: browser-fingerprinting standard removed
Priority: critical → normal
Type: defect → task

Replying to cypherpunks:

It is not a standard; it is a set of very fuzzy recommendations. I have already read #13313; it is strongly related to this ticket. As I have said, the standard must define how *exactly* the browser must behave.

What if there is more than one way to reach a goal, or what if it is not yet clear which way is the best to reach it? See, e.g., the linked article, which takes a different approach to fingerprinting than we do.

comment:4 Changed 6 years ago by cypherpunks

Thank you, a very interesting article (especially the list of existing fingerprinters, which actually use the same technique that I used in my study), but the mitigation approach described there is not very useful.

I have already thought about randomization and found it useless, because an adversary can easily defeat it by repeating measurements. They propose creating a cache of random values, but the cache can be bypassed. For example, in the common canvas fingerprinting technique, where we hash the image, the browser can add random noise to the image and cache it. What can we do? We can generate a different image and use only part of it. Because the image is different, different random noise will be generated, which most likely won't overlap the previously generated noise, so it can be discarded by repeating.
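The "discard the noise by repeating" idea above can be sketched as a simple averaging attack. This is a toy model, not actual fingerprinter code: each freshly generated image gets independent random noise, so averaging many measurements converges on the stable underlying value.

```python
import random

def render_pixel(true_value, noise_scale=2.0):
    # Toy model: the browser adds fresh random noise to each newly
    # generated (uncached) image.
    return true_value + random.gauss(0, noise_scale)

def averaged_fingerprint(true_value, samples=5000):
    # Repeating with distinct images forces fresh, independent noise on
    # each render, so averaging cancels the noise and recovers the
    # stable fingerprint value.
    return sum(render_pixel(true_value) for _ in range(samples)) / samples
```

With 5000 samples and a noise standard deviation of 2, the error of the mean drops to roughly 0.03, so the defence adds almost nothing against a patient attacker.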

In the case of fonts, we can run a fingerprinter from different domains of one service. Because each domain triggers a fresh randomization, we can discard the randomness and obtain a stable fingerprint.

We can also use this cache to crash the browser, by creating a large number of fingerprintable features and making the browser add randomness to each of them and cache it.

This random cache can itself be used as an identifier.

But one option is to generate randomness at the start of each new anonymous session and use the same randomness for all sites. An adversary would then be able to track you across sites, but only within a single session. Of course, the randomness must be generated by a secure PRNG.
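The per-session scheme above can be sketched as deriving feature noise deterministically from a session key, so every site sees identical noise within a session and cannot average it away across domains. This is a hedged sketch, not a proposed implementation; the feature identifiers are invented.

```python
import hashlib
import hmac
import os

# Regenerated once at the start of each anonymous session.
SESSION_KEY = os.urandom(32)

def session_noise(feature_id: str, modulus: int = 256) -> int:
    # Same feature -> same noise for the whole session, on every site,
    # so repeated or cross-domain queries within a session learn nothing
    # new; the noise changes only when a new session begins.
    digest = hmac.new(SESSION_KEY, feature_id.encode(), hashlib.sha256).digest()
    return digest[0] % modulus
```

Deriving the noise with HMAC means no per-feature cache is needed (avoiding the cache-as-identifier and cache-exhaustion problems described above), since the same value can be recomputed on demand.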

So I think that the standard should use both approaches.
All the features that can be made equal across anonymized browsers must be made equal for all of them.
The remaining set of features (those we are not ready to make equal) must be randomized in the way I have described.

comment:5 Changed 3 years ago by teor

Severity: Normal

Set all open tickets without a severity to "Normal"
