Tor software security issue response policy
(This policy was provisionally adopted by the Network Team on 31 March 2020. It will become non-provisional on 1 August 2020.)
Preamble
In the past, our security policy has been driven by dedication to our users' well-being, but our incident-response decisions have for the most part been made under stress and pressure. So let's step back and try to say what we'd like our decisions to be. This doesn't reflect a change in our policies, but an attempt to specify them in advance, and commit to them in advance, to speed up our decision-making.
The golden rule:
When in doubt, protect users.
Notwithstanding anything in the rest of this document, we should not follow this policy if doing so would be harmful to users.
Meta-issues
I am deeply suspicious. How should I read this document?
Please read it as having been written by and for honest people of good intent. It is not intended to be sufficient on its own to guarantee good outcomes from people more determined to comply with its letter than its spirit. Rather, it is intended to reflect a consensus on the right way to do things, help people of good intent make the right decisions under pressure, and let users know what to expect.
What does this document apply to?
"This document was written to apply to Tor. Other projects under the Tor umbrella may choose to adopt it, or may have their own security policies."
The process
What should I do if I find a security flaw in something Tor makes?
First, try to make sure it's a real security flaw; see the section on assessing severity below. If the flaw is low-severity or already public, you can just report it on the bugtracker at https://trac.torproject.org/
If you find a security flaw in a Tor product that has adopted these guidelines, and the flaw is neither low-severity nor already public, please use private email to tell the list below.
tor-security@lists.torproject.org
You can either send your email unencrypted, or encrypt it to the following PGP key:
pub 4096R/E135A8B41A7BF184 2017-03-13
Key fingerprint = 8B90 4624 C5A2 8654 E453 9BC2 E135 A8B4 1A7B F184
uid tor-security@lists.torproject.org <tor-security@lists.torproject.org>
uid tor-security@lists.torproject.org <tor-security-request@lists.torproject.org>
uid tor-security@lists.torproject.org <tor-security-owner@lists.torproject.org>
Please don't rely on Twitter direct messages, online chat, blog comments, postal mail, messages in bottles, or notes wrapped around bricks: anything that doesn't get sent to tor-security@lists.torproject.org is at higher risk of being missed, misclassified, misevaluated, or misfiled.
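If you'd like a concrete starting point for encrypting a report from the command line, here is a minimal sketch that shells out to GnuPG from Python. It is illustrative only: it assumes GnuPG is installed and on your PATH, the keyserver address and the report.txt filename are placeholders, and you should confirm that the fingerprint of the key you import matches the fingerprint listed above before using it.

# Minimal sketch (assumptions: GnuPG installed and on PATH; keyserver and
# report filename are placeholders for illustration).
import subprocess

# Fingerprint of the tor-security key listed above (spaces removed).
FINGERPRINT = "8B904624C5A28654E4539BC2E135A8B41A7BF184"

# Fetch the key by fingerprint. Verify the imported key's fingerprint
# against the one in this policy before trusting it.
subprocess.run(
    ["gpg", "--keyserver", "hkps://keys.openpgp.org",
     "--recv-keys", FINGERPRINT],
    check=True,
)

# Encrypt report.txt to that key. The ASCII-armored output (report.txt.asc)
# can then be attached to a mail to tor-security@lists.torproject.org.
subprocess.run(
    ["gpg", "--armor", "--trust-model", "always",
     "--recipient", FINGERPRINT, "--encrypt", "report.txt"],
    check=True,
)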
You might also be interested in our HackerOne program.
How will Tor handle security issues?
First, we will assess whether an issue is already public, and we'll try to classify it as "research," "low-severity," "medium-severity," "high-severity," or "critical-severity."
For all public issues, we will do our work on them in public, on our bugtracker and other forums. When an issue is public, there's no point trying to keep it confidential. (For the purposes of this document, we're considering an issue 'public' if it's already well-enough known that working on it in public will not make it easier for attackers to put our users at risk.)
For research issues, we will try to collect and document them in public, encourage researchers to work on them, and when possible develop defenses for them in public.
All low-severity issues will be discussed on the bugtracker, and fixed in the development branch when possible. Like other small bugfixes, these fixes will be backported to the extent that doing so seems likely to fix more problems than it causes.
For medium-severity non-public issues, we will keep them private, and try to batch up several fixes at once. When we do fix them, we will apply the patches to at least the latest development and stable branches, and may backport to supported stable releases. We will announce that such an issue is being fixed in advance of the patch release date; we will announce details when the patch is released.
For high-severity and critical-severity issues not already publicly disclosed or being exploited, we will fix them in all affected releases, all at once, as soon as we can. We will notify the world that such a bug exists in advance of the patch, and we will release the patch once we believe it works.
At our discretion, we will work with packagers on high-impact platforms to ensure that they have packages ready when the issue is disclosed.
For all non-disclosed issues, we will create an empty trac ticket and allocate a TROVE entry to track the existence of the non-disclosed bug.
Additionally, flaws that harm users in the wild will usually be made public. Specifically, if we have reason to believe that an attack is being (or might be being) actively used to harm our users, we will sometimes inform people about the scope and extent of the attack as we become aware of it, even if we don't have a fix yet. In deciding whether to do so, we will act so as to maximize user safety.
How will we assess the severity of a security issue?
We'll try to classify security issues as "research," "low-severity," "medium-severity," "high-severity," or "critical-severity." We may also classify an issue as "upstream." See below for more information on these classifications.
Some issues arise because of unanswered research questions, not because of bugs in the Tor software. These include:
- End-to-end traffic correlation by an adversary who can observe both ends of a Tor circuit.
- Profiling attacks by waiting for a long time to become somebody's guard.
- Website- and traffic-type fingerprinting attacks.
In general, if no robust, implementable solution is known for a given issue, it should be treated as a research problem rather than a security bug. These issues matter, and sometimes matter as deeply as any high-severity issue, but we shouldn't keep them private. Instead, we will engage the research community for help solving them.
Here are some things that typically count as low-severity security issues, or not as security issues at all:
- A program can be made to crash or work incorrectly, but the program's user is the only one who can cause this.
- An attack is possible by a class of attacker that Tor does not attempt to defend against. (For example, Tor assumes that the attacker does not have administrator access to your computer, has not installed a keylogger, does not control a majority of directory authorities, cannot make an authenticated connection to the control port, and so on.)
- An attack is possible when the user ignores the advice on our download page.
- When users go to the wrong onion service address, they get the wrong onion service.
- When users ignore clear certificate warnings from the browser, they are vulnerable to man-in-the-middle attacks from an exit node.
- When users ignore a warning that doing something will make Tor less secure, Tor becomes less secure.
- At significant expense or effort, an attacker can cause a denial of service to a relay.
- Timing side-channel attacks are present, but can only be exploited at great difficulty, and only by local users.
- A bug affects only unsupported versions of Tor. (See our release timeline for a list of which versions are supported.)
These are typically low-severity issues:
- A defense-in-depth mechanism provides less defense-in-depth than it should (with no known corresponding attack enabled). For example, if sensitive material remains on the stack or heap without getting memwipe'd, but there is no means to exfiltrate it, it is typically low-severity.
- Undefined behavior is invoked, but not in a way that actually causes undesirable behavior when interpreted by any compiler we support.
Anything in these categories is typically a medium-severity issue:
- Any remote crash or denial-of-service attack that does not affect clients or onion services, only relays. (This includes unfreed memory and other resource exhaustion attacks that can lead to denial-of-service.)
- Security bugs affecting configurations which almost nobody uses.
- Timing side-channel attacks that can be observed remotely, but only at great difficulty.
- Security bugs that require local (non-privileged) access to your computer to exploit.
- Security bugs that only affect rarely used platforms (like Irix, Windows 98, or Linux 1.3).
- Security bugs that make client path selection different from what is documented (for example, a bug that makes related relays more likely to be selected in the same path).
Anything in these categories is typically a high-severity issue, unless it is classified as lower severity because of one of the definitions above:
- Any bug that can remotely cause clients to de-anonymize themselves.
- Any remote crash attack against onion services. (This includes unfreed memory and other resource exhaustion attacks that can lead to denial-of-service.)
- Any memory-disclosure vulnerability.
- Any bug that allows impersonation of a relay. (If someone accesses a relay's keys, and it's not due to a bug in tor, we deal with that through the bad-relays process.)
- Any bug that lets non-exit relays get at user plaintext.
- Any privilege escalation from a Tor user to the higher-privileged user that started the Tor process. (For example, if Tor is started by root and told to drop privileges with the User flag, any ability to regain root privileges would be high-severity.)
Anything in these categories is typically a critical-severity issue, unless it is classified as lower severity because of one of the definitions above:
- Any remote code-execution vulnerability.
Some bugs affect Tor, but are upstream bugs, not bugs in Tor itself: they include bugs in external libraries, like OpenSSL, Libevent, or zlib; bugs in an operating system's kernel; or bugs in upstream Firefox affecting Tor Browser. When we become aware of an "upstream" issue like this, we will coordinate with the upstream developers to find and deploy an appropriate fix, in accordance with their own security processes.
Finally, the above categories are approximations only; difficulty of exploitation, degree of impact, rarity of configuration, and other factors may increase or decrease the severity of an issue.
Can I show you my research findings in advance of disclosure? Or will you spoil my conference talk?
We'd rather have a fix today than a fix at the conference; but we'd rather have a fix that goes live the day of your talk than a fix that won't be ready till a week later.
Therefore, if you want to coordinate our disclosure to correspond with yours, we can arrange that, assuming that the interval during which you want us to keep the issue private (that is, to "embargo" it) is not longer than 60 days.
We will not embargo an issue under any of the following circumstances:
- The issue becomes public through other means, or somebody else finds it.
- We have reason to believe that the issue is being exploited against real users in the wild.
If one of these situations does occur, we will make an effort to alert you to the situation, and will coordinate with you to acknowledge your research and assistance, but our first priority will be our users.
Who will find out about non-public flaws? When?
The core developers of the affected software component, and the research director (Roger) will find out immediately. We will work together with the bug reporter to identify and validate a solution.
We may enlist packagers, researchers, and testers as needed to ensure that our fix is correct and complete, and doesn't break anything else.
All other packagers will learn that there is an upcoming release that will fix a security flaw, and will learn the flaw's general severity, but won't learn specific information.
(And of course, everybody will learn about the flaw when it becomes public.)
Will you attribute bug reports and fixes?
Yes, unless you ask us not to.
How will you publicize issues and fixes?
All high-severity issues will get tweeted, blogged, and emailed to the tor-announce mailing list. All issues will be discussed in the changelog. When appropriate, the changelog will feature a prominent sentence saying who should update.
Packagers (listed in doc/HACKING/ReleasingTor.md) will get personal email.
We will look into more ways to announce and publicize fixes and issues.
Will you get CVEs?
Yes, for all high- and critical-severity issues.
You can find a list of all issues in our own TROVE registry on our wiki.
Secondary contact info
If for some reason the tor-security mailing list doesn't work for you, or you want to make initial contact in an encrypted way, please use the PGP keys below to encrypt your description of the issue, sending it to the appropriate project leads.
If you do not get a response, however, please try the tor-security mailing list above: do not assume that your email got through!
- Tor: Nick Mathewson <nickm@torproject.org>
  2133 BC60 0AB1 33E1 D826 D173 FE43 009C 4607 B1FB
- Tor Browser: Georg Koppen <gk@torproject.org>
  35CD 74C2 4A9B 15A1 9E1A 81A1 9437 3AA9 4B7C 3223
- Research director: Roger Dingledine <arma@torproject.org>
  F65C E37F 04BA 5B36 0AE6 EE17 C218 5258 19F7 8451
Acknowledgments
Thanks to everyone who offered helpful suggestions on earlier drafts, including Arlo Breault, Cass Brewer, David Goulet, George Kadianakis, Georg Koppen, Kate Krauss, Lunar, and Tom Ritter.
Thanks to everyone over the years who has patiently explained a security problem that we didn't understand, until we finally understood what they were talking about.