Takeaway: We're almost ready to publish the draft and declare it operational.
Clarify all discretion issues, including who counts as "packagers" and when they get advance notice.
("If we think a delay would probably cause users....")
How far back do we support?
- "Whatever goes into debian stable"....
Assume that bugs discussed on public IRC or on Trac are fully public?
"Putting out a release has a cost. Keeping bugs private has a cost."
"We will not credit people who do not disclose."
TVE. (Tor Vulnerabilities and Exposures)
Working document: "Draft tor software security issue response policy, v3"
[This is a draft. If I declare something stupid here, please help me fix it! -Nick]
Tor software security issue response policy [draft]
In the past, our security policy has been driven by dedication to our users' well-being, but our incident-response decisions have for the most part been made under stress and pressure. So let's step back and try to say what we'd like our decisions to be. This doesn't reflect a change in our policies, but an attempt to specify them in advance, and commit to them in advance, to speed up our decision-making.
The golden rule:
When in doubt, protect users.
-1. I am deeply suspicious; how should I read this document?
Please read it as having been written by and for honest people of good intent. It is not intended to be sufficient on its own to guarantee good outcomes from people more determined to comply with its letter than its spirit. Rather, it is intended to reflect a consensus on the right way to do things, help people of good intent make the right decisions under pressure, and let users know what to expect.
0. What does this document apply to?
Nothing, right now. It's a draft.
But when it's not a draft, this section will say,
"This document was written to apply to Tor. Other projects under the Tor umbrella may choose to adopt it, or may have their own security policies."
1. What should I do if I find a security flaw in something Tor makes?
First, try to make sure it's a real security flaw; see section 3 below. If the flaw is low-severity or already public, you can just report it on the bugtracker at https://trac.torproject.org/
If you find a security flaw in a Tor product that has adopted these guidelines, and the flaw is neither low-severity nor already public, please report it by private email to the tor-security mailing list (see below).
We hope to have a PGP key set up for this list soon; but until we do, you can use unencrypted email to tor-security to make initial contact, and figure out whose PGP keys you should use to communicate about the issue in more detail. See section A below for some initial suggestions.
Please don't rely on twitter direct messages, online chat, blog comments, postal mail, messages in bottles, or notes wrapped around bricks: anything that doesn't get sent to tor-security@ is at higher risk of being missed, misclassified, misevaluated, or misfiled.
[TODO: provide a non-PGP mechanism for secure issue reporting.]
You might also be interested in our HackerOne program.
2. How will Tor handle security issues?
First, we will assess whether an issue is already public, and we'll try to classify it as "research," "low-severity," "medium-severity," or "high-severity."
For all public issues, we will do our work on them in public, on our bugtracker and other forums. When an issue is public, there's no point trying to keep it confidential. (For the purposes of this document, we're considering an issue 'public' if it's already well-enough known that working on it in public will not make it easier for attackers to put our users at risk.)
For research issues, we will try to collect and document them in public, encourage researchers to work on them, and when possible develop defenses for them in public.
All low-severity issues will be discussed on the bugtracker, and fixed in the development branch when possible. Like other small bugfixes, these fixes will be backported to the extent that doing so seems likely to fix more problems than it causes.
For medium-severity non-public issues, we will keep them private, and try to batch up several fixes at once. When we do fix them, we will apply the patches to at least the latest development and stable branches, and may backport to supported stable releases. We will announce that such an issue is being fixed in advance of the patch release date; we will announce details when the patch is released.
For high-severity issues not already publicly disclosed or being exploited, we will fix them in all affected releases, all at once, as soon as we can. We will notify the world that such a bug exists in advance of the patch, and we will release the patch once we believe it works.
At our discretion, we will work with packagers on high-impact platforms to ensure that they have packages ready when the issue is disclosed.
For all non-disclosed issues, we will create an empty trac ticket, or some other mechanism to track the existence of the non-disclosed bug.
Additionally, flaws that harm users in the wild will be made public. Specifically, if we have reason to believe that an attack is being (or might be being) actively used to harm our users, we will sometimes inform people about the scope and extent of the attack as we become aware of it, even if we don't have a fix yet. In deciding whether to do so, we will act so as to maximize user safety.
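To summarize the flow above, here is a rough sketch in Python. It is an illustration only: the function name, parameters, and strings are invented for this sketch, it leaves out the "actively exploited" case described in the previous paragraph, and none of it replaces case-by-case judgment.

    def plan_response(severity: str, already_public: bool) -> list[str]:
        """Rough outline of the handling rules above; illustrative names only."""
        if already_public:
            # Once an issue is public, there is no point keeping it confidential.
            return ["work on it in public, on the bugtracker and other forums"]
        if severity == "research":
            return ["collect and document it in public",
                    "encourage researchers to work on it",
                    "develop defenses in public when possible"]
        if severity == "low-severity":
            return ["discuss it on the bugtracker",
                    "fix it in the development branch",
                    "backport only where that fixes more problems than it causes"]
        # Non-public medium- and high-severity issues stay private until fixed,
        # with a placeholder ticket to record that the bug exists.
        steps = ["keep the details private", "open a placeholder ticket"]
        if severity == "medium-severity":
            steps += ["batch the fix with other private fixes where possible",
                      "patch at least the latest development and stable branches",
                      "announce in advance that a fix is coming; give details at release"]
        else:  # "high-severity"
            steps += ["fix all affected releases at once, as soon as we can",
                      "announce in advance that the bug exists; ship the patch once it works",
                      "at our discretion, coordinate with packagers on high-impact platforms"]
        return steps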
3. How will we assess the severity of a security issue?
We'll try to classify security issues as "research," "low-severity," "medium-severity," or "high-severity." We may also classify an issue as "upstream." See the next section for more information on these classifications.
Some issues arise because of unanswered "research questions," not because of bugs in the Tor software. These include:
- End-to-end traffic correlation by an adversary who can observe both ends of a Tor circuit.
- Profiling attacks that wait a long time to become somebody's guard.
- Website- and traffic-type fingerprinting attacks.
In general, if no effective and implementable solution is known for a given issue, it should be treated as a research problem rather than a security bug. These issues matter, and sometimes matter as deeply as any high-severity issue, but we shouldn't keep them private. Instead we will engage the research community for help solving them.
Here are some things that typically count as "low-severity" security issues, or not as security issues at all:
- A program can be made to crash or work incorrectly, but the program's user is the only one who can cause this.
- An attack is possible by a class of attacker that Tor does not attempt to defend against. (For example, Tor assumes that the attacker does not have administrator access to your computer, has not installed a keylogger, does not control a majority of directory authorities, cannot make an authenticated connection to the control port, and so on.)
- An attack is possible when the user ignores the advice on our download page.
- When users go to the wrong hidden service address, they get the wrong hidden service.
- When users ignore clear certificate warnings from the browser, they are vulnerable to MITM from an exit node.
- When users ignore a warning that doing something will make Tor less secure, Tor becomes less secure.
- At significant expense or effort, an attacker can cause a denial of service to a relay.
- Timing side-channel attacks are present, but can only be exploited with great difficulty, and only by local users.
These are typically "low-severity" issues:
* A defense-in-depth mechanism provides less defense-in-depth than
it should (with no known corresponding attack enabled). For
example, if sensitive material remains on the stack or heap
without getting memwipe'd, but there is no means to exfiltrate
it, it is typically low-severity.
* Undefined behavior is invoked, but not in a way that actually
causes undesirable behavior when interpreted by any compiler we
support.
Anything in this category is typically a medium-severity issue:
* Any remote crash or denial-of-service attack that does not
affect clients or hidden services, only relays. (This includes
unfreed memory and other resource exhaustion attacks that can
lead to denial-of-service.)
* Security bugs affecting configurations which almost nobody
uses.
* Timing side-channel attacks that can be observed remotely, but
only with great difficulty.
* Security bugs that require local (non-privileged) access to
your computer to exploit.
* Security bugs that only affect rarely used platforms (like
Irix, Windows 98, Linux 1.3, etc).
Anything in this category is typically a "high-severity" issue, unless it is classified as lower severity because of one of the definitions above:
* Any means to remotely cause clients to de-anonymize themselves.
* Any remote code-execution vulnerability.
* Any remote crash attack against hidden services. (This includes
unfreed memory and other resource exhaustion attacks that can
lead to denial-of-service.)
* Any memory-disclosure vulnerability.
* Any means to impersonate a relay.
* Any way for non-exit relays to get at user plaintext.
* Any privilege escalation from a Tor user to the higher-privileged
user that started the Tor process. (For example, if Tor is
started by root and told to drop privileges with the User flag,
any ability to regain root privileges would be high-severity.)
Some bugs affect Tor, but are not bugs in Tor itself: they include bugs in external libraries, like OpenSSL, Libevent, or zlib; bugs in an operating system's kernel; or bugs in upstream Firefox affecting Tor Browser. When we become aware of an "upstream" issue like this, we will coordinate with the upstream developers to find and deploy an appropriate fix, in accordance with their own security processes.
Finally, the above categories are approximations only; difficulty of exploitation and other factors may increase or decrease the severity of an issue.
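To make the ordering of these rules concrete, here is a rough sketch in Python of how the categories above relate. It is an illustration only: the names and predicates are invented for this sketch and stand in for human judgment.

    from enum import Enum

    class Severity(Enum):
        RESEARCH = "research"
        LOW = "low-severity"
        MEDIUM = "medium-severity"
        HIGH = "high-severity"

    def rough_classification(is_research_question: bool,
                             matches_low_list: bool,
                             matches_medium_list: bool,
                             matches_high_list: bool) -> Severity:
        """Illustrative ordering only; the inputs stand in for human judgment."""
        if is_research_question:
            # No implementable defense is known: treat as a research problem,
            # worked on in public with the research community.
            return Severity.RESEARCH
        if matches_low_list:
            # The "low-severity, or not a security issue at all" tests come
            # first, because the high-severity list explicitly defers to them.
            return Severity.LOW
        if matches_medium_list:
            # Relay-only denial of service, rarely used configurations or
            # platforms, hard-to-observe timing channels, local access.
            return Severity.MEDIUM
        if matches_high_list:
            # Deanonymization, remote code execution, hidden service crashes,
            # memory disclosure, relay impersonation, plaintext exposure,
            # privilege escalation.
            return Severity.HIGH
        raise ValueError("No rule matched; this needs human triage.")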
[TODO: Compare the above list with the categories in our bug
bounty program.]
4. Can I show you my research findings in advance of disclosure? Or will you spoil my conference talk?
We'd rather have a fix today than a fix at the conference; but we'd rather have a fix that goes live the day of your talk than a fix that won't be ready till a week later.
Therefore, if you want to coordinate our disclosure to correspond with yours, we can arrange that, assuming that the interval during which you want us to keep the issue private (that is, to "embargo" it) is not longer than 60 days.
We will not embargo an issue under any of the following circumstances:
- The issue becomes public through other means, or somebody else finds it.
- We have reason to believe that the issue is being exploited against real users in the wild.
If one of these situations does occur, we will make an effort to alert you, and will coordinate with you to acknowledge your research and your help; but our first priority will be our users.
5. Who will find out about non-public flaws? When?
The core developers of the affected software component and the research director (Roger) will find out immediately. We will work together with the bug reporter to identify and validate a solution.
We may enlist packagers, researchers, and testers as needed to ensure that our fix is correct and complete, and doesn't break anything else.
All other packagers will learn that there is an upcoming release that will fix a security flaw, and will learn the flaw's general severity, but won't learn specific information.
(And of course, everybody will learn about the flaw when it becomes public.)
6. More questions and answers.
Q: Will you attribute bug reports and fixes?
A: Yes.
Q: How will you publicize issues and fixes?
A: All high-severity issues will get tweeted, blogged, and emailed to the tor-announce list. All issues will be discussed in the changelog. When appropriate, the changelog will feature a prominent sentence saying who should update.
Packagers will get personal email.
We will look into more ways to announce and publicize fixes and
issues.
Q: Will you get CVEs?
A: Dunno.
A. Secondary contact info
If for some reason the tor-security mailing list doesn't work for you, or you want to make initial contact in an encrypted way, please use the PGP keys here to encrypt your description of the issue, sending it to the appropriate project leads.
If you do not get a response, however, please try the tor-security mailing list above: do not assume that your email got through!
Tor:
Nick Mathewson <nickm@torproject.org>
B35B F85B F194 89D0 4E28 C33C 2119 4EBB 1657 33EA
Tor browser:
Georg Koppen <gk@torproject.org>
35CD 74C2 4A9B 15A1 9E1A 81A1 9437 3AA9 4B7C 3223
Research director:
Roger Dingledine <arma@torproject.org>
F65C E37F 04BA 5B36 0AE6 EE17 C218 5258 19F7 8451
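For reporters who want to script the encryption step, something like the following sketch can work. It is an illustration only: it assumes the third-party python-gnupg package and a local GnuPG installation, the keyserver is just an example, and any of the fingerprints above can be substituted.

    import gnupg  # third-party "python-gnupg" package; needs a local GnuPG install

    # Fingerprint copied from the list above, with the spaces removed.
    FINGERPRINT = "B35BF85BF19489D04E28C33C21194EBB165733EA"

    gpg = gnupg.GPG()

    # Fetch the key by its full fingerprint rather than by email address, so a
    # keyserver cannot hand back somebody else's key for the same address.
    gpg.recv_keys("keys.openpgp.org", FINGERPRINT)

    report = (
        "Affected component and version: ...\n"
        "Steps to reproduce: ...\n"
        "Why I think this is a security issue: ...\n"
    )

    # Encrypt to that fingerprint; since we fetched the key by its exact
    # fingerprint, we skip the web-of-trust check. The ASCII-armored result
    # can be pasted into an ordinary email.
    encrypted = gpg.encrypt(report, FINGERPRINT, always_trust=True)
    if not encrypted.ok:
        raise RuntimeError(encrypted.status)
    print(str(encrypted))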
B. Acknowledgments
Thanks to everyone who offered helpful suggestions on earlier drafts, including Arlo Breault, Cass Brewer, David Goulet, George Kadianakis, Georg Koppen, Kate Krauss, Lunar, and Tom Ritter.
Thanks to everyone over the years who has patiently explained to us a security problem we didn't understand, until we finally understood what they were talking about.
C. Open issues
- How does this apply to censorship events?
-------- comments?