Rebooting Responsible Disclosure: a focus on protecting end users
20 July 2010
Posted by Chris Evans, Eric Grosse, Neel Mehta, Matt Moore, Tavis Ormandy, Julien Tinnes, Michal Zalewski; Google Security Team
Vulnerability disclosure policies have become a hot topic in recent years. Security researchers generally practice “responsible disclosure”, which involves privately notifying affected software vendors of vulnerabilities. The vendors then typically address the vulnerability at some later date, and the researcher reveals full details publicly at or after this time. A competing philosophy, “full disclosure”, involves the researcher making full details of a vulnerability available to everybody simultaneously, giving no preferential treatment to any single party.

The argument for responsible disclosure goes briefly thus: by giving the vendor the chance to patch the vulnerability before details are public, end users of the affected software are not put at undue risk, and are safer. Conversely, the argument for full disclosure proceeds: because a given bug may be under active exploitation, full disclosure enables immediate preventative action, and pressures vendors for fast fixes. Speedy fixes, in turn, make users safer by reducing the number of vulnerabilities available to attackers at any given time. Note that there's no particular consensus on which disclosure policy is safer for users. Although responsible disclosure is more common, we recommend this 2001 post by Bruce Schneier as background reading on some of the advantages and disadvantages of both approaches.

So, is the current take on responsible disclosure working to best protect end users in 2010? Not in all cases, no. The emotionally loaded name suggests that it is the most responsible way to conduct vulnerability research, but if we define being responsible as doing whatever it best takes to make end users safer, we will find a disconnect. We've seen an increase in vendors invoking the principles of “responsible” disclosure to delay fixing vulnerabilities indefinitely, sometimes for years; in that timeframe, these flaws are often rediscovered and used by rogue parties using the same tools and methodologies used by ethical researchers.

The important implication of referring to this process as “responsible” is that researchers who do not comply are seen as behaving improperly. However, the inverse situation is often true: it can be irresponsible to permit a flaw to remain live for such an extended period of time. Skilled attackers are using 0-day vulnerabilities in the wild, and there are increasing instances of:
0-day attacks that rely on vulnerabilities known to the vendor for a long while.
Situations where it became clear that a vulnerability was being actively exploited in the wild, subsequent to the bug being fixed or disclosed.
Accordingly, we believe that responsible disclosure is a two-way street. Vendors, as well as researchers, must act responsibly. Serious bugs should be fixed within a reasonable timescale. Whilst every bug is unique, we would suggest that 60 days is a reasonable upper bound for a genuinely critical issue in widely deployed software. This time scale is only meant to apply to critical issues. Some bugs are mischaracterized as “critical”, but we look to established guidelines to help make these important distinctions, e.g. the Chromium severity guidelines and the Mozilla severity ratings.

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

A lot of talented security researchers work at Google. These researchers discover many vulnerabilities in products from vendors across the board, and they share a detailed analysis of their findings with vendors to help them get started on patch development. We will be supportive of the following practices by our researchers:
Placing a disclosure deadline on any serious vulnerability they report, consistent with the complexity of the fix. (For example, a design error needs more time to address than a simple memory corruption bug.)
Responding to a missed disclosure deadline or refusal to address the problem by publishing an analysis of the vulnerability, along with any suggested workarounds.
Setting an aggressive disclosure deadline where there exists evidence that blackhats already have knowledge of a given bug.
We of course expect to be held to the same standards ourselves. We recognize that we've handled bug reports in the past where we've been unable to meet reasonable publication deadlines, due to unexpected dependencies, code complexity, or even simple mix-ups. In other instances, we've simply disagreed with a researcher on the scope or severity of a bug. In all of these cases, we've been happy for publication to proceed, and grateful for the heads-up.

We would invite other researchers to join us in using the proposed disclosure deadlines to drive faster security response efforts. Creating pressure towards more reasonably-timed fixes will result in smaller windows of opportunity for blackhats to abuse vulnerabilities. In our opinion, this small tweak to the rules of engagement will result in greater overall safety for users of the Internet.
Update September 10, 2010:
We'd like to clarify a few of the points above about how we approach the issue of vulnerability disclosure. While we believe vendors have an obligation to be responsive, the 60 day period before public notification about critical bugs is not intended to be a punishment for unresponsive vendors. We understand that not all bugs can be fixed in 60 days, although many can and should be. Rather, we thought of 60 days when considering how large the window of exposure for a critical vulnerability should be permitted to grow before users are best served by hearing enough details to make a decision about implementing possible mitigations, such as disabling a service, restricting access, setting a killbit, or contacting the vendor for more information. In most cases, we don't feel it's in people's best interest to be kept in the dark about critical vulnerabilities affecting their software for any longer period.
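To make one of those mitigations concrete: setting a killbit blocks a vulnerable ActiveX control from being instantiated by Internet Explorer, by flagging the control's class ID in the Windows registry. Below is a minimal sketch, assuming a Windows host, administrative rights, and a placeholder CLSID standing in for the affected control; the same registry value can equally be set with a .reg file or Group Policy.

```python
import winreg

# Placeholder CLSID for the vulnerable ActiveX control (assumption:
# substitute the class ID published in the vulnerability analysis).
CLSID = "{00000000-0000-0000-0000-000000000000}"

# Bit 0x400 of "Compatibility Flags" is the kill bit: Internet Explorer
# refuses to instantiate a control whose CLSID carries this flag.
KILL_BIT = 0x00000400

key_path = (r"SOFTWARE\Microsoft\Internet Explorer"
            r"\ActiveX Compatibility\{}".format(CLSID))

# The key lives under HKEY_LOCAL_MACHINE, so this requires admin rights.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Compatibility Flags", 0,
                      winreg.REG_DWORD, KILL_BIT)
```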
Update May 12, 2021:
In February 2015, Google's responsible disclosure policy was adjusted to 90 days. View our current policy here.