Quite a few security companies and organizations produce vulnerability databases, cataloguing bugs and reporting trends across the industry based on the data they compile. There is value in this exercise; specifically, getting a look at examples across a range of companies and industries gives us information about the most common types of threats, as well as how they are distributed.
Unfortunately, the data behind these reports is commonly inaccurate or outdated to some degree. The truth is that maintaining an accurate and reliable database of this type of information is a significant challenge. We most recently saw this reality play out last week after the appearance of the IBM X-Force® 2010 Mid-Year Trend and Risk Report. We questioned a number of surprising findings concerning Google’s vulnerability rate and response record, and after discussions with IBM, we identified several errors that had important implications for the report’s conclusions. IBM worked with us and promptly issued a correction to address the inaccuracies.
Google maintains a Product Security Response Team that prioritizes bug reports and coordinates their handling across relevant engineering groups. Unsurprisingly, particular attention is paid to high-risk and critical vulnerabilities. For this reason, we were puzzled by a claim that 33% of critical and high-risk bugs uncovered in our services in the first half of 2010 were left unpatched. After investigating, we learned that the 33% figure referred to a single unpatched vulnerability out of a total of three, and that the one item counted as unpatched had been classified as a security vulnerability only because of a terminology mix-up. As a result, the true unpatched rate for these high-risk bugs is 0 out of 2, or 0%.
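To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical entries and field names of our own invention) of how a single misclassified record can swing an unpatched-rate statistic from 33% to 0%:

```python
# Illustrative only: hypothetical records showing how one misclassified entry
# skews an unpatched-rate statistic. IDs, fields, and values are invented.
entries = [
    {"id": "A", "severity": "critical", "fixed": True,  "is_vulnerability": True},
    {"id": "B", "severity": "high",     "fixed": True,  "is_vulnerability": True},
    {"id": "C", "severity": "high",     "fixed": False, "is_vulnerability": False},  # terminology mix-up
]

def unpatched_rate(records):
    """Share of genuine vulnerabilities that remain unfixed."""
    vulns = [r for r in records if r["is_vulnerability"]]
    return sum(1 for r in vulns if not r["fixed"]) / len(vulns) if vulns else 0.0

# Counting entry C as a vulnerability yields 1 of 3; excluding it yields 0 of 2.
miscounted = [dict(r, is_vulnerability=True) for r in entries]
print(f"{unpatched_rate(miscounted):.0%}")  # 33%
print(f"{unpatched_rate(entries):.0%}")     # 0%
```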
How do these types of errors occur? Maintainers of vulnerability databases have a number of factors working against them:
- Vendors disclose their vulnerabilities in inconsistent formats, using different severity classifications. This makes it much more difficult to count the total vulnerabilities assigned to a given vendor (a rough sketch of this normalization problem follows the list).
- Assessing the severity, scope, and nature of a bug sometimes requires intimate knowledge of a product or technology, and this can lead to errors and misinterpretation.
- Keeping the fix status updated for thousands of entries is no small task, and we’ve consistently seen long-fixed bugs marked as unfixed in a number of databases.
- Not all compilers of vulnerability databases independently verify the bugs they find reported by other sources. As a result, an error in one source can be replicated in others.
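As a rough illustration of the first point above, a database maintainer has to map each vendor’s severity vocabulary onto a common scale before counts can be compared at all. The sketch below uses invented vendor names, labels, and an arbitrary mapping purely to show where ambiguity and manual judgment creep in:

```python
# Hypothetical example: normalizing vendor-specific severity labels onto one
# scale before counting. Vendors, labels, and the mapping are made up.
from collections import Counter

SEVERITY_MAP = {
    ("vendor_a", "critical"):  "high",
    ("vendor_a", "important"): "high",
    ("vendor_a", "moderate"):  "medium",
    ("vendor_b", "p1"):        "high",
    ("vendor_b", "p2"):        "medium",
    ("vendor_b", "p3"):        "low",
}

advisories = [
    ("vendor_a", "important"),
    ("vendor_b", "p1"),
    ("vendor_b", "p4"),  # unmapped label: someone has to interpret it by hand
]

counts = Counter(SEVERITY_MAP.get(entry, "unclassified") for entry in advisories)
print(counts)  # Counter({'high': 2, 'unclassified': 1})
```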
Hey Adam,
Is there an email address I can contact you at? I am involved in vulnerability trends research but also have a team that works on a vulnerability database. There are a couple of points in your article I'd like to discuss with you and Google.
Adam,
You highlight some good points. In addition to transparency, I would like to suggest that compilers create a feedback loop with the customers with whom they share these reports. Without a feedback loop or a channel to probe further into a vulnerability, customers are left to decipher the report on their own, without intimate knowledge of the product design or of how the report was compiled.
Thanks,
Saqib
David,
You can contact the Google Security Team at security@google.com. All best,
Jay
Google Communications