Security Blog
The latest news and insights from Google on security and safety on the Internet
Guided in-process fuzzing of Chrome components
August 5, 2016
Posted by Max Moroz, Chrome Security Engineer, and Kostya Serebryany, Sanitizer Tsar
In the past, we’ve posted about innovations in fuzzing, a software testing technique used to discover coding errors and security vulnerabilities. The topics have included AddressSanitizer, ClusterFuzz, SyzyASAN, ThreadSanitizer, and others.
Today we'd like to talk about libFuzzer (part of the LLVM project), an engine for in-process, coverage-guided, white-box fuzzing:
By in-process, we mean that we don’t launch a new process for every test case, and that we mutate inputs directly in memory.
By coverage-guided, we mean that we measure code coverage for every input, and accumulate test cases that increase overall coverage.
By white-box, we mean that we use compile-time instrumentation of the source code.
LibFuzzer makes it possible to fuzz individual components of Chrome. This means you don’t need to generate an HTML page or network payload and launch the whole browser, which adds overhead and flakiness to testing. Instead, you can fuzz any function or internal API directly. Based on our experience, libFuzzer-based fuzzing is extremely efficient, more reliable, and usually thousands of times faster than traditional out-of-process fuzzing.
Our goal is to have fuzz testing for every component of Chrome where fuzzing is applicable, and we hope all Chromium developers and external security researchers will contribute to this effort.
How to write a fuzz target
With libFuzzer, you need to write only one function, which we call a target function or a fuzz target. It accepts a data buffer and length as input and then feeds it into the code we want to test. And... that’s it!
The fuzz targets are not specific to libFuzzer. Currently, we also run them with AFL, and we expect to use other fuzzing engines in the future.
Sample Fuzzer
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  std::string buf;
  woff2::WOFF2StringOut out(&buf);
  out.SetMaxSize(30 * 1024 * 1024);
  woff2::ConvertWOFF2ToTTF(data, size, &out);
  return 0;
}
See also the build rule.
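For orientation, the build rule for a Chromium fuzz target is typically only a few lines of GN. The snippet below is a rough sketch that assumes Chromium's fuzzer_test GN template and a woff2 decoder dependency; it is illustrative, not a copy of the actual rule linked above.

fuzzer_test("convert_woff2ttf_fuzzer") {
  # Hypothetical sketch: the template usage and the dependency path are assumptions.
  sources = [ "convert_woff2ttf_fuzzer.cc" ]
  deps = [ "//third_party/woff2:woff2_dec" ]
}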
Sample Bug
==9896==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62e000022836 at pc 0x000000499c51 bp 0x7fffa0dc1450 sp 0x7fffa0dc0c00
WRITE of size 41994 at 0x62e000022836 thread T0
SCARINESS: 45 (multi-byte-write-heap-buffer-overflow)
#0 0x499c50 in __asan_memcpy
#1 0x4e6b50 in Read third_party/woff2/src/buffer.h:86:7
#2 0x4e6b50 in ReconstructGlyf third_party/woff2/src/woff2_dec.cc:500
#3 0x4e6b50 in ReconstructFont third_party/woff2/src/woff2_dec.cc:917
#4 0x4e6b50 in woff2::ConvertWOFF2ToTTF(unsigned char const*, unsigned long, woff2::WOFF2Out*) third_party/woff2/src/woff2_dec.cc:1282
#5 0x4dbfd6 in LLVMFuzzerTestOneInput testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc:15:3
Check out our documentation for additional information.
Integrating LibFuzzer with ClusterFuzz
ClusterFuzz is Chromium’s infrastructure for large-scale fuzzing. It automates crash detection, report deduplication, test minimization, and other tasks. Once you commit a fuzz target into the Chromium codebase (examples), ClusterFuzz will automatically pick it up and fuzz it with libFuzzer and AFL.
ClusterFuzz supports most of the libFuzzer features, like dictionaries, seed corpora, and custom options for different fuzzers. Check out our Efficient Fuzzer Guide to learn how to use them.
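As a minimal sketch of how a dictionary is used when running a libFuzzer target by hand (ClusterFuzz passes these options automatically), the file name and tokens below are made up for illustration:

# woff2.dict -- libFuzzer/AFL dictionary format: keyword = quoted token, '#' starts a comment.
signature="wOF2"
glyf_table="glyf"

# Run the target locally with the dictionary and a seed corpus directory.
./convert_woff2ttf_fuzzer -dict=woff2.dict seed_corpus/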
Besides the initial seed corpus, we store, minimize, and synchronize the corpora for every fuzzer and across all bots. This allows us to continuously increase code coverage over time and find interesting bugs along the way.
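When running libFuzzer outside ClusterFuzz, the same kind of corpus minimization can be approximated with libFuzzer's merge mode; the directory names in this sketch are hypothetical:

# Copy into corpus/ only those inputs from new_inputs/ that add coverage.
./convert_woff2ttf_fuzzer -merge=1 corpus/ new_inputs/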
ClusterFuzz uses the following memory debugging tools with libFuzzer-based fuzzers:
AddressSanitizer (ASan): 500 GCE VMs
MemorySanitizer (MSan): 100 GCE VMs
UndefinedBehaviorSanitizer (UBSan): 100 GCE VMs
Sample Fuzzer Statistics
It’s important to track and analyze the performance of fuzzers, so we have a dashboard of fuzzer statistics that is accessible to all Chromium developers:
Overall statistics for the last 30 days:
120 fuzzers
112 bugs filed
Aaaaaand…. 14,366,371,459,772 unique test inputs!
Analysis of the bugs found so far
Looking at the 324 bugs found so far, we can say that ASan and MSan have been very effective memory tools for finding security vulnerabilities. They give us comparable numbers of crashes, though ASan crashes are usually more severe than MSan ones. LSan (part of ASan) and UBSan have a great impact on Stability, another of our 4 core principles.
Extending Chrome’s Vulnerability Reward Program
Under Chrome's Trusted Researcher Program, we invite submission of fuzzers. We run them for you on ClusterFuzz and automatically nominate bugs they find for reward payments.
Today we're pleased to announce that the invite-only Trusted Researcher Program is being replaced with the Chrome Fuzzer Program, which encourages fuzzer submissions from everyone and also covers libFuzzer-based fuzzers! Full guidelines are listed on Chrome’s Vulnerability Reward Program page.
New research: Zeroing in on deceptive software installations
August 4, 2016
Posted by Kurt Thomas, Research Scientist and Juan A. Elices Crespo, Software Engineer
As part of Google’s ongoing effort to protect users from unwanted software, we have been zeroing in on the deceptive installation tactics and actors that play a role in unwanted software delivery. This software includes unwanted ad injectors that insert unintended ads into webpages and browser settings hijackers that change search settings without user consent.
Every week, Google Safe Browsing generates over 60 million warnings to help users avoid installing unwanted software--that’s more than 3x the number of warnings we show for malware. Many of these warnings appear when users unwittingly download software bundles laden with several additional applications, a business model known as pay-per-install that earns up to $1.50 for each successful install. Recently, we finished the first in-depth investigation with NYU Tandon School of Engineering into multiple pay-per-install networks and the unwanted software families purchasing installs. The full report, which you can read here, will be presented next week at the USENIX Security Symposium.
Over a year-long period, we found four of the largest pay-per-install networks routinely distributed unwanted ad injectors, browser settings hijackers, and scareware flagged by over 30 anti-virus engines. These bundles were deceptively promoted through fake software updates, phony content lockers, and spoofed brands--techniques openly discussed on underground forums as ways to trick users into unintentionally downloading software and accepting the installation terms. While not all software bundles lead to unwanted software, critically, it takes only one deceptive party in a chain of web advertisements, pay-per-install networks, and application developers for abuse to manifest.
Behind the scenes of unwanted software distribution
Software bundle installation dialogue. Accepting the express install option will cause eight other programs to be installed with no indication of each program’s functionality.
If you have ever encountered an installation dialog like the one above, then you are already familiar with the pay-per-install distribution model. Behind the scenes there are a few different players:
Advertisers: In pay-per-install lingo, advertisers are software developers, including unwanted software developers, paying for installs via bundling. In our example above, these advertisers include Plus-HD and Vuupc, among others. The cost per install ranges anywhere from $0.10 in South America to $1.50 in the United States. Unwanted software developers recoup this cost via ad injection, selling search traffic, or levying subscription fees. During our investigation, we identified 1,211 advertisers paying for installs.
Affiliate networks: Affiliate networks serve as middlemen between advertisers looking to buy installs and popular software packages willing to bundle additional applications in return for a fee. These affiliate networks provide the core technology for tracking successful installs and billing. Additionally, they provide tools that attempt to thwart Google Safe Browsing or anti-virus detection. We spotted at least 50 affiliate networks fueling this business.
Publishers: Finally, popular software applications re-package their binaries to include several advertiser offers. Publishers are then responsible for getting users to download and install their software through whatever means possible: download portals, organic page traffic, or oftentimes deceptive ads. Our study uncovered 2,518 publishers distributing through 191,372 webpages.
This decentralized model encourages advertisers to focus solely on monetizing users upon installation and for publishers to maximize conversion, irrespective of the final user experience. It takes only one bad actor anywhere in the distribution chain for unwanted installs to manifest.
What gets bundled?
We monitored the offers bundled by four of the largest pay-per-install affiliate networks on a daily basis for over a year. In total, we collected 446K offers related to 843 unique software packages. The most commonly bundled software included unwanted ad injectors, browser settings hijackers, and scareware purporting to fix urgent issues with a victim’s machine for $30-40. Here’s an example of an ad injector impersonating an anti-virus alert to scam users into fixing non-existent system issues:
Deceptive practices
Taken as a whole, we found 59% of weekly offers bundled by pay-per-install affiliate networks were flagged by at least one anti-virus engine as potentially unwanted. In response, software bundles will first fingerprint a user’s machine prior to installation to detect the presence of “hostile” anti-virus engines. Furthermore, in response to protections provided by Google Safe Browsing, publishers have resorted to increasingly convoluted tactics to try to avoid detection, like the now-defunct technique shown below of password-protecting compressed binaries:
Paired with deceptive promotional tools like fake video codecs, software updates, or misrepresented brands, a multitude of deceptive behaviors are currently pervasive in software bundling.
Cleaning up the ecosystem
We are constantly improving Google Safe Browsing defenses and the Chrome Cleanup Tool to protect users from unwanted software installs. When it comes to our ads policy, we take quick action to block and remove advertisers who misrepresent downloads or distribute software that violates Google’s unwanted software policy.
Additionally, Google is pushing for real change from businesses involved in the pay-per-install market to address the deceptive practices of some participants. As part of this, Google recently hosted a Clean Software Summit bringing together members of the anti-virus industry, bundling platforms, and the Clean Software Alliance. Together, we laid the groundwork for an industry-wide initiative to provide users with clear choices when installing software and to block deceptive actors pushing unwanted installs. We continue to advocate on behalf of users to ensure they remain safe while downloading software online.
Adding YouTube and Calendar to the HTTPS Transparency Report
August 1, 2016
Posted by Emily Schechter, HTTPS Enthusiast
Earlier this year, we launched a new section of our Transparency Report dedicated to HTTPS encryption. This report shows how much traffic is encrypted for Google products and popular sites across the web. Today, we’re adding two Google products to the report: YouTube and Calendar. The traffic for both products is currently more than 90% encrypted via HTTPS.
Case study: YouTube
As we’ve implemented HTTPS across products over the years, we’ve worked through a wide variety of technical obstacles. Below are some of the challenges we faced during YouTube’s two-year road to HTTPS:
Lots of traffic! Our CDN, the Google Global Cache, serves a massive amount of video, and migrating it all to HTTPS is no small feat. Luckily, hardware acceleration for AES is widespread, so we were able to encrypt virtually all video serving without adding machines. (Yes, HTTPS is fast now.)
Lots of devices! You can watch YouTube videos on everything from flip phones to smart TVs. We A/B tested HTTPS on every device to ensure that users would not be negatively impacted. We found that HTTPS improved quality of experience on most clients: by ensuring content integrity, we virtually eliminated many types of streaming errors.
Lots of requests! Mixed content—any insecure request made in a secure context—poses a challenge for any large website or app. We get an alert when an insecure request is made from any of our clients, and eventually we will block all mixed content using Content Security Policy on the web, App Transport Security on iOS, and usesCleartextTraffic on Android. Ads on YouTube have used HTTPS since 2014.
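To make these mechanisms concrete, mixed content can be blocked or upgraded with standard Content Security Policy directives, and cleartext HTTP can be disallowed in an Android app's manifest. The snippets below are generic examples, not YouTube's actual configuration:

Content-Security-Policy: block-all-mixed-content
Content-Security-Policy: upgrade-insecure-requests

<!-- AndroidManifest.xml (illustrative): refuse cleartext HTTP app-wide -->
<application android:usesCleartextTraffic="false"> ... </application>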
We're also proud to be using HTTP Strict Transport Security (HSTS) on youtube.com to cut down on HTTP-to-HTTPS redirects. This improves both security and latency for end users. Our HSTS lifetime is one year, and we hope to preload this in web browsers soon.
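For reference, a one-year HSTS policy like the one described is expressed with a single response header; the optional includeSubDomains and preload tokens shown here are illustrative and not necessarily what youtube.com sends:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload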
97% for YouTube is pretty good, but why isn't YouTube at 100%? In short, some devices do not fully support modern HTTPS. Over time, to keep YouTube users as safe as possible, we will gradually phase out insecure connections.
We know that any non-secure HTTP traffic could be vulnerable to attackers. All websites and apps should be protected with HTTPS — if you’re a developer that hasn’t yet migrated, get started today.