The latest news and insights from Google on security and safety on the Internet
OnHub: Powerful protection for peace of mind
September 27, 2016
Posted by Chris Millikin, Public Defender (Security Engineering Manager)
[Cross-posted from the Official OnHub Blog]
Since OnHub launched, we've highlighted a variety of features that enable users to do the things they love online without having to deal with the annoying router issues that we've all experienced at one time or another. These include fast, reliable Wi-Fi for more than 100 devices at a time, easy streaming and sharing, and wide-ranging coverage that helps eliminate dead zones.
We haven't, however, highlighted one of OnHub's most powerful features: industry-leading security. Your router is the first line of defense for your online world. Because bad actors are aware of the critical position routers occupy in the network, routers are frequently the target of security attacks.
OnHub’s security features go beyond those of the typical router: OnHub is hardened against a variety of attacks, protecting your home network from many online threats. Three features in particular help ensure OnHub protects your data and devices from a variety of threats.
Three security features that set OnHub apart
1. Defense in Depth
There are many elements that go into creating a robust defense in depth.
Auto updates: OnHub regularly downloads automatic updates without you having to do anything--a long-established practice on mobile devices and software like Chrome, but one that appliances haven't caught up with yet. These updates provide regular maintenance fixes and address critical vulnerabilities. They're like the seatbelts of online security: internet security experts recommend that users always accept updates. However, when updates don't happen automatically, many people don't bother. OnHub communicates directly with Google, and makes sure all software is signed and verified. For instance, when a vulnerability was found in a widely used software library earlier this year, we were able to update OnHub's entire fleet of devices within just a few days. In comparison, the vast majority of other routers require active user intervention to protect against such threats.
Verified Boot: Verified Boot protects you by preventing compromised OnHubs from booting. We use this technology in Chromebooks, strictly enforce it in Android Nougat, and implemented it in OnHub from the very beginning. This makes OnHub extremely difficult to attack or compromise: the device runs only software that has been cryptographically signed by Google.
Cloud administration: A traditional router is commonly attacked through its local administration web interface, where attackers have taken advantage of exploits to remotely take control and change critical settings like DNS, so we eliminated that interface from the beginning. Instead, OnHub is managed through the cloud, with strong authentication and authorization, using a simple phone app. A read-only API is available only on the internal network, to provide important data to the OnHub app during setup and when troubleshooting.
Process isolation: We also layer multiple techniques such as process isolation (uid/gid separation, namespaces, capability whitelists) and seccomp filtering to isolate network-facing services, which helps reduce potential attack scenarios in a given application by preventing an attacker from moving laterally through the system.
2. Hardware Provenance
Modern hardware devices include many types of chips, drivers, and firmware. It’s important to know what each part is doing and where it came from. Our security team works to track the origins of all hardware, software, and firmware that goes into OnHub, including those from third-party sources. If a vulnerability is ever found, OnHub security works to fix the problem immediately.
The same goes for the open source components of OnHub. Before shipping, we do comprehensive code reviews of critical attack surfaces (i.e., network-facing daemons), looking for security vulnerabilities. For example, we reviewed miniupnpd, hostapd, and dnsmasq. As a result of those reviews, Google reported security bugs to the open source project maintainers and offered patches.
3. Cloud Intelligence
We use anonymized metrics from our fleet of OnHubs to quickly detect and counter potential threats. For example, since we know that DNS is often a target of attacks, we monitor DNS settings on all OnHub routers for activity that could indicate a security compromise. This is “cloud intelligence” – a benefit that Google is uniquely able to deliver. By connecting OnHub to the Google cloud, we provide the same level of protection you expect across all your Google apps and devices. Because you manage your router through the cloud using your secure Google identity, you don’t have to remember yet another password for managing your OnHub, and you don’t have to be at home to control it.
Security Improvements, Automatically
OnHub also participates in Google's Vulnerability Reward Program, which started in 2010 to honor all of the cutting-edge external contributions that help us keep our users safe. Through this program, if you can find a qualifying bug in OnHub's security, rewards range from $100 to $20,000. The program rules include an outline of the rewards for the most common classes of bugs.
When it comes to security, not all routers are created equal. OnHub protects you and your network with security that continues to adapt to threats. We're always improving OnHub security, and updates install automatically without users having to take any action. As cybersecurity evolves and new threats emerge, OnHub will be ready to meet the latest challenges for years to come.
Reshaping web defenses with strict Content Security Policy
September 26, 2016
Posted by Artur Janc, Michele Spagnuolo, Lukas Weichselbaum, and David Ross, Information Security Engineers
Cross-site scripting (XSS) — the ability to inject undesired scripts into a trusted web application — has been one of the top web security vulnerabilities for over a decade. Just in the past 2 years Google has awarded researchers over $1.2 million for reporting XSS bugs in our applications via the
Vulnerability Reward Program
. Modern web technologies such as
strict contextual auto-escaping
help developers avoid mistakes which lead to XSS, and automated scanners can catch classes of vulnerabilities during the testing process. However, in complex applications bugs inevitably slip by, allowing attacks ranging from harmless pranks to malicious attacks on users' accounts and data.
Content Security Policy (CSP) is a mechanism designed to step in precisely when such bugs happen; it provides developers the ability to restrict which scripts are allowed to execute so that even if attackers can inject HTML into a vulnerable page, they should not be able to load malicious scripts and other types of resources. CSP is a flexible tool allowing developers to set a wide range of policies; it is supported — though not always in its entirety — by all modern browsers.
However, the flexibility of CSP also leads to its biggest problem: it makes it easy to set policies which appear to work, but offer no real security benefit. In a
recent Internet-wide study
we analyzed over 1 billion domains and found that 95% of deployed CSP policies are ineffective as a protection against XSS. One of the underlying reasons is that out of the 15 domains most commonly whitelisted by developers for loading external scripts as many as 14 expose patterns which allow attackers to bypass CSP protections. We believe it's important to improve this, and help the web ecosystem make full use of the potential of CSP.
Towards safer CSP policies
To help developers craft policies which meaningfully protect their applications, today we're releasing the CSP Evaluator, a tool to visualize the effect of setting a policy and detect subtle misconfigurations. CSP Evaluator is used by security engineers and developers at Google to make sure policies provide a meaningful security benefit and cannot be subverted by attackers.
Even with such a helpful tool, building a safe script whitelist for a complex application is often all but impossible due to the number of popular domains with resources that allow CSP to be bypassed. Here’s where the idea of a nonce-based CSP policy comes in. Instead of whitelisting all allowed script locations, it’s often simpler to modify the application to prove that a script is trusted by the developer by giving it a nonce -- an unpredictable, single-use token which has to match a value set in the policy:
Content-Security-Policy: script-src 'nonce-random123'
<script nonce='random123'>alert('This script will run')</script>
<script>alert('Will not run: missing nonce')</script>
<script nonce='bad123'>alert("Won't run: invalid nonce")</script>
With 'strict-dynamic', a part of the upcoming CSP3 specification already supported by Chrome and Opera (and coming soon to Firefox), adopting such policies in complex, modern applications becomes much easier. Developers can now set a single, short policy such as:
script-src 'nonce-random123' 'strict-dynamic'; object-src 'none'
and make sure that all static <script> elements contain a matching nonce attribute — in many cases this is all that's needed to enjoy added protection against XSS, since 'strict-dynamic' will take care of loading any trusted scripts added at runtime. This approach allows setting policies which work with all CSP-aware browsers and with applications which already use a traditional CSP policy; it also simplifies the process of adopting CSP and doesn't require changing the policy as the application evolves.
Adopting strict CSP
In the past months we've deployed this approach in several large Google applications, and are working on many more. We believe this approach can also help other developers, so today we're publishing documentation discussing the
best strategies for implementing CSP
, including an overview of the
benefits of CSP
, sample policies, and examples of common pitfalls. Further, today we're releasing a companion Chrome extension. As with the CSP Evaluator, we use the extension with our applications to help speed up the process of adopting nonce-based CSP policies across Google.
Encouraging broader use of strict CSP
Finally, today we’re including CSP adoption efforts in the scope of the
Patch Reward Program
; proactive work to help make popular open-source web frameworks compatible with nonce-based CSP can qualify for rewards (but please read the
CSP refactoring tips
first). We hope that increased attention to this area will also encourage researchers to find new, creative ways to circumvent CSP restrictions, and help us further improve the mechanism so that we can better protect Internet users from web threats.
To reach out to us, email firstname.lastname@example.org.
Even More Safe Browsing on Android!
September 15, 2016
Posted by Stephan Somogyi, Safe Browsing Team & William Luh, Android Security Team
During Google I/O in June, we announced that we were going to make a device-local Safe Browsing API available to all Android developers later in the year. That time has come!
Starting with Google Play Services version 9.4, all Android developers can use our privacy-preserving, network-efficient, and power-efficient on-device Safe Browsing infrastructure to protect all of their apps' users. Even better, the API is simple and straightforward to use.
Since we introduced client-side Safe Browsing on Android, updated our
documentation for Safe Browsing Protocol Version 4
(pver4), and also released our
reference pver4 implementation in Go
, we’ve been able to see how much protection this new technology provides to all our users. Since
our initial launch
we’ve shown hundreds of millions of warnings, actively warning many millions of mobile users about badness before they’re exposed to it.
We look forward to all Android developers extending this same protection to their users, too.
Moving towards a more secure web
September 8, 2016
Posted by Emily Schechter, Chrome Security Team
To help users browse the web safely, Chrome indicates connection security with an icon in the address bar. Historically, Chrome has not explicitly labelled HTTP connections as non-secure. Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure.
Chrome currently indicates HTTP connections with a neutral indicator. This doesn't reflect the true lack of security for HTTP connections. When you load a website over HTTP, someone else on the network can look at or tamper with the site before it gets to you.
A substantial portion of web traffic has transitioned to HTTPS so far, and HTTPS usage is consistently increasing. We recently hit a milestone with more than half of Chrome desktop page loads now served over HTTPS. In addition, since the time we
released our HTTPS report
in February, 12 more of the top 100 websites have changed their serving default from HTTP to HTTPS.
Studies show that users do not perceive the lack of a “secure” icon as a warning, but also that users become blind to warnings that occur too frequently. Our plan to label HTTP sites more clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria. Starting January 2017, Chrome 56 will label HTTP pages with password or credit card form fields as "not secure," given their particularly sensitive nature.
In following releases, we will continue to extend HTTP warnings, for example, by labelling HTTP pages as “not secure” in Incognito mode, where users may have higher expectations of privacy. Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS.
We will publish updates to this plan as we approach future releases, but don't wait to get started moving to HTTPS. HTTPS is easier and cheaper than ever before, and enables both the best performance the web offers and powerful new features that are too sensitive for HTTP. Check out our set-up guides to get started.
Keeping Android safe: Security enhancements in Nougat
September 6, 2016
Posted by Xiaowen Xin, Android Security Team
[Cross-posted from the Android Developers Blog]
Over the course of the summer, we previewed a variety of security enhancements in Android 7.0 Nougat: an increased focus on security with our
vulnerability rewards program
, a new
Direct Boot mode
, re-architected mediaserver and
hardened media stack
, apps that are protected from
accidental regressions to cleartext traffic
, an update to the way Android handles
trusted certificate authorities
strict enforcement of verified boot with error correction, and
updates to the Linux kernel to reduce the attack surface and increase memory protection.
Now that Nougat has begun to roll out, we wanted to recap these updates in a single overview and highlight a few new improvements.
Direct Boot and encryption
In previous versions of Android, users with encrypted devices would have to enter their PIN/pattern/password by default during the boot process to decrypt their storage area and finish booting. With Android 7.0 Nougat, we've updated the underlying encryption scheme and streamlined the boot process to speed up rebooting your phone. Now your phone's main features, like the phone app and your alarm clock, are ready right away before you even type your PIN, so people can call you and your alarm clock can wake you up. We call this feature Direct Boot.
Under the hood, file-based encryption enables this improved user experience. With this new encryption scheme, the system storage area, as well as each user profile storage area, are all encrypted separately. Unlike full-disk encryption, where all data was encrypted as a single unit, per-profile encryption enables the system to reboot normally into a functional state using just device keys. Essential apps can opt in to run in a limited state after reboot, and when you enter your lock screen credential, these apps then get access to your user data to provide full functionality.
File-based encryption better isolates and protects individual users and profiles on a device by encrypting data at a finer granularity. Each profile is encrypted using a unique key that can only be unlocked by your PIN or password, so that your data can only be decrypted by you.
Encryption support is getting stronger across the Android ecosystem as well. Starting with Marshmallow, all capable devices were required to support encryption. Many devices, like Nexus 5X and 6P also use unique keys that are accessible only with trusted hardware, such as the ARM TrustZone. Now with 7.0 Nougat, all new capable Android devices must also have this kind of hardware support for key storage and provide brute force protection while verifying your lock screen credential before these keys can be used. This way, all of your data can only be decrypted on that exact device and only by you.
The media stack and platform hardening
In Android Nougat, we've both hardened and re-architected mediaserver, one of the main system services that processes untrusted input. First, by incorporating integer overflow sanitization, part of Clang's UndefinedBehaviorSanitizer, we prevent an entire class of vulnerabilities, which comprise the majority of reported libstagefright bugs. As soon as an integer overflow is detected, we shut down the process so an attack is stopped. Second, we've modularized the media stack to put different components into individual sandboxes and tightened the privileges of each sandbox to the minimum required to perform its job. With this containment technique, a compromise in many parts of the stack grants the attacker access to significantly fewer permissions and a significantly reduced kernel attack surface.
In addition to hardening the mediaserver, we’ve added a large list of protections for the platform, including:
Verified Boot: Verified Boot is now strictly enforced to prevent compromised devices from booting; it supports error correction to improve reliability against non-malicious data corruption.
SELinux: Updated SELinux configuration and increased Seccomp coverage further locks down the application sandbox and reduces attack surface.
Library load order randomization and improved ASLR: Increased randomness makes some code-reuse attacks less reliable.
Kernel hardening: Added additional memory protection for newer kernels by marking portions of kernel memory as read-only, restricting kernel access to userspace addresses, and further reducing the existing attack surface.
APK signature scheme v2: Introduced a whole-file signature scheme that improves verification speed and strengthens integrity guarantees.
App security improvements
Android Nougat is the safest and easiest version of Android for application developers to use.
Apps that want to share data with other apps now must explicitly opt in by offering their files through a Content Provider. The application private directory (usually /data/data/) is now set to Linux permission 0700 for apps targeting API Level 24+.
To make it easier for apps to control access to their secure network traffic, user-installed certificate authorities and those installed through Device Admin APIs are
no longer trusted by default
for apps targeting API Level 24+. Additionally, all new Android devices must ship with the same trusted CA store. With the new Network Security Config, developers can more easily configure network security policy through a declarative configuration file. This includes blocking cleartext traffic, configuring the set of trusted CAs and certificates, and setting up a separate debug configuration.
We’ve also continued to refine app permissions and capabilities to protect you from potentially harmful apps.
To improve device privacy, we have further restricted and removed access to persistent device identifiers such as MAC addresses.
User interface overlays can no longer be displayed on top of permissions dialogs. This “clickjacking” technique was used by some apps to attempt to gain permissions improperly.
We've reduced the power of device admin applications so they can no longer change your lockscreen if you have a lockscreen set, and device admins will no longer be notified of impending disable. These were tactics used by some ransomware to gain control of a device.
Lastly, we've made significant enhancements to the OTA update system to keep your device up to date much more easily with the latest system software and security patches. We've made the install time for OTAs faster, and the OTA size smaller for security updates. You no longer have to wait for the optimizing apps step, which was one of the slowest parts of the update process, because the new JIT compiler has been optimized to make installs and updates lightning fast.
The update experience is even faster for new Android devices running Nougat with updated firmware. As on Chromebooks, updates are applied in the background while the device continues to run normally. These updates are applied to a different system partition, and when you reboot, the device seamlessly switches to the new partition running the new system software version.
We're constantly working to improve Android security, and Android Nougat brings significant security improvements across all fronts. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android.
More Safe Browsing Help for Webmasters
September 6, 2016
Posted by Kelly Hope Harrington, Safe Browsing Team
For years, Safe Browsing has helped webmasters via Search Console with information about how to fix security issues with their sites. This includes relevant Help Center articles, example URLs to assist in diagnosing the presence of harmful content, and a process for webmasters to request reviews of their site after security issues are addressed. Over time, Safe Browsing has expanded its protection to cover additional threats to user safety, such as unwanted software and social engineering.
To help webmasters be even more successful in resolving issues, we’re happy to announce that we’ve updated the information available in Search Console in the Security Issues report.
The updated information provides more specific explanations of six different security issues detected by Safe Browsing. These explanations give webmasters more context and detail about what Safe Browsing found. We also offer tailored recommendations for each type of issue, including sample URLs that webmasters can check to identify the source of the issue, as well as specific remediation actions webmasters can take to resolve the issue.
We on the Safe Browsing team recommend registering your site in Search Console even if it is not currently experiencing a security issue. We send notifications through Search Console so webmasters can address any issues that appear as quickly as possible.
Our goal is to help webmasters provide a safe and secure browsing experience for their users. We welcome any questions or feedback about the new features on the
Google Webmaster Help Forum
and Google employees are available to help.
For more information about Safe Browsing’s ongoing work to shine light on the state of web security and encourage safer web security practices, check out our summary of trends and findings on the
Safe Browsing Transparency Report
. If you're interested in the tools Google provides for webmasters and developers dealing with hacked sites, our help resources provide a great overview.
Guided in-process fuzzing of Chrome components
August 5, 2016
Posted by Max Moroz, Chrome Security Engineer, and Kostya Serebryany
In the past, we've posted about innovations in fuzzing, a software testing technique used to discover coding errors and security vulnerabilities. Today we'd like to talk about libFuzzer (part of the LLVM project), an engine for in-process, coverage-guided, white-box fuzzing.
By in-process, we mean that we don't launch a new process for every test case, and that we mutate inputs directly in memory.
By coverage-guided, we mean that we measure code coverage for every input, and accumulate test cases that increase overall coverage.
By white-box, we mean that we use compile-time instrumentation of the source code.
LibFuzzer makes it possible to fuzz individual components of Chrome. This means you don’t need to generate an HTML page or network payload and launch the whole browser, which adds overhead and flakiness to testing. Instead, you can fuzz any function or internal API directly. Based on our experience, libFuzzer-based fuzzing is extremely efficient, more reliable, and usually thousands of times faster than traditional out-of-process fuzzing.
Our goal is to have fuzz testing for every component of Chrome where fuzzing is applicable, and we hope all Chromium developers and external security researchers will contribute to this effort.
How to write a fuzz target
With libFuzzer, you need to write only one function, which we call a target function or a fuzz target. It accepts a data buffer and length as input and then feeds it into the code we want to test. And... that’s it!
The fuzz targets are not specific to libFuzzer. Currently, we also run them with AFL, and we expect to use other fuzzing engines in the future. Here's a sample report from one of our fuzz targets, a heap-buffer-overflow in the WOFF2 font library:
ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62e000022836 at pc 0x000000499c51 bp 0x7fffa0dc1450 sp 0x7fffa0dc0c00
WRITE of size 41994 at 0x62e000022836 thread T0
SCARINESS: 45 (multi-byte-write-heap-buffer-overflow)
#0 0x499c50 in __asan_memcpy
#1 0x4e6b50 in Read third_party/woff2/src/buffer.h:86:7
#2 0x4e6b50 in ReconstructGlyf third_party/woff2/src/woff2_dec.cc:500
#3 0x4e6b50 in ReconstructFont third_party/woff2/src/woff2_dec.cc:917
#4 0x4e6b50 in woff2::ConvertWOFF2ToTTF(unsigned char const*, unsigned long, woff2::WOFF2Out*) third_party/woff2/src/woff2_dec.cc:1282
#5 0x4dbfd6 in LLVMFuzzerTestOneInput testing/libfuzzer/fuzzers/convert_woff2ttf_fuzzer.cc:15:3
Integrating LibFuzzer with ClusterFuzz
ClusterFuzz is Chromium's infrastructure for large-scale fuzzing. It automates crash detection, report deduplication, test minimization, and other tasks. Once you commit a fuzz target into the Chromium codebase, ClusterFuzz will automatically pick it up and fuzz it with libFuzzer and AFL.
ClusterFuzz supports most of the libFuzzer features like dictionaries, seed corpus and custom options for different fuzzers. Check out our
Efficient Fuzzer Guide
to learn how to use them.
Besides the initial seed corpus, we store, minimize, and synchronize the corpora for every fuzzer and across all bots. This allows us to continuously increase code coverage over time and find interesting bugs along the way.
ClusterFuzz uses the following memory debugging tools with libFuzzer-based fuzzers:
AddressSanitizer (ASan): 500 GCE VMs
MemorySanitizer (MSan): 100 GCE VMs
UndefinedBehaviorSanitizer (UBSan): 100 GCE VMs
Sample Fuzzer Statistics
It's important to track and analyze the performance of fuzzers, so we have a dashboard, accessible to all Chromium developers, to track fuzzer statistics:
Overall statistics for the last 30 days:
14,366,371,459,772 unique test inputs!
Analysis of the bugs found so far
Looking at the bugs found so far, we can say that ASan and MSan have been very effective memory tools for finding security vulnerabilities. They give us comparable numbers of crashes, though ASan crashes are usually more severe than MSan ones. LSan (part of ASan) and UBSan have a great impact on Stability, another of our 4 core principles.
Extending Chrome’s Vulnerability Reward Program
Under Chrome's Trusted Researcher Program, we invited submissions of fuzzers, ran them on ClusterFuzz, and automatically nominated the bugs they found for reward payments.
Today we're pleased to announce that the invite-only Trusted Researcher Program is being replaced with the Chrome Fuzzer Program, which encourages fuzzer submissions from all, and also covers libFuzzer-based fuzzers! Full guidelines are listed on Chrome's Vulnerability Reward Program page.
New research: Zeroing in on deceptive software installations
August 4, 2016
Posted by Kurt Thomas, Research Scientist and Juan A. Elices Crespo, Software Engineer
As part of Google’s ongoing effort to
protect users from unwanted software
, we have been zeroing in on the deceptive installation tactics and actors that play a role in unwanted software delivery. This software includes
unwanted ad injectors
that insert unintended ads into webpages and
browser settings hijackers
that change search settings without user consent.
Every week, Google Safe Browsing generates over 60 million warnings to help users avoid installing unwanted software--that’s more than 3x the number of warnings we show for malware. Many of these warnings appear when users unwittingly download software bundles laden with several additional applications, a business model known as pay-per-install that earns up to $1.50 for each successful install. Recently, we finished the first in-depth investigation with
NYU Tandon School of Engineering
into multiple pay-per-install networks and the unwanted software families purchasing installs. The full report will be presented next week at the USENIX Security Symposium.
Over a year-long period, we found four of the largest pay-per-install networks routinely distributed unwanted ad injectors, browser settings hijackers, and scareware flagged by over 30 anti-virus engines. These bundles were deceptively promoted through fake software updates, phony content lockers, and spoofed brands--techniques openly discussed on underground forums as ways to trick users into unintentionally downloading software and accepting the installation terms. While not all software bundles lead to unwanted software, critically, it takes only one deceptive party in a chain of web advertisements, pay-per-install networks, and application developers for abuse to manifest.
Behind the scenes of unwanted software distribution
Software bundle installation dialogue. Accepting the express install option will cause eight other programs to be installed with no indication of each program’s functionality.
If you have ever encountered an installation dialog like the one above, then you are already familiar with the pay-per-install distribution model. Behind the scenes there are a few different players:
Advertisers: In pay-per-install lingo, advertisers are software developers, including unwanted software developers, paying for installs via bundling. In our example above, these advertisers include Plus-HD and Vuupc among others. The cost per install ranges anywhere from $0.10 in South America to $1.50 in the United States. Unwanted software developers will recoup this loss via ad injection, selling search traffic, or levying subscription fees. During our investigation, we identified 1,211 advertisers paying for installs.
Affiliate networks: Affiliate networks serve as middlemen between advertisers looking to buy installs and popular software packages willing to bundle additional applications in return for a fee. These affiliate networks provide the core technology for tracking successful installs and billing. Additionally, they provide tools that attempt to thwart Google Safe Browsing or anti-virus detection. We spotted at least 50 affiliate networks fueling this business.
Publishers: Finally, popular software applications re-package their binaries to include several advertiser offers. Publishers are then responsible for getting users to download and install their software through whatever means possible: download portals, organic page traffic, or often times deceptive ads. Our study uncovered 2,518 publishers distributing through 191,372 webpages.
This decentralized model encourages advertisers to focus solely on monetizing users upon installation and for publishers to maximize conversion, irrespective of the final user experience. It takes only one bad actor anywhere in the distribution chain for unwanted installs to manifest.
What gets bundled?
We monitored the offers bundled by four of the largest pay-per-install affiliate networks on a daily basis for over a year. In total, we collected 446K offers related to 843 unique software packages. The most commonly bundled software included unwanted ad injectors, browser settings hijackers, and scareware purporting to fix urgent issues with a victim’s machine for $30-40. Here’s an example of an ad injector impersonating an anti-virus alert to scam users into fixing non-existent system issues:
Taken as a whole, we found 59% of weekly offers bundled by pay-per-install affiliate networks were flagged by at least one anti-virus engine as potentially unwanted. In response, software bundlers will first fingerprint a user’s machine prior to installation to detect the presence of “hostile” anti-virus engines. Furthermore, in response to protections provided by Google Safe Browsing, publishers have resorted to increasingly convoluted tactics to try to avoid detection, like the defunct technique shown below of password-protecting compressed binaries:
Paired with deceptive promotional tools like fake video codecs, software updates, or misrepresented brands, a multitude of deceptive behaviors are currently pervasive in software bundling.
Cleaning up the ecosystem
We are constantly improving Google Safe Browsing defenses and the Chrome Cleanup Tool to protect users from unwanted software installs. We take quick action to block and remove advertisers who misrepresent downloads or distribute software that violates Google’s unwanted software policy.
Additionally, Google is pushing for real change from businesses involved in the pay-per-install market to address the deceptive practices of some participants. As part of this, Google recently hosted a Clean Software Summit bringing together members of the anti-virus industry, bundling platforms, and the
Clean Software Alliance
. Together, we laid the groundwork for an industry-wide initiative to provide users with clear choices when installing software and to block deceptive actors pushing unwanted installs. We continue to advocate on behalf of users to ensure they remain safe while downloading software online.
Adding YouTube and Calendar to the HTTPS Transparency Report
August 1, 2016
Posted by Emily Schechter, HTTPS Enthusiast
Earlier this year, we launched a new section of our Transparency Report dedicated to HTTPS encryption. This report shows how much traffic is encrypted for Google products and popular sites across the web. Today, we’re adding two Google products to the report: YouTube and Calendar. The traffic for both products is currently more than 90% encrypted via HTTPS.
Case study: YouTube
As we’ve implemented HTTPS across products over the years, we’ve worked through a wide variety of technical obstacles. Below are some of the challenges we faced during
YouTube’s two-year road to HTTPS.
Lots of traffic!
Our CDN, the
Google Global Cache
, serves a massive amount of video, and migrating it all to HTTPS is no small feat. Luckily, hardware acceleration for AES is widespread, so we were able to encrypt virtually all video serving without adding machines. (Yes,
HTTPS is fast now.)
Lots of devices!
You can watch YouTube videos on everything from flip phones to smart TVs. We A/B tested HTTPS on every device to ensure that users would not be negatively impacted. We found that HTTPS improved quality of experience on most clients: by ensuring content integrity, we virtually eliminated many types of streaming errors.
Lots of requests!
Mixed content—any insecure request made in a secure context—poses a challenge for any large website or app. We get an alert when an insecure request is made from any of our clients and eventually will block all mixed content using
Content Security Policy
on the web,
App Transport Security on iOS
on Android. Ads on YouTube have used HTTPS
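The mixed-content blocking mentioned above can be expressed as a single Content Security Policy directive in a response header. The post doesn’t quote YouTube’s exact policy, so this is a minimal illustrative form:

```http
Content-Security-Policy: block-all-mixed-content
```

A related directive, `upgrade-insecure-requests`, instructs the browser to rewrite insecure subresource URLs to HTTPS rather than block them.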
We're also proud to be using
HTTP Strict Transport Security (HSTS)
on youtube.com to cut down on HTTP to HTTPS redirects. This improves both security and latency for end users. Our HSTS lifetime is one year, and we hope to preload this soon in web browsers.
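An HSTS policy with the one-year lifetime described above is delivered as one response header. The exact header youtube.com sends is not quoted in the post, so the following is an illustrative example; note that browser preload lists generally require both the `includeSubDomains` and `preload` tokens:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```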
97% for YouTube is pretty good, but why isn't YouTube at 100%? In short, some devices do not fully support modern HTTPS. Over time, to keep YouTube users as safe as possible, we will gradually phase out insecure connections.
We know that any non-secure HTTP traffic could be vulnerable to attackers. All websites and apps should be protected with HTTPS — if you’re a developer that hasn’t yet migrated,
Bringing HSTS to www.google.com
July 29, 2016
Posted by Jay Brown, Sr. Technical Program Manager, Security
For many years, we’ve worked to increase the use of encryption between our users and Google. Today, the
vast majority of these connections
are encrypted, and our work continues on this effort.
To further protect users, we've taken another step to strengthen how we use encryption for data in transit by implementing HTTP Strict Transport Security, or HSTS for short, on the www.google.com domain. HSTS prevents people from accidentally navigating to HTTP URLs by automatically converting insecure HTTP URLs into secure HTTPS URLs. Users might navigate to these HTTP URLs by manually typing a protocol-less or HTTP URL in the address bar, or by following HTTP links from other websites.
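The automatic conversion works roughly like the sketch below: once a browser has seen the HSTS header for a host, it upgrades any HTTP URL for that host to HTTPS before a request ever leaves the machine. This is a simplified model, not Chrome’s implementation; the host set and function name are illustrative:

```python
# Simplified model of a browser's HSTS-driven URL upgrade. The set stands
# in for the browser's HSTS cache of hosts that previously sent the header.
from urllib.parse import urlsplit, urlunsplit

hsts_hosts = {"www.google.com"}  # hypothetical cached HSTS hosts

def upgrade_url(url: str) -> str:
    """Rewrite an insecure HTTP URL to HTTPS when the host is HSTS-known."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url

print(upgrade_url("http://www.google.com/search?q=hsts"))
# https://www.google.com/search?q=hsts
```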
Preparing for launch
Ordinarily, implementing HSTS is a relatively basic process. However, due to Google's particular complexities, we needed to do some extra prep work that most other domains wouldn't have needed to do. For example, we had to address bad HREFs, redirects to HTTP, and other issues like updating legacy services, any of which could cause problems for users as they try to access our core domain.
This process wasn’t without its pitfalls. Perhaps most memorably, we accidentally broke Google’s Santa Tracker just before Christmas last year (don’t worry — we fixed it before Santa and his reindeer made their trip).
Deployment and next steps
We’ve turned on HSTS for www.google.com, but some work remains on our deployment checklist.
In the immediate term, we’re focused on increasing the duration that the header is active (‘max-age’). We've initially set the header’s max-age to one day; the short duration helps mitigate the risk of any potential problems with this roll-out. By increasing the max-age, however, we reduce the likelihood that an initial request to www.google.com happens over HTTP. Over the next few months, we will ramp up the max-age of the header to at least one year.
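The ramp-up amounts to changing one number in the header: one day is 86,400 seconds and one year is 31,536,000 seconds. An illustrative form of the initial, conservative header:

```http
Strict-Transport-Security: max-age=86400
```

Once the deployment proves stable, raising `max-age` toward 31536000 means returning users stay pinned to HTTPS for a full year after their last visit.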
Encrypting data in transit helps keep our users and their data secure. We’re excited to be implementing HSTS and will continue to extend it to more domains and Google products in the coming months.