The latest news and insights from Google on security and safety on the Internet
Updates to Google Safe Browsing’s Site Status Tool
March 29, 2017
Posted by Deeksha Padma Prasad and Allison Miller, Safe Browsing
Google Safe Browsing gives users tools to help protect themselves from web-based threats like malware, unwanted software, and social engineering. We are best known for our warnings, which users see when they attempt to navigate to dangerous sites or download dangerous files. We also provide other tools, like the Site Status Tool, where people can check the current safety status of a web page without having to visit it.
We host this tool within Google’s Safe Browsing Transparency Report. As with other sections of the Transparency Report, we make this data available to give the public more visibility into the security and health of the online ecosystem. Users of the Site Status Tool input a webpage (as a URL, website, or domain), and the most recent results of the Safe Browsing analysis for that webpage are returned, plus references to troubleshooting help and educational materials.
We’ve just launched a new version of the Site Status Tool that provides simpler, clearer results and is better designed for the primary users of the page: people who arrive at the tool from a Safe Browsing warning they’ve received, or who are doing casual research on Google’s malware and phishing detection. The tool now features a cleaner UI, easier-to-interpret language, and more precise results. We’ve also moved some of the more technical data on associated ASes (autonomous systems) over to the malware dashboard section of the report.
While the interface has been streamlined, the additional diagnostic information is not gone: researchers who wish to find more details can drill down elsewhere in Safe Browsing’s Transparency Report, while site owners can find additional diagnostic information in their webmaster tools. One of the goals of the Transparency Report is to shed light on complex policy and security issues, so we hope the design adjustments will provide our users with additional clarity.
Reassuring our users about government-backed attack warnings
March 24, 2017
Posted by Shane Huntley, Google Threat Analysis Group
We warn our users when we believe their Google accounts are being targeted by government-backed attackers.
We send these warnings out of an abundance of caution: the notice does not necessarily mean that the account has been compromised or that there is a widespread attack. Rather, it reflects our assessment that a government-backed attacker has likely attempted to access the user’s account or computer, through phishing or malware, for example. You can read more about these warnings.
To secure some of the details of our detection, we often send a batch of warnings to groups of at-risk users at the same time, and not necessarily in real time. Additionally, we never indicate which government-backed attackers we think are responsible for the attempts; different users may be targeted by different attackers.
Security has always been a top priority for us. Robust, automated protections help prevent scammers from signing into your Google account; Gmail always uses an encrypted connection when you receive or send email; we filter more than 99.9% of spam, a common source of phishing messages, from Gmail; and we show users when messages are from an unverified or unencrypted source.
An extremely small fraction of users will ever see one of these warnings, but if you receive one from us, it’s important to take action on it. You can always take a two-minute Security Checkup, and for maximum protection from phishing, enable two-step verification with a Security Key.
Diverse protections for a diverse ecosystem: Android Security 2016 Year in Review
March 22, 2017
Posted by Adrian Ludwig & Mel Miller, Android Security Team
Today, we’re sharing the third annual Android Security Year In Review, a comprehensive look at our work to protect more than 1.4 billion Android users and their data.
Our goal is simple: keep users safe. In 2016, we improved our abilities to stop dangerous apps, built new security features into Android 7.0 Nougat, and collaborated with device manufacturers, researchers, and other members of the Android ecosystem. For more details, you can read the full Year in Review report or watch our video overview.
Protecting users from PHAs
It’s critical to keep people safe from Potentially Harmful Apps (PHAs) that may put their data or devices at risk. Our ongoing work in this area requires us to find ways to track and stop existing PHAs, and anticipate new ones that haven’t even emerged yet.
Over the years, we’ve built a variety of systems to address these threats, such as application analyzers that constantly review apps for unsafe behavior, and Verify Apps which regularly checks users’ devices for PHAs. When these systems detect PHAs, we warn users, suggest they think twice about downloading a particular app, or even remove the app from their devices entirely.
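The detect-then-respond flow described above can be sketched roughly as follows. This is a hypothetical illustration only: the digests, categories, and severity tiers are made-up placeholders, not Verify Apps’ actual signals or policy.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"    # no known issue
    WARN = "warn"      # suggest the user think twice before installing
    REMOVE = "remove"  # pull the app off the device

# Hypothetical PHA database keyed by app digest.
KNOWN_PHAS = {"sha256:deadbeef": "trojan", "sha256:cafebabe": "unwanted"}
HIGH_SEVERITY = {"trojan", "backdoor", "phishing"}

def check_app(app_digest: str) -> Action:
    """Map a device-side app check to a warn/remove decision."""
    category = KNOWN_PHAS.get(app_digest)
    if category is None:
        return Action.ALLOW
    return Action.REMOVE if category in HIGH_SEVERITY else Action.WARN
```

Under this toy policy, a trojan digest triggers removal while a lower-severity unwanted app only triggers a warning.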
We constantly monitor threats and improve our systems over time. Last year’s data reflected those improvements: Verify Apps conducted 750 million daily checks in 2016, up from 450 million the previous year, enabling us to reduce the PHA installation rate in the top 50 countries for Android usage.
Google Play continues to be the safest place for Android users to download their apps. Installs of PHAs from Google Play decreased in nearly every category:
Trojans: now 0.016 percent of installs, down 51.5 percent compared to 2015
Hostile downloaders: now 0.003 percent of installs, down 54.6 percent compared to 2015
Backdoors: now 0.003 percent of installs, down 30.5 percent compared to 2015
Phishing apps: now 0.0018 percent of installs, down 73.4 percent compared to 2015
By the end of 2016, only 0.05 percent of devices that downloaded apps exclusively from Play contained a PHA, down from 0.15 percent in 2015.
Still, there’s more work to do for devices overall, especially those that install apps from multiple sources. While only 0.71 percent of all Android devices had PHAs installed at the end of 2016, that was a slight increase from about 0.5 percent at the beginning of 2015. Using improved tools and the knowledge we gained in 2016, we think we can reduce the number of devices affected by PHAs in 2017, no matter where people get their apps.
New security protections in Nougat
Last year, we introduced a variety of new protections in Nougat, and continued our ongoing work to strengthen the security of the Linux kernel.
File-based encryption: In Nougat, we introduced file-based encryption, which enables each user profile on a single device to be encrypted with a unique key. If you have personal and work accounts on the same device, for example, the key from one account can’t unlock data from the other. More broadly, encryption of user data has been required for capable Android devices since late 2014, and we now see that feature enabled on over 80 percent of Android Nougat devices.
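The per-profile key idea can be illustrated with a small sketch. This is a hypothetical PBKDF2-based illustration, not Android’s actual file-based encryption, which relies on hardware-backed keystores and a different derivation; the point is only that mixing a distinct profile identifier into the derivation yields keys that cannot unlock each other’s data.

```python
import hashlib

def profile_key(master_secret: bytes, profile_id: str, salt: bytes) -> bytes:
    # Mix the profile identifier into the derivation so each
    # user profile gets its own 256-bit key.
    return hashlib.pbkdf2_hmac(
        "sha256", master_secret + profile_id.encode(), salt, 100_000, dklen=32
    )

personal = profile_key(b"device-secret", "profile:personal", b"per-device-salt")
work = profile_key(b"device-secret", "profile:work", b"per-device-salt")
assert personal != work  # one profile's key can't unlock the other's data
```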
New audio and video protections: We did significant work to improve security and re-architect how Android handles video and audio media. One example: we now store different media components in individual sandboxes, where previously they lived together. Now, if one component is compromised, it doesn’t automatically have access to other components, which helps contain any additional issues.
Even more security for enterprise users: We introduced a variety of new enterprise security features, including “Always On” VPN, which protects your data from the moment your device boots up and ensures it isn’t traveling from a work phone to your personal device via an insecure connection. We also added security policy transparency, process logging, improved Wi-Fi certificate handling, and client certificate improvements to our growing set of enterprise tools.
Working together to secure the Android ecosystem
Sharing information about security threats between Google, device manufacturers, the research community, and others helps keep all Android users safer. In 2016, our biggest collaborations were via our monthly security updates program and ongoing partnership with the security research community.
Security updates are regularly highlighted as a pillar of mobile security, and rightly so. We launched our monthly security updates program in 2015, following the public disclosure of a bug in Stagefright, to help accelerate the patching of security vulnerabilities across devices from many different device makers. This program expanded significantly in 2016:
More than 735 million devices from 200+ manufacturers received a platform security update in 2016.
We released monthly Android security updates throughout the year for devices running Android 4.4.4 and up—that accounts for 86.3 percent of all active Android devices worldwide.
Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016.
We provided monthly security updates for all supported Pixel and Nexus devices throughout 2016, and we’re thrilled to see our partners invest significantly in regular updates as well. There’s still a lot of room for improvement, however. About half of devices in use at the end of 2016 had not received a platform security update in the previous year. We’re working to increase device security updates by streamlining our security update program to make it easier for manufacturers to deploy security patches, and by releasing improvements that make it easier for users to apply those patches.
On the research side, our Android Security Rewards program grew rapidly: we
paid researchers nearly $1 million
for their reports in 2016. In parallel, we worked closely with various security firms to identify and quickly fix issues that may have posed risks to our users.
We appreciate all of the hard work by Android partners, external researchers, and teams at Google that led to the progress the ecosystem has made with security in 2016. But it doesn’t stop there. Keeping users safe requires constant vigilance and effort. We’re looking forward to new insights and progress in 2017 and beyond.
Detecting and eliminating Chamois, a fraud botnet on Android
March 13, 2017
Posted by Security Software Engineers—Bernhard Grill, Megan Ruthven, and Xin Zhao
Google works hard to protect users across a variety of devices and environments. Part of this work involves defending users against Potentially Harmful Applications (PHAs), an effort that gives us the opportunity to observe various types of threats targeting our ecosystem. For example, our security teams recently discovered and defended users of our ads and Android systems against a new PHA family we’ve named Chamois.
Chamois is an Android PHA family capable of:
Generating invalid traffic through ad pop-ups that display deceptive graphics inside the ad
Performing artificial app promotion by automatically installing apps in the background
Sending premium text messages
Downloading and executing additional plugins
Interference with the ads ecosystem
We detected Chamois during a routine ad traffic quality evaluation. When we analyzed malicious apps based on Chamois, we found that they employed several methods to avoid detection and tried to trick users into clicking ads by displaying deceptive graphics. This sometimes resulted in the download of other apps that commit SMS fraud. So we blocked the Chamois app family using Verify Apps and also kicked out bad actors who were trying to game our ad systems.
Our previous experience with ad fraud apps like this one enabled our teams to swiftly take action to protect both our advertisers and Android users. Because the malicious app didn’t appear in the device’s app list, most users wouldn’t have seen or known to uninstall the unwanted app. This is why Google’s Verify Apps is so valuable, as it helps users discover PHAs and delete them.
Under Chamois's hood
Chamois was one of the largest PHA families seen on Android to date and distributed through multiple channels. To the best of our knowledge Google is the first to publicly identify and track Chamois.
Chamois had a number of features that made it unusual, including:
Multi-staged payload: Its code is executed in four distinct stages using different file formats, as outlined in this diagram. This multi-stage process makes it more complicated to immediately identify apps in this family as a PHA, because the layers have to be peeled back first to reach the malicious part. However, Google’s analysis pipelines are designed to tackle these scenarios properly and weren’t tricked.
Evasion techniques: Chamois tried to evade detection using obfuscation and anti-analysis techniques, but our systems were able to counter them and detect the apps accordingly.
Custom encrypted storage: The family uses custom, encrypted file storage for its configuration files and additional code, which required deeper analysis to understand the PHA.
Large codebase: Our security teams sifted through more than 100K lines of sophisticated code written by seemingly professional developers. Due to the sheer size of the APK, it took some time to understand Chamois in detail.
Google's approach to fighting PHAs
Verify Apps protects users from known PHAs by warning them when they are downloading an app that is determined to be a PHA, and it enables users to uninstall the app if it has already been installed. Additionally, Verify Apps monitors the state of the Android ecosystem for anomalies and investigates the ones that it finds. It also helps find unknown PHAs through behavior analysis on devices; for example, many apps downloaded by Chamois were flagged by this analysis. We have implemented rules in Verify Apps to protect users against Chamois.
Google continues to significantly invest in its counter-abuse technologies for Android and its ad systems, and we're proud of the work that many teams do behind the scenes to fight PHAs like Chamois.
We hope this summary provides insight into the growing complexity of Android botnets. To learn more about Google’s anti-PHA efforts and how we work to reduce the risks PHAs pose to users, devices, and ad systems, keep an eye open for the upcoming “Android Security 2016 Year In Review” report.
VRP news from Nullcon
March 2, 2017
Posted by Josh Armour, Security Program Manager
We’re thrilled to be joining the security research community at Nullcon this week in Goa, India. This is a hugely important event for the
Google Vulnerability Rewards Program
and for our work with the security research community, more broadly. To mark the occasion, we wanted to share a few updates about the VRP.
Tougher bugs, bigger rewards
Since the launch of our program in 2010, Google has offered a range of rewards: from $100 USD for low-severity issues up to $20,000 USD for critical vulnerabilities in our web properties (see the full list of rewards). But because high-severity vulnerabilities have become harder to identify over the years, researchers have needed more time to find them. We want to demonstrate our appreciation for the significant time researchers dedicate to our program, so we’re making some changes to our VRP.
Starting today we will be increasing the reward for “Remote Code Execution” on the Google VRP from $20,000 USD to $31,337 USD. We are also increasing the reward for “Unrestricted file system or database access” from $10,000 USD to $13,337 USD. Please check out the program rules for more details and specifics.
Also, we are now donating rewards attributed to reports generated by our internal web security scanner; we have donated over $8,000 this year so far. Cloud Security Scanner allows App Engine customers to utilize a version of the same tool.
Growing the security research community in India
In 2016’s VRP Year in Review, we featured Jasminder Pal Singh, a longtime contributor who uses rewards to fund his startup, Jasminder Web Services Point. He’s emblematic of the vibrant and fast-growing computer security research community in India. We saw that new momentum reflected in last year’s VRP data: India was surpassed by only two other locations in terms of total individual researchers paid. We received reports from roughly 40% more Indian researchers than in 2015 and gave out 30% more rewards, which almost tripled the total amount paid and doubled the average payout (both per researcher and per reward). We are excited to see this growth, as all users of Google’s products benefit.
Globally, we’ve noticed other trends. Russia has consistently occupied a position in the top 10 every year for the last seven years. We have noticed a 3X increase in reports from Asia, which made up 70% of the Android Security Rewards for 2016. We have seen increases in the number of researchers reporting valid bugs from Germany (27%) and France (44%); France broke into our top 5 countries in 2016 for the first time.
In 2016, we delivered technical talks and educational trainings to an audience of enthusiastic security professionals at the Nullcon security conference in Goa. This year, we continue our investment at Nullcon by delivering trainings focused on the growing group of bug hunters we see in India. If you are attending Nullcon, please stop by and say “Hello”!
Expanding protection for Chrome users on macOS
March 1, 2017
Posted by Kylie McRoberts and Ryan Rasti
Safe Browsing is broadening its protection of macOS devices, enabling safer browsing experiences by improving defenses against unwanted software and malware targeting macOS. As a result, macOS users may start seeing more warnings when they navigate to dangerous sites or download dangerous files (example warning below).
As part of this next step towards reducing macOS-specific malware and unwanted software, Safe Browsing is focusing on two common abuses of browsing experiences: unwanted ad injection, and manipulation of Chrome user settings, specifically the start page, home page, and default search engine. Users deserve full control of their browsing experience, and Unwanted Software Policy violations hurt that experience.
The recently released Chrome Settings API for Mac gives developers the tools to make sure users stay in control of their Chrome settings. From here on, the Settings Overrides API will be the only approved path for making changes to Chrome settings on macOS, as it currently is on Windows. Also, developers should know that only extensions hosted in the Chrome Web Store are allowed to make changes to Chrome settings.
Starting March 31, 2017, Chrome and Safe Browsing will warn users about software that attempts to modify Chrome settings without using the API.
For more information about the criteria we use to guide our efforts to protect Safe Browsing’s users, please visit our malware and unwanted software help center.
E2EMail research project has left the nest
February 24, 2017
Posted by KB Sriram, Eduardo Vela Nava, and Stephan Somogyi, Security and Privacy Engineering
Whether they’re concerned about insider risks, compelled data disclosure demands, or other perceived dangers, some people prudently use end-to-end email encryption to limit the set of systems they have to trust. The best-known method, PGP, has long been available in command-line form and as a plug-in for IMAP-based email clients, and it interoperates with Gmail only clumsily, via cut-and-paste. Twenty-five years of all these scenarios have demonstrated that it’s too hard to use. Chromebook users also have never had a good solution; choosing between strong crypto and a strong endpoint device is unsatisfactory.
These are some of the reasons we’ve continued working on the End-To-End research effort. One of the things we’ve done over the past year is move the resulting E2EMail code to GitHub: E2EMail is not a Google product; it’s now a fully community-driven open source project, to which passionate security engineers from across the industry have already contributed.
E2EMail offers one approach to integrating OpenPGP into Gmail via a Chrome Extension, with improved usability, while carefully keeping all cleartext of the message body exclusively on the client. E2EMail is built on a proven, open source cryptographic library developed at Google.
E2EMail in its current incarnation uses a bare-bones central keyserver for testing, but the recent Key Transparency announcement is crucial to its further evolution. Key discovery and distribution lie at the heart of the usability challenges that OpenPGP implementations have faced. Key Transparency delivers a solid, scalable, and thus practical solution, replacing the problematic web-of-trust model traditionally used with PGP.
We look forward to working alongside the community to integrate E2EMail with the Key Transparency server, and beyond. If you’re interested in delving deeper, check out the E2EMail repository on GitHub.
Announcing the first SHA1 collision
February 23, 2017
Posted by Marc Stevens (CWI Amsterdam), Elie Bursztein (Google), Pierre Karpman (CWI Amsterdam), Ange Albertini (Google), Yarik Markov (Google), Alex Petit Bianco (Google), Clement Baisse (Google)
Cryptographic hash functions like SHA-1 are a cryptographer’s Swiss Army knife. You’ll find that hashes play a role in browser security, managing code repositories, and even just detecting duplicate files in storage. Hash functions compress large amounts of data into a small message digest. As a cryptographic requirement for widespread use, finding two messages that lead to the same digest should be computationally infeasible. Over time, however, this requirement can fail due to attacks on the mathematical underpinnings of hash functions or to increases in computational power.
Today, more than 20 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision. This represents the culmination of two years of research that sprung from a collaboration between the
CWI Institute in Amsterdam
and Google. We’ve summarized how we went about generating a collision below. As a proof of the attack, we are
releasing two PDFs
that have identical SHA-1 hashes but different content.
For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out SHA-1. We hope our practical attack against SHA-1 will cement that the protocol should no longer be considered secure, and will finally convince the industry that it is urgent to move to safer alternatives such as SHA-256.
What is a cryptographic hash collision?
A collision occurs when two distinct pieces of data (a document, a binary, or a website’s certificate) hash to the same digest, as shown above. In practice, collisions should never occur for secure hash functions. However, if the hash algorithm has flaws, as SHA-1 does, a well-funded attacker can craft a collision. The attacker could then use this collision to deceive systems that rely on hashes into accepting a malicious file in place of its benign counterpart: for example, two insurance contracts with drastically different terms.
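To make the failure mode concrete, here is a minimal sketch of a system that trusts digest equality. The contract strings are made up and do not actually collide under SHA-1, so the check correctly rejects the malicious input here; a crafted collision would defeat the very same check.

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def accept(candidate: bytes, trusted_digest: str) -> bool:
    # Systems like this implicitly assume collisions are infeasible:
    # any input matching the trusted digest is treated as the trusted file.
    return sha1_hex(candidate) == trusted_digest

benign = b"insurance contract: liability capped at $1,000"
malicious = b"insurance contract: unlimited liability"

trusted = sha1_hex(benign)
assert accept(benign, trusted)
assert not accept(malicious, trusted)
# With a crafted SHA-1 collision, as in the two released PDFs, a malicious
# document would pass this check despite having different content.
```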
Finding the SHA-1 collision
Building on earlier published work that outlined a theoretical approach to creating a SHA-1 collision, we started by creating a PDF prefix specifically crafted to allow us to generate two documents with arbitrary distinct visual contents that would nonetheless hash to the same SHA-1 digest. In turning this theoretical attack into a practical one, we had to overcome some new challenges. We then leveraged Google’s technical expertise and cloud infrastructure to compute the collision, which is one of the largest computations ever completed.
Here are some numbers that give a sense of how large scale this computation was:
Nine quintillion (9,223,372,036,854,775,808) SHA-1 computations in total
6,500 years of CPU computation to complete the first phase of the attack
110 years of GPU computation to complete the second phase
While those numbers seem very large, the SHAttered attack is still more than 100,000 times faster than a brute-force attack, which remains impractical.
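The speed-up figure can be sanity-checked from the totals above: a generic birthday-bound collision search on a 160-bit hash costs roughly 2^80 SHA-1 computations, while the nine quintillion figure is exactly 2^63.

```python
# Rough sanity check of the "more than 100,000 times faster" claim.
# Assumption: generic brute-force collision search ~ 2**80 computations
# (the birthday bound for a 160-bit digest).
brute_force = 2 ** 80
shattered = 9_223_372_036_854_775_808  # the nine quintillion figure above
assert shattered == 2 ** 63
speedup = brute_force // shattered
assert speedup == 131_072  # 2**17, i.e. "more than 100,000 times faster"
```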
Mitigating the risk of SHA-1 collision attacks
Moving forward, it’s more urgent than ever for security practitioners to migrate to safer cryptographic hashes such as SHA-256 and SHA-3. Following Google’s vulnerability disclosure policy, we will wait 90 days before releasing code that allows anyone to create a pair of PDFs that hash to the same SHA-1 sum, given two distinct images and some preconditions. To prevent active use of this attack, we’ve added protections for Gmail and G Suite users that detect our PDF collision technique. Furthermore, we are providing a free detection system to the public.
You can find more details about the SHA-1 attack in the detailed research outlining our techniques.
About the team
This result is the product of a long-term collaboration between the CWI Institute and Google’s research, security, privacy, and anti-abuse groups. It began when Marc Stevens and Elie Bursztein started collaborating on making Marc’s cryptanalytic attacks against SHA-1 practical using Google infrastructure. Ange Albertini developed the PDF attack, Pierre Karpman worked on the cryptanalysis and the GPU implementation, Yarik Markov took care of the distributed GPU code, Alex Petit Bianco implemented the collision detector to protect Google users, and Clement Baisse oversaw the reliability of the computations.
Another option for file sharing
February 21, 2017
Posted by Andrew Gerrand, Eric Grosse, Rob Pike, Eduardo Pinheiro and Dave Presotto, Google Software Engineers
Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new Upspin project, we aim to improve the situation by providing a global name space to name all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.
Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.
File names begin with the user's email address followed by a slash-separated, Unix-like path name, for example: ann@example.com/dir/file.
Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.
If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,
read: joe@example.com, mae@example.com
allows Joe and Mae to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access is enforced end-to-end cryptographically: cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.
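The Access file format described above can be illustrated with a small parser. This is a hypothetical sketch of the format as described here, not Upspin's actual Go implementation, and the addresses are placeholders.

```python
def parse_access(text: str) -> dict[str, list[str]]:
    """Parse right-grant lines like "read: joe@example.com, mae@example.com"."""
    grants: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        right, _, users = line.partition(":")
        grants[right.strip()] = [u.strip() for u in users.split(",") if u.strip()]
    return grants

grants = parse_access("read: joe@example.com, mae@example.com")
assert grants == {"read": ["joe@example.com", "mae@example.com"]}
```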
Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing. This is an open source contribution, not a Google product. We have not yet integrated with the Key Transparency server, though we expect to eventually, and for now we use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.
It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the Upspin repository on GitHub.
Understanding differences between corporate and consumer Gmail threats
February 16, 2017
Posted by Ali Zand and Vijay Eranti, Anti-Abuse Research and Gmail Abuse
We are constantly working to protect our users and quickly adapt to new online threats. This work never stops: every minute, we prevent over 10 million unsafe or unwanted emails from reaching Gmail users, from malicious attachments that infect a user’s machine if opened, to phishing schemes asking for banking or account details, to omnipresent spam. A cornerstone of our defense is understanding the pulse of the email threat landscape. This awareness helps us anticipate and react faster to emerging attacks.
Today at RSA, we are sharing key insights about the diversity of threats to corporate Gmail inboxes. We’ve highlighted some of our key findings below; our full presentation covers them in more depth. We’ve already incorporated these insights to help keep our G Suite users safe, and we hope that by exposing these nuances, security and abuse professionals everywhere can better understand their risk profile and customize their defenses accordingly.
How threats to corporate and consumer inboxes differ
While spam may be the most common attack across all inboxes, did you know that malware and phishing are far more likely to target corporate users? Here’s a breakdown of how attacks stack up for corporate vs. personal inboxes:
Different threats to different types of organizations
Attackers appear to choose targets based on multiple dimensions, such as the size and the type of the organization, its country of operation, and the organization’s sector of activity. Let’s look at an example of corporate users across businesses, nonprofits, government-related industries, and education services. If we consider business inboxes as a baseline, we find attackers are far more likely to target nonprofits with malware, while attackers are more likely to target businesses with phishing and spam.
These nuances go all the way down to the granularity of country and industry type, which shows how security and abuse professionals must tailor defenses to their own personalized threat model; no two corporate users face exactly the same mix of attacks.
Constant improvements to corporate Gmail protections
Research like this enables us to better protect our users, and we’ve already incorporated these findings into our G Suite protections. We are constantly innovating, and have implemented and rolled out several features that help our users stay safe against these ever-evolving threats.
The forefront of our defenses is a state-of-the-art email classifier that detects abusive messages with 99.9% accuracy.
To protect yourself from unsafe websites, make sure to heed our Safe Browsing warnings, which alert you to potential phishing and malware attacks.
Use many layers of defense: we recommend enabling 2-step verification with a Security Key to thwart attackers from accessing your account in the event of a stolen password.
To help keep your email contents safe and secure in transit, check our TLS encryption indicator, which shows when a message travels to or from a provider that doesn’t support TLS encryption.
We will never stop working to keep our users and their inboxes secure. To learn more about how we protect Gmail, check out this YouTube video that summarizes the lessons we learned while protecting Gmail users through the years.