The latest news and insights from Google on security and safety on the Internet
Announcing the first SHA1 collision
February 23, 2017
Posted by Marc Stevens (CWI Amsterdam), Elie Bursztein (Google), Pierre Karpman (CWI Amsterdam), Ange Albertini (Google), Yarik Markov (Google), Alex Petit Bianco (Google), Clement Baisse (Google)
Cryptographic hash functions like SHA-1 are a cryptographer's Swiss army knife. Hashes play a role in browser security, in managing code repositories, and even in detecting duplicate files in storage. Hash functions compress large amounts of data into a small message digest. As a cryptographic requirement for widespread use, finding two messages that lead to the same digest should be computationally infeasible. Over time, however, this requirement can fail due to
attacks on the mathematical underpinnings
of hash functions or to increases in computational power.
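To make "message digest" concrete, here is how digests look in practice, using Python's standard hashlib:

```python
import hashlib

# A hash function compresses an arbitrarily long message into a short,
# fixed-size digest.
msg = b"abc"
print(hashlib.sha1(msg).hexdigest())    # 160-bit digest: 40 hex characters
print(hashlib.sha256(msg).hexdigest())  # 256-bit digest: 64 hex characters

# Changing even one byte of the input produces a completely different digest.
print(hashlib.sha1(b"abd").hexdigest())
```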
Today, more than 20 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision. This represents the culmination of two years of research that sprang from a collaboration between the
CWI Institute in Amsterdam
and Google. We’ve summarized how we went about generating a collision below. As a proof of the attack, we are
releasing two PDFs
that have identical SHA-1 hashes but different content.
For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out SHA-1. We hope that our practical attack against SHA-1 will finally convince the industry that it is urgent to move to safer alternatives such as SHA-256.
What is a cryptographic hash collision?
A collision occurs when two distinct pieces of data (a document, a binary, or a website's certificate) hash to the same digest. In practice, collisions should never occur for secure hash functions. However, if the hash algorithm has flaws, as SHA-1 does, a well-funded attacker can craft a collision. The attacker could then use this collision to deceive systems that rely on hashes into accepting a malicious file in place of its benign counterpart: for example, swapping one of two insurance contracts with drastically different terms for the other.
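In code, the property that fails is easy to state; a small helper (ours, not part of the attack tooling) that defines a SHA-1 collision:

```python
import hashlib

def sha1_collides(a: bytes, b: bytes) -> bool:
    """True when two *distinct* messages share one SHA-1 digest."""
    return a != b and hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

# Ordinary distinct inputs do not collide; the crafted PDF pair does.
print(sha1_collides(b"contract A", b"contract B"))  # False
```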
Finding the SHA-1 collision
In 2013, Marc Stevens published a paper that outlined a theoretical approach to create a SHA-1 collision. We started by creating a PDF prefix specifically crafted to allow us to generate two documents with arbitrary distinct visual contents that would nevertheless hash to the same SHA-1 digest. Turning this theoretical attack into practice required overcoming some new challenges. We then leveraged Google's technical expertise and cloud infrastructure to compute the collision, which is one of the largest computations ever completed.
Here are some numbers that give a sense of how large scale this computation was:
Nine quintillion (9,223,372,036,854,775,808) SHA-1 computations in total
6,500 years of CPU computation to complete the first phase of the attack
110 years of GPU computation to complete the second phase
While those numbers seem very large, the SHA-1 shattered attack is still more than 100,000 times faster than a brute force attack, which remains impractical.
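The 100,000x figure follows from the exponents: the attack's nine quintillion SHA-1 computations are exactly 2^63, while a generic birthday-bound collision search needs on the order of 2^80. A quick check:

```python
import math

attack = 9_223_372_036_854_775_808  # SHA-1 computations in the attack (2**63)
brute_force = 2 ** 80               # generic birthday-bound collision search

print(math.log2(attack))            # 63.0
print(brute_force // attack)        # 131072: "more than 100,000 times faster"
```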
Mitigating the risk of SHA-1 collision attacks
Moving forward, it’s more urgent than ever for security practitioners to migrate to safer cryptographic hashes such as SHA-256 and SHA-3. Following
Google’s vulnerability disclosure policy
, we will wait 90 days before releasing code that allows anyone to create a pair of PDFs that hash to the same SHA-1 sum, given two distinct images and some pre-conditions. To prevent active use of this attack, we've added protections for Gmail and G Suite users that detect our PDF collision technique. Furthermore, we are providing a
free detection system
to the public.
You can find more details about the SHA-1 attack and the research outlining our techniques at shattered.io.
About the team
This result is the product of a long-term collaboration between the CWI Institute and Google's Research, Security, Privacy and Anti-abuse group.
Marc Stevens and Elie Bursztein started collaborating on making Marc's cryptanalytic attacks against SHA-1 practical using Google infrastructure. Ange Albertini developed the PDF attack, Pierre Karpman worked on the cryptanalysis and the GPU implementation, Yarik Markov took care of the distributed GPU code, Alex Petit Bianco implemented the collision detector to protect Google users, and Clement Baisse oversaw the reliability of the computations.
Another option for file sharing
February 21, 2017
Posted by Andrew Gerrand, Eric Grosse, Rob Pike, Eduardo Pinheiro and Dave Presotto, Google Software Engineers
Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new project, Upspin, we aim to improve the situation by providing a global name space for all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.
Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.
File names begin with the user's email address followed by a slash-separated Unix-like path name:
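For illustration, a hypothetical name such as ann@example.com/photos/party/cake.jpg fits this format; evaluating it starts by splitting the owner from the path:

```python
def split_upspin_name(name: str) -> tuple[str, str]:
    """Split an Upspin-style name into (owning user's email, path in that tree)."""
    user, _, path = name.partition("/")  # everything before the first slash is the user
    return user, path

# Hypothetical user and file:
print(split_upspin_name("ann@example.com/photos/party/cake.jpg"))
```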
Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.
If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,
read: firstname.lastname@example.org, email@example.com
allows those two users to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access control is enforced end-to-end cryptographically: cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.
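A minimal parser for Access lines like the one above (our sketch, not Upspin's actual implementation):

```python
def parse_access(text: str) -> dict[str, list[str]]:
    """Map each right (e.g. 'read') to the list of users granted it."""
    grants: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue  # skip blank or malformed lines
        right, _, users = line.partition(":")
        grants[right.strip()] = [u.strip() for u in users.split(",") if u.strip()]
    return grants

access = "read: firstname.lastname@example.org, email@example.com"
print(parse_access(access))
```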
Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing. This is an open source contribution, not a Google product. We have not yet integrated with the Key Transparency server, though we expect to eventually, and for now use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.
It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the Upspin repository on GitHub.
Understanding differences between corporate and consumer Gmail threats
February 16, 2017
Posted by Ali Zand and Vijay Eranti, Anti-Abuse Research and Gmail Abuse
We are constantly working to protect our users and quickly adapt to new online threats. This work never stops: every minute, we prevent over 10 million unsafe or unwanted emails from reaching Gmail users, from malicious attachments that infect a user's machine if opened, to phishing messages asking for banking or account details, to omnipresent spam. A cornerstone of our defense is understanding the pulse of the email threat landscape. This awareness helps us anticipate and react faster to emerging attacks.
Today at RSA, we are sharing key insights about the diversity of threats to corporate Gmail inboxes. We've highlighted some of our key findings below; you can see our full presentation for more detail. We've already incorporated these insights to help keep our G Suite users safe, and we hope that by exposing these nuances, security and abuse professionals everywhere can better understand their risk profile and customize their defenses accordingly.
How threats to corporate and consumer inboxes differ
While spam may be the most common attack across all inboxes, did you know that malware and phishing are far more likely to target corporate users? Here’s a breakdown of how attacks stack up for corporate vs. personal inboxes:
Different threats to different types of organizations
Attackers appear to choose targets based on multiple dimensions, such as the size and the type of the organization, its country of operation, and the organization’s sector of activity. Let’s look at an example of corporate users across businesses, nonprofits, government-related industries, and education services. If we consider business inboxes as a baseline, we find attackers are far more likely to target nonprofits with malware, while attackers are more likely to target businesses with phishing and spam.
These nuances go all the way down to the granularity of country and industry type. This shows how security and abuse professionals must tailor defenses to their own threat model, since no two corporate users face exactly the same mix of attacks.
Constant improvements to corporate Gmail protections
Research like this enables us to better protect our users. We've already implemented these findings into our G Suite protections, and we have rolled out several additional features that help our users stay safe against these ever-evolving threats.
The forefront of our defenses is a state-of-the-art email classifier that detects abusive
messages with 99.9% accuracy.
To protect yourself from unsafe websites, make sure to heed the Safe Browsing warnings that alert you to potential phishing and malware attacks.
Use many layers of defense: we recommend enabling security key enforcement (2-step verification) to thwart attackers from accessing your account in the event of a stolen password.
To ensure your email contents stay safe and secure in transit, use our
TLS encryption indicator
, which helps ensure that your message is encrypted on its way to the intended recipient.
We will never stop working to keep our users and their inboxes secure. To learn more about how we protect Gmail, check out this YouTube video that summarizes the lessons we learned while protecting Gmail users through the years.
802.11s Security and Google Wifi
February 7, 2017
Posted by Paul Devitt, Security Engineer
Making sure your home network and information stay secure is our top priority. So when we launched the Google OnHub home router in 2015, we made sure
security was baked into its core
. In 2016 we took all we learned from OnHub and made it even better by adding mesh support with the introduction of Google Wifi.
Secure to the core - Always
The primary mechanism for making sure your Wifi points stay safe is our verified boot process. The operating system and code that your OnHub and Google Wifi run are guaranteed to have been signed by Google. Both OnHub and Google Wifi use
Coreboot and Depthcharge
from ChromeOS and ensure system integrity by implementing
from Android. To secure Userspace, we use process isolation with
and a strict set of policies.
On the software side, Google Wifi and OnHub are subject to
expansive fuzz testing
of major components and functions. Fixes for issues found by fuzzing are continually fed into Google Wifi and OnHub, and are made available through the regular automatic updates, secured by Google's cloud.
802.11s Security for WiFi
In 2016 with the launch of Google Wifi, we introduced
802.11s mesh technology
to the home router space. The result is a system where multiple Wifi points work together to create blanket coverage. The 802.11s specification recommends that appropriate security steps be taken but doesn't strictly define them. We spent significant time building a security model into our implementation of 802.11s for Google Wifi and OnHub, so that your network is always composed of exactly the devices you expect.
As each mesh node within the network needs to speak securely to its neighboring nodes, it's imperative to establish a secure, user-isolated method of forming those links. Each Wifi point establishes a separate encrypted channel with its neighbors and the primary node. On any major network topology change (such as a node being factory reset, a node being added, or an unexpected node joining the network), the mesh undergoes a complete cycling of the encryption keys: each node establishes and tests a new set of keys with its respective neighbors, verifies that it has network connectivity, and then the network as a whole transitions to the new keys.
These mesh encryption keys are generated locally on your devices and are never transmitted outside of your local network. In the event that a key has been discovered outside of your local network, a rekeying operation will be triggered. The rekeying operations allow for the mesh network to be fully flexible to the user’s desire and maintain a high level of security for devices communicating across it.
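The rekey cycle described above can be sketched as follows; this is a toy model for illustration, not Google Wifi's actual implementation:

```python
import secrets

class MeshNode:
    """Toy model of a mesh node holding one pairwise key per neighbor link."""
    def __init__(self, name: str, neighbors: list[str]):
        self.name = name
        # Keys are generated locally and never leave the device.
        self.link_keys = {n: secrets.token_bytes(32) for n in neighbors}

    def rekey(self) -> None:
        """On a major topology change, cycle every pairwise encryption key."""
        new_keys = {n: secrets.token_bytes(32) for n in self.link_keys}
        # A real node would test connectivity over the new keys before committing.
        self.link_keys = new_keys

node = MeshNode("living-room", ["hallway", "kitchen"])
old = dict(node.link_keys)
node.rekey()  # e.g. triggered when an unexpected node joins
print(all(node.link_keys[n] != old[n] for n in old))  # True: all keys cycled
```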
Committed to security
We have an ongoing commitment to the security of Google Wifi and OnHub. Both devices participate in the
Google Vulnerability Rewards Program (VRP)
and eligible bugs can be rewarded up to $20,000 (USD). We're always looking to raise the bar to help our users be secure online.
Hosted S/MIME by Google provides enhanced security for Gmail in the enterprise
February 2, 2017
Posted by Nicolas Kardas, Gmail Product Management and Nicolas Lidzborski, G Suite Security Engineering Lead
We are constantly working to meet the needs of our enterprise customers, including enhanced security for their communications. Our aim is to offer a secure method to transport sensitive information despite
insecure channels with email today
and without compromising Gmail's extensive protections for spam, phishing and malware.
Why hosted S/MIME?
S/MIME has been around for many years. However, its adoption has been limited because it is difficult to deploy (end users have to manually install certificates in their email applications) and because the underlying email service cannot efficiently protect against spam, malware and phishing: client-side S/MIME makes the email content opaque to the service.
With Google’s new hosted S/MIME solution, once an incoming encrypted email with S/MIME is received, it is stored using
. This means that all normal processing of the email can happen, including extensive protections for spam/phishing/malware, admin services (such as vault retention, auditing and email routing rules), and high value end user features such as mail categorization, advanced search and
. For the vast majority of emails, this is the safest solution, giving the benefit of strong authentication and encryption in transit without losing the safety and features of Google's processing.
Using hosted S/MIME provides an added layer of security compared to using SMTP over TLS to send emails. TLS only guarantees to the sender’s service that the first hop transmission is encrypted and to the recipient that the last hop was encrypted. But in practice, emails often take many hops (through forwarders, mailing lists, relays, appliances, etc). With hosted S/MIME, the message itself is encrypted. This facilitates secure transit all the way down to the recipient’s mailbox.
S/MIME also adds verifiable account-level signature authentication (versus only domain-based signatures with DKIM). This means that email receivers can ensure that incoming email is actually from the sending account, not just a matching domain, and that the message has not been tampered with after it was sent.
How to use hosted S/MIME?
S/MIME requires every email address to have a suitable certificate attached to it. By default, Gmail requires the certificate to be from a publicly trusted root Certificate Authority (CA) which meets
strong cryptographic standards
. System administrators will have the option to lower these requirements for their domains.
To use hosted S/MIME, companies need to upload their own certificates (with private keys) to Gmail, which can be done by end users via Gmail settings or by admins in bulk via the Gmail API.
From there, using hosted S/MIME is a seamless experience for end users. When receiving a digitally signed message, Gmail automatically associates the public key with the contact of the sender. By default, Gmail automatically signs and encrypts outbound messages if there is a public S/MIME key available for the recipient. Although users have the option to manually remove encryption, admins can set up rules that override their action.
Hosted S/MIME is supported on Gmail web/iOS/Android, on Inbox and on clients connected to the Gmail service via IMAP. Users can exchange signed and encrypted emails with recipients using hosted S/MIME or client-side S/MIME.
Which companies should consider using hosted S/MIME?
Hosted S/MIME provides a solution that is easy to manage for administrators and seamless for end users. Companies that want security in transit and digital signature/non-repudiation at the account level should consider using hosted S/MIME. This is a need for many companies working with sensitive/confidential information.
Hosted S/MIME is available for
G Suite Enterprise edition.
Better and more usable protection from phishing
February 1, 2017
Posted by Christiaan Brand and Guemmy Kim, Product Managers, Google Account Security
Despite constant advancements in online safety, phishing — one of the web’s oldest and simplest attacks — remains a tough challenge for the security community. Subtle tricks and good old-fashioned con-games can cause even the most security-conscious users to reveal their passwords or other personal information to fraudsters.
New advancements in phishing protection
This is why we’re excited about the
news for G Suite customers
: the launch of Security Key enforcement. Now, G Suite administrators can better protect their employees by enabling Two-Step Verification (2SV) using
Security Keys as the second factor, making this protection the norm rather than just an option. 2SV with only a Security Key offers the highest level of protection from phishing. Instead of entering a unique code as a second factor at sign-in, Security Keys send us cryptographic proof that users are on a legitimate Google site and that they have their Security Keys with them. Since most hijackers are remote, their efforts are thwarted because they cannot get physical possession of the Security Key.
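The "cryptographic proof" works roughly like a challenge-response bound to the site's origin. The sketch below is a deliberate simplification (an HMAC stands in for the Security Key's real public-key signature, and all names are ours):

```python
import hmac, hashlib, secrets

device_key = secrets.token_bytes(32)  # secret material never leaves the key

def key_response(origin: str, challenge: bytes) -> bytes:
    """Answer a sign-in challenge, mixing in the origin the browser reports."""
    return hmac.new(device_key, origin.encode() + challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(16)
legit = key_response("https://accounts.google.com", challenge)
phished = key_response("https://accounts-google.example", challenge)
print(legit != phished)  # True: a look-alike origin yields a useless response
```

Because the response is bound to the origin, a phishing site cannot replay it against the legitimate site, which is why remote hijackers are thwarted.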
Users can also take advantage of new
Bluetooth low energy (BLE) Security Key support
, which makes using 2SV Security Key protection easier on mobile devices. BLE Security Keys, which work on both Android and iOS, improve upon the usability of other form factors.
A long history of phishing protections
We’ve helped protect users from phishing for many years. We rolled out 2SV back in 2011, and later strengthened it in 2014 with the
addition of Security Keys
. These launches complement our many layers of phishing protections: Safe Browsing warnings, Gmail spam filters, and account sign-in challenges, as well as our work with industry groups like the FIDO Alliance to develop standards and combat phishing across the industry. In the coming months, we'll build on these protections and offer users the opportunity to further protect their personal Google Accounts.
Vulnerability Rewards Program: 2016 Year in Review
January 30, 2017
Posted by Eduardo Vela Nava, VRP Technical Lead, Master of Disaster
We created our Vulnerability Rewards Program in 2010 because researchers should be rewarded for protecting our users. Their discoveries help keep our users, and the internet at large, as safe as possible.
The amounts we award vary, but our message to researchers does not; each one represents a sincere ‘thank you’.
As we have in years past, we're again sharing a yearly wrap-up of the Vulnerability Rewards Program.
What was new?
In short — a lot. Here’s a quick rundown:
We opened up
Chrome's Fuzzer Program
, previously by invitation only, to submissions from the public. The program allows researchers to run fuzzers at large scale, across thousands of cores on Google hardware, and receive reward payments automatically.
On the product side, we saw amazing contributions from Android researchers all over the world, less than a year after Android launched its VRP. We also expanded our overall VRP to include more products, including OnHub and Nest devices.
We increased our presence at security events around the world. The vulnerabilities responsibly disclosed at these events enabled us to quickly provide fixes to the ecosystem and keep customers safe, in some cases closing a vulnerability in Chrome within days of being notified of the issue.
Stories that stood out
As always, there was no shortage of inspiring, funny, and quirky anecdotes from the VRP in 2016.
We met Jasminder Pal Singh at Nullcon in India. Jasminder is a long-time contributor to the VRP, but this research is a side project for him. He spends most of his time growing
Jasminder Web Services Point
, the startup he operates with six other colleagues and friends. The team consists of two web developers, one graphic designer, one Android developer, one iOS developer, one Linux administrator, and a content manager/writer. Jasminder's VRP rewards help fund the startup. The number of reports we receive from researchers in India is growing, and we're expanding the VRP's presence there with additional conference sponsorships, trainings, and more.
Jasminder (back right) and his team
Jon Sawyer worked with his colleague Sean Beaupre from Streamlined Mobile Solutions, and friend Ben Actis to submit three Android vulnerability reports. A resident of
Clallam County, Washington
, Jon donated the $8,000 reward to his local Special Olympics team, the Orcas. Jon told us the reward was particularly meaningful because his son, Benji, plays on the team. He said:
“Special Olympics provides a sense of community, accomplishment, and free health services at meets. They do incredible things for these people, at no cost for the athletes or their parents. Our donation is going to supply them with new properly fitting uniforms, new equipment, cover some facility rental fees (bowling alley, gym, track, swimming pool) and most importantly help cover the biggest cost, transportation.”
VRP researchers sometimes attach videos that demonstrate the bug. While making a great proof-of-concept video is a skill in itself, our researchers raised it to another level this year. Check out this video Frans Rosén sent us. It’s perfectly synchronized to the background music! We hope this trend continues in 2017 ;-)
Researchers’ individual contributions, and our relationship with the community, have never been more important. A hearty thank you to everyone that contributed to the VRP in 2016 — we’re excited to work with you (and others!) in 2017 and beyond.
*Josh Armour (VRP Program Manager), Andrew Whalley, and Quan To contributed mightily to help lead these Google-wide efforts.
The foundation of a more secure web
January 26, 2017
Posted by Ryan Hurst, Security and Privacy Engineering
In support of our work to implement HTTPS across all of our products, we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third party. This has been a key element enabling us to more rapidly handle the SSL/TLS certificate needs of Google products.
As we look forward to the evolution of both the web and our own products, it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services, the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.
The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.
We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority. This change will enable us to begin the process of migrating to our new, independent infrastructure.
Google Trust Services now operates the following Root Certificates:
Due to the time involved in establishing an independently trusted Root Certificate Authority, we have also secured the option to cross-sign our CAs using:
If you are building products that intend to connect to a Google property going forward, you need to include, at minimum, the above Root Certificates. That said, even though we now operate our own roots, we may still choose to operate subordinate CAs under third-party operated roots.
For this reason, if you are developing code intended to connect to a Google property, we still recommend you include a wide set of trustworthy roots. Google maintains a sample PEM file which is periodically updated to include the Google Trust Services owned and operated roots, as well as other roots that may be necessary, now or in the future, to communicate with and use Google products and services.
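In practice, trusting a set of roots on the client side means loading the PEM bundle into your TLS context; a Python sketch, where the file path is an assumption:

```python
import ssl

def context_with_roots(pem_path: str) -> ssl.SSLContext:
    """TLS client context that also trusts the roots in a local PEM bundle."""
    ctx = ssl.create_default_context()        # hostname checks + CERT_REQUIRED
    ctx.load_verify_locations(cafile=pem_path)  # add the bundled roots
    return ctx

# Usage (path hypothetical): ctx = context_with_roots("roots.pem")
```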
App Security Improvements: Looking back at 2016
January 19, 2017
Posted by Rahul Mishra, Android Security Program Manager
[Cross-posted from the Android Developers Blog]
In April 2016, the Android Security team described how the Google Play App Security Improvement (ASI) program has helped developers
fix security issues in 100,000 applications
. Since then, we have detected and notified developers of 11 new security issues and provided developers with resources and guidance to update their apps. Because of this, over 90,000 developers have updated over 275,000 apps!
ASI now notifies developers of 26 potential security issues. To make this process more transparent, we introduced
a new page
where developers can find information about all these security issues in one place. This page includes links to help center articles containing instructions and additional support contacts. Developers can use this page as a resource to learn about new issues and keep track of all past issues.
Developers can also refer to our
security best practices documents
, which are aimed at improving the understanding of general security concepts and providing examples that can help tackle app-specific issues.
How you can help:
For feedback or questions, please reach out to us through the
Google Play Developer Help Center.
Silence speaks louder than words when finding malware
January 17, 2017
Posted by Megan Ruthven, Software Engineer
[Cross-posted from the Android Developers Blog]
In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is
Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.
But sometimes devices stop checking up with Verify apps. This may happen for a non-security-related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with other security systems, to help determine whether an app is a PHA and to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our
security update system.
This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working and prevent it from happening in the future.
Flagging DOI Apps
To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.
Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score; for an app's retention count it is:

Z = (x - p*N) / sqrt(N * p * (1 - p))

where:
N = Number of devices that downloaded the app.
x = Number of retained devices that downloaded the app.
p = Probability that a device downloading any app will be retained.
In this context, we call the Z-score of an app's retention rate its DOI score. The DOI score indicates an app has a statistically significantly lower retention rate if the Z-score is much less than -3.7. This means that if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being this high. Here the null hypothesis is that the app is merely accidentally correlated with a lower retention rate, independent of what the app does.
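Using the definitions of N, x, and p above, the DOI score is a standard binomial Z-score; a small sketch (our illustration, with made-up numbers):

```python
import math

def doi_score(N: int, x: int, p: float) -> float:
    """Z-score of an app's retention: (observed - expected) / standard deviation."""
    return (x - p * N) / math.sqrt(N * p * (1 - p))

# An app whose devices check in far less often than the baseline predicts
# receives a strongly negative score and gets flagged (threshold around -3.7).
print(doi_score(N=10_000, x=8_000, p=0.95))  # far below -3.7
```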
This allows for percolation of extreme apps (with low retention rate and high number of downloads) to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs of the app.
Difference between a regular and DOI app download on the same device.
Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families: Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three families of malware because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work are among the multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out the Android Security overview.