The latest news and insights from Google on security and safety on the Internet
Next Steps Toward More Connection Security
April 27, 2017
Posted by Emily Schechter, Chrome Security Team
In January, we
began our quest
to improve how Chrome communicates the connection security of HTTP pages. Chrome now marks HTTP pages as “Not secure” if they have password or credit card fields. Beginning in October 2017, Chrome will show the “Not secure” warning in two additional situations: when users enter data on an HTTP page, and on all HTTP pages visited in Incognito mode.
Treatment of HTTP pages in Chrome 62
Our plan to label HTTP sites as non-secure is taking place in gradual steps, based on increasingly broad criteria. Since the
change in Chrome 56
, there has been a 23% reduction in the fraction of navigations to HTTP pages with password or credit card forms on desktop, and we’re ready to take the next steps.
Passwords and credit cards are not the only types of data that should be private. Any type of data that users type into websites should not be accessible to others on the network, so starting in version 62 Chrome will show the “Not secure” warning when users type data into HTTP sites.
Treatment of HTTP pages with user-entered data in Chrome 62
When users browse Chrome with Incognito mode, they likely have increased expectations of privacy. However, HTTP browsing is not private to others on the network, so in version 62 Chrome will also warn users when visiting an HTTP page in Incognito mode.
Eventually, we plan to show the “Not secure” warning for all HTTP pages, even outside Incognito mode. We will publish updates as we approach future releases, but don’t wait to get started moving to HTTPS! HTTPS is
easier and cheaper than ever before
, and it enables both the best performance the web offers and powerful new features that are too sensitive for HTTP. Check out our set-up guides to get started.
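For site owners getting started, the most common first step is a server-level redirect from HTTP to HTTPS. A minimal sketch for nginx — the hostname and certificate paths are placeholders, not recommendations from this post:

```nginx
server {
    listen 80;
    server_name example.com;
    # Permanently redirect all plain-HTTP requests to their HTTPS equivalent.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```

With this in place, users who type the bare HTTP address are silently upgraded and never see the “Not secure” label.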
New Research: Keeping fake listings off Google Maps
April 6, 2017
Posted by Doug Grundman, Maps Anti-Abuse, and Kurt Thomas, Security & Anti-Abuse Research
Google My Business
enables millions of business owners to create listings and share information about their business on Google Maps and Search, making sure everything is up-to-date and accurate for their customers. Unfortunately, some actors attempt to abuse this service to register fake listings in order to defraud legitimate business owners, or to
charge exorbitant service fees for services.
Over a year ago, we teamed up with the University of California, San Diego to research the actors behind fake listings, in order to improve our products and keep our users safe. The full report,
“Pinning Down Abuse on Google Maps”
, will be presented tomorrow at the 2017
International World Wide Web Conference.
Our study shows that fewer than 0.5% of local searches lead to fake listings. We’ve also improved how we verify new businesses, which has reduced the number of fake listings by 70% from its all-time peak back in June 2015.
What is a fake listing?
For over a year, we tracked the bad actors behind fake listings. Unlike email-based scams
selling knock-off products online
, local listing scams require physical proximity to potential victims. This fundamentally changes both the scale and types of abuse possible.
Bad actors posing as locksmiths, plumbers, electricians, and other contractors were the most common source of abuse—roughly 2 out of 5 fake listings. The actors operating these fake listings would cycle through non-existent postal addresses and disposable VoIP phone numbers even as their listings were discovered and disabled. The purported addresses for these businesses were irrelevant as the contractors would travel directly to potential victims.
Another 1 in 10 fake listings belonged to real businesses that bad actors had improperly claimed ownership over, such as hotels and restaurants. While making a reservation or ordering a meal was indistinguishable from the real thing, behind the scenes, the bad actors would deceive the actual business into paying referral fees for organic interest.
How does Google My Business verify information?
Google My Business currently verifies the information provided by business owners before making it available to users. For freshly created listings, we physically mail a postcard to the new listings’ address to ensure the location really exists. For businesses changing owners, we make an automated call to the listing’s phone number to verify the change.
Unfortunately, our research showed that these processes can be abused to get fake listings on Google Maps. Fake contractors would request hundreds of postcard verifications to non-existent suites at a single address, such as 123 Main St #456 and 123 Main St #789, or to stores that provided PO boxes. Alternatively, a phishing attack could maliciously repurpose freshly verified business listings by tricking the legitimate owner into sharing verification information sent either by phone or postcard.
Keeping deceptive businesses out — by the numbers
Leveraging our study’s findings, we’ve made significant changes to how we verify addresses and are even
piloting an advanced verification process
for locksmiths and plumbers. Improvements we’ve made include prohibiting bulk registrations at most addresses, preventing businesses from relocating impossibly far from their original address without additional verification, and detecting and ignoring intentionally mangled text in address fields designed to confuse our algorithms. We have also adapted our anti-spam machine learning systems to detect data discrepancies common to fake or deceptive listings.
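A minimal sketch of how one such defense might work. The normalization rules and threshold below are illustrative assumptions, not Google’s actual implementation; the idea is simply that mangled suite numbers like “123 Main St #456” and “123 Main St #789” collapse to the same base address, exposing bulk registrations:

```python
import re
from collections import Counter

def normalize(address: str) -> str:
    """Collapse an address to its base street address: lowercase,
    strip suite/unit markers like '#456' or 'Suite 9', squeeze spaces."""
    addr = address.lower()
    addr = re.sub(r"(#|suite|ste\.?|unit|apt\.?)\s*\w+", "", addr)
    return re.sub(r"\s+", " ", addr).strip()

def flag_bulk_registrations(addresses, threshold=5):
    """Return base addresses with a suspicious number of registrations."""
    counts = Counter(normalize(a) for a in addresses)
    return {base for base, n in counts.items() if n >= threshold}
```

For example, six listings registered to different suites of the same building would all normalize to one base address and trip the threshold, while ordinary one-off listings would not.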
Combined, here’s how these defenses stack up:
We detect and disable 85% of fake listings before they even appear on Google Maps.
We’ve reduced the number of abusive listings by 70% from its peak back in June 2015.
We’ve also reduced the number of impressions to abusive listings by 70%.
As we’ve shown, verifying local information comes with a number of unique anti-abuse challenges. While fake listings may slip through our defenses from time to time, we are constantly improving our systems to better serve both users and business owners.
An Investigation of Chrysaor Malware on Android
April 3, 2017
Posted by Rich Cannings, Jason Woloz, Neel Mehta, Ken Bodzak, Wentao Chang, Megan Ruthven
Google is constantly working to improve our systems that protect users from
Potentially Harmful Applications
(PHAs). Usually, PHA authors attempt to install their harmful apps on as many devices as possible. However, a few PHA authors spend substantial effort, time, and money to create and install their harmful app on one or a very small number of devices. This is known as a targeted attack.
In this blog post, we describe Chrysaor, a newly discovered family of spyware that was used in a targeted attack on a small number of Android devices, and how investigations like this help Google protect Android users from a variety of threats.
What is Chrysaor?
Chrysaor is spyware believed to be created by
NSO Group Technologies
, a company specializing in the creation and sale of software and infrastructure for targeted attacks. Chrysaor is believed to be related to the Pegasus spyware that was
first identified on iOS
and analyzed by Lookout.
Late last year, after receiving a list of suspicious package names from Lookout, we discovered that a few dozen Android devices may have installed an application related to Pegasus, which we named Chrysaor. Although the applications were never available in Google Play, we immediately identified the scope of the problem by using Verify Apps. We gathered information from affected devices, and concurrently, attempted to acquire Chrysaor apps to better understand their impact on users. We’ve contacted the potentially affected users, disabled the applications on affected devices, and implemented changes in Verify Apps to protect all users.
What is the scope of Chrysaor?
Chrysaor was never available in Google Play and had a very low volume of installs outside of Google Play. Among the over 1.4 billion devices protected by Verify Apps, we observed fewer than 3 dozen installs of Chrysaor on victim devices. These devices were located in the following countries:
How we protect you
To protect Android devices and users, Google Play provides a complete set of security services that update outside of platform releases. Users don’t have to install any additional security services to keep their devices safe. In 2016, these services protected over 1.4 billion devices, making Google one of the largest providers of on-device security services in the world:
Identify PHAs using people, systems in the cloud, and data sent to us from devices
Warn users about, or block them from installing, PHAs
Continually scan devices for PHAs and other harmful threats
Additionally, we are providing detailed technical information to help the security industry in our collective work against PHAs.
What do I need to do?
It is extremely unlikely that you or someone you know was affected by Chrysaor malware. Through our investigation, we identified fewer than 3 dozen devices affected by Chrysaor; we have disabled Chrysaor on those devices and notified the users of all known affected devices. Additionally, the improvements we made to our protections have been enabled for all users of our security services.
To ensure you are fully protected against PHAs and other threats, we recommend these 5 basic steps:
Install apps only from reputable sources:
Install apps from a reputable source, such as Google Play. No Chrysaor apps were on Google Play.
Enable a secure lock screen
Pick a PIN, pattern, or password that is easy for you to remember and hard for others to guess.
Update your device
Keep your device up-to-date with the latest security patches.
Ensure Verify Apps is enabled.
Locate your device:
Practice finding your device with
Android Device Manager
because you are far more likely to lose your device than install a PHA.
How does Chrysaor work?
To install Chrysaor, we believe an attacker coaxed specifically targeted individuals to download the malicious software onto their device. Once Chrysaor is installed, a remote operator is able to surveil the victim’s activities on the device and within its vicinity, leveraging the microphone and camera, collecting data, and logging and tracking activity in communication apps such as phone and SMS.
One representative sample Chrysaor app that we analyzed was tailored to devices running Jellybean (4.3) or earlier. The following is a review of the scope and impact of the Chrysaor app named com.network.android, tailored for a Samsung target device, with SHA256 digest:
Upon installation, the app uses known framaroot exploits to escalate privileges and break Android’s application sandbox. If the targeted device is not vulnerable to these exploits, then the app attempts to use a superuser binary pre-positioned at /system/csk to elevate privileges.
After escalating privileges, the app immediately protects itself and starts to collect data, by:
Installing itself on the /system partition to persist across factory resets
Removing Samsung’s system update app (com.sec.android.fotaclient) and disabling auto-updates to maintain persistence (sets Settings.System.SOFTWARE_UPDATE_AUTO_UPDATE to 0)
Deleting WAP push messages and changing WAP message settings, possibly for anti-forensic purposes
Starting content observers and the main task loop to receive remote commands and exfiltrate data.
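The persistence steps above translate naturally into a checklist of indicators of compromise. The sketch below is purely illustrative — the device-state dictionary is a hypothetical representation for this example, not Verify Apps’ actual data model:

```python
# Hypothetical snapshot of device state gathered during an investigation.
CHRYSAOR_INDICATORS = [
    ("persists on /system",
     lambda s: "com.network.android" in s.get("system_partition_apps", [])),
    ("Samsung FOTA client removed",
     lambda s: "com.sec.android.fotaclient" not in s.get("installed_packages", [])),
    ("auto-update disabled",
     lambda s: s.get("settings", {}).get("SOFTWARE_UPDATE_AUTO_UPDATE") == 0),
]

def matched_indicators(state: dict):
    """Return the names of the indicators of compromise present in a device state."""
    return [name for name, check in CHRYSAOR_INDICATORS if check(state)]
```

A clean device matches none of the indicators; a device showing all three at once would warrant closer analysis.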
The app uses six techniques to collect user data:
Repeated commands: use alarms to periodically repeat actions on the device to expose data, including gathering location data.
Data collectors: dump all existing content on the device into a queue. Data collectors are used in conjunction with repeated commands to collect user data including SMS settings, SMS messages, call logs, browser history, calendar entries, contacts, emails, and messages from selected messaging apps, including WhatsApp, Twitter, Facebook, Kakao, Viber, and Skype, by making the /data/data directories of those apps world readable.
Content observers: use Android’s ContentObserver framework to gather changes in SMS, Calendar, Contacts, Cell info, Email, WhatsApp, Facebook, Twitter, Kakao, Viber, and Skype.
Screenshots: captures an image of the current screen via the raw frame buffer.
Keylogging: records input events by hooking
IPCThreadState::Transact from /system/lib/libbinder.so, and intercepting android::parcel with the interface com.android.internal.view.IInputContext.
RoomTap: silently answers a telephone call and stays connected in the background, allowing the caller to hear conversations within the range of the phone’s microphone. If the user unlocks their device, they will see a black screen while the app drops the call, resets call settings, and prepares for the user to interact with the device normally.
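One of the collection tricks above — making app /data/data directories world readable — is simple to check for, since those directories should never grant read access to other UIDs. A hedged sketch; on a real device this would need to run with sufficient privileges, and the path is a parameter rather than hard-coded:

```python
import os
import stat

def world_readable_dirs(data_root: str):
    """Return subdirectories of data_root whose permission bits grant
    read access to 'other' users -- normally never true under /data/data,
    where each app's directory is private to its own UID."""
    exposed = []
    for entry in sorted(os.listdir(data_root)):
        path = os.path.join(data_root, entry)
        if os.path.isdir(path) and os.stat(path).st_mode & stat.S_IROTH:
            exposed.append(entry)
    return exposed
```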
Finally, the app can remove itself through three ways:
Via a command from the server
Automatic removal if the device has not been able to check in to the server for 60 days
Via an antidote file. If /sdcard/MemosForNotes was present on the device, the Chrysaor app removes itself from the device.
Samples uploaded to VirusTotal
To encourage further research in the security community, we’ve uploaded these sample Chrysaor apps to VirusTotal.
Additional digests with links to Chrysaor
As a result of our investigation, we have identified these additional Chrysaor-related apps.
Lookout has completed its own independent analysis of the samples we acquired and has published a report.
Updates to the Google Safe Browsing’s Site Status Tool
March 29, 2017
Posted by Deeksha Padma Prasad and Allison Miller, Safe Browsing
Google Safe Browsing
gives users tools to help protect themselves from web-based threats like malware, unwanted software, and social engineering. We are best known for our warnings, which users see when they attempt to navigate to dangerous sites or download dangerous files. We also provide other tools, like the
Site Status Tool
, where people can check the current safety status of a web page (without having to visit it).
We host this tool within Google’s
Safe Browsing Transparency Report
. As with other sections in Google’s Transparency Report, we make this data available to give the public more visibility into the security and health of the online ecosystem. Users of the
Site Status Tool
input a webpage (as a URL, website, or domain) into the tool, and the most recent results of the Safe Browsing analysis for that webpage are returned, along with references to troubleshooting help and educational materials.
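Programmatic clients can run the same kind of check through the Safe Browsing Lookup API (v4). The sketch below only builds the documented threatMatches:find request body; the client fields are placeholders, and actually sending it requires an API key:

```python
def build_lookup_request(url: str) -> dict:
    """Build a Safe Browsing v4 threatMatches:find request body for one URL."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }

# The body would be POSTed as JSON to:
# https://safebrowsing.googleapis.com/v4/threatMatches:find?key=API_KEY
```

An empty response object means Safe Browsing has no current record of the URL; a populated "matches" array describes the threat type found.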
We’ve just launched a new version of the
Site Status Tool
that provides simpler, clearer results and is better designed for the primary users of the page: people who are visiting the tool from a Safe Browsing warning they’ve received, or doing casual research on Google’s malware and phishing detection. The tool now features a cleaner UI, easier-to-interpret language, and more precise results. We’ve also moved some of the more technical data on associated ASes (autonomous systems) over to the
malware dashboard section of the report.
While the interface has been streamlined, the additional diagnostic information is not gone: researchers who wish to find more details can drill down elsewhere in
Safe Browsing’s Transparency Report
, while site owners can find additional diagnostic information in Search Console. One of the goals of the Transparency Report is to shed light on complex policy and security issues, so we hope the design adjustments will indeed provide our users with additional clarity.
Reassuring our users about government-backed attack warnings
March 24, 2017
Posted by Shane Huntley, Google Threat Analysis Group
Since 2012, we’ve warned our users if we believe their Google accounts are being targeted by government-backed attackers.
We send these out of an abundance of caution — the notice does not necessarily mean that the account has been compromised or that there is a widespread attack. Rather, the notice reflects our assessment that a government-backed attacker has likely attempted to access the user’s account or computer through phishing or malware, for example. You can read more about these warnings
In order to secure some of the details of our detection, we often send a batch of warnings to groups of at-risk users at the same time, and not necessarily in real-time. Additionally, we never indicate which government-backed attackers we think are responsible for the attempts; different users may be targeted by different attackers.
Security has always been a top priority for us. Robust, automated protections help prevent scammers from signing into your Google account,
Gmail always uses an encrypted connection
when you receive or send email, we filter more than
99.9% of spam
— a common source of phishing messages — from Gmail, and we show users when messages are from an
unverified or unencrypted source.
An extremely small fraction of users will ever see one of these warnings, but if you receive this warning from us, it's important to
take action on it
. You can always take a two-minute Security Checkup, and for
maximum protection from phishing
, enable two-step verification with a Security Key.
Diverse protections for a diverse ecosystem: Android Security 2016 Year in Review
March 22, 2017
Posted by Adrian Ludwig & Mel Miller, Android Security Team
Today, we’re sharing the third annual Android Security Year In Review, a comprehensive look at our work to protect more than 1.4 billion Android users and their data.
Our goal is simple: keep users safe. In 2016, we improved our abilities to stop dangerous apps, built new security features into Android 7.0 Nougat, and collaborated with device manufacturers, researchers, and other members of the Android ecosystem. For more details, you can read the
full Year in Review report
or watch our
Protecting users from PHAs
It’s critical to keep people safe from
Potentially Harmful Apps (PHAs)
that may put their data or devices at risk. Our ongoing work in this area requires us to find ways to track and stop existing PHAs, and anticipate new ones that haven’t even emerged yet.
Over the years, we’ve built a variety of systems to address these threats, such as application analyzers that constantly review apps for unsafe behavior, and Verify Apps which regularly checks users’ devices for PHAs. When these systems detect PHAs, we warn users, suggest they think twice about downloading a particular app, or even remove the app from their devices entirely.
We constantly monitor threats and improve our systems over time. Last year’s data reflected those improvements: Verify Apps conducted 750 million daily checks in 2016, up from 450 million the previous year, enabling us to reduce the PHA installation rate in the top 50 countries for Android usage.
Google Play continues to be the safest place for Android users to download their apps. Installs of PHAs from Google Play decreased in nearly every category:
Now 0.016 percent of installs, trojans dropped by 51.5 percent compared to 2015
Now 0.003 percent of installs, hostile downloaders dropped by 54.6 percent compared to 2015
Now 0.003 percent of installs, backdoors dropped by 30.5 percent compared to 2015
Now 0.0018 percent of installs, phishing apps dropped by 73.4 percent compared to 2015
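As a sanity check on the category figures above, each 2016 install rate and its stated year-over-year drop imply the approximate 2015 rate (2015 ≈ 2016 ÷ (1 − drop)). A quick illustrative computation:

```python
def implied_2015_rate(rate_2016: float, drop: float) -> float:
    """Recover the approximate 2015 install rate (in percent) from the
    2016 rate and the stated fractional year-over-year drop."""
    return rate_2016 / (1 - drop)

# Trojans: 0.016% of installs in 2016, down 51.5% -> roughly 0.033% in 2015.
# Phishing apps: 0.0018% in 2016, down 73.4% -> roughly 0.0068% in 2015.
```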
By the end of 2016, only 0.05 percent of devices that downloaded apps exclusively from Play contained a PHA; down from 0.15 percent in 2015.
Still, there’s more work to do for devices overall, especially those that install apps from multiple sources. While only 0.71 percent of all Android devices had PHAs installed at the end of 2016, that was a slight increase from about 0.5 percent in the beginning of 2015. Using improved tools and the knowledge we gained in 2016, we think we can reduce the number of devices affected by PHAs in 2017, no matter where people get their apps.
New security protections in Nougat
Last year, we introduced a
variety of new protections in Nougat
, and continued our ongoing work to
strengthen the security of the Linux Kernel.
File-based encryption: In Nougat, we introduced file-based encryption, which enables each user profile on a single device to be encrypted with a unique key. If you have personal and work accounts on the same device, for example, the key from one account can’t unlock data from the other. More broadly, encryption of user data has been required for capable Android devices since late 2014, and we now see that feature enabled on over 80 percent of Android Nougat devices.
New audio and video protections
: We did significant work to
improve security and re-architect
how Android handles video and audio media. One example: we now store different media components into individual sandboxes, where previously they lived together. Now, if one component is compromised, it doesn’t automatically have permissions to other components, which helps contain any additional issues.
Even more security for enterprise users
: We introduced a
variety of new enterprise security features
including “Always On” VPN, which protects your data from the moment your device boots up and ensures it isn't traveling from a work phone to your personal device via an insecure connection. We also added security policy transparency, process logging, improved wifi certification handling, and client certification improvements to our
growing set of enterprise tools
Working together to secure the Android ecosystem
Sharing information about security threats between Google, device manufacturers, the research community, and others helps keep all Android users safer. In 2016, our biggest collaborations were via our monthly security updates program and ongoing partnership with the security research community.
Security updates are regularly highlighted as a pillar of mobile security—and rightly so. We
launched our monthly security updates program
in 2015, following the public disclosure of a bug in Stagefright, to help accelerate patching security vulnerabilities across devices from many different device makers. This program expanded significantly in 2016:
More than 735 million devices from 200+ manufacturers received a platform security update in 2016.
We released monthly Android security updates throughout the year for devices running Android 4.4.4 and up—that accounts for 86.3 percent of all active Android devices worldwide.
Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016.
We provided monthly security updates for all supported Pixel and Nexus devices throughout 2016, and we’re thrilled to see our partners invest significantly in regular updates as well. There’s still a lot of room for improvement, however. About half of devices in use at the end of 2016 had not received a platform security update in the previous year. We’re working to increase device security updates by streamlining our security update program to make it easier for manufacturers to deploy security patches, and by releasing A/B (seamless) updates to make it easier for users to apply those patches.
On the research side, our Android Security Rewards program grew rapidly: we
paid researchers nearly $1 million
for their reports in 2016. In parallel, we worked closely with various security firms to identify and quickly fix issues that may have posed risks to our users.
We appreciate all of the hard work by Android partners, external researchers, and teams at Google that led to the progress the ecosystem has made with security in 2016. But it doesn’t stop there. Keeping users safe requires constant vigilance and effort. We’re looking forward to new insights and progress in 2017 and beyond.
Detecting and eliminating Chamois, a fraud botnet on Android
March 13, 2017
Posted by Security Software Engineers—Bernhard Grill, Megan Ruthven, and Xin Zhao
Google works hard to protect users across a variety of devices and environments. Part of this work involves defending users against
Potentially Harmful Applications
(PHAs), an effort that gives us the opportunity to observe various types of threats targeting our ecosystem. For example, our security teams recently discovered and defended users of our ads and Android systems against a new PHA family we've named Chamois.
Chamois is an Android PHA family capable of:
Generating invalid traffic through ad pop-ups that display deceptive graphics
Performing artificial app promotion by automatically installing apps in the background
Performing telephony fraud by sending premium text messages
Downloading and executing additional plugins
Interference with the ads ecosystem
We detected Chamois during a routine ad traffic quality evaluation. We analyzed malicious apps based on Chamois, and found that they employed several methods to avoid detection and tried to trick users into clicking ads by displaying deceptive graphics. This sometimes resulted in the downloading of other apps that commit SMS fraud. So we blocked the Chamois app family using Verify Apps and also kicked out bad actors who were trying to game our ad systems.
Our previous experience with ad fraud apps like this one enabled our teams to swiftly take action to protect both our advertisers and Android users. Because the malicious app didn't appear in the device's app list, most users wouldn't have seen or known to uninstall the unwanted app. This is why Google's Verify Apps is so valuable, as it helps users discover PHAs and delete them.
Under Chamois's hood
Chamois was one of the largest PHA families seen on Android to date and was distributed through multiple channels. To the best of our knowledge, Google is the first to publicly identify and track Chamois.
Chamois had a number of features that made it unusual, including:
Multi-staged payload: Its code is executed in 4 distinct stages using different file formats, as outlined in this diagram.
This multi-stage process makes it more complicated to immediately identify apps in this family as a PHA, because the layers have to be peeled back first to reach the malicious part. However, Google's pipelines weren't tricked, as they are designed to tackle these scenarios properly.
Anti-analysis techniques: Chamois tried to evade detection using obfuscation and anti-analysis techniques, but our systems were able to counter them and detect the apps accordingly.
Custom encrypted storage
: The family uses a custom, encrypted file storage for its configuration files and additional code that required deeper analysis to understand the PHA.
Large codebase: Our security teams sifted through more than 100K lines of sophisticated code written by seemingly professional developers. Due to the sheer size of the APK, it took some time to understand Chamois in detail.
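The “custom encrypted storage” pattern is common in sophisticated PHAs. As an illustration only — Chamois’s actual scheme is not public and was certainly more involved — a toy XOR-based config store shows why such files defeat naive string scanning:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical configuration an analyst might eventually recover.
config = b'{"c2": "example.invalid", "interval": 3600}'
key = b"\x5a\xc3\x19"
blob = xor_bytes(config, key)          # what sits on disk
assert b"example.invalid" not in blob  # plaintext strings are hidden
assert xor_bytes(blob, key) == config  # round-trips once the key is known
```

Until the key and format are reverse engineered, the file looks like opaque binary data, which is part of what made deeper analysis necessary.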
Google's approach to fighting PHAs
Verify Apps protects users from known PHAs by warning them when they are downloading an app that is determined to be a PHA, and it also enables users to uninstall the app if it has already been installed. Additionally, Verify Apps monitors the state of the Android ecosystem for anomalies and investigates the ones that it finds. It also helps find unknown PHAs through behavior analysis on devices. For example, many apps downloaded by Chamois were highly ranked by the DOI (Dead or Insecure) scorer. We have implemented rules in Verify Apps to protect users against Chamois.
Google continues to significantly invest in its counter-abuse technologies for Android and its ad systems, and we're proud of the work that many teams do behind the scenes to fight PHAs like Chamois.
We hope this summary provides insight into the growing complexity of Android botnets. To learn more about Google's efforts against PHAs and the risks they pose to users, devices, and ad systems, keep an eye out for the upcoming "Android Security 2016 Year In Review" report.
VRP news from Nullcon
March 2, 2017
Posted by Josh Armour, Security Program Manager
We’re thrilled to be joining the security research community at Nullcon this week in Goa, India. This is a hugely important event for the
Google Vulnerability Rewards Program
and for our work with the security research community, more broadly. To mark the occasion, we wanted to share a few updates about the VRP.
Tougher bugs, bigger rewards
Since the launch of our program in 2010, Google has offered a range of rewards: from $100 USD for low severity issues, up to $20,000 USD for critical vulnerabilities in our web properties (see the full list of rewards). But because high severity vulnerabilities have become harder to identify over the years, researchers have needed more time to find them. We want to demonstrate our appreciation for the significant time researchers dedicate to our program, and so we’re making some changes to our VRP.
Starting today we will be increasing the reward for “Remote Code Execution” on the Google VRP from $20,000 USD to $31,337 USD. We are increasing the reward for “Unrestricted file system or database access” from $10,000 USD to $13,337 USD as well. Please check out the rules page for more details and specifics.
Also, we are now donating rewards attributed to reports generated by our internal web security scanner; we have donated over $8,000 to charity this year so far.
Cloud Security Scanner
allows App Engine customers to utilize a version of the same tool.
Growing the security research community in India
In 2016’s VRP Year in Review, we featured Jasminder Pal Singh, a longtime contributor who uses rewards to fund his startup, Jasminder Web Services Point. He’s emblematic of the vibrant and fast-growing computer security research community in India. We saw that new momentum reflected in last year’s VRP data: India was surpassed by only two other locations in terms of total individual researchers paid. We received reports from ~40% more Indian researchers than in 2015 and gave out 30% more rewards, which almost tripled the total payout and doubled the average payout (both per researcher and per reward). We are excited to see this growth, as all users of Google’s products benefit.
Globally, we’ve noticed other trends. Russia has consistently occupied a position in the top 10 every year for the last 7 years. We have seen a 3X increase in reports from Asia, which made up 70% of the Android Security Rewards for 2016. We have seen increases in the number of researchers reporting valid bugs from Germany (27%) and France (44%); France broke into our top 5 countries in 2016 for the first time.
In 2016, we delivered technical talks along with educational trainings to an audience of enthusiastic security professionals in Goa at the Nullcon security conference. This year, we continue our investment at Nullcon by delivering trainings focused on the growing group of bug hunters we see in India. If you are attending Nullcon, please stop by and say “Hello”!
Expanding protection for Chrome users on macOS
March 1, 2017
Posted by Kylie McRoberts and Ryan Rasti
Safe Browsing is broadening its protection of macOS devices, enabling safer browsing experiences by improving defenses against unwanted software and malware targeting macOS. As a result, macOS users may start seeing more warnings when they navigate to dangerous sites or download dangerous files (example warning below).
As part of this next step towards reducing macOS-specific malware and unwanted software, Safe Browsing is focusing on two common abuses of browsing experiences: unwanted ad injection, and manipulation of Chrome user settings, specifically the start page, home page, and default search engine. Users deserve full control of their browsing experience, and
Unwanted Software Policy
violations hurt that experience.
The recently released
Chrome Settings API for Mac
gives developers the tools to make sure users stay in control of their Chrome settings. From here on, the Settings Overrides API will be the only approved path for making changes to Chrome settings on macOS, as it currently is on Windows. Also, developers should know that only extensions hosted in the Chrome Web Store are allowed to make changes to Chrome settings.
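For extension developers, approved settings changes are declared in the extension manifest under the documented chrome_settings_overrides key rather than made programmatically. A minimal sketch — the names and URLs are placeholders, and the full schema is in the Chrome extension docs:

```json
{
  "name": "Example Search Extension",
  "version": "1.0",
  "manifest_version": 2,
  "chrome_settings_overrides": {
    "homepage": "https://www.example.com",
    "search_provider": {
      "name": "Example Search",
      "keyword": "example",
      "search_url": "https://www.example.com/search?q={searchTerms}",
      "favicon_url": "https://www.example.com/favicon.ico",
      "encoding": "UTF-8",
      "is_default": false
    }
  }
}
```

Declaring overrides this way keeps the change visible to Chrome and to the user, which is exactly what the policy requires.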
Starting March 31, 2017, Chrome and Safe Browsing will warn users about software that attempts to modify Chrome settings without using the API.
For more information about the criteria we use to guide our efforts to protect Safe Browsing’s users, please visit our
malware and unwanted software help center
E2EMail research project has left the nest
February 24, 2017
Posted by KB Sriram, Eduardo Vela Nava, and Stephan Somogyi, Security and Privacy Engineering
Whether they’re concerned about insider risks, compelled data disclosure demands, or other perceived dangers, some people prudently use end-to-end email encryption to limit the set of systems they have to trust. The best-known method, PGP, has long been available in command-line form and as a plug-in for IMAP-based email clients, and it interoperates with Gmail only clumsily, via cut-and-paste. Twenty-five years of these scenarios have demonstrated that PGP is too hard to use. Chromebook users have never had a good solution either; being forced to choose between strong crypto and a strong endpoint device is unsatisfactory.
These are some of the reasons we’ve continued working on the
End-To-End research effort
. One of the things we’ve done over the past year is add the resulting
code to GitHub: E2EMail is not a Google product, it’s now a fully community-driven open source project, to which passionate security engineers from across the industry have already contributed.
E2EMail offers one approach to integrating OpenPGP into Gmail via a Chrome Extension, with improved usability, while carefully keeping all cleartext of the message body exclusively on the client. E2EMail is built on a proven, open source
library developed at Google.
E2EMail in its current incarnation uses a bare-bones central keyserver for testing, but the recent
Key Transparency announcement
is crucial to its further evolution. Key discovery and distribution lie at the heart of the usability challenges that OpenPGP implementations have faced. Key Transparency delivers a solid, scalable, and thus practical solution, replacing the problematic
model traditionally used with PGP.
We look forward to working alongside the community to integrate E2EMail with the Key Transparency server, and beyond. If you’re interested in delving deeper, check out the
repository on GitHub.
Announcing the first SHA1 collision
February 23, 2017
Posted by Marc Stevens (CWI Amsterdam), Elie Bursztein (Google), Pierre Karpman (CWI Amsterdam), Ange Albertini (Google), Yarik Markov (Google), Alex Petit Bianco (Google), Clement Baisse (Google)
Cryptographic hash functions like SHA-1 are a cryptographer’s Swiss Army knife. You’ll find that hashes play a role in browser security, managing code repositories, or even just detecting duplicate files in storage. Hash functions compress large amounts of data into a small message digest. As a cryptographic requirement for widespread use, finding two messages that lead to the same digest should be computationally infeasible. Over time, however, this requirement can fail due to
attacks on the mathematical underpinnings
of hash functions or to increases in computational power.
Today, more than 20 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision. This represents the culmination of two years of research that sprang from a collaboration between the
CWI Institute in Amsterdam
and Google. We’ve summarized how we went about generating a collision below. As a proof of the attack, we are
releasing two PDFs
that have identical SHA-1 hashes but different content.
For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team
announced that they would gradually phase out the use of SHA-1. We hope our practical attack on SHA-1 will cement that the hash function should no longer be considered secure.
We hope that our practical attack against SHA-1 will finally convince the industry that it is urgent to move to safer alternatives such as SHA-256.
What is a cryptographic hash collision?
A collision occurs when two distinct pieces of data (a document, a binary, or a website’s certificate) hash to the same digest as shown above. In practice, collisions should never occur for secure hash functions. However, if the hash algorithm has flaws, as SHA-1 does, a well-funded attacker can craft one. The attacker could then use the collision to deceive systems that rely on hashes into accepting a malicious file in place of its benign counterpart: for example, substituting one insurance contract for another with drastically different terms.
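To make the definition concrete, here is a minimal Python sketch of the collision condition: two inputs whose SHA-1 digests match even though the bytes themselves differ. The shattered PDF pair we are releasing satisfies exactly this check; ordinary distinct files do not.

```python
import hashlib

def is_sha1_collision(a: bytes, b: bytes) -> bool:
    """True when two *different* inputs share a SHA-1 digest."""
    if a == b:
        return False  # identical inputs are equality, not a collision
    return hashlib.sha1(a).digest() == hashlib.sha1(b).digest()

# Ordinary distinct inputs virtually never collide:
print(is_sha1_collision(b"contract A", b"contract B"))  # False
```

Comparing a second, unbroken hash (such as SHA-256) of the same two files is one way to confirm that a matching SHA-1 digest really came from a collision rather than from identical content.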
Finding the SHA-1 collision
In 2013, Marc Stevens published a paper that outlined a theoretical approach to create a SHA-1 collision. We started by creating a PDF prefix specifically crafted to allow us to generate two documents with arbitrary distinct visual contents that would nonetheless hash to the same SHA-1 digest. Turning this theoretical attack into practice required overcoming some new challenges. We then leveraged Google’s technical expertise and cloud infrastructure to compute the collision, one of the largest computations ever completed.
Here are some numbers that give a sense of how large scale this computation was:
Nine quintillion (9,223,372,036,854,775,808) SHA-1 computations in total
6,500 years of CPU computation to complete the first phase of the attack
110 years of GPU computation to complete the second phase
While those numbers seem very large, the SHA-1 shattered attack is still more than 100,000 times faster than a brute force attack, which remains impractical.
Mitigating the risk of SHA-1 collision attacks
Moving forward, it’s more urgent than ever for security practitioners to migrate to safer cryptographic hashes such as SHA-256 and SHA-3. Following
Google’s vulnerability disclosure policy
, we will wait 90 days before releasing code that allows anyone to create a pair of PDFs that hash to the same SHA-1 sum, given two distinct images and some preconditions. To prevent active use of this attack, we’ve added protections for Gmail and G Suite users that detect our PDF collision technique. Furthermore, we are providing a
free detection system
to the public.
You can find more details about the SHA-1 attack and detailed research outlining our techniques
About the team
This result is the product of a long-term collaboration between the CWI Institute and the security, privacy and anti-abuse research group at Google.
started collaborating on making Marc’s cryptanalytic attacks against SHA-1 practical using Google infrastructure.
developed the PDF attack,
worked on the cryptanalysis and the GPU implementation,
took care of the distributed GPU code,
Alex Petit Bianco
implemented the collision detector to protect Google users and Clement Baisse oversaw the reliability of the computations.
Another option for file sharing
February 21, 2017
Posted by Andrew Gerrand, Eric Grosse, Rob Pike, Eduardo Pinheiro and Dave Presotto, Google Software Engineers
Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new Upspin project, we aim to improve the situation by providing a global name space to name all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.
Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.
File names begin with the user's email address followed by a slash-separated Unix-like path name (for example, ann@example.com/dir/file).
Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.
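As an illustration of how such names decompose, here is a small Python sketch (the path is a made-up example, not taken from the Upspin documentation) that splits an Upspin name into its owner and path components:

```python
def split_upspin_name(name: str) -> tuple[str, list[str]]:
    """Split 'user@example.com/dir/file' into (owner, path elements)."""
    owner, _, rest = name.partition("/")
    if "@" not in owner:
        raise ValueError("Upspin names must begin with an email address")
    return owner, [p for p in rest.split("/") if p]

owner, path = split_upspin_name("ann@example.com/photos/2017/trip.jpg")
print(owner, path)  # ann@example.com ['photos', '2017', 'trip.jpg']
```

The leading email address is what lets a single global name space route every lookup to the right user's directory server.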
If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,
read: firstname.lastname@example.org, email@example.com
allows those two users to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access is enforced end-to-end cryptographically: cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.
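A hedged sketch of how a client might read such an Access file; the real grammar is defined by Upspin itself, and this only handles the simple "right: user, user" form shown above:

```python
def parse_access(text: str) -> dict[str, list[str]]:
    """Map each right (e.g. 'read') to the list of users granted it."""
    grants: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        right, _, users = line.partition(":")
        grants[right.strip()] = [u.strip() for u in users.split(",") if u.strip()]
    return grants

grants = parse_access("read: firstname.lastname@example.org, email@example.com")
print(grants["read"])  # ['firstname.lastname@example.org', 'email@example.com']
```

Because the directory is the unit of sharing, one small file like this governs everything beneath it.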
Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing. This is an open source contribution, not a Google product. We have not yet integrated with the
server, though we expect to eventually, and for now use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.
It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the GitHub repository at
Understanding differences between corporate and consumer Gmail threats
February 16, 2017
Posted by Ali Zand and Vijay Eranti, Anti-Abuse Research and Gmail Abuse
We are constantly working to protect our users and to adapt quickly to new online threats. This work never stops: every minute, we prevent over 10 million unsafe or unwanted emails from reaching Gmail users, whether they carry malicious attachments that infect a user’s machine if opened, phishing requests for banking or account details, or omnipresent spam. A cornerstone of our defense is understanding the pulse of the email threat landscape. This awareness helps us anticipate and react faster to emerging attacks.
Today at RSA, we are sharing key insights about the diversity of threats to corporate Gmail inboxes. We’ve highlighted some of our key findings below; you can see our full presentation
. We’ve already incorporated these insights to help keep our G Suite users safe, and we hope that by exposing these nuances, security and abuse professionals everywhere can better understand their risk profile and customize their defenses accordingly.
How threats to corporate and consumer inboxes differ
While spam may be the most common attack across all inboxes, did you know that malware and phishing are far more likely to target corporate users? Here’s a breakdown of how attacks stack up for corporate vs. personal inboxes:
Different threats to different types of organizations
Attackers appear to choose targets based on multiple dimensions, such as the size and the type of the organization, its country of operation, and the organization’s sector of activity. Let’s look at an example of corporate users across businesses, nonprofits, government-related industries, and education services. If we consider business inboxes as a baseline, we find attackers are far more likely to target nonprofits with malware, while attackers are more likely to target businesses with phishing and spam.
These nuances go all the way down to the granularity of country and industry type. This shows how security and abuse professionals must tailor defenses based on their personalized threat model: no two corporate users face the same mix of attacks.
Constant improvements to corporate Gmail protections
Research like this enables us to better protect our users. We are constantly innovating, and we’ve already incorporated these findings into our G Suite protections. We have also implemented and rolled out several features that help our users stay safe against these ever-evolving threats.
The forefront of our defenses is a state-of-the-art email classifier that detects abusive
messages with 99.9% accuracy
To protect yourself from unsafe websites, make sure to heed
that alert you of potential phishing and malware attacks.
Use many layers of defense: we recommend using a
security key enforcement
(2-step verification) to thwart attackers from accessing your account in the event of a stolen password.
To ensure your email contents stay safe and secure in transit, use our
TLS encryption indicator
, to ensure only the intended recipient can read your email.
We will never stop working to keep our users and their inboxes secure. To learn more about how we protect Gmail, check out this YouTube video that summarizes the lessons we learned while protecting Gmail users through the years.
802.11s Security and Google Wifi
February 7, 2017
Posted by Paul Devitt, Security Engineer
Making sure your home network and information stay secure is our top priority. So when we launched the Google OnHub home router in 2015, we made sure
security was baked into its core
. In 2016 we took all we learned from OnHub and made it even better by adding mesh support with the introduction of
Secure to the core - Always
The primary safeguard for your Wifi points is verified boot. The operating system and code that your OnHub and Google Wifi run are guaranteed to have been signed by Google. Both OnHub and Google Wifi use
Coreboot and Depthcharge
from ChromeOS and ensure system integrity by implementing
from Android. To secure userspace, we use process isolation with
and a strict set of policies.
On the software side, Google Wifi and OnHub are subject to
expansive fuzz testing
of major components and functions. The continual improvements found by fuzzing are fed into Google Wifi and OnHub, and are made available through the regular automatic updates, secured by Google’s cloud.
802.11s Security for WiFi
In 2016 with the launch of Google Wifi, we introduced
802.11s mesh technology
to the home router space. The result is a system where multiple Wifi points work together to create blanket coverage. The specification for 802.11s recommends that appropriate security steps be taken, but doesn’t strictly define them. We spent significant time building a security model into our implementation of 802.11s so that Google Wifi and OnHub networks are always composed of exactly the devices you expect.
As each mesh node must speak securely to its neighboring nodes, it's imperative that a secure method, isolated from the user, be established to form those links. Each Wifi point establishes a separate encrypted channel with its neighbors and the primary node. On any major network topology change (such as a node being factory reset, a node being added, or an unexpected node joining the network), the mesh undergoes a complete cycling of the encryption keys: each node establishes and tests a new set of keys with its respective neighbors, verifies that it has network connectivity, and then the network as a whole transitions to the new keys.
These mesh encryption keys are generated locally on your devices and are never transmitted outside of your local network. In the event that a key is discovered outside of your local network, a rekeying operation is triggered. Rekeying lets the mesh network adapt freely to changes the user makes while maintaining a high level of security for the devices communicating across it.
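As a toy model of the rekeying behavior described above (not Google Wifi's actual implementation, and the node names are made up), each link between neighboring nodes gets its own fresh key, and any topology change discards the whole set at once:

```python
import secrets

def rekey(links):
    """Issue a fresh 256-bit key for every (node_a, node_b) link."""
    return {tuple(sorted(link)): secrets.token_bytes(32) for link in links}

links = [("primary", "node1"), ("primary", "node2"), ("node1", "node2")]
keys = rekey(links)
# A topology change (say, a node is factory reset) triggers a full rekey;
# every link key in the mesh is replaced, not just the affected node's:
new_keys = rekey(links)
print(all(new_keys[l] != keys[l] for l in new_keys))  # True
```

Rotating all per-link keys, rather than only the one touching the changed node, is what keeps a compromised or unexpected device from ever holding a still-valid key.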
Committed to security
We have an ongoing commitment to the security of Google Wifi and OnHub. Both devices participate in the
Google Vulnerability Rewards Program (VRP)
and eligible bugs can be rewarded up to $20,000 (U.S.). We’re always looking to raise the bar to help our users stay secure online.
Hosted S/MIME by Google provides enhanced security for Gmail in the enterprise
February 2, 2017
Posted by Nicolas Kardas, Gmail Product Management and Nicolas Lidzborski, G Suite Security Engineering Lead
We are constantly working to meet the needs of our enterprise customers, including enhanced security for their communications. Our aim is to offer a secure method to transport sensitive information despite the insecure channels email travels over today, and without compromising Gmail’s extensive protections for spam, phishing and malware.
Why hosted S/MIME?
S/MIME has been around for many years. However, its adoption has been limited because it is difficult to deploy (end users have to manually install certificates in their email applications), and the underlying email service cannot efficiently protect against spam, malware and phishing because client-side S/MIME makes the email content opaque.
With Google’s new hosted S/MIME solution, once an incoming encrypted email with S/MIME is received, it is stored using
. This means that all normal processing of the email can happen, including extensive protections for spam/phishing/malware, admin services (such as vault retention, auditing and email routing rules), and high value end user features such as mail categorization, advanced search and
. For the vast majority of emails, this is the safest solution - giving the benefit of strong authentication and encryption in transit - without losing the safety and features of Google's processing.
Using hosted S/MIME provides an added layer of security compared to using SMTP over TLS to send emails. TLS only guarantees to the sender’s service that the first hop transmission is encrypted and to the recipient that the last hop was encrypted. But in practice, emails often take many hops (through forwarders, mailing lists, relays, appliances, etc). With hosted S/MIME, the message itself is encrypted. This facilitates secure transit all the way down to the recipient’s mailbox.
S/MIME also adds verifiable account-level signature authentication (versus only domain-based signatures with DKIM). This means that email receivers can verify that incoming email really is from the sending account, not just a matching domain, and that the message has not been tampered with after it was sent.
How to use hosted S/MIME?
S/MIME requires every email address to have a suitable certificate attached to it. By default, Gmail requires the certificate to be from a publicly trusted root Certificate Authority (CA) which meets
strong cryptographic standards
. System administrators will have the option to lower these requirements for their domains.
To use hosted S/MIME, companies need to upload their own certificates (with private keys) to Gmail, which can be done by end users via Gmail settings or by admins in bulk via the Gmail API.
From there, using hosted S/MIME is a seamless experience for end users. When receiving a digitally signed message, Gmail automatically associates the public key with the contact of the sender. By default, Gmail automatically signs and encrypts outbound messages if there is a public S/MIME key available for the recipient. Although users have the option to manually remove encryption, admins can set up rules that override their action.
Hosted S/MIME is supported on Gmail web/iOS/Android, on Inbox and on clients connected to the Gmail service via IMAP. Users can exchange signed and encrypted emails with recipients using hosted S/MIME or client-side S/MIME.
Which companies should consider using hosted S/MIME?
Hosted S/MIME provides a solution that is easy to manage for administrators and seamless for end users. Companies that want security in transit and digital signature/non-repudiation at the account level should consider using hosted S/MIME. This is a need for many companies working with sensitive/confidential information.
Hosted S/MIME is available for
G Suite Enterprise edition
Better and more usable protection from phishing
February 1, 2017
Posted by Christiaan Brand and Guemmy Kim, Product Managers, Google Account Security
Despite constant advancements in online safety, phishing — one of the web’s oldest and simplest attacks — remains a tough challenge for the security community. Subtle tricks and good old-fashioned con-games can cause even the most security-conscious users to reveal their passwords or other personal information to fraudsters.
New advancements in phishing protection
This is why we’re excited about the
news for G Suite customers
: the launch of Security Key enforcement. Now, G Suite administrators can better protect their employees by enabling Two-Step Verification (2SV) using
Security Keys as the second factor, making this protection the norm rather than just an option. 2SV with only a Security Key offers the highest level of protection from phishing. Instead of entering a unique code as a second factor at sign-in, Security Keys send us cryptographic proof that users are on a legitimate Google site and that they have their Security Keys with them. Since most hijackers are remote, their efforts are thwarted because they cannot get physical possession of the Security Key.
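The phishing resistance comes from origin binding: the Security Key only ever signs a challenge together with the origin the browser actually saw, so a signature produced on a look-alike site is useless on the real one. A toy model of this idea, with HMAC standing in for the real public-key signature used by actual Security Keys (the phishing URL is invented for illustration):

```python
import hashlib, hmac, secrets

device_secret = secrets.token_bytes(32)  # never leaves the Security Key

def sign(challenge: bytes, origin: str) -> bytes:
    """The key signs the challenge together with the origin the browser saw."""
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

challenge = secrets.token_bytes(16)
legit = sign(challenge, "https://accounts.google.com")
phish = sign(challenge, "https://accounts.google.com.evil.example")
print(hmac.compare_digest(legit, phish))  # False: the phished signature is useless
```

Because the origin is mixed into every signature, a fraudster who tricks the user on a spoofed page receives a signature that the legitimate site will never accept.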
Users can also take advantage of new
Bluetooth low energy (BLE) Security Key support
, which makes using 2SV Security Key protection easier on mobile devices. BLE Security Keys, which work on both Android and iOS, improve upon the usability of other form factors.
A long history of phishing protections
We’ve helped protect users from phishing for many years. We rolled out 2SV back in 2011, and later strengthened it in 2014 with the
addition of Security Keys
. These launches complement our many layers of phishing protections —
Safe Browsing warnings
Gmail spam filters
account sign-in challenges
— as well as our work with industry groups like the
to develop standards and combat phishing across the industry. In the coming months, we’ll build on these protections and offer users the opportunity to further protect their personal Google Accounts.
Vulnerability Rewards Program: 2016 Year in Review
January 30, 2017
Posted by Eduardo Vela Nava, VRP Technical Lead, Master of Disaster
We created our Vulnerability Rewards Program in 2010 because researchers should be rewarded for protecting our users. Their discoveries help keep our users, and the internet at large, as safe as possible.
The amounts we award vary, but our message to researchers does not; each one represents a sincere ‘thank you’.
As we have for
, we’re again sharing a yearly wrap-up of the Vulnerability Rewards Program.
What was new?
In short — a lot. Here’s a quick rundown:
Previously by-invitation only, we opened up
Chrome's Fuzzer Program
to submissions from the public. The program allows researchers to run
at large scale, across thousands of cores on Google hardware, and receive reward payments automatically.
On the product side, we saw amazing contributions from Android researchers all over the world, less than a year after Android launched its VRP. We also expanded our overall VRP to include more products, including OnHub and Nest devices.
We increased our presence at events around the world, like
. The vulnerabilities responsibly disclosed at these events enabled us to quickly provide fixes to the ecosystem and keep customers safe. At both events, we were able to close down a vulnerability in Chrome within days of being notified of the issue.
Stories that stood out
As always, there was no shortage of inspiring, funny, and quirky anecdotes from the 2016 year in VRP.
We met Jasminder Pal Singh at Nullcon in India. Jasminder is a long-time contributor to the VRP, but this research is a side project for him. He spends most of his time growing
Jasminder Web Services Point
, the startup he operates with six other colleagues and friends. The team consists of two web developers, one graphic designer, an Android developer, an iOS developer, one Linux administrator, and a content manager/writer. Jasminder’s VRP rewards fund the startup. The number of reports we receive from researchers in India is growing, and we’re growing the VRP’s presence there with additional conference sponsorships, trainings, and more.
Jasminder (back right) and his team
Jon Sawyer worked with his colleague Sean Beaupre from Streamlined Mobile Solutions, and friend Ben Actis to submit three Android vulnerability reports. A resident of
Clallam County, Washington
, Jon and his collaborators donated their $8,000 reward to their local Special Olympics team, the Orcas. Jon told us the reward was particularly meaningful because his son, Benji, plays on the team. He said:
“Special Olympics provides a sense of community, accomplishment, and free health services at meets. They do incredible things for these people, at no cost for the athletes or their parents. Our donation is going to supply them with new properly fitting uniforms, new equipment, cover some facility rental fees (bowling alley, gym, track, swimming pool) and most importantly help cover the biggest cost, transportation.”
VRP researchers sometimes attach videos that demonstrate the bug. While making a great proof-of-concept video is a skill in itself, our researchers raised it to another level this year. Check out this video Frans Rosén sent us. It’s perfectly synchronized to the background music! We hope this trend continues in 2017 ;-)
Researchers’ individual contributions, and our relationship with the community, have never been more important. A hearty thank you to everyone that contributed to the VRP in 2016 — we’re excited to work with you (and others!) in 2017 and beyond.
*Josh Armour (VRP Program Manager), Andrew Whalley, and Quan To contributed mightily to help lead these Google-wide efforts.
The foundation of a more secure web
January 26, 2017
Posted by Ryan Hurst, Security and Privacy Engineering
In support of our work to implement HTTPS across all of our products (
) we have been operating our own subordinate Certificate Authority (GIAG2), issued by a third party. This has been a key element enabling us to handle the SSL/TLS certificate needs of Google products more rapidly.
As we look forward to the evolution of both the web and our own products, it is clear HTTPS will continue to be a foundational technology. This is why we have made the decision to expand our current Certificate Authority efforts to include the operation of our own Root Certificate Authority. To this end, we have established Google Trust Services (
/), the entity we will rely on to operate these Certificate Authorities on behalf of Google and Alphabet.
The process of embedding Root Certificates into products and waiting for the associated versions of those products to be broadly deployed can take time. For this reason we have also purchased two existing Root Certificate Authorities, GlobalSign R2 and R4. These Root Certificates will enable us to begin independent certificate issuance sooner rather than later.
We intend to continue the operation of our existing GIAG2 subordinate Certificate Authority. This change will enable us to begin the process of migrating to our new, independent infrastructure.
Google Trust Services now operates the following Root Certificates:
GTS Root R1: RSA 4096, SHA-384, expires Jun 22, 2036
GTS Root R2: RSA 4096, SHA-384, expires Jun 22, 2036
GTS Root R3: ECC 384, SHA-384, expires Jun 22, 2036
GTS Root R4: ECC 384, SHA-384, expires Jun 22, 2036
GS Root R2: RSA 2048, SHA-1, expires Dec 15, 2021
GS Root R4: ECC 256, SHA-256, expires Jan 19, 2038
Due to timing issues involved in establishing an independently trusted Root Certificate Authority, we have also secured the option to cross sign our CAs using:
GS Root R3: RSA 2048, SHA-256, expires Mar 18, 2029
RSA 2048, SHA-1, expires May 21, 2022
If you are building products that intend to connect to a Google property moving forward, you need to, at minimum, include the above Root Certificates. That said, even though we now operate our own roots, we may still choose to operate subordinate CAs under third-party operated roots.
For this reason, if you are developing code intended to connect to a Google property, we still recommend you include a wide set of trustworthy roots. Google maintains a sample PEM file at (
) which is periodically updated to include the Google Trust Services owned and operated roots as well as other roots that may be necessary now, or in the future to communicate with and use Google Products and Services.
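On the client side, trusting such a bundle is a one-liner in most TLS stacks. A minimal Python sketch, where "roots.pem" is a placeholder path for the sample file mentioned above:

```python
import ssl

def context_with_roots(pem_path=None) -> ssl.SSLContext:
    """TLS client context with certificate verification on; optionally
    trust extra roots from a PEM bundle such as the sample file above."""
    ctx = ssl.create_default_context()  # system trust store by default
    if pem_path is not None:
        ctx.load_verify_locations(cafile=pem_path)  # e.g. "roots.pem"
    return ctx

ctx = context_with_roots()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Loading the bundle in addition to the system store, rather than instead of it, follows the advice above to trust a wide set of roots rather than pinning narrowly.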
App Security Improvements: Looking back at 2016
January 19, 2017
Posted by Rahul Mishra, Android Security Program Manager
[Cross-posted from the
Android Developers Blog
In April 2016, the Android Security team described how the Google Play App Security Improvement (ASI) program has helped developers
fix security issues in 100,000 applications
. Since then, we have detected and notified developers of 11 new security issues and provided developers with resources and guidance to update their apps. Because of this, over 90,000 developers have updated over 275,000 apps!
ASI now notifies developers of 26 potential security issues. To make this process more transparent, we introduced
a new page
where developers can find information about all these security issues in one place. This page includes links to help center articles containing instructions and additional support contacts. Developers can use this page as a resource to learn about new issues and keep track of all past issues.
Developers can also refer to our
security best practices documents
, which are aimed at improving the understanding of general security concepts and providing examples that can help tackle app-specific issues.
How you can help:
For feedback or questions, please reach out to us through the
Google Play Developer Help Center
Silence speaks louder than words when finding malware
January 17, 2017
Posted by Megan Ruthven, Software Engineer
[Cross-posted from the
Android Developers Blog
In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is
. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.
But sometimes devices stop checking up with Verify apps. This may happen for a non-security-related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with our other security systems, to help determine whether an app is a PHA and to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our
security update system
This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working and to prevent it from happening in the future.
Flagging DOI Apps
To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.
Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. Using the quantities defined below, the Z-score is Z = (x - N*p) / sqrt(N*p*(1 - p)).
N = Number of devices that downloaded the app.
x = Number of retained devices that downloaded the app.
p = Probability that a device downloading any app will be retained.
In this context, we call the Z-score of an app's retention rate its DOI score. The DOI score indicates that an app has a statistically significantly lower retention rate if the Z-score is much less than -3.7. This means that if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being this large. In this case, the null hypothesis is that the app's correlation with lower retention rates is accidental, independent of what the app does.
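The DOI score is the standard binomial Z-score, Z = (x - N*p) / sqrt(N*p*(1 - p)). A minimal implementation, with made-up numbers for illustration:

```python
import math

def doi_score(n_downloads: int, n_retained: int, p_retain: float) -> float:
    """Binomial Z-score of retention: (x - N*p) / sqrt(N*p*(1 - p))."""
    mean = n_downloads * p_retain
    std = math.sqrt(n_downloads * p_retain * (1.0 - p_retain))
    return (n_retained - mean) / std

# A hypothetical app downloaded by 10,000 devices of which only 9,100
# kept checking up, against a 95% baseline retention probability:
print(doi_score(10_000, 9_100, 0.95) < -3.7)  # True: flagged for review
```

Because the denominator grows only with the square root of N, widely downloaded apps with even modestly depressed retention produce very negative scores, which is exactly the percolation effect described below.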
This allows extreme apps (with low retention rates and high download counts) to percolate to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs.
Difference between a regular and DOI app download on the same device.
Results in the wild
Among others, the DOI score flagged many apps in three well known malware families—
. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three malware families because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective for discovering PHAs and blocking them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out