In the late 19th and early 20th centuries, a series of catastrophic fires in short succession led an outraged public to demand action from the budding fire protection industry. Among the experts, one initial focus was on “Fire Evacuation Tests”. The earliest of these tests focused on individual performance and tested occupants on their evacuation speed, sometimes performing the tests “by surprise” as though the fire drill were a real fire. These early tests were more likely to result in injuries to the test-takers than in any improvement in survivability. It wasn’t until the introduction of better protective engineering - wider doors, push bars at exits, firebreaks in construction, lighted exit signs, and so on - that survival rates from building fires began to improve. As protections evolved over the years and improvements like mandatory fire sprinklers became required by building codes, survival rates continued to improve steadily, and “tests” have evolved into pre-announced training and posted evacuation plans.
In this blog, we will analyze the modern practice of Phishing “Tests” as a cybersecurity control and how it compares to industry-standard fire protection practices.
Google currently operates under regulations (for example, FedRAMP in the USA) that require us to perform annual “Phishing Tests.” In these mandatory tests, the Security team creates and sends phishing emails to Googlers, counts how many interact with the email, and educates them on how to “not be fooled” by phishing. These exercises typically collect reporting metrics on sent emails and how many employees “failed” by clicking the decoy link. Usually, further education is required for employees who fail the exercise. Per the FedRAMP pen-testing guidance doc: “Users are the last line of defense and should be tested.”
These tests resemble the first “evacuation tests” that building occupants were once subjected to. They require individuals to recognize the danger and react individually in an ‘appropriate’ way, and they tell participants that any failure is an individual failure on their part rather than a systemic issue. Worse, FedRAMP guidance requires companies to bypass or eliminate all systematic controls during the tests, ensuring that the likelihood of a person clicking on a phishing link is artificially maximized.
Among the harmful side effects of these tests:
There is no evidence that the tests result in fewer incidents of successful phishing campaigns;
Phishing (or, more generally, social engineering) remains a top vector for attackers establishing footholds at companies.
Research shows that these tests do not effectively prevent people from being fooled. This study with 14,000 participants found a counterproductive effect of phishing tests: “repeat clickers” consistently fail tests despite recent interventions.
Some phishing tests (e.g., FedRAMP’s) require bypassing existing anti-phishing defenses. This creates an inaccurate perception of actual risks, allows penetration testing teams to avoid mimicking actual modern attacker tactics, and creates a risk that the allowlists put in place to facilitate the test could be accidentally left in place and reused by attackers.
These tests significantly increase the load on Detection and Incident Response (D&R) teams, as users saturate them with thousands of needless reports.
Employees are upset by them and feel security is “tricking them”, which degrades the trust that security teams need both to make meaningful systemic improvements and to get employees to take timely action during actual security events.
At larger enterprises with multiple independent products, people can end up with numerous overlapping required phishing tests, creating a repeated burden.
Training humans to avoid phishing or social engineering with a 100% success rate is likely an impossible task. There is, however, value in teaching people how to spot phishing and social engineering so they can alert security to perform incident response. By ensuring that even a single user reports attacks in progress, companies can activate full-scope responses, a worthwhile defensive control that can quickly mitigate even advanced attacks. But, much like the fire safety profession has moved to regular pre-announced evacuation training instead of surprise drills, the information security industry should move toward training that de-emphasizes surprises and tricks and instead prioritizes accurate training of what we want staff to do the moment they spot a phishing email - with a particular focus on recognizing and reporting the phishing threat.
In short - we need to stop doing phishing tests and start doing phishing fire drills.
A “phishing fire drill” would aim to accomplish the following:
Educate our users about how to spot phishing emails
Inform users how to report phishing emails
Allow employees to practice reporting a phishing email in the manner that we would prefer, and
Collect useful metrics for auditors (a sketch of computing these appears after this list), such as:
The number of users who completed the practice exercise of reporting the email as a phishing email
The time between the email being opened and the first report of phishing
Time of first escalation to the security team (and time delta)
Number of reports at 1 hour, 4 hours, 8 hours, and 24 hours post-delivery
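To make these metrics concrete, here is a minimal sketch of computing them from report timestamps. The event shapes, names, and values are hypothetical placeholders, not any particular reporting pipeline’s schema:

```python
from datetime import datetime, timedelta

# Hypothetical report events: (employee_id, reported_at) pairs collected by
# whatever reporting pipeline the company uses. All data here is invented.
drill_sent_at = datetime(2024, 6, 3, 9, 0)
reports = [
    ("alice", datetime(2024, 6, 3, 9, 4)),
    ("bob",   datetime(2024, 6, 3, 10, 12)),
    ("carol", datetime(2024, 6, 4, 8, 30)),
]
escalated_at = datetime(2024, 6, 3, 9, 6)  # first report reaching security

def reports_within(hours: int) -> int:
    """Count reports received within `hours` of delivery."""
    cutoff = drill_sent_at + timedelta(hours=hours)
    return sum(1 for _, t in reports if t <= cutoff)

print("users who completed the drill:", len({who for who, _ in reports}))
print("time to first report:", min(t for _, t in reports) - drill_sent_at)
print("time to first escalation:", escalated_at - drill_sent_at)
for h in (1, 4, 8, 24):
    print(f"reports at {h}h post-delivery:", reports_within(h))
```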
When performing a phishing drill, someone would send an email that announces itself as a phishing drill, with relevant instructions or specific tasks for the recipient to perform. An example text is provided below.
Hello! I am a Phishing Email.
This is a drill - this is only a drill!
If I were an actual phishing email, I might ask you to log into a malicious site with your actual username or password, or I might ask you to run a suspicious command like <example command>. I might try any number of tricks to get access to your Google Account or workstation.
You can learn more about recognizing phishing emails at <LINK TO RESOURCE> and even test yourself to see how good you are at spotting them. Regardless of the form a phishing email takes, you can quickly report it to the security team once you notice it’s not what it seems.
To complete the annual phishing drill, please report me. To do that, <company-specific instructions on where to report phishing>.
Thanks for doing your part to keep <company> safe!
Tricky Phish, Ph.D.
Phishing and social engineering aren’t going away as attack techniques. As long as humans are fallible social creatures, attackers will have ways to manipulate the human factor. The more effective approach to both risks is the long-term pursuit of secure-by-default systems, with investment in engineering defenses such as unphishable credentials (like passkeys) and multi-party approval for sensitive security contexts throughout production systems. It’s because of investments in architectural defenses like these that Google hasn’t had to seriously worry about password phishing in nearly a decade.
Educating employees about alerting security teams of attacks in progress remains a valuable and essential addition to a holistic security posture. However, there’s no need to make this adversarial, and we don’t gain anything by “catching” people “failing” at the task. Let's stop engaging in the same old failed protections and follow the lead of more mature industries, such as Fire Protection, which has faced these problems before and already settled on a balanced approach.
User safety is a top priority for Android. We’ve been consistently working to stay ahead of the world’s scammers, fraudsters and bad actors. And as their tactics evolve in sophistication and scale, we continually adapt and enhance our advanced security features and AI-powered protections to help keep Android users safe.
In addition to our new suite of advanced theft protection features to help keep your device and data safe in the case of theft, we’re also focusing increasingly on providing additional protections against mobile financial fraud and scams.
Today, we’re announcing more new fraud and scam protection features coming in Android 15 and Google Play services updates later this year to help better protect users around the world. We’re also sharing new tools and policies to help developers build safer apps and keep their users safe.
Google Play Protect live threat detection
Google Play Protect now scans 200 billion Android apps daily, helping keep more than 3 billion users safe from malware. We are expanding Play Protect’s on-device AI capabilities with Google Play Protect live threat detection to improve fraud and abuse detection against apps that try to cloak their actions.
With live threat detection, Google Play Protect’s on-device AI will analyze additional behavioral signals related to the use of sensitive permissions and interactions with other apps and services. If suspicious behavior is discovered, Google Play Protect can send the app to Google for additional review and then warn users or disable the app if malicious behavior is confirmed. The detection of suspicious behavior is done on device in a privacy preserving way through Private Compute Core, which allows us to protect users without collecting data. Google Pixel, Honor, Lenovo, Nothing, OnePlus, Oppo, Sharp, Transsion, and other manufacturers are deploying live threat detection later this year.
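To give a feel for what behavioral-signal analysis means in the abstract, here is a toy flagging heuristic. This is invented purely for illustration; Play Protect’s actual signals, model, and thresholds are not public and differ from this sketch:

```python
# Toy illustration only: flag an app that repeatedly uses sensitive
# permissions while hiding its UI. Signal names and the threshold are
# invented; this is NOT Play Protect's actual logic or data.
SENSITIVE = {"READ_SMS", "ACCESSIBILITY", "RECORD_AUDIO"}

def suspicion_score(events: list) -> int:
    """Count uses of sensitive permissions while the app is not visible."""
    return sum(
        1 for e in events
        if e["permission"] in SENSITIVE and e.get("foreground") is False
    )

events = [
    {"permission": "READ_SMS", "foreground": False},
    {"permission": "CAMERA", "foreground": True},
    {"permission": "ACCESSIBILITY", "foreground": False},
]
if suspicion_score(events) >= 2:  # invented threshold
    print("send app for additional review; warn user if confirmed malicious")
```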
Stronger protections against fraud and scams
We’re also bringing additional protections to fight fraud and scams in Android 15, with key enhancements to safeguard your information and privacy:
We are continuing to develop new, AI-powered protections, like the scam call detection capability that we’re testing, which uses on-device Gemini Nano AI to warn users in real time when it detects conversation patterns commonly associated with fraud and scams.
Protecting against screen-sharing social engineering attacks
We’re also tightening controls for screen sharing in Android 15 to limit social engineering attacks that try to view your screen and steal information, while introducing new safeguards to further shield your sensitive information:
Having clear content sharing indicators is important for users to understand when their data is visible. A new, more prominent screen indicator coming to Android devices later this year will always let you know when screen sharing is active, and you can stop sharing with a simple tap.
Advanced cellular security to fight fraud and surveillance
We’re adding new advanced cellular protections in Android 15 to defend against abuse by criminals using cell site simulators to snoop on users or send them SMS-based fraud messages.
These features require device OEM integration and compatible hardware. We are working with the Android ecosystem to bring these features to users. We expect OEM adoption to progress over the next couple of years.
More security tools for developers to fight fraud and scams
Safeguarding apps from scams and fraud is an ongoing battle for developers. The Play Integrity API lets developers check that their apps are unmodified and running on a genuine Android device, so they can detect fraudulent or risky behavior and take action to prevent attacks and abuse. We’ve updated the API with new in-app signals to help developers secure their apps against new threats.
Developers can decide how their apps respond to these signals, such as prompting the user to close risky apps or turn on Google Play Protect before continuing.
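As a rough sketch of how a backend might consume these verdicts, the snippet below calls the Play Integrity decodeIntegrityToken REST endpoint and inspects the decoded payload. The package name, token, and credentials are placeholders, and the newer environmentDetails fields should be treated as assumptions to verify against the current documentation:

```python
import requests

PACKAGE = "com.example.app"      # hypothetical package name
ACCESS_TOKEN = "ya29...elided"   # OAuth token for a service account

def decode(integrity_token: str) -> dict:
    """Decode an integrity token server-side via the Play Integrity REST API."""
    resp = requests.post(
        f"https://playintegrity.googleapis.com/v1/{PACKAGE}:decodeIntegrityToken",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"integrityToken": integrity_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["tokenPayloadExternal"]

payload = decode("token-from-the-client")  # placeholder token
device_ok = "MEETS_DEVICE_INTEGRITY" in payload.get(
    "deviceIntegrity", {}).get("deviceRecognitionVerdict", [])

# Newer in-app signals (assumed field names; check current docs):
env = payload.get("environmentDetails", {})
risky_apps = env.get("appAccessRiskVerdict", {})  # capturing/controlling apps
play_protect = env.get("playProtectVerdict")      # e.g. "NO_ISSUES"

if not device_ok or play_protect not in (None, "NO_ISSUES"):
    print("prompt user to turn on Google Play Protect before continuing")
```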
Upgraded policies and tools for developers to enhance user privacy
We’re working to make photo permissions even more private for users. Starting this year, apps on Play must demonstrate that they require broad access in order to use the photo or video permissions; Google Play will start enforcing this policy in August. We’ve also updated photo picker, Android’s preferred solution for granting individual access to photos and videos without requiring broad permissions. Photo picker now includes support for cloud storage services like Google Photos, making it much easier to find the right photo by browsing albums and favorites. Coming later this year, photo picker will support local and cloud search as well.
Always evolving our multi-layered protections
Android's commitment to user safety is unwavering. We're constantly evolving our multi-layered user protections – combining the power of advanced AI with close partnerships across OEMs, the Android ecosystem, and the security research community. Building a truly secure Android experience is a collaborative effort, and we'll continue to work tirelessly to safeguard your device and data.
Google and Apple have worked together to create an industry specification – Detecting Unwanted Location Trackers – for Bluetooth tracking devices that makes it possible to alert users across both Android and iOS if such a device is unknowingly being used to track them. This will help mitigate the misuse of devices designed to help keep track of belongings. Google is now launching this capability on Android 6.0+ devices, and today Apple is implementing this capability in iOS 17.5.
With this new capability, Android users will now get a “Tracker traveling with you” alert on their device if an unknown Bluetooth tracking device is seen moving with them over time, regardless of the platform the device is paired with.
If a user gets such an alert on their Android device, it means that someone else’s AirTag, Find My Device network-compatible tracker tag, or other industry specification-compatible Bluetooth tracker is moving with them. Android users can view the tracker’s identifier, have the tracker play a sound to help locate it, and access instructions to disable it. Bluetooth tag manufacturers including Chipolo, eufy, Jio, Motorola, and Pebblebee have committed that future tags will be compatible.
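As a loose illustration of the core idea - an unknown identifier observed across enough distinct times and places triggers an alert - consider the toy heuristic below. The real specification defines its own advertisement formats, thresholds, and privacy protections; everything here is invented for illustration:

```python
from collections import defaultdict

# Toy heuristic only: alert when the same unknown tracker identifier is
# observed at several distinct times and locations. The actual Detecting
# Unwanted Location Trackers spec works differently and in more depth.
sightings = defaultdict(list)  # tracker_id -> [(minute, location)]

def observe(tracker_id: str, minute: int, location: tuple) -> None:
    sightings[tracker_id].append((minute, location))
    times = {m for m, _ in sightings[tracker_id]}
    places = {loc for _, loc in sightings[tracker_id]}
    if len(times) >= 3 and len(places) >= 2:  # invented thresholds
        print(f"Tracker traveling with you: {tracker_id}")

observe("tag-123", 0, (37.42, -122.08))
observe("tag-123", 20, (37.44, -122.10))
observe("tag-123", 45, (37.47, -122.14))  # third sighting triggers the alert
```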
Google’s Find My Device is secure by default and private by design. Multi-layered user protections, including first-of-its-kind safety-first protections, help mitigate potential risks to user privacy and safety while allowing users to effectively locate and recover lost devices. This cross-platform collaboration, an industry first involving community and industry input, offers instructions and best practices for manufacturers, should they choose to build unwanted tracking alert capabilities into their products. Google and Apple will continue to work with the Internet Engineering Task Force via the Detecting Unwanted Location Trackers working group to develop the official standard for this technology.
Last year, Google launched passkey support for Google Accounts. Passkeys are a new industry standard that gives users an easy, highly secure way to sign in to apps and websites. Today, we announced that passkeys have been used to authenticate users more than 1 billion times across over 400 million Google Accounts.
As more users encounter passkeys, we’re often asked questions about how they relate to security keys, how Google Workspace administrators can configure passkeys for the user accounts that they manage, and how they relate to the Advanced Protection Program (APP). This post will seek to clarify these topics.
Passkeys and security keys
Passkeys are an evolution of security keys, meaning users get the same security benefits, but with a much simplified experience. Passkeys can be used in the Google Account sign-in process in many of the same ways that security keys have been used in the past — in fact, you can now choose to store your passkey on your security key. This provides users with three key benefits:
Stronger security. Users typically authenticate with passkeys by entering their device’s screen lock PIN, or using a biometric authentication method, like a fingerprint or a face scan. By storing the passkey on a security key, users can ensure that passkeys are only available when the security key is plugged into their device, creating a stronger security posture.
Flexible portability. Today, users rely on password managers to make passkeys available across all of their devices. Security keys provide an alternate way to use your passkeys across your devices: by bringing your security keys with you.
Simpler sign-in. Passkeys can act as both a first and a second factor, simultaneously (see the sketch after this list). By creating a passkey on your security key, you can skip entering your password. This replaces your remotely stored password with the PIN you used to unlock your security key, which improves user security. (If you prefer to continue using your password in addition to using a passkey, you can turn off “Skip password when possible” in your Google Account security settings.)
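A minimal sketch of why a single passkey assertion can carry both factors: the WebAuthn authenticator data includes a “user verified” (UV) flag that is set only after a local check such as a PIN or biometric, on top of proof of possession of the credential itself. The byte layout below follows the WebAuthn spec; the sample bytes are fabricated:

```python
# WebAuthn authenticator data: 32-byte rpIdHash, then a 1-byte flags field.
# Bit 0 (UP) = user present; bit 2 (UV) = user verified via PIN/biometric.
def assertion_factors(authenticator_data: bytes) -> dict:
    flags = authenticator_data[32]  # flags byte follows the rpIdHash
    return {
        "user_present": bool(flags & 0x01),   # someone interacted with the key
        "user_verified": bool(flags & 0x04),  # local PIN/biometric succeeded
    }

# Fabricated example: 32-byte rpIdHash, flags 0x05 (UP|UV), 4-byte counter.
sample = bytes(32) + bytes([0x05]) + bytes(4)
print(assertion_factors(sample))  # {'user_present': True, 'user_verified': True}
```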
Passkeys bring strong and phishing-resistant authentication technology to a wider user base, and we’re excited to offer this new way for passkeys to meet more user needs.
Google Workspace admins have additional controls and choice
Google Workspace accounts have a domain-level “Allow users to skip passwords at sign-in by using passkeys” setting, which is off by default and overrides the corresponding user-level configuration. When the setting is off, users must still present their password in addition to a passkey. Admins can change the setting to allow users to sign in with just a passkey.
When the domain-level setting is off, end users will still see a “use a security key” button on their “passkeys and security keys” page, which will attempt to enroll any security key for use as a second factor only. This action will not require the user to set up a PIN for their security key during registration. This is designed to give enterprise customers who have deployed legacy security keys additional time to make the change to passkeys, with or without a password.
Passkeys for Advanced Protection Program (APP) users
Since the introduction of passkeys in 2023, users enrolled in APP have been able to add any passkey to their account and use it to sign in. However, users are still required to present two security keys when enrolling in the program. We will be updating the enrollment process soon to enable users with any passkey to enroll in APP. By allowing any passkey to be used (rather than only hardware security keys), we expect to reach more high-risk users who need advanced protection, while maintaining phishing-resistant authentication.