Google is continuously advancing the security of Pixel devices. We have been focusing on hardening the cellular baseband modem against exploitation. Recognizing the risks associated with the complex modem firmware, Pixel 9 shipped with mitigations against a range of memory-safety vulnerabilities. For Pixel 10, Google is advancing its proactive security measures further. Following our previous discussion on "Deploying Rust in Existing Firmware Codebases", this post shares a concrete application: integrating a memory-safe Rust DNS (Domain Name System) parser into the modem firmware. The new Rust-based DNS parser significantly reduces our security risk by mitigating an entire class of vulnerabilities in a risky area, while also laying the foundation for broader adoption of memory-safe code in other areas.
Here we share our experience of working on it, and hope it can inspire the use of more memory safe languages in low-level environments.
In recent years, we have seen increasing interest in the cellular modem from attackers and security researchers. For example, Google's Project Zero gained remote code execution on Pixel modems over the Internet. The Pixel modem firmware contains tens of megabytes of executable code. Given that complexity and the modem's remote attack surface, other critical memory safety vulnerabilities may remain in the predominantly memory-unsafe firmware code.
The DNS protocol is most commonly known in the context of browsers finding websites. With the evolution of cellular technology, modern cellular communications have migrated to digital data networks; consequently, even basic operations such as call forwarding rely on DNS services.
DNS is a complex protocol that requires parsing untrusted data, which can lead to vulnerabilities, particularly when implemented in a memory-unsafe language (for example, CVE-2024-27227). Implementing the DNS parser in Rust reduces the attack surface associated with memory unsafety.
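To illustrate the class of bugs at stake, here is a toy parser for DNS-style length-prefixed labels in safe Rust (an illustrative sketch, not hickory-proto's implementation): a length byte that overruns the buffer produces an error instead of an out-of-bounds read.

```rust
// Toy parser for DNS-style length-prefixed names, e.g.
// b"\x03www\x07example\x03com\x00" -> ["www", "example", "com"].
// Safe Rust turns an out-of-range length byte into an Err, where equivalent
// C code could read past the end of the buffer.
fn parse_name(buf: &[u8]) -> Result<Vec<String>, &'static str> {
    let mut labels = Vec::new();
    let mut i = 0;
    loop {
        // Read the length byte; a truncated buffer is an error, not a crash.
        let len = *buf.get(i).ok_or("truncated name")? as usize;
        i += 1;
        if len == 0 {
            return Ok(labels); // the root label terminates the name
        }
        // Bounds-checked slice: a lying length byte cannot overrun the buffer.
        let bytes = buf.get(i..i + len).ok_or("label overruns buffer")?;
        labels.push(String::from_utf8_lossy(bytes).into_owned());
        i += len;
    }
}
```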
DNS already has a level of support in the open-source Rust community. We evaluated multiple open source crates that implement DNS and, based on criteria shared in earlier posts, identified hickory-proto as the best candidate. It has excellent maintenance, over 75% test coverage, and widespread adoption in the Rust community; its pervasiveness suggests it will remain a de-facto DNS choice with long-term support. Although hickory-proto initially lacked no_std support, which is needed for bare-metal environments (see our previous post on this topic), we were able to add that support to it and its dependencies.
The work to enable no_std for hickory-proto was mostly mechanical; we shared the process in a previous post. We modified hickory-proto and its dependencies to enable no_std support. The upstream no_std work also produced a no_std URL parser, which benefits other projects.
The above PRs are great examples of how to extend no_std support to existing std-only crates.
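As a sketch of the mechanical pattern (illustrative; hickory-proto's actual changes differ in detail): the crate root gains `#![cfg_attr(not(feature = "std"), no_std)]`, and std-only imports are rerouted through the alloc crate, which is available under both configurations.

```rust
// At the crate root: #![cfg_attr(not(feature = "std"), no_std)]
// Illustrative sketch of the import rewrite.
extern crate alloc; // linkable under both std and no_std builds

use alloc::string::String;
use alloc::vec::Vec;

// Code that previously used std::string::String / std::vec::Vec compiles
// unchanged against the alloc versions.
pub fn to_labels(name: &str) -> Vec<String> {
    name.split('.').map(String::from).collect()
}
```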
Code size was one of the factors we evaluated when picking the DNS library.
We built prototypes and measured size with size-optimized settings. As expected, hickory-proto is not designed with embedded use in mind and is not optimized for size. Because the Pixel modem is not tightly memory constrained, we prioritized community support and code quality, leaving code size optimization as future work.
However, the additional code size may be a blocker for other embedded systems. Adding feature flags to conditionally compile only the required functionality would address this, and implementing that modularity would be valuable future work.
Before building the Rust DNS library, we defined several Rust unit tests to cover basic arithmetic, dynamic allocations, and FFI to verify the integration of Rust with the existing modem firmware code base.
While using cargo is the default choice for compilation in the Rust ecosystem, it presents challenges when integrating into existing build systems. We evaluated two options: building each Rust component into a staticlib with cargo, or invoking rustc directly from our existing build system.
Option #1 does not scale if we add more Rust components in the future, as linking multiple staticlibs may cause duplicate-symbol errors. We chose option #2 because it scales more easily and allows tighter integration into our existing build system. Our existing C/C++ codebase uses Pigweed to drive the primary build system, and Pigweed supports Rust targets (example) with direct calls to rustc through rust tools defined in GN.
We compiled all the Rust crates, including hickory-proto, its dependencies, and core, compiler_builtins, and alloc, to rlibs. Then we created a staticlib target with a single lib.rs file that references all the rlib crates using the extern crate keyword.
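The combined staticlib root might look like this (an illustrative sketch; the real crate list is longer): each extern crate line forces the corresponding rlib into the archive even though nothing in the file calls into it directly.

```rust
// lib.rs of the combined staticlib target (sketch; the commented-out crate
// names are placeholders for the real rlib set).
extern crate alloc;
extern crate core;
// extern crate compiler_builtins;
// extern crate hickory_proto;
// ... one line per rlib produced by the build.
```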
Android’s Rust toolchain distributes the source code of core, alloc, and compiler_builtins, and we leveraged this for the modem. They can be included in the build graph by adding a GN target with crate_root pointing to the root lib.rs of each crate.
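Sketched as a GN target (the template name and paths here are hypothetical; the real rules depend on how the rust tools are wired into the build):

```gn
# Illustrative only: "rust_library" and the source path are placeholders for
# the project's actual GN template and checkout layout.
rust_library("alloc") {
  crate_root = "//third_party/rust/alloc/src/lib.rs"
  deps = [
    ":core",
    ":compiler_builtins",
  ]
}
```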
Pixel modem firmware already has a well-tested, specialized global memory allocation system to support some dynamic memory allocation. alloc support was added by implementing GlobalAlloc with FFI calls to the allocator's C APIs:
```rust
use core::alloc::{GlobalAlloc, Layout};

extern "C" {
    fn mem_malloc(size: usize, alignment: usize) -> *mut u8;
    fn mem_free(ptr: *mut u8, alignment: usize);
}

struct MemAllocator;

unsafe impl GlobalAlloc for MemAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        mem_malloc(layout.size(), layout.align())
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        mem_free(ptr, layout.align());
    }
}

#[global_allocator]
static ALLOCATOR: MemAllocator = MemAllocator;
```
Pixel modem firmware already implements a backend for the Pigweed crash facade as the global crash handler. Exposing it to Rust's panic_handler through FFI unifies crash handling for both Rust and C/C++ code.
```rust
#![no_std]
use core::panic::PanicInfo;

extern "C" {
    pub fn PwCrashBackend(signature: *const i8, file_name: *const i8, line: u32);
}

#[panic_handler]
fn panic(panic_info: &PanicInfo) -> ! {
    let mut filename = "";
    let mut line_number: u32 = 0;
    if let Some(location) = panic_info.location() {
        filename = location.file();
        line_number = location.line();
    }
    let mut cstr_buffer = [0u8; 128];
    // Never write to the last byte so that `cstr_buffer` is always
    // zero-terminated.
    let (_, writer) = cstr_buffer.split_last_mut().unwrap();
    for (place, ch) in writer.iter_mut().zip(filename.bytes()) {
        *place = ch;
    }
    unsafe {
        PwCrashBackend(
            "Rust panic\0".as_ptr() as *const i8,
            cstr_buffer.as_ptr() as *const i8,
            line_number,
        );
    }
    loop {}
}
```
The Pixel modem firmware build has a linking step that calls the linker with all the objects generated from the C/C++ code. By using llvm-ar -x to extract object files from the combined Rust staticlib and supplying them to the linker, the Rust code ends up in the final modem image.
We experienced a performance issue caused by weak symbols during linking. Including Rust core and compiler_builtins caused unexpected power and performance regressions in various tests. Upon analysis, we realized that the optimized implementations of memset and memcpy provided by the modem firmware were accidentally replaced by those defined in compiler_builtins. This happens because both the compiler_builtins crate and the existing codebase define these symbols as weak, so the linker has no way to decide which one should win. We fixed the regression by stripping the compiler_builtins objects from the staticlib before linking with a one-line shell command.
```shell
llvm-ar -t <rust staticlib> | grep compiler_builtins | xargs llvm-ar -d <rust staticlib>
```
For the DNS parser, we declared the DNS response parsing API in C and then implemented the same API in Rust.
```c
int32_t process_dns_response(uint8_t*, int32_t);
```
The Rust function returns an integer error code. The DNS answers in the response must be written into in-memory data structures that are tightly coupled to the original C implementation, so the Rust implementation dispatches to existing C functions to update them.
```rust
pub unsafe extern "C" fn process_dns_response(
    dns_response: *const u8,
    response_len: i32,
) -> i32 {
    // ... validate inputs `dns_response` and `response_len`.

    // SAFETY:
    // `dns_response` is null-checked above; `response_len` is passed in and is
    // safe as long as it is set correctly by vendor code.
    match process_response(unsafe {
        slice::from_raw_parts(dns_response, response_len as usize)
    }) {
        Ok(()) => 0,
        Err(err) => err.into(),
    }
}

fn process_response(response: &[u8]) -> Result<()> {
    let response = hickory_proto::op::Message::from_bytes(response)?;
    let response = hickory_proto::xfer::DnsResponse::from_message(response)?;
    for answer in response.answers() {
        match answer.record_type() {
            hickory_proto::RecordType::... => {
                // SAFETY:
                // The callback function does not store references to the
                // inputs or their members.
                unsafe {
                    callback_to_c_function(...)?;
                }
            }
            // ... more match arms omitted.
        }
    }
    Ok(())
}
```
In our case, the DNS response parsing API is simple enough to hand-write, while the callbacks into the C functions that handle the response involve complex data type conversions. Therefore, we leveraged bindgen to generate the FFI code for the callbacks.
Even with all features disabled, hickory-proto pulls in more than 30 dependent crates. Manually written build rules are hard to keep correct and scale poorly when upgrading dependencies to new versions.
Fuchsia developed cargo-gnaw to build its third-party Rust crates. cargo-gnaw works by invoking cargo metadata to resolve dependencies, then parsing the result and generating GN build rules. This ensures correctness and eases maintenance.
The Pixel 10 series of phones marks a pivotal moment, being the first Pixel device to integrate a memory-safe language into its modem.
While replacing one piece of risky attack surface is itself valuable, this project lays the foundation for future integration of memory-safe parsers and code into the cellular baseband, ensuring the baseband’s security posture will continue to improve as development continues.
Following our April 2024 announcement, Device Bound Session Credentials (DBSC) is now entering public availability for Windows users on Chrome 146, and expanding to macOS in an upcoming Chrome release. This project represents a significant step forward in our ongoing efforts to combat session theft, which remains a prevalent threat in the modern security landscape.
Session theft typically occurs when a user inadvertently downloads malware onto their device. Once active, the malware can silently extract existing session cookies from the browser or wait for the user to log in to new accounts, before exfiltrating these tokens to an attacker-controlled server. Infostealer malware families, such as LummaC2, have become increasingly sophisticated at harvesting these credentials. Because cookies often have extended lifetimes, attackers can use them to gain unauthorized access to a user’s accounts without ever needing their passwords; this access is then often bundled, traded, or sold among threat actors.
Crucially, once sophisticated malware has gained access to a machine, it can read the local files and memory where browsers store authentication cookies. As a result, there is no reliable way to prevent cookie exfiltration using software alone on any operating system. Historically, mitigating session theft relied on detecting the stolen credentials after the fact using a complex set of abuse heuristics – a reactive approach that persistent attackers could often circumvent. DBSC fundamentally changes the web's capability to defend against this threat by shifting the paradigm from reactive detection to proactive prevention, ensuring that successfully exfiltrated cookies cannot be used to access users’ accounts.
DBSC protects against session theft by cryptographically binding authentication sessions to a specific device. It does this using hardware-backed security modules, such as the Trusted Platform Module (TPM) on Windows and the Secure Enclave on macOS, to generate a unique public/private key pair that cannot be exported from the machine. The issuance of new short-lived session cookies is contingent upon Chrome proving possession of the corresponding private key to the server. Because attackers cannot steal this key, any exfiltrated cookies quickly expire and become useless to those attackers. This design allows large and small websites to upgrade to secure, hardware-bound sessions by adding dedicated registration and refresh endpoints to their backends, while maintaining complete compatibility with their existing front-end. The browser handles the complex cryptography and cookie rotation in the background, allowing the web app to continue using standard cookies for access just as it always has.
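The proof-of-possession loop can be sketched with a toy model (no real cryptography here; `keygen` stands in for the TPM or Secure Enclave keypair, and the hash-based "signature" is purely structural):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "signature": a hash over the key seed and the challenge. Real DBSC uses
// an asymmetric signature produced inside the TPM or Secure Enclave; this
// stand-in only illustrates the shape of the protocol.
fn toy_sign(seed: u64, challenge: &str) -> u64 {
    let mut h = DefaultHasher::new();
    seed.hash(&mut h);
    challenge.hash(&mut h);
    h.finish()
}

// keygen() models the hardware-bound keypair: `sign` plays the non-exportable
// private key held by the device, `verify` the public key the server stores
// at session registration.
fn keygen(seed: u64) -> (Box<dyn Fn(&str) -> u64>, Box<dyn Fn(&str, u64) -> bool>) {
    (
        Box::new(move |c: &str| toy_sign(seed, c)),
        Box::new(move |c: &str, proof: u64| toy_sign(seed, c) == proof),
    )
}

// Server side of a refresh: issue a new short-lived cookie only when the
// device proves possession of the session's private key.
fn refresh(verify: &dyn Fn(&str, u64) -> bool, challenge: &str, proof: u64) -> Option<&'static str> {
    if verify(challenge, proof) {
        Some("new-short-lived-cookie")
    } else {
        None // without the key, exfiltrated cookies cannot be refreshed
    }
}
```

Because the exfiltrated cookie is short-lived and the proof is bound to a fresh server challenge, a stolen cookie alone never passes the refresh step.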
Google rolled out an early version of this protocol over the last year. For sessions protected by DBSC, we have observed a significant reduction in session theft since its launch.
An overview of the DBSC protocol showing the interaction between the browser and server.
A core tenet of the DBSC architecture is the preservation of user privacy. Each session is backed by a distinct key, preventing websites from using these credentials to correlate a user's activity across different sessions or sites on the same device. Furthermore, the protocol is designed to be lean: it does not leak device identifiers or attestation data to the server beyond the per-session public key required to certify proof of possession. This minimal information exchange ensures DBSC helps secure sessions without enabling cross-site tracking or acting as a device fingerprinting mechanism.
DBSC was designed from the beginning to be an open web standard through the W3C process and adoption by the Web Application Security Working Group. Through this process we partnered with Microsoft on the design, and gathered input from many across the industry who are responsible for web security, to ensure the standard works for the whole web.
Additionally, over the past year we have conducted two Origin Trials to ensure DBSC effectively serves the requirements of the broader web community. Many web platforms, including Okta, actively participated in these trials and in their own testing, and provided essential feedback to ensure the protocol addresses their diverse needs.
If you are a web developer and are looking for a way to secure your users against session theft, refer to our developer guide for implementation details. Additionally, all the details about DBSC can be found on the spec and the corresponding github. Feel free to use the issues page to report bugs or provide feature requests.
As we continue to evolve the DBSC standard, future iterations will focus on increasing support across diverse ecosystems and introducing advanced capabilities tailored for complex enterprise environments.
Indirect prompt injection (IPI) is an evolving threat vector targeting users of complex AI applications with multiple data sources, such as Workspace with Gemini. This technique enables the attacker to influence the behavior of an LLM by injecting malicious instructions into the data or tools used by the LLM as it completes the user’s query. This may even be possible without any input directly from the user.
IPI is not the kind of technical problem you “solve” and move on. Sophisticated LLMs with increasing use of agentic automation combined with a wide range of content create an ultra-dynamic and evolving playground for adversarial attacks. That’s why Google takes a sophisticated and comprehensive approach to these attacks. We’re continuously improving LLM resistance to IPI attacks and launching AI application capabilities with ever-improving defenses. Staying ahead of the latest indirect prompt injection attacks is critical to our mission of securing Workspace with Gemini.
In our previous blog “Mitigating prompt injection attacks with a layered defense strategy”, we reviewed the layered architecture of our IPI defenses. In this blog, we’ll share more detail on the continuous approach we take to improve these defenses and to solve for new attacks.
By proactively discovering and cataloging new attack vectors through internal and external programs, we can identify vulnerabilities and deploy robust defenses ahead of adversarial activity.
Human Red-Teaming uses adversarial simulations to uncover security and safety vulnerabilities. Specialized teams execute attacks based on realistic user profiles to exploit weaknesses, coordinating with product teams to resolve identified issues.
Automated Red-Teaming is done via dynamic, machine-learning-driven frameworks to stress-test environments. By algorithmically generating and iterating on attack payloads, we can mimic the behavior of sophisticated threats at scale. This allows us to map complex attack paths and validate the effectiveness of our security controls across a much wider range of edge cases than manual testing could achieve on its own.
The Google AI Vulnerability Rewards Program (VRP) is a critical tool for enabling collaboration between Google and external security researchers who discover new attacks leveraging IPI. Through this VRP, we recognize and reward contributors for their research. We also host regular, live hacking events where we provide invited researchers access to pre-release features, proactively uncovering novel vulnerabilities. These partnerships enable Google to quickly validate, reproduce, and resolve externally-discovered issues.
Google utilizes open-source intelligence feeds to stay on top of the latest publicly disclosed IPI attacks, across social media, press releases, blogs, and more. From there, new AI vulnerabilities are sourced, reproduced, and catalogued internally to ensure our products are not impacted.
All newly discovered vulnerabilities go through a comprehensive analysis process performed by the Google Trust, Security, & Safety teams. Each new vulnerability is reproduced, checked for duplications, mapped into attack technique / impact category, and assigned to relevant owners. The combination of new attack discovery sources and vulnerability catalog process helps Google stay on top of the latest attacks in an actionable manner.
After we discover, curate, and catalog new attacks, we use Simula to generate synthetic data expanding these new attacks. This process is essential because it allows the team to develop attack variants for completeness and coverage, and to prepare new training and validation data sets. This accelerated workflow has boosted synthetic data generation by 75%, supporting large-scale defense model evaluation and retraining, as well as updating the data set used for calculating and reporting on defense effectiveness.
Continually updating and enhancing our defense mechanisms allows us to address a broader range of attack techniques, effectively reducing the overall attack surface. Updating each defense type requires different tasks, from config updates, to prompt engineering and ML model retraining.
Deterministic defenses, including user confirmation, URL sanitization, and tool chaining policies, are designed for rapid response against new or emerging prompt injection attacks by relying on simple configuration updates. These defenses are governed by a centralized Policy Engine, with configurations for policies like baseline tool calls, URL sanitization, and tool chaining. For immediate threats, this configuration-based system facilitates a streamlined process for "point fixes," such as regex takedowns, providing an agile defense layer that acts faster than traditional ML/LLM model refresh cycles.
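As a toy illustration of a config-driven deterministic defense (the actual Policy Engine configuration is not public, and the allowlist logic here is purely hypothetical), consider URL sanitization driven by a host allowlist:

```rust
// Toy URL sanitizer: drop any whitespace-separated token whose URL host is
// not on a configured allowlist. Production URL sanitization is far more
// involved; this only shows the config-driven, deterministic shape.
fn sanitize(text: &str, allowed_hosts: &[&str]) -> String {
    text.split_whitespace()
        .filter(|token| {
            let host = token
                .strip_prefix("https://")
                .or_else(|| token.strip_prefix("http://"))
                .map(|rest| rest.split('/').next().unwrap_or(rest));
            match host {
                Some(h) => allowed_hosts.contains(&h), // URL: keep only if allowed
                None => true,                          // plain text: keep
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}
```

Because the check is a pure function of the configuration, updating the allowlist is a config push rather than a model retrain.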
After generating synthetic data that expands new attacks into variants, the next step is to retrain our ML-based defenses to mitigate these new attacks. We partition the synthetic data described above into separate training and validation sets to ensure performance is evaluated against held-out examples. This approach ensures repeatability, data consistency for fixed training/testing, and establishes a scalable architecture to support future extensions towards fully automated model refresh.
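The partitioning step can be sketched as a deterministic, hash-based split (a toy sketch, not the production pipeline): hashing each example means re-runs reproduce the same train/validation assignment.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Deterministically assign each synthetic attack example to the training or
// validation set by hashing its content. The same corpus always yields the
// same partition, which keeps evaluations repeatable.
fn split<'a>(examples: &[&'a str], validation_pct: u64) -> (Vec<&'a str>, Vec<&'a str>) {
    let (mut train, mut valid) = (Vec::new(), Vec::new());
    for ex in examples {
        let mut h = DefaultHasher::new();
        ex.hash(&mut h);
        if h.finish() % 100 < validation_pct {
            valid.push(*ex);
        } else {
            train.push(*ex);
        }
    }
    (train, valid)
}
```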
Using the new synthetic data examples, our LLM-based defenses go through prompt engineering with refined system instructions. The goal is to iteratively optimize these prompts against agreed-upon defense effectiveness metrics, ensuring the models remain resilient against evolving threat vectors.
Beyond system-level guardrails and application-level defenses, we prioritize ‘model hardening’, a process that improves the Gemini model's internal capability to identify and ignore harmful instructions within data. By utilizing synthetic datasets and fresh attack patterns, we can model various threat iterations. This enables us to strengthen the Gemini model's ability to disregard harmful embedded commands while following the user's intended request. Through this process of model hardening, Gemini has become significantly more adept at detecting and disregarding injected instructions. This has led to a reduction in the success rate of attacks without compromising the model's efficiency during routine operations.
To measure the real-world impact of defense improvements, we simulate attacks against many Workspace features. This process leverages the newly generated synthetic attack data described on this blog, to create a robust, end-to-end evaluation. The simulation is run against multiple Workspace apps, such as Gmail and Docs, using a standardized set of assets to ensure reliable results. To determine the exact impact of a defense improvement (e.g., an updated ML model or a new LLM prompt optimization), the end-to-end evaluation is run with and without the defense enabled. This comparative testing provides the essential "before and after" metrics needed to validate defense efficacy and drive continuous improvement.
Our commitment to AI security is rooted in the principle that every day you’re safer with Google. While the threat landscape of indirect prompt injection evolves, we are building Workspace with Gemini to be a secure and trustworthy platform for AI-first work. IPI is a complex security challenge, which requires a defense-in-depth strategy and continuous mitigation approach. To get there, we’re combining world-class security research, automated pipelines, and advanced ML/LLM-based models. This robust and iterative framework helps to ensure we not only stay ahead of evolving threats but also provide a powerful, secure experience for both our users and customers.
2025 marked a special year in the history of vulnerability rewards and bug bounty programs at Google: our 15th anniversary 🎉🎉🎉! Originally started in 2010, our vulnerability reward program (VRP) has seen constant additions and expansions over the past decade and a half, clearly indicating the value the programs under this umbrella contribute to the safety and security of Google and its users, but also highlighting their acceptance by the external research community, without which such programs cannot function.
Coming back to 2025 specifically, our VRP once again confirmed the ongoing value of engaging with the external security research community to make Google and its products safer. This was more evident than ever as we awarded over $17 million (an all-time high and a more than 40% increase compared to 2024!) to over 700 researchers based in countries around the globe – across all of our programs.
Vulnerability Reward Program 2025 in Numbers
Want to learn more about who’s reporting to the VRP? Check out our Leaderboard on the Google Bug Hunters site.
VRP Highlights in 2025
In 2025 we made a series of changes and improvements to our VRP and related initiatives, and continued to invest in the security research community through a series of focused events:
The new, dedicated AI VRP was launched, underscoring the importance of this space to Google and its relevance for external researchers. Previously organized as a part of the Abuse VRP, moving into a dedicated VRP has gone hand in hand with improvements to the rules, offering researchers more clarity on scope and reward amounts.
Similarly, the Chrome VRP now also includes reward categories for problems found in AI features.
We launched a patch rewards program for OSV-SCALIBR, Google's open source tool for finding vulnerabilities in software dependencies. Contributors are rewarded for providing novel OSV-SCALIBR plugins for inventory, vulnerability, or secret detection that expand the tool’s scanning capabilities. Besides strengthening the tool’s capabilities for all users, user submissions already helped us uncover and remediate a number of leaked secrets internally!
As part of Google's Cybersecurity Awareness Month campaign in October, we hosted our very own security conference in Mexico City, ESCAL8. The conference included init.g(mexico), our cybersecurity workshop for students, HACKCELER8, Google’s CTF finals, and a Safer with Google seminar, sharing technical thought leadership with Mexican government officials.
bugSWAT, our special invite-only live hacking event, saw several editions in 2025 and delivered some outstanding findings across different areas:
We hosted our first dedicated AI bugSWAT (Tokyo) in April which yielded a whopping 70+ reports filed and over $400,000 in rewards issued.
We continued the momentum in early summer with Cloud bugSWAT (Sunnyvale) in June resulting in 130 reports, with $1,600,000 in rewards paid out.
Next in line was bugSWAT Las Vegas in August, leading to 77 reports and rewards of $380,000.
And finally, as part of ESCAL8 in Mexico City, bugSWAT Mexico focused on many different targets and spaces including AI, Android, and Cloud, and resulted in the filing of 107 reports, totalling $566,000 in rewards to date.
Looking for more details? See the extended version of this post on the Security Engineering blog for reports from individual VRPs such as Android, Abuse, AI, Cloud, Chrome, and OSS, including specifics concerning high-impact bug reports and focus areas of security research.
In 2026, we remain fully committed to fostering collaboration, innovation, and transparency with the security community by hosting several bugSWAT events throughout the year, and following up with the next edition of our cybersecurity conference, ESCAL8. More broadly, our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services – all of which is only possible in collaboration with the external community of researchers we are so lucky to collaborate with!
In this spirit, we’d like to extend a huge thank you to our bug hunter community for helping us make Google products and platforms safer and more secure for our users around the world – and invite researchers not yet engaged with the Vulnerability Reward Program to join us in our mission to keep Google safe (check out our programs for inspiration 🙂)!
Thank you to Tony Mendez, Dirk Göhmann, Alissa Scherchen, Krzysztof Kotowicz, Martin Straka, Michael Cote, Sam Erb, Jason Parsons, Alex Gough, and Mihai Maruseac.
Tip: Want to be informed of new developments and events around our Vulnerability Reward Program? Follow the Google VRP channel on X to stay in the loop and be sure to check out the Security Engineering blog, which covers topics ranging from VRP updates to security practices and vulnerability descriptions!
Modern digital security is at a turning point. We are on the threshold of using quantum computers to solve "impossible" problems in drug discovery, materials science, and energy—tasks that even the most powerful classical supercomputers cannot handle. However, the same unique ability to consider different options simultaneously also allows these machines to bypass our current digital locks. This puts the public-key cryptography we’ve relied on for decades at risk, potentially compromising everything from bank transfers to trade secrets. To secure our future, it is vital to adopt the new Post-Quantum Cryptography (PQC) standards that the National Institute of Standards and Technology (NIST) is urging, before large-scale, fault-tolerant quantum computers become a reality.
To stay ahead of the curve, the technology industry must undertake a proactive, multi-year migration to Post-Quantum Cryptography (PQC). We have been preparing for a post-quantum world since 2016, conducting pioneering experiments with post-quantum cryptography, rolling out post-quantum capabilities in our products, and sharing our expertise through threat models and technical papers. For Android, the objective extends beyond patching individual applications or transport protocols. The imperative is to ensure that the entire platform architecture is resilient for the decades to come.
We are beginning tests of PQC enhancements starting in the next Android 17 beta, followed by general availability in the Android 17 production release. This deployment introduces a comprehensive architectural upgrade that is being rolled out across the operating system. By integrating the recently finalized NIST PQC standards deep into the platform, we’re establishing a new, quantum-resistant chain of trust. This chain of trust secures the platform continuously—from the moment the OS powers on, to the execution of applications distributed globally. Android is swapping today’s digital locks for advanced encryption to help enhance the security of every app you download—no matter how powerful future supercomputers get.
Security on any computing device begins when the hardware starts; if the underlying operating system is compromised, all subsequent software protections fail. As quantum computing advances, adversaries could potentially forge digital signatures to bypass these foundational integrity checks. To secure the platform against this looming threat, Android 17 introduces two major post-quantum cryptographic (PQC) upgrades.
Protecting the underlying operating system is only the first layer of defense; developers must be equipped with the cryptographic primitives necessary to leverage PQC keys and establish robust identity verification.
Implementing lattice-based cryptography, which requires significantly larger key sizes and memory footprints than classical elliptic curve cryptography, within the severely resource-constrained Trusted Execution Environment (TEE), represents a major engineering achievement. This capability is designed to support the hardware roots of trust and can now generate and verify post-quantum signatures.
Building on this hardware foundation, Android 17 updates Android Keystore to natively support ML-DSA. This allows applications to leverage quantum-safe signatures entirely within the device’s secure hardware, isolating sensitive key material from the main operating system. The SDK exposes both ML-DSA-65, and ML-DSA-87, enabling developers to seamlessly integrate these using the standard KeyPairGenerator API. This establishes a new era of identity and authentication for the app ecosystem without requiring developers to engineer proprietary cryptographic implementations.
Android is committed to ensuring the platform is PQC resistant and to extending that chain of PQC resistance to application signatures. The mechanisms used to verify the authenticity of applications are being upgraded so that app installations and subsequent updates remain tamper-proof even against quantum-enabled signature forgery. The platform will verify PQC signatures over APKs to enable this chain of trust.
To bring these critical protections to the wider developer community with minimal friction, the transition will be supported through Play App Signing. Google Play will let developers automatically generate 'hybrid' signature blocks that combine classical and PQC keys, providing an immediate bridge to quantum safety for the majority of active installs.
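The value of a hybrid signature block is its AND-composition: an installer accepts an update only if both the classical and the post-quantum signature verify, so an attacker who can forge one scheme (for example, a quantum attacker breaking the classical key) still cannot tamper with the app. The following toy Python sketch illustrates that verification logic only; the HMAC tags stand in for real ECDSA and ML-DSA signatures, and none of the names reflect the actual APK signature scheme format.

```python
import hashlib
import hmac

# Toy model of a hybrid signing block. HMAC tags stand in for the real
# classical (e.g. ECDSA) and post-quantum (e.g. ML-DSA) signatures; this
# only illustrates the AND-composition, not any real APK format.

def sign_hybrid(apk_bytes: bytes, classical_key: bytes, pqc_key: bytes) -> dict:
    """Produce a 'hybrid' block carrying one tag per scheme."""
    return {
        "classical": hmac.new(classical_key, apk_bytes, hashlib.sha256).digest(),
        "pqc": hmac.new(pqc_key, apk_bytes, hashlib.sha3_256).digest(),
    }

def verify_hybrid(apk_bytes: bytes, block: dict,
                  classical_key: bytes, pqc_key: bytes) -> bool:
    """Accept only if *both* signatures verify: forging either the
    classical or the PQC signature alone is not enough to tamper."""
    ok_classical = hmac.compare_digest(
        block["classical"],
        hmac.new(classical_key, apk_bytes, hashlib.sha256).digest())
    ok_pqc = hmac.compare_digest(
        block["pqc"],
        hmac.new(pqc_key, apk_bytes, hashlib.sha3_256).digest())
    return ok_classical and ok_pqc
```

Because acceptance requires both checks to pass, the hybrid block is at least as strong as the stronger of the two schemes, which is what makes it a safe bridge during the transition.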
Updating keys across billions of active devices is a complex operational endeavor. Play App Signing leverages Google Cloud KMS, which helps ensure industry-leading compliance standards, to secure signing keys. By managing signing keys securely in the cloud, Google Play enables developers to seamlessly upgrade their app security to PQC standards without the burden of complex, manual key management.
During the Android 17 release cycle, Google Play will handle the generation of quantum-safe ML-DSA signing keys for new apps and existing apps that opt in, independent of the application's target API level. Later, developers will be able to choose their own classical and ML-DSA signing keys and delegate them to Google Play for their hybrid key upgrade. To promote security best practices, Google Play will also start prompting developers to upgrade their signing keys at least every two years.
Google’s post-quantum transition began in 2016; Android 17 marks the first phase of that transition for Android:
Our roadmap further integrates post-quantum key encapsulation into KeyMint, Key Attestation and Remote Key Provisioning. This evolution is intended to bolster the security of the entire identity lifecycle—from hardware-level DICE measurements to our remote attestation servers—ensuring the Android ecosystem remains resilient and private against the quantum threats of tomorrow.
Today we're announcing a new program in Chrome to make HTTPS certificates secure against quantum computers. The Internet Engineering Task Force (IETF) recently created a working group, PKI, Logs, And Tree Signatures (“PLANTS”), to address the performance and bandwidth challenges that the larger sizes of quantum-resistant cryptography introduce into TLS connections requiring Certificate Transparency (CT). We recently shared our call to action on preparing for quantum computing, and in earlier blog posts we have written about the challenges quantum-resistant cryptography introduces and some of the steps we’ve taken to address them.
To ensure the scalability and efficiency of the ecosystem, Chrome has no immediate plan to add traditional X.509 certificates containing post-quantum cryptography to the Chrome Root Store. Instead, Chrome, in collaboration with other partners, is developing an evolution of HTTPS certificates based on Merkle Tree Certificates (MTCs), currently in development in the PLANTS working group. MTCs replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs. In this model, a Certification Authority (CA) signs a single "Tree Head" representing potentially millions of certificates, and the "certificate" sent to the browser is merely a lightweight proof of inclusion in that tree.
MTCs enable the adoption of robust post-quantum algorithms without incurring the massive bandwidth penalty of classical X.509 certificate chains. They also decouple the security strength of the corresponding cryptographic algorithm from the size of the data transmitted to the user. By shrinking the authentication data in a TLS handshake to the absolute minimum, MTCs aim to keep the post-quantum web as fast and seamless as today’s internet, maintaining high performance even as we adopt stronger security. Finally, with MTCs, transparency is a fundamental property of issuance: it is impossible to issue a certificate without including it in a public tree. This means the security properties of today’s CT ecosystem are included by default, and without adding extra overhead to the TLS handshake as CT does today.
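The size savings come from the structure of a Merkle tree: the CA signs only the root ("tree head"), and each site presents a proof of inclusion whose length grows logarithmically with the number of certificates in the batch, so even a tree of millions of entries needs only a couple of dozen sibling hashes. The following minimal Python sketch shows that inclusion-proof idea; it is a toy construction and does not reflect the actual MTC encoding being developed in the PLANTS working group.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns the list of levels,
    leaf level first, root level last."""
    level = [h(b"\x00" + leaf) for leaf in leaves]  # domain-separate leaves
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf to root: the compact 'certificate'
    a server would send instead of a full signature chain."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])         # sibling at this level
        index //= 2
    return proof

def verify_inclusion(leaf, index, proof, root):
    """Recompute the path to the root; only the signed root must be trusted."""
    node = h(b"\x00" + leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == root
```

Note that the proof for any one leaf is just one hash per tree level, which is why a single signed tree head can vouch for millions of certificates while each TLS handshake carries only a short inclusion path, and why a leaf cannot exist without appearing in the public tree.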
Chrome is already experimenting with MTCs on real internet traffic, and we intend to gradually build out our deployment so that MTCs provide robust, quantum-resistant HTTPS for use throughout the internet.
Broadly speaking, our rollout spans three distinct phases.
This area is evolving rapidly. As these phases progress, we will continue our active participation in standards bodies such as the IETF and C2SP, ensuring that insights gathered from our efforts flow back towards standards, and that changes in standards are supported by Chrome and the CQRS.
We view the adoption of MTCs and a quantum-resistant root store as a critical opportunity to ensure the robustness of the foundation of today’s ecosystem. By designing for the specific demands of a modern, agile internet, we can accelerate the adoption of post-quantum resilience for all web users.
We expect this modern foundation for TLS to evolve beyond current ecosystem norms and emphasize themes of security, simplicity, predictability, transparency and resilience. These properties might be expressed by:
To secure the future of the web, we are dedicating our operational resources to two vital parallel tracks. First, we remain fully committed to supporting our current CA partners in the Chrome Root Store, facilitating root rotations to ensure existing non-quantum-resistant hierarchies remain robust and conformant with the Chrome Root Program Policy. Simultaneously, we are focused on building a secure future by developing and launching the infrastructure required to support MTCs and their default use in Chrome. We also expect to support “traditional” X.509 certificates with quantum-resistant algorithms for use only in private PKIs (i.e., those not included in the Chrome Root Store) later this year.
As we execute and refine our work on MTCs, we look forward to sharing a concrete policy framework for a quantum-resistant root store with the community, and are excited to learn and define clear pathways for organizations to operate as Chrome-trusted MTC CAs.