To address this gap and help the ecosystem deploy robust defenses, Google has supported academic research and developed test platforms to analyze DDR5 memory. This effort has led to the discovery of new attacks and a deeper understanding of Rowhammer on current DRAM modules, helping to pave the way for further, stronger mitigations.
Rowhammer exploits a physical weakness of DRAM. DRAM cells store data as electrical charge, and that charge leaks over time. To prevent data loss, the memory controller periodically refreshes the cells; if a cell discharges before it is refreshed, its stored bit may flip. Initially considered a reliability issue, Rowhammer has since been leveraged by security researchers to demonstrate privilege escalation attacks: by repeatedly accessing (hammering) a memory row, an attacker can cause bit flips in neighboring rows. An adversary can exploit Rowhammer as follows:
Reliably cause bit flips by repeatedly accessing adjacent DRAM rows.
Coerce other applications or the OS into using these vulnerable memory pages.
Target security-sensitive code or data to achieve privilege escalation.
Or simply corrupt the system's memory to cause a denial of service.
Previous work has repeatedly demonstrated the possibility of such attacks from software [Revisiting rowhammer, Are we susceptible to rowhammer?, Drammer, Flip feng shui, Jolt]. As a result, defending against Rowhammer is required for secure isolation in multi-tenant environments like the cloud.
The primary approach to mitigating Rowhammer is to detect which memory rows are being aggressively accessed and to refresh nearby rows before a bit flip occurs. Target Row Refresh (TRR) is a common example: it uses a small number of counters to track accesses to rows adjacent to potential victim rows. If the access count for one of these aggressor rows reaches a certain threshold, the system issues a refresh to the victim row. TRR can be implemented within the DRAM device or in the host CPU.
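Actual TRR implementations are proprietary and vary across vendors, but the sampling-and-threshold idea can be illustrated with a small model. The Python sketch below is a deliberately simplified, hypothetical model; the counter budget, threshold, and eviction behavior are assumptions for illustration, not a description of any real device.

# Hypothetical, highly simplified model of a TRR-style mitigation:
# track a small number of heavily activated ("aggressor") rows and
# refresh their neighbors once an activation threshold is reached.
# Real implementations are proprietary and differ per vendor and device.

TRACKED_ROWS = 16        # assumed counter budget (real values are unknown)
THRESHOLD = 4000         # assumed activation threshold per refresh window

class SimpleTRR:
    def __init__(self):
        self.counters = {}   # row -> activation count

    def on_activate(self, row: int):
        if row in self.counters:
            self.counters[row] += 1
        elif len(self.counters) < TRACKED_ROWS:
            self.counters[row] = 1
        else:
            # Counter budget exhausted: this activation is not tracked.
            # Many-sided attack patterns exploit exactly this blind spot.
            return
        if self.counters[row] >= THRESHOLD:
            self.refresh_neighbors(row)
            self.counters[row] = 0

    def refresh_neighbors(self, row: int):
        print(f"refresh victim rows {row - 1} and {row + 1}")

trr = SimpleTRR()
for _ in range(THRESHOLD):
    for aggressor in (100, 102):     # double-sided pattern around row 101
        trr.on_activate(aggressor)

Because only a handful of rows can be tracked at once, an access pattern that involves more aggressors than available counters can slip past the sampler, which is exactly the kind of weakness the attacks below exploit.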
However, this mitigation is not foolproof. For example, the TRRespass attack showed that by simultaneously hammering multiple, non-adjacent rows, TRR can be bypassed. Over the past couple of years, more sophisticated attacks [Half-Double, Blacksmith] have emerged, introducing more efficient attack patterns.
In response, one of our efforts was to collaborate with JEDEC, external researchers, and experts to define Per Row Activation Counting (PRAC), a new mitigation that deterministically detects Rowhammer by tracking all memory rows.
However, current systems equipped with DDR5 lack support for PRAC or other robust mitigations. As a result, they rely on probabilistic approaches such as error-correcting codes (ECC) and enhanced TRR to reduce the risk. While these measures have mitigated older attacks, their overall effectiveness against new techniques was not fully understood until our recent findings.
Mitigating Rowhammer attacks means making it difficult for an attacker to reliably cause bit flips from software. An effective mitigation therefore requires understanding how a determined adversary can generate memory accesses that bypass existing defenses. Three pieces of information are key to such an analysis:
How the improved TRR and in-DRAM ECC work.
How memory access patterns from software translate into low-level DDR commands.
(Optionally) How any mitigations (e.g., ECC or TRR) in the host processor work.
The first step is particularly challenging and involves reverse-engineering the proprietary in-DRAM TRR mechanism, which varies significantly between different manufacturers and device models. This process requires the ability to issue precise DDR commands to DRAM and analyze its responses, which is difficult on an off-the-shelf system. Therefore, specialized test platforms are essential.
The second and third steps involve analyzing the DDR traffic between the host processor and the DRAM. This can be done using an off-the-shelf interposer, a tool that sits between the processor and DRAM. A crucial part of this analysis is understanding how a live system translates software-level memory accesses into the DDR protocol.
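As a toy illustration of that translation, the first piece is the mapping from a physical address to DRAM coordinates (bank, row, column), which is a platform-specific and usually undocumented function of the address bits. The Python sketch below decodes one made-up mapping; the bit positions and XOR scheme are assumptions for illustration only, not the mapping of any real memory controller.

# Toy illustration of how a physical address maps to DRAM coordinates.
# Real controllers use platform-specific, often undocumented functions
# (frequently XORs of several address bits); the bit positions below are
# made up purely for illustration.

def decode_address(phys_addr: int) -> dict:
    return {
        "column": (phys_addr >> 3) & 0x3FF,                        # assumed bits 3-12
        "bank":   ((phys_addr >> 13) ^ (phys_addr >> 17)) & 0x7,   # assumed XOR scheme
        "row":    (phys_addr >> 16) & 0xFFFF,                      # assumed bits 16-31
    }

print(decode_address(0x3F2A_1C40))
# Two addresses that decode to the same bank but adjacent rows form the
# kind of aggressor/victim pair a Rowhammer analysis cares about.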
The third step, which involves analyzing host-side mitigations, is sometimes optional. For example, host-side ECC is enabled by default on servers, while host-side TRR has only been implemented in some CPUs.
For the first challenge, we partnered with Antmicro to develop two specialized, open-source FPGA-based Rowhammer test platforms. These platforms allow us to conduct in-depth testing on different types of DDR5 modules.
DDR5 RDIMM Platform: A new DDR5 Tester board to meet the hardware requirements of Registered DIMM (RDIMM) memory, common in server computers.
SO-DIMM Platform: A version that supports the standard SO-DIMM pinout compatible with off-the-shelf DDR5 SO-DIMM memory sticks, common in workstations and end-user devices.
Antmicro designed and manufactured these open-source platforms and we worked closely with them, and researchers from ETH Zurich, to test the applicability of these platforms for analyzing off-the-shelf memory modules in RDIMM and SO-DIMM forms.
Antmicro DDR5 RDIMM FPGA test platform in action.
In collaboration with researchers from ETH, we applied the new Rowhammer test platforms to evaluate the effectiveness of current in-DRAM DDR5 mitigations. Our findings, detailed in the recently co-authored “Phoenix” research paper, reveal that we successfully developed custom attack patterns capable of bypassing the enhanced TRR defenses on DDR5 memory. We created a novel self-correcting refresh synchronization technique, which allowed us to perform the first-ever Rowhammer privilege escalation exploit on a standard, production-grade desktop system equipped with DDR5 memory. While this experiment was conducted on an off-the-shelf workstation equipped with recent AMD Zen processors and SK Hynix DDR5 memory, we continue to investigate the applicability of our findings to other hardware configurations.
We showed that current mitigations for Rowhammer attacks are not sufficient, and that the issue remains a widespread problem across the industry. These mitigations do make attacks more difficult, but not impossible, to carry out, since an attacker needs an in-depth understanding of the specific memory subsystem architecture they wish to target.
Current mitigations based on TRR and ECC rely on probabilistic countermeasures with insufficient entropy. Once an analyst understands how TRR operates, they can craft specific memory access patterns to bypass it. Furthermore, current ECC schemes were not designed as a security measure and therefore cannot reliably detect or correct attacker-induced bit flips.
Memory encryption is an alternative countermeasure against Rowhammer. However, our current assessment is that, without cryptographic integrity, it offers no meaningful defense against Rowhammer. More research is needed to develop viable, practical encryption and integrity solutions.
Google has been a leader in JEDEC standardization efforts, for instance with PRAC, a fully approved standard to be supported in upcoming versions of DDR5/LPDDR6. It works by accurately counting the number of times a DRAM wordline is activated and alerts the system if an excessive number of activations is detected. This close coordination between the DRAM and the system gives PRAC a reliable way to address Rowhammer.
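At a high level, PRAC attaches an activation counter to every row and lets the DRAM signal the memory controller when a count grows too large. The Python sketch below is only a conceptual illustration of that per-row counting idea; the threshold and alert handling are placeholders, not the JEDEC-specified values or behavior.

# Conceptual illustration of per-row activation counting (PRAC-style).
# Every wordline activation increments that row's counter; when a counter
# crosses a back-off threshold, the DRAM alerts the memory controller so
# potential victim rows can be refreshed before charge leakage flips bits.
# The threshold and alert handling here are placeholders, not JEDEC values.

from collections import defaultdict

BACK_OFF_THRESHOLD = 512   # placeholder value

class PerRowCounter:
    def __init__(self):
        self.activations = defaultdict(int)   # one counter per row

    def on_activate(self, row: int) -> bool:
        """Returns True if the DRAM would raise an alert for this row."""
        self.activations[row] += 1
        if self.activations[row] >= BACK_OFF_THRESHOLD:
            self.activations[row] = 0
            return True   # controller refreshes neighbors, then resumes
        return False

counter = PerRowCounter()
alerts = sum(counter.on_activate(7) for _ in range(2000))
print(f"alerts raised: {alerts}")  # deterministic: every row is tracked

Unlike the sampled counters in today's TRR, every row is counted, so there is no access pattern that avoids being tracked.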
In the meantime, we continue to evaluate and improve other countermeasures to ensure our workloads are resilient against Rowhammer. We collaborate with our academic and industry partners to improve analysis techniques and test platforms, and to share our findings with the broader ecosystem.
“Phoenix: Rowhammer Attacks on DDR5 with Self-Correcting Synchronization” will be presented at IEEE Security & Privacy 2026 in San Francisco, CA (May 18-21, 2026).
At Made by Google 2025, we announced that the new Google Pixel 10 phones will support C2PA Content Credentials in Pixel Camera and Google Photos. This announcement represents the latest in a series of steps toward greater digital media transparency.
These capabilities are powered by Google Tensor G5, the Titan M2 security chip, the advanced hardware-backed security features of the Android platform, and Pixel engineering expertise.
In this post, we’ll break down our architectural blueprint for bringing a new level of trust to digital media, and how developers can apply this model to their own apps on Android.
Generative AI can help us all to be more creative, productive, and innovative. But it can be hard to tell the difference between content that’s been AI-generated, and content created without AI. The ability to verify the source and history—or provenance—of digital content is more important than ever.
Content Credentials convey a rich set of information about how media such as images, videos, or audio files were made, protected by the same digital signature technology that has secured online transactions and mobile apps for decades. They empower users to identify AI-generated (or altered) content, helping to foster transparency and trust in generative AI. They can also be complemented by watermarking technologies such as SynthID.
Content Credentials are an industry standard backed by a broad coalition of leading companies for securely conveying the origin and history of media files. The standard is developed by the Coalition for Content Provenance and Authenticity (C2PA), of which Google is a steering committee member.
The traditional approach to classifying digital image content has focused on categorizing content as “AI” vs. “not AI”. This has been the basis for many legislative efforts, which have required the labeling of synthetic media. This traditional approach has drawbacks, as described in Chapter 5 of this seminal report by Google. Research shows that if only synthetic content is labeled as “AI”, then users falsely believe unlabeled content is “not AI”, a phenomenon called “the implied truth effect”. This is why Google is taking a different approach to applying C2PA Content Credentials.
Instead of categorizing digital content into a simplistic “AI” vs. “not AI”, Pixel 10 takes the first steps toward implementing our vision of categorizing digital content as either i) media that comes with verifiable proof of how it was made or ii) media that doesn't.
Given the broad range of scenarios in which these apps attach Content Credentials, we designed our C2PA implementation architecture from the outset with security and privacy in mind.
Good actors in the C2PA ecosystem are motivated to ensure that provenance data is trustworthy. C2PA Certification Authorities (CAs), such as Google, are incentivized to only issue certificates to genuine instances of apps from trusted developers in order to prevent bad actors from undermining the system. Similarly, app developers want to protect their C2PA claim signing keys from unauthorized use. And of course, users want assurance that the media files they rely on come from where they claim. For these reasons, the C2PA defined the Conformance Program.
The Pixel Camera application on the Pixel 10 lineup has achieved Assurance Level 2, the highest security rating currently defined by the C2PA Conformance Program. This was made possible by a strong set of hardware-backed technologies, including Tensor G5 and the certified Titan M2 security chip, along with Android’s hardware-backed security APIs. Only mobile apps running on devices that have the necessary silicon features and Android APIs can be designed to achieve this assurance level. We are working with C2PA to help define future assurance levels that will push protections even deeper into hardware.
Achieving Assurance Level 2 requires verifiable, difficult-to-forge evidence. Google has built an end-to-end system on Pixel 10 devices that verifies several key attributes. However, the security of any claim is fundamentally dependent on the integrity of the application and the OS, an integrity that relies on both being kept current with the latest security patches.
The C2PA Conformance Program requires verifiable artifacts backed by a hardware Root of Trust, which Android provides through features like Key Attestation. This means Android developers can leverage these same tools to build apps that meet this standard for their users.
The robust security stack we described is the foundation of privacy. But Google takes further steps to protect your privacy even as you use Content Credentials, which required solving two additional challenges:
Challenge 1: Server-side Processing of Certificate Requests. Google’s C2PA Certification Authorities must certify new cryptographic keys generated on-device. To prevent fraud, these certificate enrollment requests need to be authenticated. A more common approach would require user accounts for authentication, but this would create a server-side record linking a user's identity to their C2PA certificates—a privacy trade-off we were unwilling to make.
Our Solution: Anonymous, Hardware-Backed Attestation. We solve this with Android Key Attestation, which allows Google CAs to verify what is being used (a genuine app on a secure device) without ever knowing who is using it (the user). Our CAs also enforce a strict no-logging policy for information like IP addresses that could tie a certificate back to a user.
Challenge 2: The Risk of Traceability Through Key Reuse. A significant privacy risk in any provenance system is traceability. If the same device or app-specific cryptographic key is used to sign multiple photos, those images can be linked by comparing the key. An adversary could potentially connect a photo someone posts publicly under their real name with a photo they post anonymously, deanonymizing the creator.
Our Solution: Unique Certificates. We eliminate this threat with a maximally private, "one-and-done" certificate management strategy: each key and certificate is used to sign exactly one image. Because no two images ever share the same public key, it is cryptographically impossible to link them. This engineering investment in user privacy is designed to set a clear standard for the industry.
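The core of this strategy is simply that no signing key ever outlives a single image. The sketch below illustrates that pattern using the third-party Python cryptography package; it is an analogy for the concept only, not the Pixel implementation, which generates and certifies keys in hardware-backed storage.

# Conceptual sketch of a "one key per image" signing pattern, using the
# third-party `cryptography` package. This only illustrates the idea of
# unlinkability; the real Pixel implementation generates and certifies
# keys inside hardware-backed keystores, not in application code.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def sign_image(image_bytes: bytes):
    # A fresh key pair is created for every image and never reused, so no
    # two signatures can be correlated through a shared public key.
    private_key = ec.generate_private_key(ec.SECP256R1())
    signature = private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))
    return private_key.public_key(), signature

pub_a, sig_a = sign_image(b"photo-a-pixels")
pub_b, sig_b = sign_image(b"photo-b-pixels")
assert pub_a.public_numbers() != pub_b.public_numbers()  # keys never repeat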
Overall, you can use Content Credentials on Pixel 10 without fear that another person or Google could use them to link any of your images to you or to one another.
Implementations of Content Credentials use trusted time-stamps to ensure the credentials can be validated even after the certificate used to produce them expires. Obtaining these trusted time-stamps typically requires connectivity to a Time-Stamping Authority (TSA) server. But what happens if the device is offline?
This is not a far-fetched scenario. Imagine you’ve captured a stunning photo of a remote waterfall. The image has Content Credentials that prove it was captured by a camera, but the cryptographic certificate used to produce them will eventually expire. Without a trusted time-stamp, that proof could eventually become untrusted, and you're too far from a cell signal to obtain one.
To solve this, Pixel developed an on-device, offline TSA.
Powered by the security features of Tensor, Pixel maintains a trusted clock in a secure environment, completely isolated from the user-controlled one in Android. The clock is synchronized regularly from a trusted source while the device is online, and is maintained even after the device goes offline (as long as the phone remains powered on). This allows your device to generate its own cryptographically-signed time-stamps the moment you press the shutter—no connection required. It ensures the story behind your photo remains verifiable and trusted after its certificate expires, whether you took it in your living room or at the top of a mountain.
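Conceptually, a time-stamp token binds a digest of the signed content to a time asserted by a trusted clock, under a signature made with the TSA's own key. The simplified Python sketch below shows that binding; a real TSA, including Pixel's on-device one, issues standard RFC 3161 tokens and keeps its clock and key in an isolated, hardware-backed environment.

# Simplified illustration of what a time-stamp token binds together:
# a digest of the content plus a trusted time, signed by the TSA's key.
# A real TSA issues RFC 3161 tokens; on Pixel, the clock and key live in
# an isolated environment rather than in application code like this.

import hashlib, json, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

tsa_key = ec.generate_private_key(ec.SECP256R1())   # stand-in for the TSA key

def issue_timestamp(content: bytes, trusted_time: int) -> dict:
    token = {
        "digest": hashlib.sha256(content).hexdigest(),
        "time": trusted_time,                         # from the secure clock
    }
    payload = json.dumps(token, sort_keys=True).encode()
    token["signature"] = tsa_key.sign(payload, ec.ECDSA(hashes.SHA256())).hex()
    return token

print(issue_timestamp(b"c2pa-manifest-bytes", int(time.time())))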
C2PA Content Credentials are not the sole solution for identifying the provenance of digital media. They are, however, a tangible step toward more media transparency and trust as we continue to unlock more human creativity with AI.
In our initial implementation of Content Credentials on the Android platform and Pixel 10 lineup, we prioritized a higher standard of privacy, security, and usability. We invite other implementers of Content Credentials to evaluate our approach and leverage these same foundational hardware and software security primitives. The full potential of these technologies can only be realized through widespread ecosystem adoption.
We look forward to adding Content Credentials across more Google products in the near future.
Today marks a watershed moment and new benchmark for open-source security and the future of consumer electronics. Google is proud to announce that protected KVM (pKVM), the hypervisor that powers the Android Virtualization Framework, has officially achieved SESIP Level 5 certification. This makes pKVM the first software security system designed for large-scale deployment in consumer electronics to meet this assurance bar.
The implications for the future of secure mobile technology are profound. With this level of security assurance, Android is now positioned to securely support the next generation of high-criticality isolated workloads. This includes vital features, such as on-device AI workloads that can operate on ultra-personalized data, with the highest assurances of privacy and integrity.
This certification required a hands-on evaluation by Dekra, a globally recognized cybersecurity certification lab, against the TrustCB SESIP scheme, compliant with EN 17927. Achieving Security Evaluation Standard for IoT Platforms (SESIP) Level 5 is a landmark because it incorporates AVA_VAN.5, the highest level of vulnerability analysis and penetration testing under the ISO 15408 (Common Criteria) standard. A system certified to this level has been evaluated to be resistant to highly skilled, knowledgeable, well-motivated, and well-funded attackers who may have insider knowledge and access.
This certification is the cornerstone of the next generation of Android’s multi-layered security strategy. Many of the Trusted Execution Environments (TEEs) used in the industry have not been formally certified or have only achieved lower levels of security assurance. This inconsistency creates a challenge for developers looking to build highly critical applications that require a robust and verifiable level of security. The certified pKVM changes this paradigm entirely: it provides a single, open-source, and exceptionally high-quality firmware base that all device manufacturers can build upon.
Looking ahead, Android device manufacturers will be required to use isolation technology that meets this same level of security for various security operations that the device relies on. Protected KVM ensures that every user can benefit from a consistent, transparent, and verifiably secure foundation.
This achievement represents just one important aspect of the immense, multi-year dedication from the Linux and KVM developer communities and multiple engineering teams at Google developing pKVM and AVF. We look forward to seeing the open-source community and Android ecosystem continue to build on this foundation, delivering a new era of high-assurance mobile technology for users.
Today we're excited to announce OSS Rebuild, a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts. As supply chain attacks continue to target widely-used dependencies, OSS Rebuild gives security teams powerful data to avoid compromise without burden on upstream maintainers.
The project comprises:
Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.
Open source software has become the foundation of our digital world. From critical infrastructure to everyday applications, OSS components now account for 77% of modern applications. With an estimated value exceeding $12 trillion, open source software has never been more integral to the global economy.
Yet this very ubiquity makes open source an attractive target: Recent high-profile supply chain attacks have demonstrated sophisticated methods for compromising widely-used packages. Each incident erodes trust in open ecosystems, creating hesitation among both contributors and consumers.
The security community has responded with initiatives like OpenSSF Scorecard, PyPI's Trusted Publishers, and npm's native SLSA support. However, there is no panacea: each effort targets a certain aspect of the problem, often making tradeoffs such as shifting work onto publishers and maintainers.
Our aim with OSS Rebuild is to empower the security community to deeply understand and control their supply chains by making package consumption as transparent as using a source repository. Our rebuild platform unlocks this transparency through a declarative build process, build instrumentation, and network monitoring capabilities which, within the SLSA Build framework, produce fine-grained, durable, trustworthy security metadata.
Building on the hosted infrastructure model we pioneered with OSS-Fuzz for memory issue detection, OSS Rebuild similarly uses hosted resources to address security challenges in open source, this time aimed at securing the software supply chain.
Our vision extends beyond any single ecosystem: We are committed to bringing supply chain transparency and security to all open source software development. Our initial support for the PyPI (Python), npm (JS/TS), and Crates.io (Rust) package registries—providing rebuild provenance for many of their most popular packages—is just the beginning of our journey.
Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression). Once we reproduce the package, we publish the build definition and outcome via SLSA Provenance. This attestation allows consumers to reliably verify a package's origin within the source history, understand and repeat its build process, and customize the build from a known-functional baseline (or maybe even use it to generate more detailed SBOMs).
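To make the comparison concrete: a rebuilt PyPI wheel (a zip archive) rarely matches the upstream file bit for bit because of differences such as compression settings, but the files inside usually do. The Python sketch below illustrates that kind of normalization, comparing archive entries by name and content digest rather than by raw bytes; it is a simplified illustration, not OSS Rebuild's actual comparison logic.

# Simplified illustration of "semantic" artifact comparison: instead of a
# bit-for-bit diff of two wheels (zip archives), compare the set of member
# names and the digest of each member's contents, ignoring unstable details
# such as compression level. Illustrative only, not OSS Rebuild's pipeline.

import hashlib
import zipfile

def normalized_digests(wheel_path: str) -> dict:
    digests = {}
    with zipfile.ZipFile(wheel_path) as archive:
        for name in archive.namelist():
            digests[name] = hashlib.sha256(archive.read(name)).hexdigest()
    return digests

def semantically_equal(upstream_wheel: str, rebuilt_wheel: str) -> bool:
    return normalized_digests(upstream_wheel) == normalized_digests(rebuilt_wheel)

# Example usage (paths are hypothetical):
# print(semantically_equal("absl_py-2.1.0-upstream.whl",
#                          "absl_py-2.1.0-rebuilt.whl"))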
With OSS Rebuild's existing automation for PyPI, npm, and Crates.io, most packages obtain protection effortlessly without user or maintainer intervention. Where automation isn't currently able to fully reproduce the package, we offer manual build specification so the whole community benefits from individual contributions.
And we are also excited at the potential for AI to help reproduce packages: Build and release processes are often described in natural language documentation which, while difficult to utilize with discrete logic, is increasingly useful to language models. Our initial experiments have demonstrated the approach's viability in automating exploration and testing, with limited human intervention, even in the most complex builds.
OSS Rebuild helps detect several classes of supply chain compromise:
Unsubmitted Source Code - When published packages contain code not present in the public source repository, OSS Rebuild will not attest to the artifact.
Real world attack: solana/web3.js (2024)
Build Environment Compromise - By creating standardized, minimal build environments with comprehensive monitoring, OSS Rebuild can detect suspicious build activity or avoid exposure to compromised components altogether.
Real world attack: tj-actions/changed-files (2025)
Stealthy Backdoors - Even sophisticated backdoors like xz often exhibit anomalous behavioral patterns during builds. OSS Rebuild's dynamic analysis capabilities can detect unusual execution paths or suspicious operations that are otherwise impractical to identify through manual review.
Real world attack: xz-utils (2024)
For enterprises and security professionals, OSS Rebuild can...
Enhance metadata without changing registries by enriching data for upstream packages. No need to maintain custom registries or migrate to a new package ecosystem.
Augment SBOMs by adding detailed build observability information to existing Software Bills of Materials, creating a more complete security picture.
Accelerate vulnerability response by providing a path to vendor, patch, and re-host upstream packages using our verifiable build definitions.
For publishers and maintainers of open source packages, OSS Rebuild can...
Strengthen package trust by providing consumers with independent verification of the packages' build integrity, regardless of the sophistication of the original build.
Retrofit historical packages' integrity with high-quality build attestations, regardless of whether build attestations were present or supported at the time of publication.
Reduce CI security-sensitivity, allowing publishers to focus on core development work. CI platforms tend to have complex authorization and execution models; by performing separate rebuilds, the CI environment no longer needs to be load-bearing for your packages' security.
The easiest (but not only!) way to access OSS Rebuild attestations is to use the provided Go-based command-line interface. It can be compiled and installed easily:
$ go install github.com/google/oss-rebuild/cmd/oss-rebuild@latest
You can fetch OSS Rebuild's SLSA Provenance:
$ oss-rebuild get cratesio syn 2.0.39
...or explore the rebuilt versions of a particular package:
$ oss-rebuild list pypi absl-py
...or even rebuild the package for yourself:
$ oss-rebuild get npm lodash 4.17.20 --output=dockerfile | \
docker run $(docker buildx build -q -)
OSS Rebuild is not just about fixing problems; it's about empowering end-users to make open source ecosystems more secure and transparent through collective action. If you're a developer, enterprise, or security researcher interested in OSS security, we invite you to follow along and get involved!
Check out the code, share your ideas, and voice your feedback at github.com/google/oss-rebuild.
Explore the data and contribute to improving support for your critical ecosystems and packages.
Learn more about SLSA Provenance at slsa.dev