Android 14 is the third major Android release with Rust support, and we are already seeing a number of benefits.
These positive early results provided an enticing motivation to increase the speed and scope of Rust adoption. We hoped to accomplish this by investing heavily in training to expand from the early adopters.
Early adopters are often willing to accept more risk to try out a new technology. They know there will be some inconveniences and a steep learning curve but are willing to learn, often on their own time.
Scaling up Rust adoption required moving beyond early adopters. For that we needed to ensure a baseline level of comfort and productivity within a set period of time. An important part of our strategy for accomplishing this was training. Unfortunately, the type of training we wanted to provide simply didn’t exist, so we decided to write and implement our own Rust training.
We set three goals for the training. With those goals as a starting point, we looked at the existing material and available tools.
Documentation is a key value of the Rust community, and there are many great resources available for learning Rust. First, there is the freely available Rust Book, which covers almost all of the language. Second, the standard library is extensively documented.
Because we knew our target audience, we could make stronger assumptions than most material found online. We created the course for engineers with at least 2–3 years of coding experience in either C, C++, or Java. This allowed us to move quickly when explaining concepts familiar to our audience, such as “control flow”, “stack vs heap”, and “methods”. People with other backgrounds can learn Rust from the many other resources freely available online.
For free-form documentation, mdBook has become the de facto standard in the Rust community. It is used for official documentation such as the Rust Book and Rust Reference.
A particularly interesting feature is the ability to embed executable snippets of Rust code. This is key to making the training engaging since the code can be edited live and executed directly in the slides.
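For illustration, a snippet like the following can be embedded in a slide, edited by participants, and re-run on the spot (this example is ours, not taken from the course):

fn main() {
    let numbers = [3, 1, 4, 1, 5];
    // Participants can tweak the code in the slide and re-run it live.
    let sum: i32 = numbers.iter().sum();
    println!("The sum of {numbers:?} is {sum}");
}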
In addition to being a familiar community standard, mdBook offers important features such as mdbook test, which compiles and runs every Rust snippet in the course, keeping the examples working as the course evolves.
These features made it easy for us to choose mdBook. While mdBook is not designed for presentations, the output looked OK on a projector when we limited the vertical size of each page.
Android has developers and OEM partners in many countries. It is critical that they can adapt existing Rust code in AOSP to fit their needs. To support translations, we developed mdbook-i18n-helpers. Support for multilingual documentation has been a community wish since 2015 and we are glad to see the plugins being adopted by several other projects to produce maintainable multilingual documentation for everybody.
With the technology and format nailed down, we started writing the course. We roughly followed the outline of the Rust Book since it covered most of what we needed. This gave us a course we called Rust Fundamentals: three days of five hours each, covering Rust syntax, semantics, and important concepts such as traits, generics, and error handling.
We then extended Rust Fundamentals with three deep dives: Rust in Android, Bare-metal Rust, and Concurrency in Rust.
A large group of in-house and community translators have helped translate the course into several languages. The course has been fully translated into Brazilian Portuguese and Korean, and we are working on Simplified Chinese and Traditional Chinese translations as well.
We started teaching the class in late 2022. In 2023, we hired a vendor, Immunant, to teach the majority of classes for Android engineers. This was important for scalability and for quality: dedicated instructors soon discovered where the course participants struggled and could adapt the delivery. In addition, over 30 Googlers have taught the course worldwide.
More than 500 Google engineers have taken the class. Feedback has been very positive: 96% of participants agreed it was worth their time. People consistently told us that they loved the interactive style, highlighting how it helped to be able to ask clarifying questions at any time. Instructors noted that people gave the course their undivided attention once they realized it was live. Live-coding demands a lot from the instructor, but it is worth it due to the high engagement it achieves.
Most importantly, people left the course able to be immediately productive with Rust in their day jobs. When participants were asked three months later, they confirmed that they were able to write and review Rust code. This matched the results from the much larger survey we ran in 2022.
We have been teaching Rust classes at Google for a year now. There are a few things that we want to improve: better topic ordering, more exercises, and more speaker notes. We would also like to extend the course with more deep dives. Pull requests are very welcome!
The full course is available for free at https://google.github.io/comprehensive-rust/. We are thrilled to see people starting to use Comprehensive Rust for classes around the world. We hope it can be a useful resource for the Rust community and that it will help both small and large teams get started on their Rust journey!
We are grateful to the 190+ contributors from all over the world who created more than 1,000 pull requests and issues on GitHub. Their bug reports, fixes, and feedback improved the course in countless ways. This includes the 50+ people who worked hard on writing and maintaining the many translations.
Special thanks to Andrew Walbran for writing Bare-metal Rust and to Razieh Behjati, Dustin Mitchell, and Alexandre Senges for writing Concurrency in Rust.
We also owe a great deal of thanks to the many volunteer instructors at Google who have been spending their time teaching classes around the globe. Your feedback has helped shape the course.
Finally, thanks to Jeffrey Vander Stoep, Ivan Lozano, Matthew Maurer, Dmytro Hrybenko, and Lars Bergstrom for providing feedback on this post.
When you import a third party library, do you review every line of code? Most software packages depend on external libraries, trusting that those packages aren’t doing anything unexpected. If that trust is violated, the consequences can be huge—regardless of whether the package is malicious, or well-intended but using overly broad permissions, such as with Log4j in 2021. Supply chain security is a growing issue, and we hope that greater transparency into package capabilities will help make secure coding easier for everyone.
Avoiding bad dependencies can be hard without appropriate information on what the dependency’s code actually does, and reviewing every line of that code is an immense task. Every dependency also brings its own dependencies, compounding the need for review across an expanding web of transitive dependencies. But what if there were an easy way to know the capabilities (the privileged operations accessed by the code) of your dependencies?
Capslock is a capability analysis CLI tool that informs users of privileged operations (like network access and arbitrary code execution) in a given package and its dependencies. Last month we published the alpha version of Capslock for the Go language, which can analyze and report on the capabilities that are used beneath the surface of open source software.
This CLI tool will provide deeper insights into the behavior of dependencies by reporting code paths that access privileged operations in the standard libraries. In upcoming versions we will add support for open source maintainers to prescribe and sandbox the capabilities required for their packages, highlighting to users what capabilities are present and alerting them if they change.
Vulnerability management is an important part of your supply chain security, but it doesn’t give you a full picture of whether your dependencies are safe to use. Adding capability analysis to your security posture gives you a better idea of the types of behavior you can expect from your dependencies, identifies potential weak points, and allows you to make a more informed choice about using a given dependency.
Capslock is motivated by the belief that the principle of least privilege—the idea that access should be limited to the minimal set that is feasible and practical—should be a first-class design concept for secure and usable software. Applied to software development, this means that a package should be allowed access only to the capabilities that it requires as part of its core behaviors. For example, you wouldn’t expect a data analysis package to need access to the network or a logging library to include remote code execution capabilities.
Capslock is initially rolling out for Go, a language with a strong security commitment and fantastic tooling for finding known vulnerabilities in package dependencies. When Capslock is used alongside Go’s vulnerability management tools, developers can use the additional, complementary signals to inform how they interpret vulnerabilities in their dependencies.
These capability signals can be used to:

- Find code with the highest levels of access to prioritize audits, code reviews, and vulnerability patches.
- Compare potential dependencies, or look for alternative packages when an existing dependency is no longer appropriate.
- Surface unwanted capability usage in packages to uncover new vulnerabilities or identify supply chain attacks in progress.
- Monitor for unexpected emerging capabilities due to package version or dependency changes, and even integrate capability monitoring into CI/CD pipelines.
- Filter vulnerability data to respond to the most relevant cases, such as finding packages with network access during a network-specific vulnerability alert.
We are looking forward to adding new features in future releases, such as better support for declaring the expected capabilities of a package, and extending to other programming languages. We are working to apply Capslock at scale and make capability information for open source packages broadly available in various community tools like deps.dev.
You can try Capslock now, and we hope you find it useful for auditing your external dependencies and making informed decisions on your code’s capabilities.
We’ll be at Gophercon in San Diego on Sept 27th, 2023—come and chat with us!
Fuzzing is an effective technique for finding software vulnerabilities. Over the past few years Android has been focused on improving the effectiveness, scope, and convenience of fuzzing across the organization. This effort has directly resulted in improved test coverage, fewer security and stability bugs, and higher code quality. Our implementation of continuous fuzzing allows software teams to find new bugs and vulnerabilities, and to prevent regressions automatically, without having to manually initiate fuzzing runs themselves. This post recounts a brief history of fuzzing on Android, shares how Google performs fuzzing at scale, and documents our experience, challenges, and success in building an infrastructure for automating fuzzing across Android. If you’re interested in contributing to fuzzing on Android, we’ve included instructions on how to get started, and information on how Android’s Vulnerability Rewards Program (VRP) rewards fuzzing contributions that find vulnerabilities.
Fuzzing has been around for many years, and Android was among the early large software projects to automate fuzzing and prioritize it similarly to unit testing as part of the broader goal to make Android the most secure and stable operating system. In 2019, Android kicked off the fuzzing project, with the goal of helping institutionalize fuzzing by making it seamless and part of code submission. The Android fuzzing project resulted in an infrastructure consisting of Pixel phones and Google Cloud based virtual devices that enabled scalable fuzzing capabilities across the entire Android ecosystem. This project has since grown to become the official internal fuzzing infrastructure for Android and performs thousands of fuzzing hours per day across hundreds of fuzzers.
Step 1: Define and find all the fuzzers in the Android repo
The first step is to integrate fuzzing into the Android build system (Soong) so that fuzzer binaries can be built. While developers are busy adding features to their codebase, they can include a fuzzer to fuzz their code and submit the fuzzer alongside the code they have developed. Android fuzzing uses a build rule called cc_fuzz (see example below). cc_fuzz (we also support rust_fuzz and java_fuzz) defines a Soong module with source file(s) and dependencies that can be built into a fuzzer binary.
cc_fuzz {
    name: "fuzzer_foo",
    srcs: [
        "fuzzer_foo.cpp",
    ],
    static_libs: [
        "libfoo",
    ],
    host_supported: true,
}
A packaging rule in Soong finds all of these cc_fuzz definitions and builds them automatically. The fuzzer structure itself is very simple and consists of one main entry point (LLVMFuzzerTestOneInput):
#include <stddef.h>
#include <stdint.h>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  // Here you invoke the code to be fuzzed.
  return 0;
}
This fuzzer gets automatically built into a binary and, along with its static/dynamic dependencies (as specified in the Android build file), is packaged into a zip file, which is added to the main zip containing all fuzzers.
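For Rust code, the shape is similar. Below is a minimal sketch of a rust_fuzz target built on the libfuzzer-sys crate; the file name and the libfoo::parse call are hypothetical stand-ins for real code under test:

// fuzzer_foo.rs: libfuzzer-sys generates the entry point that feeds inputs here.
#![no_main]

use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Here you invoke the code to be fuzzed, for example:
    // let _ = libfoo::parse(data);
});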
Step 2: Ingest all fuzzers into Android builds
Once the fuzzers are found in the Android repository and built into binaries, the next step is to upload them to cloud storage in preparation for running them on our backend. This process runs multiple times daily. The Android fuzzing infrastructure uses an open source continuous fuzzing framework (ClusterFuzz) to run fuzzers continuously on Android devices and emulators. In order to run the fuzzers on ClusterFuzz, the fuzzer zip files are renamed after the build, and the latest build gets to run.
The fuzzer zip file contains the fuzzer binary and its corresponding dictionary, as well as a subfolder containing its dependencies and the git revision numbers (sourcemap) corresponding to the build. Sourcemaps are used to enhance stack traces and produce crash reports.
Step 3: Run fuzzers continuously and find bugs
Running fuzzers continuously is done through scheduled jobs, where each job is associated with a set of physical devices or emulators. A job is also backed by a queue that represents the fuzzing tasks that need to be run. These tasks include running a fuzzer, reproducing a crash found in an earlier fuzzing run, and minimizing the corpus, among others.
Each fuzzer is run for multiple hours, or until it finds a crash. After the run, Android fuzzing takes all of the interesting input discovered during the run and adds it to the fuzzer’s corpus. This corpus is then shared across fuzzer runs and grows over time. The fuzzer is then prioritized in subsequent runs according to the growth of new coverage and crashes found (if any). This ensures we give the most effective fuzzers more time to run and find interesting crashes.
Step 4: Generate fuzzer line coverage
What good is a fuzzer if it’s not fuzzing the code you care about? To improve the quality of the fuzzer and to monitor the overall progress of Android fuzzing, two types of coverage metrics are calculated and available to Android developers. The first metric is edge coverage, which refers to edges in the Control Flow Graph (CFG). By instrumenting the fuzzer and the code being fuzzed, the fuzzing engine can track small snippets of code that get triggered every time execution flow reaches them. That way, fuzzing engines know exactly how many (and how many times) each of these instrumentation points got hit on every run, so they can aggregate them and calculate the coverage.
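To see why this feedback matters, consider a minimal hypothetical example: each nested comparison below is a distinct edge in the CFG, and by rewarding inputs that reach a new edge for the first time, the engine lets the fuzzer discover the crashing input byte by byte rather than having to guess all three bytes at once.

// Hypothetical code under fuzz: reaching the panic requires three specific
// bytes, but edge feedback guides the fuzzer toward it incrementally.
fn check(data: &[u8]) {
    if data.len() >= 3 && data[0] == b'F' { // new edge once an input starts with 'F'
        if data[1] == b'U' {                // new edge once "FU" is found
            if data[2] == b'Z' {            // new edge once "FUZ" is found
                panic!("bug reached");
            }
        }
    }
}

The libFuzzer log below shows this kind of progress: the cov value ticks up each time an input reaches new instrumentation points.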
INFO: Seed: 2859304549
INFO: Loaded 1 modules (773 inline 8-bit counters): 773 [0x5610921000, 0x5610921305),
INFO: Loaded 1 PC tables (773 PCs): 773 [0x5610921308,0x5610924358),
INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 4096 bytes
INFO: A corpus is not provided, starting from an empty corpus
#2      INITED cov: 2 ft: 2 corp: 1/1b lim: 4 exec/s: 0 rss: 24Mb
#413    NEW    cov: 3 ft: 3 corp: 2/9b lim: 8 exec/s: 0 rss: 24Mb L: 8/8 MS: 1 InsertRepeatedBytes-
#3829   NEW    cov: 4 ft: 4 corp: 3/17b lim: 38 exec/s: 0 rss: 24Mb L: 8/8 MS: 1 ChangeBinInt-
...
Line coverage inserts instrumentation points specifying lines in the source code. Line coverage is very useful for developers as they can pinpoint areas in the code that are not covered and update their fuzzers accordingly to hit those areas in future fuzzing runs.
Drilling into any of the folders shows the stats per file. Clicking further into one of the files shows the lines that were touched and the lines that never got coverage. In one example, the first line had been fuzzed roughly 5 million times, but the fuzzer never made it into lines 3 and 4, indicating a gap in the coverage for this fuzzer.
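That pattern can be illustrated with a hypothetical snippet: if the fuzzer never produces an input that passes an early check, the body of the branch is never covered.

fn parse(data: &[u8]) {                 // line 1: hit millions of times
    if data.starts_with(b"MAGIC") {     // line 2: condition almost never true
        let payload = &data[5..];       // line 3: never covered
        handle_payload(payload);        // line 4: never covered
    }
}

fn handle_payload(_payload: &[u8]) {
    // Placeholder for the code the fuzzer should be reaching; a dictionary
    // entry for "MAGIC" would help the fuzzer get past line 2.
}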
We have dashboards internally that measure our fuzzing coverage across our entire codebase. In order to generate these coverage dashboards yourself, you can follow these steps.
Another measure of the quality of a fuzzer is how many fuzzing iterations it can execute in one second. This depends directly on the available computing power and the complexity of the fuzz target. However, this parameter alone cannot measure how good or effective the fuzzing is.
Android fuzzing utilizes the ClusterFuzz fuzzing infrastructure to handle any found crashes and file a ticket with the Android security team. Android security assesses the crash based on the Android Severity Guidelines and then routes the vulnerability to the proper team for remediation. This entire process of finding the reproducible crash, routing it to Android security, and assigning the issue to a responsible team can take as little as two hours, and up to a week, depending on the type of crash and the severity of the vulnerability.
One example of a recent fuzzer success is CVE-2022-20473, where an internal team wrote a 20-line fuzzer and submitted it to run on the Android fuzzing infrastructure. Within a day, the fuzzer was ingested and pushed to our fuzzing infrastructure to begin fuzzing, and it shortly found a critical severity vulnerability! A patch for this CVE has been applied by the service team.
The Android Open Source Project (AOSP) is a large and complex project with many contributors. As a result, there are thousands of changes made to the project every day. These changes can be anything from small bug fixes to large feature additions, and fuzzing helps to find vulnerabilities that may be inadvertently introduced and not caught during code review.
Continuous fuzzing has helped to find these vulnerabilities before they reach production and can be exploited by attackers. One real-life example is CVE-2023-21041, a vulnerability discovered by a fuzzer written three years ago. This vulnerability affected Android firmware and could have led to local escalation of privilege with no additional execution privileges needed. The fuzzer ran for many years with limited findings, until a code regression introduced this vulnerability. This CVE has since been patched.
Android has been a huge proponent of Rust, with Android 13 being the first Android release with the majority of new code in a memory safe language. The amount of new memory-unsafe code entering Android has decreased, but there are still millions of lines of code that remain, hence the need for fuzzing persists.
Our work does not stop with memory unsafe languages, and we encourage fuzzer development in languages like Rust as well. While fuzzing won’t find the common vulnerabilities that you would expect to see in memory unsafe languages like C/C++, there have been numerous non-security issues discovered and remediated, which contribute to the overall stability of Android.
In addition to generic C/C++ binary issues such as missing dependencies, fuzzers can have their own classes of problems:
Low executions per second: in order to fuzz efficiently, the number of mutations has to be on the order of hundreds per second; otherwise, fuzzing will take a very long time to cover the code. We addressed this issue by adding a set of alerts that continuously monitor the health of the fuzzers as well as any sudden drop in coverage. Once a fuzzer is identified as underperforming, an automated email is sent to the fuzzer author with details to help them improve the fuzzer.
Fuzzing the wrong code: Like all resources, fuzzing resources are limited. We want to ensure that those resources give us the highest return, and that generally means devoting them to fuzzing code that processes untrusted (i.e., potentially attacker-controlled) inputs. This covers any way that the phone can receive input, including Bluetooth, NFC, USB, web, etc. Parsing structured input is particularly interesting since there is room for programming errors due to spec complexity. Code that generates output is not particularly interesting to fuzz. Similarly, internal code that is not exposed publicly is less of a security concern. We addressed this issue by identifying the most vulnerable code (see the following section).
In order to fuzz the most important components of the Android source code, we focus on the libraries that are most exposed to untrusted, potentially attacker-controlled input.
We’re constantly writing and improving fuzzers internally to cover some of the most sensitive areas of Android, but there is always room for improvement. If you’d like to get started writing your own fuzzer for an area of AOSP, you’re welcome to do so to make Android more secure (example CL).
Get started by reading our documentation on Fuzzing with libFuzzer and check your fuzzer into the Android Open Source project. If your fuzzer finds a bug, you can submit it to the Android Bug Bounty Program and could be eligible for a reward!
Since 2016, OSS-Fuzz has been at the forefront of automated vulnerability discovery for open source projects. Vulnerability discovery is an important part of keeping software supply chains secure, so our team is constantly working to improve OSS-Fuzz. For the last few months, we’ve tested whether we could boost OSS-Fuzz’s performance using Google’s Large Language Models (LLMs).
This blog post shares our experience of successfully applying the generative power of LLMs to improve the automated vulnerability detection technique known as fuzz testing (“fuzzing”). By using LLMs, we’re able to increase the code coverage for critical projects using our OSS-Fuzz service without manually writing additional code. Using LLMs is a promising new way to scale security improvements across the over 1,000 projects currently fuzzed by OSS-Fuzz and to remove barriers to future projects adopting fuzzing.
We created the OSS-Fuzz service to help open source developers find bugs in their code at scale—especially bugs that indicate security vulnerabilities. After more than six years of running OSS-Fuzz, we now support over 1,000 open source projects with continuous fuzzing, free of charge. As the Heartbleed vulnerability showed us, bugs that could be easily found with automated fuzzing can have devastating effects. For most open source developers, setting up their own fuzzing solution would cost time and resources. With OSS-Fuzz, developers can integrate their project for free and get automated bug discovery at scale.
Since 2016, we’ve found and verified fixes for over 10,000 security vulnerabilities. We also believe that OSS-Fuzz could likely find even more bugs with increased code coverage. The fuzzing service covers only around 30% of an open source project’s code on average, meaning that a large portion of our users’ code remains untouched by fuzzing. Recent research suggests that the most effective way to increase coverage is by adding additional fuzz targets for every project—one of the few parts of the fuzzing workflow that isn’t yet automated.
When an open source project onboards to OSS-Fuzz, maintainers make an initial time investment to integrate their projects into the infrastructure and then add fuzz targets. The fuzz targets are functions that use randomized input to test the targeted code. Writing fuzz targets is a project-specific and manual process that is similar to writing unit tests. The ongoing security benefits from fuzzing make this initial investment of time worth it for maintainers, but writing a comprehensive set of fuzz targets is a tough expectation for project maintainers, who are often volunteers.
But what if LLMs could write additional fuzz targets for maintainers?
To discover whether an LLM could successfully write new fuzz targets, we built an evaluation framework that connects OSS-Fuzz to the LLM, conducts the experiment, and evaluates the results.
Experiment overview: the experiment is a fully automated process, from identifying target code to evaluating the change in code coverage.
At first, the code generated from our prompts wouldn’t compile; however, after several rounds of prompt engineering and trying out the new fuzz targets, we saw projects gain between 1.5% and 31% code coverage. One of our sample projects, tinyxml2, went from 38% line coverage to 69% without any intervention from our team: with the LLM-generated fuzz targets added, the majority of its code is now covered.
Example fuzz targets for tinyxml2: each of the five generated fuzz targets is associated with a different part of the code and adds to the overall coverage improvement.
To replicate tinyxml2’s results manually would have required at least a day’s worth of work—which would mean several years of work to manually cover all OSS-Fuzz projects. Given tinyxml2’s promising results, we want to roll this approach out in production and extend similar, automatic coverage to other OSS-Fuzz projects.
Additionally, in the OpenSSL project, our LLM was able to automatically generate a working target that rediscovered CVE-2022-3602, which was in an area of code that previously did not have fuzzing coverage. Though this is not a new vulnerability, it suggests that as code coverage increases, we will find more vulnerabilities that are currently missed by fuzzing.
Learn more about our results through our example prompts and outputs or through our experiment report.
In the next few months, we’ll open source our evaluation framework to allow researchers to test their own automatic fuzz target generation. We’ll continue to optimize our use of LLMs for fuzzing target generation through more model finetuning, prompt engineering, and improvements to our infrastructure. We’re also collaborating closely with the Assured OSS team on this research in order to secure even more open source software used by Google Cloud customers.
Our longer-term goals include:

- Adding LLM fuzz target generation as a fully integrated feature in OSS-Fuzz, with continuous generation of new targets for OSS-Fuzz projects and zero manual involvement.
- Extending support from C/C++ projects to additional language ecosystems, like Python and Java.
- Automating the process of onboarding a project into OSS-Fuzz to eliminate any need to write even initial fuzz targets.
We’re working towards a future of personalized vulnerability detection with little manual effort from developers. With the addition of LLM generated fuzz targets, OSS-Fuzz can help improve open source security for everyone.
As part of our effort to deploy quantum-resistant cryptography, we are happy to announce the release of the first quantum-resilient FIDO2 security key implementation as part of OpenSK, our open source security key firmware. This open source, hardware-optimized implementation uses a novel ECC/Dilithium hybrid signature scheme that benefits from the security of ECC against standard attacks and Dilithium’s resilience against quantum attacks. The scheme was co-developed in partnership with ETH Zürich and won the best paper award at the ACNS Secure Cryptographic Implementation workshop.
As progress toward practical quantum computers accelerates, preparing for their advent is becoming a more pressing issue. In particular, standard public key cryptography, which was designed to protect against traditional computers, will not be able to withstand quantum attacks. Fortunately, with the recent standardization of quantum-resilient public key cryptography, including the Dilithium algorithm, we now have a clear path to secure security keys against quantum attacks.
While quantum attacks are still in the distant future, deploying cryptography at Internet scale is a massive undertaking, which is why doing it as early as possible is vital. In particular, for security keys this process is expected to be gradual, as users will have to acquire new ones once FIDO has standardized post-quantum resilient cryptography and the new standard is supported by major browser vendors.
Hybrid signature: Strong nesting with classical and PQC schemes
Our proposed implementation relies on a hybrid approach that combines the battle-tested ECDSA signature algorithm with the recently standardized quantum-resistant signature algorithm, Dilithium. In collaboration with ETH Zürich, we developed this novel hybrid signature scheme that offers the best of both worlds. Relying on a hybrid signature is critical, as the security of Dilithium and other recently standardized quantum-resistant algorithms hasn’t yet stood the test of time, and recent attacks on Rainbow (another quantum-resilient algorithm) demonstrate the need for caution. This cautiousness is particularly warranted for security keys, as most can’t be upgraded – although we are working toward that for OpenSK. The hybrid approach is also used in other post-quantum efforts like Chrome’s support for TLS.
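As a minimal sketch of the idea (not OpenSK’s actual code): hybrid signing produces both signatures, with the Dilithium signature also covering the classical one (the nesting), and verification requires both to pass, so an attacker must break both ECC and Dilithium. The Signer trait here is a hypothetical stand-in for real ECDSA and Dilithium implementations:

// Hypothetical trait standing in for concrete ECDSA / Dilithium code.
trait Signer {
    fn sign(&self, msg: &[u8]) -> Vec<u8>;
    fn verify(&self, msg: &[u8], sig: &[u8]) -> bool;
}

struct HybridSignature {
    classical: Vec<u8>, // ECDSA component
    pqc: Vec<u8>,       // Dilithium component
}

fn hybrid_sign(ecdsa: &dyn Signer, dilithium: &dyn Signer, msg: &[u8]) -> HybridSignature {
    let classical = ecdsa.sign(msg);
    // Nesting: the Dilithium signature covers the message *and* the classical
    // signature, binding the two components together.
    let nested = [msg, classical.as_slice()].concat();
    let pqc = dilithium.sign(&nested);
    HybridSignature { classical, pqc }
}

fn hybrid_verify(ecdsa: &dyn Signer, dilithium: &dyn Signer, msg: &[u8], sig: &HybridSignature) -> bool {
    let nested = [msg, sig.classical.as_slice()].concat();
    // Both component signatures must verify for the hybrid signature to be valid.
    ecdsa.verify(msg, &sig.classical) && dilithium.verify(&nested, &sig.pqc)
}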
On the technical side, a large challenge was to create a Dilithium implementation small enough to run on security keys’ constrained hardware. Through careful optimization, we were able to develop a memory-optimized Rust implementation that requires only 20 KB of memory, which is sufficiently small. We also spent time ensuring that our implementation’s signing speed was well within the expected security key specification. That said, we believe improving signature speed further by leveraging hardware acceleration would allow keys to be more responsive.
Moving forward, we are hoping to see this implementation (or a variant of it), being standardized as part of the FIDO2 key specification and supported by major web browsers so that users' credentials can be protected against quantum attacks. If you are interested in testing this algorithm or contributing to security key research, head to our open source implementation OpenSK.