Providing safe experiences to billions of users and millions of Android developers has been one of the highest priorities for Google Play for many years. Last year we introduced new policies, improved our systems, and further optimized our processes to better protect our users, assist good developers, and strengthen our guard against bad apps and developers. Additionally, in 2020, Google Play Protect scanned over 100 billion installed apps each day for malware across billions of devices.
Users come to Google Play to find helpful, reliable apps on everything from COVID-19 vaccine information to new forms of entertainment, grocery delivery, communication and more.
As such, we introduced a series of policies and new developer support to continue to elevate information quality on the platform and reduce the risk of user harm from misinformation.
Our core efforts around identifying and mitigating bad apps and developers continued to evolve to address new adversarial behaviors and forms of abuse. Our machine-learning detection capabilities and enhanced app review processes prevented over 962k policy-violating app submissions from getting published to Google Play. We also banned 119k malicious and spammy developer accounts. Additionally, we significantly increased our focus on SDK enforcement, as we've found these violations have an outsized impact on security and user data privacy.
Last year, we continued to reduce developer access to sensitive permissions. In February, we announced a new background location policy to ensure that apps requesting this permission need the data in order to provide clear user benefit. As a result of the new policy, developers now have to demonstrate that benefit and prominently tell users about it or face possible removal from Google Play. We've begun enforcement on apps not meeting new policy guidelines and will provide an update on the usage of this permission in a future blog post.
We've also continued to invest in protecting kids and helping parents find great content. In 2020 we launched a new kids tab filled with “Teacher approved” apps. To evaluate apps, we teamed with academic experts and teachers across the country, including our lead advisors, Joe Blatt (Harvard Graduate School of Education) and Dr. Sandra Calvert (Georgetown University).
As we continue to invest in protecting people from apps with harmful content, malicious behaviors, or threats to user privacy, we are equally motivated to provide trusted experiences to Play developers. For example, we’ve improved our process for providing relevant information about enforcement actions we’ve taken, resulting in a significant reduction in appeals and increased developer satisfaction. We will continue to enhance the speed and quality of our communications to developers, and to listen to feedback about how we can further engage and elevate trusted developers. Android developers can expect to see more on this front in the coming year.
Our global teams of product managers, engineers, policy experts, and operations leaders are more excited than ever to advance the safety of the platform and build lasting trust with our users. We look forward to building an even better Google Play experience.
With all of the challenges from this past year, users have become increasingly dependent on their mobile devices to create fitness routines, stay connected with loved ones, work remotely, and order things like groceries with ease. According to eMarketer, in 2020 users spent over three and a half hours per day using mobile apps. With so much time spent on mobile devices, ensuring the safety of mobile apps is more important than ever. Despite the importance of digital security, there isn’t a consistent industry standard for assessing mobile apps: existing guidelines tend to be either too lightweight or too onerous for the average developer, and they lack a compliance arm. That’s why we're excited to share ioXt’s announcement of a new Mobile Application Profile, which provides a set of security and privacy requirements, with defined acceptance criteria, against which developers can certify their apps.
Over 20 industry stakeholders, including Google, Amazon, a number of certified labs such as NCC Group and Dekra, and automated mobile app security testing vendors like NowSecure, collaborated to develop this new security standard for mobile apps. We’ve seen early interest from Internet of Things (IoT) and virtual private network (VPN) developers; however, the standard is appropriate for any cloud-connected service, such as social, messaging, fitness, or productivity apps.
The Internet of Secure Things Alliance (ioXt) manages a security compliance assessment program for connected devices. ioXt has over 300 members across various industries, including Google, Amazon, Facebook, T-Mobile, Comcast, Zigbee Alliance, Z-Wave Alliance, Legrand, Resideo, Schneider Electric, and many others. With so many companies involved, ioXt covers a wide range of device types, including smart lighting, smart speakers, and webcams, and since most smart devices are managed through apps, they have expanded coverage to include mobile apps with the launch of this profile.
The ioXt Mobile Application Profile provides a minimum set of commercial best practices for all cloud-connected apps running on mobile devices. This security baseline helps mitigate common threats and reduces the probability of significant vulnerabilities. The profile leverages existing standards and principles set forth by OWASP MASVS and the VPN Trust Initiative, and allows developers to differentiate security capabilities around cryptography, authentication, network security, and vulnerability disclosure program quality. The profile also provides a framework to evaluate category-specific requirements which may be applied based on the features contained in the app. For example, an IoT app only needs to certify under the Mobile Application profile, whereas a VPN app must comply with the Mobile Application profile plus the VPN extension.
Certification allows developers to demonstrate product safety, and we’re excited about the opportunity for this standard to push the industry forward. We observed that app developers were very quick to resolve any issues identified during their black-box evaluations against this new standard, often with turnarounds of a matter of days. At launch, the following apps have been certified: Comcast, ExpressVPN, GreenMAX, Hubspace, McAfee Innovations, NordVPN, OpenVPN for Android, Private Internet Access, VPN Private, as well as the Google One app, including VPN by Google One.
We look forward to seeing adoption of the standard grow over time, and to app developers who already invest in security best practices being able to highlight their efforts. The standard also serves as a guiding light, inspiring more developers to invest in mobile app security. If you are interested in learning more about the ioXt Alliance and how to get your app certified, visit https://compliance.ioxtalliance.org/sign-up and check out Android’s guidelines for building secure apps.
In our previous post, we announced that Android now supports the Rust programming language for developing the OS itself. Related to this, we are also participating in the effort to evaluate the use of Rust as a supported language for developing the Linux kernel. In this post, we discuss some technical aspects of this work using a few simple examples.
C has been the language of choice for writing kernels for almost half a century because it offers the level of control and predictable performance required by such a critical component. The density of memory safety bugs in the Linux kernel is generally quite low due to high code quality, high standards of code review, and carefully implemented safeguards. However, memory safety bugs do still regularly occur. On Android, vulnerabilities in the kernel are generally considered high-severity because they can result in a security model bypass due to the privileged mode that the kernel runs in.
We feel that Rust is now ready to join C as a practical language for implementing the kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics.
We developed an initial prototype of the Binder driver to allow us to make meaningful comparisons between the safety and performance characteristics of the existing C version and its Rust counterpart. The Linux kernel has over 30 million lines of code, so naturally our goal is not to convert it all to Rust but rather to allow new code to be written in Rust. We believe this incremental approach allows us to benefit from the kernel’s existing high-performance implementation while providing kernel developers with new tools to improve memory safety and maintain performance going forward.
We joined the Rust for Linux organization, where the community had already done and continues to do great work toward adding Rust support to the Linux kernel build system. We also need designs that allow code in the two languages to interact with each other: we're particularly interested in safe, zero-cost abstractions that allow Rust code to use kernel functionality written in C, and how to implement functionality in idiomatic Rust that can be called seamlessly from the C portions of the kernel.
Since Rust is a new language for the kernel, we also have the opportunity to enforce best practices in terms of documentation and uniformity. For example, we have specific machine-checked requirements around the usage of unsafe code: for every unsafe function, the developer must document the requirements that need to be satisfied by callers to ensure that its usage is safe; additionally, for every call to unsafe functions (or usage of unsafe constructs like dereferencing a raw pointer), the developer must document the justification for why it is safe to do so.
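As a rough sketch of what this convention looks like in practice (the function and names below are illustrative, not taken from the kernel crate):

/// Returns the element at `index` without bounds checking.
///
/// # Safety
///
/// Callers must ensure that `index` is less than `len` and that `ptr` is
/// valid for reads of `len` initialized elements.
unsafe fn get_unchecked(ptr: *const u32, len: usize, index: usize) -> u32 {
    // SAFETY: The caller guarantees `index < len` and that `ptr` is valid
    // for `len` elements, so this read stays in bounds.
    unsafe { *ptr.add(index) }
}

fn first_or_zero(values: &[u32]) -> u32 {
    if values.is_empty() {
        return 0;
    }
    // SAFETY: We just checked that `values` is non-empty, so index 0 is in
    // bounds, and a slice's pointer is valid for its whole length.
    unsafe { get_unchecked(values.as_ptr(), values.len(), 0) }
}

The # Safety section documents the caller's obligations, and each // SAFETY: comment justifies why a particular unsafe operation upholds them; the presence of both can be checked mechanically.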
Just as important as safety is that Rust support be convenient and helpful for developers to use. Let’s get into a few examples of how Rust can assist kernel developers in writing drivers that are safe and correct.
We'll use an implementation of a semaphore character device. Each device has a current value; writes of n bytes result in the device value being incremented by n; reads decrement the value by 1 unless the value is 0, in which case they will block until they can decrement the count without going below 0.
Suppose semaphore is a file representing our device. We can interact with it from the shell as follows:
> cat semaphore
When semaphore is a newly initialized device, the command above will block because the device's current value is 0. It will be unblocked if we run the following command from another shell because it increments the value by 1, which allows the original read to complete:
> echo -n a > semaphore
We could also increment the count by more than 1 if we write more data, for example:
> echo -n abc > semaphore
increments the count by 3, so the next 3 reads won't block.
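To make these semantics concrete before diving into the driver itself, here is a small userspace model written with std's Mutex and Condvar (illustrative only; the kernel driver code shown later uses the kernel crate's own types):

use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// A userspace model of the device semantics: writing n bytes adds n to the
// count; each read consumes 1, blocking while the count is 0.
struct Semaphore {
    count: Mutex<usize>,
    changed: Condvar,
}

impl Semaphore {
    fn new() -> Self {
        Semaphore { count: Mutex::new(0), changed: Condvar::new() }
    }

    // Models a write of `n` bytes.
    fn write(&self, n: usize) {
        let mut count = self.count.lock().unwrap();
        *count = count.saturating_add(n);
        self.changed.notify_all();
    }

    // Models a 1-byte read: blocks until the count can be decremented.
    fn read(&self) {
        let mut count = self.count.lock().unwrap();
        while *count == 0 {
            count = self.changed.wait(count).unwrap();
        }
        *count -= 1;
    }
}

fn main() {
    let s = Arc::new(Semaphore::new());
    let reader = {
        let s = s.clone();
        thread::spawn(move || s.read()) // blocks until a write arrives
    };
    s.write(3); // like `echo -n abc > semaphore`
    reader.join().unwrap();
    println!("read completed after write");
}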
To allow us to show a few more aspects of Rust, we'll add the following features to our driver: remember what the maximum value was throughout the lifetime of a device, and remember how many reads each file issued on the device.
We'll now show how such a driver would be implemented in Rust, contrasting it with a C implementation. We note, however, that we are still early in the process, so all of this is subject to change. The aspect we'd like to emphasize is how Rust can assist the developer: for example, it allows us to eliminate, or greatly reduce the chances of introducing, entire classes of bugs at compile time, while remaining flexible and adding minimal overhead.
A developer needs to do the following to implement a driver for a new character device in Rust:
1. Implement the FileOpener trait, whose open method creates the per-file state.
2. Implement the FileOperations trait, providing the operations the driver supports (in our example: read, write, ioctl, and release).
3. Register the new device type with the kernel.
The following outlines how the first two steps of our example compare in Rust and C:
Rust:

impl FileOpener<Arc<Semaphore>> for FileState {
    fn open(shared: &Arc<Semaphore>) -> KernelResult<Box<Self>> {
        [...]
    }
}

impl FileOperations for FileState {
    type Wrapper = Box<Self>;

    fn read(
        &self,
        _: &File,
        data: &mut UserSlicePtrWriter,
        offset: u64,
    ) -> KernelResult<usize> {
        [...]
    }

    fn write(
        &self,
        data: &mut UserSlicePtrReader,
        _offset: u64,
    ) -> KernelResult<usize> {
        [...]
    }

    fn ioctl(
        &self,
        file: &File,
        cmd: &mut IoctlCommand,
    ) -> KernelResult<i32> {
        [...]
    }

    fn release(_obj: Box<Self>, _file: &File) {
        [...]
    }

    declare_file_operations!(read, write, ioctl);
}
C:

static int semaphore_open(struct inode *nodp, struct file *filp)
{
    struct semaphore_state *shared =
        container_of(filp->private_data,
                     struct semaphore_state, miscdev);
    [...]
}

static ssize_t semaphore_write(struct file *filp,
                               const char __user *buffer,
                               size_t count, loff_t *ppos)
{
    struct file_state *state = filp->private_data;
    [...]
}

static ssize_t semaphore_read(struct file *filp,
                              char __user *buffer,
                              size_t count, loff_t *ppos)
{
    struct file_state *state = filp->private_data;
    [...]
}

static long semaphore_ioctl(struct file *filp,
                            unsigned int cmd,
                            unsigned long arg)
{
    struct file_state *state = filp->private_data;
    [...]
}

static int semaphore_release(struct inode *nodp, struct file *filp)
{
    struct file_state *state = filp->private_data;
    [...]
}

static const struct file_operations semaphore_fops = {
    .owner = THIS_MODULE,
    .open = semaphore_open,
    .read = semaphore_read,
    .write = semaphore_write,
    .compat_ioctl = semaphore_ioctl,
    .release = semaphore_release,
};
Character devices in Rust benefit from a number of safety features: the lifetime of the per-file state returned by open is managed automatically (a Box<Self> in our example), user memory is only reachable through safe wrappers such as UserSlicePtrReader and UserSlicePtrWriter, and the declare_file_operations! macro declares which operations the driver actually implements.
For a driver to provide a custom ioctl handler, it needs to implement the ioctl function that is part of the FileOperations trait, as exemplified below.
Rust:

fn ioctl(
    &self,
    file: &File,
    cmd: &mut IoctlCommand,
) -> KernelResult<i32> {
    cmd.dispatch(self, file)
}

impl IoctlHandler for FileState {
    fn read(
        &self,
        _file: &File,
        cmd: u32,
        writer: &mut UserSlicePtrWriter,
    ) -> KernelResult<i32> {
        match cmd {
            IOCTL_GET_READ_COUNT => {
                writer.write(&self.read_count.load(Ordering::Relaxed))?;
                Ok(0)
            }
            _ => Err(Error::EINVAL),
        }
    }

    fn write(
        &self,
        _file: &File,
        cmd: u32,
        reader: &mut UserSlicePtrReader,
    ) -> KernelResult<i32> {
        match cmd {
            IOCTL_SET_READ_COUNT => {
                self.read_count.store(reader.read()?, Ordering::Relaxed);
                Ok(0)
            }
            _ => Err(Error::EINVAL),
        }
    }
}
C:

#define IOCTL_GET_READ_COUNT _IOR('c', 1, u64)
#define IOCTL_SET_READ_COUNT _IOW('c', 1, u64)

static long semaphore_ioctl(struct file *filp,
                            unsigned int cmd,
                            unsigned long arg)
{
    struct file_state *state = filp->private_data;
    void __user *buffer = (void __user *)arg;
    u64 value;

    switch (cmd) {
    case IOCTL_GET_READ_COUNT:
        value = atomic64_read(&state->read_count);
        if (copy_to_user(buffer, &value, sizeof(value)))
            return -EFAULT;
        return 0;
    case IOCTL_SET_READ_COUNT:
        if (copy_from_user(&value, buffer, sizeof(value)))
            return -EFAULT;
        atomic64_set(&state->read_count, value);
        return 0;
    default:
        return -EINVAL;
    }
}
Ioctl commands are standardized such that, given a command, we know whether a user buffer is provided, its intended use (read, write, both, none), and its size. In Rust, we provide a dispatcher (accessible by calling cmd.dispatch) that uses this information to automatically create user memory access helpers and pass them to the caller.
A driver is not required to use this though. If, for example, it doesn't use the standard ioctl encoding, Rust offers the flexibility of simply calling cmd.raw to extract the raw arguments and using them to handle the ioctl (potentially with unsafe code, which will need to be justified).
However, if a driver implementation does use the standard dispatcher, it will benefit from not having to implement any unsafe code: the dispatcher uses the command's encoding to construct user memory access helpers of the appropriate kind and size and passes them to the handler on the driver's behalf.
All of the above could potentially also be done in C, but it's very easy for developers to (likely unintentionally) break contracts in ways that lead to unsafety; Rust requires unsafe blocks for this, which should only be used in rare cases and which attract additional scrutiny. Rust also helps with state shared between threads, which we discuss next.
We allow developers to use mutexes and spinlocks to provide interior mutability. In our example, we use a mutex to protect mutable data; below we show the data structures we use in C and Rust, and how we implement a wait until the count is nonzero so that we can satisfy a read:
Rust:

struct SemaphoreInner {
    count: usize,
    max_seen: usize,
}

struct Semaphore {
    changed: CondVar,
    inner: Mutex<SemaphoreInner>,
}

struct FileState {
    read_count: AtomicU64,
    shared: Arc<Semaphore>,
}
C:

struct semaphore_state {
    struct kref ref;
    struct miscdevice miscdev;
    wait_queue_head_t changed;
    struct mutex mutex;
    size_t count;
    size_t max_seen;
};

struct file_state {
    atomic64_t read_count;
    struct semaphore_state *shared;
};
Rust:

fn consume(&self) -> KernelResult {
    let mut inner = self.shared.inner.lock();
    while inner.count == 0 {
        if self.shared.changed.wait(&mut inner) {
            return Err(Error::EINTR);
        }
    }
    inner.count -= 1;
    Ok(())
}
C:

static int semaphore_consume(struct semaphore_state *state)
{
    DEFINE_WAIT(wait);

    mutex_lock(&state->mutex);
    while (state->count == 0) {
        prepare_to_wait(&state->changed, &wait, TASK_INTERRUPTIBLE);
        mutex_unlock(&state->mutex);
        schedule();
        finish_wait(&state->changed, &wait);
        if (signal_pending(current))
            return -EINTR;
        mutex_lock(&state->mutex);
    }

    state->count--;
    mutex_unlock(&state->mutex);
    return 0;
}
We note that such waits are not uncommon in the existing C code, for example, a pipe waiting for a "partner" to write, a unix-domain socket waiting for data, an inode search waiting for completion of a delete, or a user-mode helper waiting for state change.
The Rust implementation benefits from how the lock and the data it protects are tied together: the count is only reachable through the guard returned by lock, and waiting on the condition variable requires passing that guard back, so the compiler enforces that the mutex is held at every access and is correctly released and reacquired around the wait.
Below, we show how read, write, and open are implemented in our example driver:
Rust:

fn read(
    &self,
    _: &File,
    data: &mut UserSlicePtrWriter,
    offset: u64,
) -> KernelResult<usize> {
    if data.is_empty() || offset > 0 {
        return Ok(0);
    }
    self.consume()?;
    data.write_slice(&[0u8; 1])?;
    self.read_count.fetch_add(1, Ordering::Relaxed);
    Ok(1)
}
C:

static ssize_t semaphore_read(struct file *filp,
                              char __user *buffer,
                              size_t count, loff_t *ppos)
{
    struct file_state *state = filp->private_data;
    char c = 0;
    int ret;

    if (count == 0 || *ppos > 0)
        return 0;

    ret = semaphore_consume(state->shared);
    if (ret)
        return ret;

    if (copy_to_user(buffer, &c, sizeof(c)))
        return -EFAULT;

    atomic64_add(1, &state->read_count);
    *ppos += 1;
    return 1;
}
Rust:

fn write(
    &self,
    data: &mut UserSlicePtrReader,
    _offset: u64,
) -> KernelResult<usize> {
    {
        let mut inner = self.shared.inner.lock();
        inner.count = inner.count.saturating_add(data.len());
        if inner.count > inner.max_seen {
            inner.max_seen = inner.count;
        }
    }

    self.shared.changed.notify_all();
    Ok(data.len())
}
C:

static ssize_t semaphore_write(struct file *filp,
                               const char __user *buffer,
                               size_t count, loff_t *ppos)
{
    struct file_state *state = filp->private_data;
    struct semaphore_state *shared = state->shared;

    mutex_lock(&shared->mutex);
    shared->count += count;
    if (shared->count < count)
        shared->count = SIZE_MAX;
    if (shared->count > shared->max_seen)
        shared->max_seen = shared->count;
    mutex_unlock(&shared->mutex);

    wake_up_all(&shared->changed);
    return count;
}
Rust:

fn open(shared: &Arc<Semaphore>) -> KernelResult<Box<Self>> {
    Ok(Box::try_new(Self {
        read_count: AtomicU64::new(0),
        shared: shared.clone(),
    })?)
}
C:

static int semaphore_open(struct inode *nodp, struct file *filp)
{
    struct semaphore_state *shared =
        container_of(filp->private_data,
                     struct semaphore_state, miscdev);
    struct file_state *state;

    state = kzalloc(sizeof(*state), GFP_KERNEL);
    if (!state)
        return -ENOMEM;

    kref_get(&shared->ref);
    state->shared = shared;
    atomic64_set(&state->read_count, 0);
    filp->private_data = state;
    return 0;
}
They illustrate other benefits brought by Rust: reference counting of the shared device state is handled by Arc (the shared.clone() above) rather than manual kref_get calls, allocation failure is surfaced as an error by Box::try_new rather than through a null check, and user memory is written via the safe write_slice helper instead of a raw copy_to_user call whose return value the developer must remember to check.
The examples above are only a small part of the whole project, but we hope they give readers a glimpse of the kinds of benefits that Rust brings. At the moment we have nearly all generic kernel functionality needed by Binder neatly wrapped in safe Rust abstractions, so we are in the process of gathering feedback from the broader Linux kernel community with the intent of upstreaming the existing Rust support.
We also continue to make progress on our Binder prototype, implement additional abstractions, and smooth out some rough edges. This is an exciting time and a rare opportunity to potentially influence how the Linux kernel is developed, as well as inform the evolution of the Rust language. We invite those interested to join us in Rust for Linux and attend our planned talk at Linux Plumbers Conference 2021!
Thanks Nick Desaulniers, Kees Cook, and Adrian Taylor for contributions to this post. Special thanks to Jeff Vander Stoep for contributions and editing, and to Greg Kroah-Hartman for reviewing and contributing to the code examples.
Correctness of code in the Android platform is a top priority for the security, stability, and quality of each Android release. Memory safety bugs in C and C++ continue to be the most-difficult-to-address source of incorrectness. We invest a great deal of effort and resources into detecting, fixing, and mitigating this class of bugs, and these efforts are effective in preventing a large number of bugs from making it into Android releases. Yet in spite of these efforts, memory safety bugs continue to be a top contributor of stability issues, and consistently represent ~70% of Android’s high severity security vulnerabilities.
In addition to ongoing and upcoming efforts to improve detection of memory bugs, we are ramping up efforts to prevent them in the first place. Memory-safe languages are the most cost-effective means for preventing memory bugs. In addition to memory-safe languages like Kotlin and Java, we’re excited to announce that the Android Open Source Project (AOSP) now supports the Rust programming language for developing the OS itself.
Managed languages like Java and Kotlin are the best option for Android app development. These languages are designed for ease of use, portability, and safety. The Android Runtime (ART) manages memory on behalf of the developer. The Android OS uses Java extensively, effectively protecting large portions of the Android platform from memory bugs. Unfortunately, for the lower layers of the OS, Java and Kotlin are not an option.
Lower levels of the OS require systems programming languages like C, C++, and Rust. These languages are designed with control and predictability as goals. They provide access to low-level system resources and hardware. They are light on resources and have more predictable performance characteristics. For C and C++, the developer is responsible for managing memory lifetime. Unfortunately, it's easy to make mistakes when doing this, especially in complex and multithreaded codebases.
Rust provides memory safety guarantees by using a combination of compile-time checks to enforce object lifetime/ownership and runtime checks to ensure that memory accesses are valid. This safety is achieved while providing equivalent performance to C and C++.
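As a minimal standalone illustration of both kinds of checks (ordinary Rust, not Android platform code):

fn consume(s: String) {
    println!("{}", s);
}

fn main() {
    // Compile-time check: ownership prevents use-after-free. Uncommenting
    // the line below fails to build, because `s` was moved into `consume`.
    let s = String::from("kernel");
    consume(s);
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`

    // Runtime check: slice accesses are bounds-checked, so an out-of-range
    // access is caught deterministically instead of corrupting memory.
    let buf = [10u8, 20, 30];
    let i = 5;
    println!("{:?}", buf.get(i)); // prints "None" rather than reading out of bounds
}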
C and C++ don’t provide these same safety guarantees, so code written in them requires robust isolation. All Android processes are sandboxed, and we follow the Rule of 2 to decide if functionality necessitates additional isolation and deprivileging. The Rule of 2 is simple: of the following three options, code may choose at most two:
1. Handling untrustworthy inputs.
2. Being written in an unsafe language (C or C++).
3. Running with high privilege, or without a sandbox.
For Android, this means that if code is written in C/C++ and parses untrustworthy input, it should be contained within a tightly constrained and unprivileged sandbox. While adherence to the Rule of 2 has been effective in reducing the severity and reachability of security vulnerabilities, it does come with limitations. Sandboxing is expensive: the new processes it requires add overhead and introduce latency through IPC and additional memory usage. Sandboxing also doesn’t eliminate vulnerabilities from the code, and its efficacy is reduced by high bug density, which allows attackers to chain multiple vulnerabilities together.
Memory-safe languages like Rust help us overcome these limitations in two ways: they lower the density of bugs in our code, which increases the effectiveness of our existing sandboxing, and they reduce our sandboxing needs, since code that doesn’t rely on a sandbox for memory safety can run with fewer processes and less overhead.
Of course, introducing a new programming language does nothing to address bugs in our existing C/C++ code. Even if we redirected the efforts of every software engineer on the Android team, rewriting tens of millions of lines of code is simply not feasible.
Our analysis of the age of memory safety bugs in Android (measured from when they were first introduced) demonstrates why our memory-safe language efforts are best focused on new development rather than on rewriting mature C/C++ code. Most of our memory bugs occur in new or recently modified code, with about 50% being less than a year old.
The comparative rarity of older memory bugs may come as a surprise to some, but we’ve found that old code is not where we most urgently need improvement. Software bugs are found and fixed over time, so we would expect the number of bugs in code that is being maintained but not actively developed to go down over time. Just as reducing the number and density of bugs improves the effectiveness of sandboxing, it also improves the effectiveness of bug detection.
Bug detection via robust testing, sanitization, and fuzzing is crucial for improving the quality and correctness of all software, including software written in Rust. A key limitation for the most effective memory safety detection techniques is that the erroneous state must actually be triggered in instrumented code in order to be detected. Even in code bases with excellent test/fuzz coverage, this results in a lot of bugs going undetected.
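As a contrived sketch of this limitation (ordinary Rust; the scenario and names are ours):

// A parser with a bug on a rarely taken path: for inputs that begin with
// the magic byte 0x7f it indexes one element past the end. Rust's runtime
// check turns this into a panic (a sanitizer plays the same role for C),
// but only if some test actually drives execution down this branch.
fn parse(input: &[u8]) -> u8 {
    if input.first() == Some(&0x7f) {
        // Bug: should be `input[input.len() - 1]`.
        return input[input.len()];
    }
    input.iter().sum()
}

fn main() {
    // The happy-path test passes, so the bug goes undetected: no input in
    // the suite ever starts with 0x7f.
    assert_eq!(parse(&[1, 2, 3]), 6);
    println!("all (insufficient) tests passed");
}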
Another limitation is that bug detection is scaling faster than bug fixing. In some projects, bugs that are being detected are not always getting fixed. Bug fixing is a long and costly process: the bug must be triaged and its root cause understood, a correct fix must be developed and reviewed, and the fix must then be tested and shipped to users.
Each of these steps is costly, and missing any one of them can result in the bug going unpatched for some or all users. For complex C/C++ code bases, often there are only a handful of people capable of developing and reviewing the fix, and even with a high amount of effort spent on fixing bugs, sometimes the fixes are incorrect.
Bug detection is most effective when bugs are relatively rare and dangerous bugs can be given the urgency and priority that they merit. Our ability to reap the benefits of improvements in bug detection requires that we prioritize preventing the introduction of new bugs.
Rust modernizes a range of other language aspects, which results in improved code correctness: safer integer handling, in which overflow can be detected or handled explicitly rather than silently wrapping; errors surfaced as Result values that callers must handle; a more expressive type system; and an ownership model that prevents data races.
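As a small standalone sketch of two of these aspects, explicit error values and explicit overflow handling (ordinary Rust, not Android platform code):

use std::num::ParseIntError;

// Errors are values: the signature makes failure explicit, and the caller
// cannot silently ignore it the way an unchecked C return code can be.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    match parse_port("8080") {
        Ok(port) => println!("port = {}", port),
        Err(e) => println!("invalid port: {}", e),
    }

    // Integer handling: overflow is surfaced explicitly instead of being a
    // silent wraparound (debug builds also panic on plain `+` overflow).
    let big: u16 = 65_500;
    match big.checked_add(100) {
        Some(v) => println!("sum = {}", v),
        None => println!("addition would overflow u16"),
    }
}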
Adding a new language to the Android platform is a large undertaking. There are toolchains and dependencies that need to be maintained, test infrastructure and tooling that must be updated, and developers that need to be trained. For the past 18 months we have been adding Rust support to the Android Open Source Project, and we have a few early adopter projects that we will be sharing in the coming months. Scaling this to more of the OS is a multi-year project. Stay tuned, we will be posting more updates on this blog.
Java is a registered trademark of Oracle and/or its affiliates.
Thanks Matthew Maurer, Bram Bonné, and Lars Bergstrom for contributions to this post. Special thanks to our colleagues, Adrian Taylor for his insight into the age of memory vulnerabilities, and to Chris Palmer for his work on “The Rule of 2” and “The limits of Sandboxing”.