Security Blog
The latest news and insights from Google on security and safety on the Internet
From Chrysaor to Lipizzan: Blocking a new targeted spyware family
July 26, 2017
Posted by Megan Ruthven, Android Security; Ken Bodzak, Threat Analysis Group; and Neel Mehta, Threat Analysis Group
Android Security is always developing new ways of using data to find and block potentially harmful apps (PHAs) from getting onto your devices. Earlier this year, we announced that we had blocked Chrysaor targeted spyware, believed to be written by NSO Group, a cyber arms company. In the course of our Chrysaor investigation, we used similar techniques to discover a new and unrelated family of spyware called Lipizzan. Lipizzan's code contains references to a cyber arms company, Equus Technologies.
Lipizzan is a multi-stage spyware product capable of monitoring and exfiltrating a user’s email, SMS messages, location, voice calls, and media. We have found 20 Lipizzan apps distributed in a targeted fashion to fewer than 100 devices in total and have blocked the developers and apps from the Android ecosystem. Google Play Protect has notified all affected devices and removed the Lipizzan apps.
We've enhanced Google Play Protect's capabilities to detect the targeted spyware used here and will continue to use this framework to block more targeted spyware. To learn more about the methods Google uses to find targeted mobile spyware like Chrysaor and Lipizzan, attend our Black Hat talk, Fighting Targeted Malware in the Mobile Ecosystem.
How does Lipizzan work?
Getting on a target device
Lipizzan was a sophisticated two-stage spyware tool. The first stage, found by Google Play Protect, was distributed through several channels, including Google Play, and typically impersonated an innocuous-sounding app such as a "Backup" or "Cleaner" app. Upon installation, Lipizzan would download and load a second "license verification" stage, which would survey the infected device and validate certain abort criteria. If given the all-clear, the second stage would then root the device with known exploits and begin to exfiltrate device data to a command-and-control server.
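The staged gating described above can be sketched as pseudologic. Everything here (function names, the specific abort criteria) is an assumption for illustration; the real implementation is unpublished Android code.

```python
# Hypothetical sketch of Lipizzan's stage-1 gating logic. The criteria
# below are illustrative examples of what an "abort check" might test.

def should_proceed(device_survey: dict) -> bool:
    """Return True only if no abort criterion fires."""
    abort_criteria = [
        device_survey.get("is_emulator", False),        # avoid analysis sandboxes
        device_survey.get("security_tools_present", False),
        not device_survey.get("matches_target", True),  # only hit intended targets
    ]
    return not any(abort_criteria)

def stage_one(device_survey: dict) -> str:
    # Stage 1 masquerades as a benign "Backup"/"Cleaner" app, fetches a
    # "license verification" payload, and decides whether to activate it.
    if should_proceed(device_survey):
        return "load stage 2: root device, exfiltrate to C2"
    return "abort: stay dormant"
```

This gate-before-activate pattern is why the second stage is hard to observe in analysis environments: the survey aborts before any malicious behavior runs.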
Once implanted on a target device
The Lipizzan second stage was capable of performing the following tasks and exfiltrating their results:
Call recording
VOIP recording
Recording from the device microphone
Location monitoring
Taking screenshots
Taking photos with the device camera(s)
Fetching device information and files
Fetching user information (contacts, call logs, SMS, application-specific data)
The PHA had specific routines to retrieve data from each of the following apps:
Gmail
Hangouts
KakaoTalk
LinkedIn
Messenger
Skype
Snapchat
StockEmail
Telegram
Threema
Viber
WhatsApp
We saw all of this behavior in a standalone stage 2 app, com.android.mediaserver (not related to Android MediaServer). This app shared a signing certificate with one of the stage 1 applications, com.app.instantbackup, indicating that the same author wrote both. A code snippet shared between the second stage (com.android.mediaserver) and the stage 1 applications let us draw further ties between them.
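The certificate-based attribution works because every APK must be signed, and two packages signed with the same key carry the same certificate. A minimal sketch of such a comparison, with hypothetical APK contents (in the v1 signing scheme, the signature block lives under META-INF/ inside the APK ZIP):

```python
# Sketch: tying two APKs to the same author via their signing material.
# APKs are ZIP files; v1 signature blocks are META-INF/*.RSA (or .DSA/.EC).
# Fingerprinting the raw signature block is a simplification of comparing
# the actual X.509 certificate inside it.
import hashlib
import io
import zipfile

def signing_fingerprints(apk_bytes: bytes) -> set[str]:
    """SHA-256 fingerprints of every v1 signature block in the APK."""
    fingerprints = set()
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        for name in apk.namelist():
            if name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC")):
                fingerprints.add(hashlib.sha256(apk.read(name)).hexdigest())
    return fingerprints

def same_author(apk_a: bytes, apk_b: bytes) -> bool:
    # A shared signing block means both packages were signed with the same key.
    return bool(signing_fingerprints(apk_a) & signing_fingerprints(apk_b))
```

In practice tools such as apksigner print the certificate digest directly; the point is simply that a matching signing key is strong evidence of a common author.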
Morphing first stage
After we blocked the first set of apps on Google Play, new apps were uploaded with a similar format but had a couple of differences.
The apps changed from posing as "backup" apps to posing as "cleaner", "notepad", "sound recorder", and "alarm manager" apps. The new apps were uploaded within a week of the takedown, showing that the authors have a method of easily changing the branding of the implant apps.
The apps also changed from downloading an unencrypted stage 2 to including stage 2 as an encrypted blob. The new stage 1 would only decrypt and load the second stage if it received an intent with an AES key and IV.
Despite changing the type of app and the method to download stage 2, we were able to catch the new implant apps soon after upload.
How many devices were affected?
Fewer than 100 devices checked into Google Play Protect with the apps listed below, meaning the family affected only about 0.000007% of Android devices. Since identifying Lipizzan, Google Play Protect has removed Lipizzan from affected devices and actively blocks installs on new devices.
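The quoted percentage follows from simple arithmetic against Android's installed base. A quick check, where the ~1.4 billion figure is an assumption based on the active-device counts Google cited around that time:

```python
# Sanity check of the 0.000007% figure: fewer than 100 affected devices
# out of an assumed ~1.4 billion active Android devices.
affected = 100
installed_base = 1.4e9

pct = affected / installed_base * 100
print(f"{pct:.6f}%")  # prints "0.000007%"
```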
What can you do to protect yourself?
Ensure you are opted into Google Play Protect.
Exclusively use the Google Play store. The chance you will install a PHA is much lower on Google Play than when using other install mechanisms.
Keep the "unknown sources" setting disabled when you are not using it.
Keep your phone patched to the latest Android security update.
List of samples
First stage
Newer version
Standalone second stage
Final removal of trust in WoSign and StartCom Certificates
July 20, 2017
Posted by Andrew Whalley and Devon O'Brien, Chrome Security
As previously announced, Chrome has been in the process of removing trust from certificates issued by the CA WoSign and its subsidiary StartCom, as a result of several incidents not in keeping with the high standards expected of CAs.
We started the phase-out in Chrome 56 by only trusting certificates issued prior to October 21, 2016, and subsequently restricted trust to a set of whitelisted hostnames based on the Alexa Top 1M. We have been reducing the size of the whitelist over the course of several Chrome releases.
Beginning with Chrome 61, the whitelist will be removed, resulting in full distrust of the existing WoSign and StartCom root certificates and all certificates they have issued.
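The staged distrust can be modeled as a trust decision over a certificate's issuance date and the shrinking hostname whitelist. A simplified sketch: Chrome's real logic lives in its certificate verifier, and the `phase` parameter here is an illustrative stand-in for the release milestones.

```python
# Simplified model of the staged WoSign/StartCom distrust policy.
from datetime import date

CUTOFF = date(2016, 10, 21)  # certs issued on/after this date were never trusted

def wosign_cert_trusted(not_before: date, hostname: str, phase: int,
                        whitelist: frozenset = frozenset()) -> bool:
    if phase >= 61:
        return False                  # Chrome 61+: full distrust
    if not_before >= CUTOFF:
        return False                  # date restriction, in place since Chrome 56
    if phase > 56:
        return hostname in whitelist  # interim releases: shrinking whitelist
    return True                       # Chrome 56: date check only
```

The design gave affected sites a predictable, multi-release window to migrate before trust was removed entirely.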
Based on the Chromium Development Calendar, this change is visible in the Chrome Dev channel now, will reach the Chrome Beta channel around late July 2017, and will be released to Stable around mid-September 2017.
Sites still using StartCom- or WoSign-issued certificates should replace them as a matter of urgency to minimize disruption for Chrome users.
Identifying Intrusive Mobile Apps Using Peer Group Analysis
July 12, 2017
Posted by Martin Pelikan, Giles Hogben, and Ulfar Erlingsson of Google’s Security and Privacy team
Mobile apps entertain and assist us, make it easy to communicate with friends and family, and provide tools ranging from maps to electronic wallets. But these apps can also seek more device information than they need to do their job, such as personal data and sensor data from components like the camera and GPS.
To protect our users and help developers navigate this complex environment, Google analyzes privacy and security signals for each app in Google Play. We then compare that app to other apps with similar features, known as functional peers. Creating peer groups allows us to calibrate our estimates of users’ expectations and set adequate boundaries of behaviors that may be considered unsafe or intrusive. This process helps detect apps that collect or send sensitive data without a clear need, and makes it easier for users to find apps that provide the right functionality and respect their privacy. For example, most coloring book apps don’t need to know a user’s precise location to function and this can be established by analyzing other coloring book apps. By contrast, mapping and navigation apps need to know a user’s location, and often require GPS sensor access.
One way to create app peer groups is to create a fixed set of categories and then assign each app into one or more categories, such as tools, productivity, and games. However, fixed categories are too coarse and inflexible to capture and track the many distinctions in the rapidly changing set of mobile apps. Manual curation and maintenance of such categories is also a tedious and error-prone task.
To address this, Google developed a machine-learning algorithm for clustering mobile apps with similar capabilities. Our approach uses deep learning of vector embeddings to identify peer groups of apps with similar functionality, using app metadata, such as text descriptions, and user metrics, such as installs. Then peer groups are used to identify anomalous, potentially harmful signals related to privacy and security, from each app’s requested permissions and its observed behaviors. The correlation between different peer groups and their security signals helps different teams at Google decide which apps to promote and determine which apps deserve a more careful look by our security and privacy experts. We also use the result to help app developers improve the privacy and security of their apps.
Apps are split into groups of similar functionality, and in each cluster of similar apps the established baseline is used to find anomalous privacy and security signals.
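A toy version of the peer-group idea, not Google's actual pipeline: once apps are grouped by similar functionality, a permission requested by very few peers in the group stands out as potentially intrusive. Permission names and thresholds here are illustrative.

```python
# Illustrative peer-group permission analysis: within a functional peer
# group, flag the permissions of a candidate app that are rare among peers.
from collections import Counter

def flag_unusual_permissions(peer_apps, app_perms, threshold=0.1):
    """peer_apps: list of permission sets, one per functional peer.
    Returns the permissions in app_perms requested by fewer than
    `threshold` of peers."""
    counts = Counter(p for perms in peer_apps for p in perms)
    n = len(peer_apps)
    return {p for p in app_perms if counts[p] / n < threshold}

# Toy example: coloring-book peers rarely request fine location.
peers = [{"INTERNET"}, {"INTERNET", "VIBRATE"}] * 10  # 20 peer apps
suspect = {"INTERNET", "ACCESS_FINE_LOCATION"}
print(flag_unusual_permissions(peers, suspect))  # {'ACCESS_FINE_LOCATION'}
```

The hard part in practice is forming the peer groups themselves, which is where the deep-learned embeddings described above come in; the anomaly check on top of them is comparatively simple.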
These techniques build upon earlier ideas, such as using peer groups to analyze privacy-related signals, deep learning for language models to make those peer groups better, and automated data analysis to draw conclusions.
Many teams across Google collaborated to create this algorithm and the surrounding process. Thanks to the many essential team members, including Andrew Ahn, Vikas Arora, Hongji Bao, Jun Hong, Nwokedi Idika, Iulia Ion, Suman Jana, Daehwan Kim, Kenny Lim, Jiahui Liu, Sai Teja Peddinti, Sebastian Porst, Gowdy Rajappan, Aaron Rothman, Monir Sharif, Sooel Son, Michael Vrable, and Qiang Yan.
For more information on Google's efforts to detect and fight potentially harmful apps (PHAs) on Android, see Google Android Security Team's Classifications for Potentially Harmful Applications.
References
S. Jana, Ú. Erlingsson, I. Ion (2015).
Apples and Oranges: Detecting Least-Privilege Violators with Peer Group Analysis
. arXiv:1510.07308 [cs.CR].
T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean (2013).
Distributed Representations of Words and Phrases and their Compositionality
. Advances in Neural Information Processing Systems 26 (NIPS 2013).
Ú. Erlingsson (2016).
Data-driven software security: Models and methods
. Proceedings of the 29th IEEE Computer Security Foundations Symposium (CSF'16), Lisboa, Portugal.