Security Blog
The latest news and insights from Google on security and safety on the Internet
Announcing "Browser Security Handbook"
December 10, 2008
Posted by Michal Zalewski, Security Team.
Many people view the task of writing secure web applications as a very complex challenge - in part because of the inherent shortcomings of technologies such as HTTP, HTML, or JavaScript, and in part because of the subtle differences and unexpected interactions between various browser security mechanisms.
Through the years, we found that having a full understanding of browser-specific quirks is critical to making sound security design decisions in modern Web 2.0 applications. For example, the same user-supplied link may appear to one browser as a harmless relative address, while another could interpret it as a potentially malicious JavaScript payload. In another case, an application may rely on a particular HTTP request that is impossible to spoof from within the browser in order to defend the security of its users. However, an attacker might easily subvert the safeguard by crafting the same request from within commonly installed browser extensions. If not accounted for, these differences can lead to trouble.
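To ground that first example, here is a minimal sketch (our illustration, not from the handbook) of the kind of scheme allow-list an application might apply to user-supplied links; the helper name and the allow-list contents are assumptions:

```python
from urllib.parse import urlparse

# Hypothetical allow-list check for user-supplied links (illustrative only,
# not from the handbook). Relative URLs parse with an empty scheme, so
# "/profile" passes while "javascript:alert(1)" is rejected.
ALLOWED_SCHEMES = {"", "http", "https"}

def is_safe_link(url: str) -> bool:
    try:
        scheme = urlparse(url.strip()).scheme.lower()
    except ValueError:
        return False
    return scheme in ALLOWED_SCHEMES

assert is_safe_link("/relative/path")
assert is_safe_link("https://example.com/")
assert not is_safe_link("javascript:alert(1)")
```

Even a check like this inherits browser quirks: some browsers historically tolerated stray whitespace or control characters inside a scheme, which is exactly the sort of difference the handbook catalogs.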
In hopes of helping to make the Web a safer place, we decided to release our Browser Security Handbook to the general public. This 60-page document provides a comprehensive comparison of a broad set of security features and characteristics in commonly used browsers, along with (hopefully) useful commentary and implementation tips for application developers who need to rely on these mechanisms, as well as engineering teams working on future browser-side security enhancements.
Please note that given the sheer number of characteristics covered, we expect some kinks in the initial version of the handbook; feedback from browser vendors and security researchers is greatly appreciated.
Native Client: A Technology for Running Native Code on the Web
December 8, 2008
Posted by Brad Chen, Native Client Team.
Most native applications can access everything on your computer – including your files. This access means that you have to make decisions about which apps you trust enough to install, because a malicious or buggy application might harm your machine. Here at Google we believe you shouldn't have to choose between powerful applications and security. That's why we're working on Native Client, a technology that seeks to give Web developers the opportunity to make safer and more dynamic applications that can run on any OS and any browser. Today, we're sharing our technology with the research and security communities for their feedback to help make this technology more useful and more secure.
Our approach is built around a software containment system called the inner-sandbox that is designed to prevent unintended interactions between a native code module and the host system. The inner-sandbox uses static analysis to detect security defects in untrusted x86 code. Previously, such analysis has been challenging due to such practices as self-modifying code and overlapping instructions. In our work, we disallow such practices through a set of alignment and structural rules that, when observed, enable the native code module to be disassembled reliably and all reachable instructions to be identified during disassembly. With reliable disassembly as a tool, it's then feasible for the validator to determine whether the executable includes unsafe x86 instructions. For example, the validator can determine whether the executable includes instructions that directly invoke the operating system that could read or write files or subvert the containment system itself.
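To make the validator idea concrete, here is a toy sketch of bundle-based validation. It is emphatically not the real Native Client validator (which disassembles genuine x86 encodings); the one-byte "instruction set", the bundle size semantics, and the opcode values are all invented for illustration:

```python
# Toy sketch of bundle-based validation in the spirit of Native Client's
# inner-sandbox. NOT the real validator: it uses an invented one-byte
# "instruction set", whereas NaCl reliably disassembles genuine x86 code.
BUNDLE = 32                   # alignment unit, echoing NaCl's code bundles
UNSAFE = {0xCD, 0x0F}         # stand-ins for opcodes that invoke the OS
JUMP = 0xE9                   # toy "jump <addr>": one operand byte follows

def validate(code: bytes) -> bool:
    """Reject modules with unsafe opcodes or misaligned jump targets."""
    if len(code) % BUNDLE != 0:
        return False                  # module must be whole bundles
    i = 0
    while i < len(code):
        op = code[i]
        if op in UNSAFE:
            return False              # direct OS invocation is disallowed
        if op == JUMP:
            if i + 1 >= len(code) or code[i + 1] % BUNDLE != 0:
                return False          # jumps may only target bundle starts
            i += 2                    # skip the operand byte
        else:
            i += 1
    return True

print(validate(bytes([0x90] * 32)))           # True: one bundle of no-ops
print(validate(bytes([0x90] * 31 + [0xCD])))  # False: unsafe opcode
```

The key property the alignment rules buy is that jumps can only land on instruction boundaries the validator has already examined, which is what makes whole-module disassembly trustworthy.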
To learn more and help test Native Client, check out our post on the Google Code blog as well as our developer site. Our developer site includes our research paper and of course the source for the project under the BSD license.
We look forward to hearing what you think!
User Experience in the Identity Community
December 2, 2008
Eric Sachs & Ben Laurie, Google Security
One of the major conferences on Internet identity standards is the Internet Identity Workshop (IIW), a semiannual 'un-conference' where the sessions are not determined ahead of time. It is attended by a large set of people who work on Internet security and identity standards such as OAuth, OpenID, SAML, InfoCards, etc. A major theme within the identity community this year has been about improving the user experience and growing the adoption of these technologies. The OpenID community is making great progress on user experience, with Yahoo, AOL, and Google quickly improving the support they provide (read a summary from Joseph Smarr of Plaxo). Similarly, the InfoCard community has been working on simplifying the user experience of InfoCard technology, including the updated CardSpace selector from Microsoft.
Another hot topic at IIW centered around how to improve the user experience when testing alternatives and enhancements to passwords to make them less susceptible to phishing attacks. Many websites and enterprises have tried these password enhancements/alternatives, but they found that people complained that they were hard to use, or that they weren't portable enough for people who use multiple computers, including web cafes and smart phones. We have published an article summarizing some of the community's current ideas for how to deploy these new authentication mechanisms using a multi-layered approach that minimizes additional work required by users. We have also pulled together a set of videos showing how a number of these different approaches work with both web-based and desktop applications. We hope this information will be helpful to other websites and enterprises who are concerned about phishing.
Gmail security and recent phishing activity
November 25, 2008
Posted by Chris Evans
We've seen some speculation recently about a purported security vulnerability in Gmail and the theft of several website owners' domains by unauthorized third parties. At Google we're committed to providing secure products, and we mounted an immediate investigation. Our results indicate no evidence of a Gmail vulnerability.
With help from affected users, we determined that the cause was a phishing scheme, a common method used by malicious actors to trick people into sharing their sensitive information. Attackers sent customized e-mails encouraging web domain owners to visit fraudulent websites such as "google-hosts.com" that they set up purely to harvest usernames and passwords. These fake sites had no affiliation with Google, and the ones we've seen are now offline. Once attackers gained the user credentials, they were free to modify the affected accounts as they desired. In this case, the attacker set up mail filters specifically designed to forward messages from web domain providers.
Several news stories referenced a domain theft from December 2007 that was incorrectly linked to a Gmail CSRF vulnerability. We did have a Gmail CSRF bug reported to us in September 2007 that we fixed worldwide within 24 hours of private disclosure of the bug details. Neither this bug nor any other Gmail bug was involved in the December 2007 domain theft.
We recognize how many people depend on Gmail, and we strive to make it as secure as possible. At this time, we'd like to thank the wider security community for working with us to achieve this goal. We're always looking at new ways to enhance Gmail security. For example, we recently gave users the option to always run their entire session using https.
To keep your Google account secure online, we recommend you only ever enter your Gmail sign-in credentials at web addresses starting with https://www.google.com/accounts, and never click through any warnings your browser may raise about certificates. For more information on how to stay safe from phishing attacks, see our blog post here.
OAuth for Secure Mashups
November 18, 2008
Posted by Eric Sachs, Senior Product Manager, Google Security
A year ago, a number of large and small websites announced a new open standard called OAuth. This standard is designed to provide a secure and privacy-preserving technique for enabling specific private data on one site to be accessed by another site. One popular reason for that type of cross-site access is data portability in areas such as personal health records (such as Google Health or Microsoft Healthvault), as well as social networks (such as OpenSocial enabled sites). I originally became involved in this space in the summer of 2005, when Google started developing a feature called AuthSub, which was one of the precursors of OAuth. That was a proprietary protocol, but one that has been used by hundreds of websites to provide add-on services to Google Account users by getting permission from users to access data in their Google Accounts. In fact, that was the key feature that a few of us used to start the Google Health portability effort back when it was only a prototype project with a few dedicated Googlers.
However, with the development of a common Internet standard in OAuth, we see much greater potential for data portability and secure mash-ups. Today we announced that the gadget platform now supports OAuth, and the interoperability of this standard was demonstrated by new iGoogle gadgets that AOL and MySpace both built to enable users to see their respective AOL or MySpace mailboxes (and other information) while on iGoogle. However, to ensure the user's privacy, this only works after the user has authorized AOL or MySpace to make their data available to the gadget running on iGoogle. We also previously announced that third-party developers can build their own iGoogle gadgets that access the OAuth-enabled APIs for Google applications such as Calendar, Picasa, and Docs. In fact, since both the gadget platform and OAuth technology are open standards, we are working to help other companies who run services similar to iGoogle to enhance them with support for these standards. Once that is in place, these new OAuth-powered gadgets that are available on iGoogle will also work on those other sites, including many of the gadgets that Google offers for its own applications. This provides a platform for some interesting mash-ups. For example, a third-party developer could create a single gadget that uses OAuth to access both Google OAuth-enabled APIs (such as a Gmail user's address book) and MySpace OAuth-enabled APIs (such as a user's friend list) and display a mashup of the combination.
While the combination of OAuth with gadgets is an exciting new use of the technology, most of the use of OAuth is between websites, such as to enable a user of Google Health to allow a clinical trial matching site to access his or her health profile. I previously mentioned that one privacy control provided by OAuth is that it defines a standard way for users to authorize one website to make their data accessible to another website. In addition, OAuth provides a way to do this without the first site needing to reveal the identity of the user -- it simply provides a different opaque security token to each additional website the user wants to share his or her data with. It would allow a mutual fund, for example, to provide an iGoogle gadget to their customers that would run on iGoogle and show the user the value of his or her mutual fund, but without giving Google any unique information about the user, such as a social security number or account number. In the future, maybe we will even see industries like banks use standards such as OAuth to allow their customers to authorize utility companies to perform direct debit from the user's bank account without that person having to actually share his or her bank account number with the utility vendor.
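For the technically curious, the token-plus-signature mechanics look roughly like this. Below is a minimal sketch of the HMAC-SHA1 request signing defined by OAuth 1.0; it is simplified (it ignores duplicate parameters and Authorization-header formatting), and every credential, token, and URL is a placeholder:

```python
import base64, hashlib, hmac, secrets, time
from urllib.parse import quote

def pct(s: str) -> str:
    return quote(s, safe="-._~")   # RFC 3986 percent-encoding used by OAuth

def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    # Signature base string: method, URL, and the sorted, encoded parameters.
    norm = "&".join(f"{pct(k)}={pct(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), pct(url), pct(norm)])
    key = f"{pct(consumer_secret)}&{pct(token_secret)}".encode()
    return base64.b64encode(
        hmac.new(key, base.encode(), hashlib.sha1).digest()).decode()

params = {
    "oauth_consumer_key": "example-consumer",   # placeholder credentials
    "oauth_token": "opaque-user-token",         # the per-site opaque token
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": secrets.token_hex(8),
    "oauth_version": "1.0",
}
params["oauth_signature"] = oauth_signature(
    "GET", "https://example.com/feeds/contacts", params, "consumer-secret")
print(params["oauth_signature"])
```

Note how the user is represented only by the opaque oauth_token: the consuming site never needs the user's account number or other identifying detail, which is the privacy property described above.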
The OAuth community is continuing to enhance this standard and is very interested in having more companies engaged with its development. The OAuth.net website has more details about the current standard, and I maintain a website with advanced information about Google's use of OAuth, including work on integrating OAuth with desktop apps, and integrating with federation standards such as OpenID and SAML. If you're interested in engaging with the OAuth community, please get in touch with us.
Malware? We don't need no stinking malware!
October 24, 2008
Written by Oliver Fisher
"This site may harm your computer"
You may have seen those words in Google search results — but what do they mean? If you click the search result link you get another warning page instead of the website you were expecting. But if the web page was your grandmother's baking blog, you're still confused. Surely your grandmother hasn't been secretly honing her l33t computer hacking skills at night school. Google must have made a mistake and your grandmother's web page is just fine...
I work with the team that helps put the warning in Google's search results, so let me try to explain. The good news is that your grandmother is still kind and loves turtles. She isn't trying to start a botnet or steal credit card numbers. The bad news is that her website or the server that it runs on probably has a security vulnerability, most likely from some out-of-date software. That vulnerability has been exploited and malicious code has been added to your grandmother's website. It's most likely an invisible script or iframe that pulls content from another website that tries to attack any computer that views the page. If the attack succeeds, then viruses, spyware, key loggers, botnets, and other nasty stuff will get installed.
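To show what such an injection can look like, here is a deliberately naive sketch that flags zero-size iframes. Production scanners do far more than pattern-matching HTML (attackers obfuscate their code heavily), so treat this purely as an illustration of the injected markup:

```python
import re

# Naive illustration only: flags the classic injected zero-size iframe.
# Real scanners render pages in instrumented environments instead.
HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]+(?:width|height)\s*=\s*["\']?0\b[^>]*>', re.IGNORECASE)

def looks_injected(html: str) -> bool:
    return bool(HIDDEN_IFRAME.search(html))

sample = '<iframe src="http://bad.example/x" width=0 height=0></iframe>'
print(looks_injected(sample))   # True
```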
If you see the warning on a site in Google's search results, it's a good idea to pay attention to it. Google has automatic scanners that are constantly looking for these sorts of web pages. I help build the scanners and continue to be surprised by how accurate they are. There is almost certainly something wrong with the website even if it is run by someone you trust. The automatic scanners make unbiased decisions based on the malicious content of the pages, not the reputation of the webmaster.
Servers are just like your home computer and need constant updating. There are lots of tools that make building a website easy, but each one adds some risk of being exploited. Even if you're diligent and keep all your website components updated, your web host may not be. They control your website's server and may not have installed the most recent OS patches. And it's not just innocent grandmothers that this happens to. There have been warnings on the websites of banks, sports teams, and corporate and government websites.
Uh-oh... I need help!
Now that we understand what the malware label means in search results, what do you do if you're a webmaster and Google's scanners have found malware on your site?
There are some resources to help clean things up. The Google Webmaster Central blog has some tips and a quick security checklist for webmasters. Stopbadware.org has great information, and their forums have a number of helpful and knowledgeable volunteers who may be able to help (sometimes I'm one of them). You can also use the Google SafeBrowsing diagnostics page for your site (http://www.google.com/safebrowsing/diagnostic?site=<site-name-here>) to see specific information about what Google's automatic scanners have found. If your site has been flagged, Google's Webmaster Tools lists some of the URLs that were scanned and found to be infected.
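If you'd rather build that diagnostic link programmatically, here is a one-liner sketch following the URL template above (the site name is just an example):

```python
from urllib.parse import quote

def diagnostic_url(site: str) -> str:
    # Builds the SafeBrowsing diagnostic URL shown above.
    return ("http://www.google.com/safebrowsing/diagnostic?site="
            + quote(site, safe=""))

print(diagnostic_url("example.com"))   # placeholder site name
```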
Once you've cleaned up your website, use Google's Webmaster Tools to request a malware review. The automatic systems will rescan your website and the warning will be removed if the malware is gone.
Advance warning
I often hear webmasters asking Google for advance warning before a malware label is put on their website. When the label is applied, Google usually emails the website owners and then posts a warning in Google's Webmaster Tools. But no warning is given ahead of time - before the label is applied - so a webmaster can't quickly clean up the site before the label appears.
But, look at the situation from the user's point of view. As a user, I'd be pretty annoyed if Google sent me to a site it knew was dangerous. Even a short delay would expose some users to that risk, and it doesn't seem justified. I know it's frustrating for a webmaster to see a malware label on their website. But, ultimately, protecting users against malware makes the internet a safer place and everyone benefits, both webmasters and users.
Google's Webmaster Tools has started a test to provide warnings to webmasters that their server software may be vulnerable. Responding to that warning and updating server software can prevent your website from being compromised with malware. The best way to avoid a malware label is to never have any malware on the site!
Reviews
You can request a review via Google's Webmaster Tools and you can see the status of the review there. If you think the review is taking too long, make sure to check the status. Finding all the malware on a site is difficult and the automated scanners are far more accurate than humans. The scanners may have found something you've missed and the review may have failed. If your site has a malware label, Google's Webmaster Tools will also list some sample URLs that have problems. This is not a full list of all of the problem URLs (because that's often very, very long), but it should get you started.
Finally, don't confuse a malware review with a request for reconsideration. If Google's automated scanners find malware on your website, the site will usually not be removed from search results. There is also a different process that removes spammy websites from Google search results. If that's happened and you disagree with Google, you should submit a reconsideration request. But if your site has a malware label, a reconsideration request won't do any good — for malware you need to file a malware review from the Overview page.
How long will a review take?
Webmasters are eager to have a Google malware label removed from their site and often ask how long a review of the site will take. Both the original scanning and the review process are fully automated. The systems analyze large portions of the internet, which is a big place, so the review may not happen immediately. Ideally, the label will be removed within a few hours. At its longest, the process should take a day or so.
New spam and virus trends from Enterprise
August 12, 2008
Written by Amanda Kleha, Google Apps Security & Compliance team
The Google Apps Security & Compliance team, which provides email and web security for more than 40,000 companies, regularly tracks trends in spam, viruses, and other threats. Check out some of our latest findings over on the Enterprise blog. Also, on Friday, August 15, at 10:00 am PT, we'll be hosting a webinar on keeping your business safe from web and email threats -- tune in if you'd like to learn more.
Keyczar: Safe and Simple Cryptography
August 11, 2008
Written by Steve Weis
Cryptography is notoriously hard to get right and, if improperly used, can create serious security holes. Common mistakes include using the wrong cipher modes or obsolete algorithms, composing primitives in an unsafe manner, hard-coding keys in source code, or failing to anticipate the need for future key rotation. With these risks in mind, we're pleased to announce the open-source release of Keyczar.
Keyczar is a cryptographic toolkit that supports encryption and authentication for both symmetric and public-key algorithms. It addresses some of the aforementioned issues by choosing safe defaults, tagging outputs with key version information, and providing a simple application programming interface. Keyczar's key versioning system makes it easy to rotate and revoke keys, without worrying about backward compatibility or making any changes to source code.
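To give a feel for the "simple API" claim, here is the basic encrypt/decrypt round trip as we understand Keyczar's Python interface (the key-set path is a placeholder, and the key set itself is created beforehand with the keyczart tool):

```python
from keyczar import keyczar  # Keyczar's Python package

# The key set is created out-of-band, e.g. (as documented upstream):
#   keyczart create --location=/path/to/keys --purpose=crypt
#   keyczart addkey --location=/path/to/keys --status=primary
crypter = keyczar.Crypter.Read("/path/to/keys")   # placeholder path

ciphertext = crypter.Encrypt("secret data")  # output tagged with key version
plaintext = crypter.Decrypt(ciphertext)      # old versions still decrypt
assert plaintext == "secret data"
```

Because each ciphertext carries its key version, rotating to a new primary key doesn't break decryption of data encrypted under older versions.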
We look forward to working with the open source community and continuing to make cryptography safer and easier to use. To download Keyczar or for more information, please visit our Google Code project and discussion group.
Are you using the latest web browser?
July 16, 2008
Written by Thomas Duebendorfer
In view of mass defacements of hundreds of thousands of web pages - with the intent to misuse them to launch drive-by download attacks - security researchers from ETH Zurich, Google, and IBM Internet Security Systems were interested in looking at the other side of the attack: the web browser. By analyzing the web browser versions seen in visits to Google websites, they have shown that more than 600 million Internet users don't use the latest version of their browser.
Slow migration to latest browser version
The researchers' paper, entitled "Understanding the Web Browser Threat", shows that as of June 2008, only 59.1% of Internet users worldwide use the latest major version of their preferred web browser. Firefox users are the most attentive: 92.2% of them surfed with Firefox 2, the latest major version before the recently released 3.0. Only 52.5% of Microsoft Internet Explorer users have updated to version 7, which is the most secure according to multiple publicly-cited Microsoft experts (among them Sandi Hardmeier). The study revealed that 637 million Internet users worldwide who use web browsers are either not running the latest version of their preferred browser or have not installed the latest patches. These users are vulnerable to exploitation due to their web browser's "built-in" vulnerabilities and the lack of more recent security mechanisms such as improved phishing protection.
Neglected security patches
Over the past 18 months, the study also shows, a maximum of 83.3% of Firefox users were using the latest major version of the web browser and also had all current patches installed (i.e. latest minor version). Only 56.1% and 47.6% of Opera and Internet Explorer users, respectively, were similarly utilizing fully-patched web browsers. Apple users are no better: since the public release of Safari 3, only 65.3% of users operate the latest Safari version.
[Figure: Maximum measured share of users surfing the web with the most secure versions of Firefox, Safari, Opera, and Internet Explorer in June 2008, as seen on Google websites.]
Obsolete browser warning
The study's most important finding is that technical measures now in place do not sufficiently guarantee browser security, and that users' security awareness must be further developed. The problem is that most users are unaware that they are not using their browser's latest version. It must be made clear to web browser users that outdated software is associated with significantly higher risk. The researchers therefore suggest that web browsers display a visible warning about missing security patches, analogous to the 'best before' date in the perishable food industry. Software updates must also be made easier to find. The resulting transparency would go far in contributing to end-user awareness of software weaknesses, and allow users to better evaluate risks.
Example "best before" implementation on a Web browser
As a side effect, having users migrate faster to the latest browser version would not only increase security but also make the lives of webmasters easier, as they would need to test and optimize websites for fewer older versions of web browsers.
Meet ratproxy, our passive web security assessment tool
July 1, 2008
Posted by Michal Zalewski
We're happy to announce that we've just open-sourced ratproxy, a passive web application security assessment tool that we've been using internally at Google. This utility, developed by our information security engineering team, is designed to transparently analyze legitimate, browser-driven interactions with a tested web property and automatically pinpoint, annotate, and prioritize potential flaws or areas of concern.
The proxy analyzes problems such as cross-site script inclusion threats, insufficient cross-site request forgery defenses, caching issues, cross-site scripting candidates, potentially unsafe cross-domain code inclusion schemes, information leakage scenarios, and much more. (A more-detailed discussion of these features and information on securing vulnerable applications is provided here.) Compared with more-traditional active crawlers, or with fully manual request inspection and modification frameworks, this approach offers several significant advantages in terms of minimized overhead; marginalized risk of site disruptions; high coverage of complex, client-driven application states in web 2.0 solutions; and insight into dynamic cross-domain trust models.
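Because the proxy is passive, you drive the traffic yourself while it observes. As a sketch, scripted test traffic could be routed through the locally running proxy like this; the listening port and target URL are assumptions here, so consult the ratproxy documentation for the actual invocation and defaults:

```python
import requests  # third-party: pip install requests

# Route scripted traffic through the locally running passive proxy.
# The port is an assumption; check ratproxy's README for real flags.
PROXIES = {"http": "http://127.0.0.1:8080",
           "https": "http://127.0.0.1:8080"}

session = requests.Session()
session.proxies.update(PROXIES)

# Exercise the tested web property as a normal user would; the proxy
# observes and annotates interactions without injecting its own requests.
response = session.get("http://app.example.test/login")  # placeholder URL
print(response.status_code)
```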
We decided to make this tool freely available as open source because we feel it will be a valuable contribution to the information security community, helping advance the community's understanding of security challenges associated with contemporary web technologies. We believe that responsible security research brings a net overall benefit to the safety of the Web as a whole, and have released this tool explicitly to support that kind of research.
To download the proxy, please visit this page. Also, please keep in mind that the proxy is designed solely to highlight interesting patterns in web applications; further analysis by a security professional is often required to interpret the results and their significance for the tested platform.
Safe Browsing Diagnostic To The Rescue
May 15, 2008
Posted by Niels Provos
We've been protecting Google users from malicious web pages since 2006 by showing warning labels in Google's search results and by publishing the data via the Safe Browsing API to client programs such as Firefox and Google Desktop Search. To create our data, we've built a large-scale infrastructure to automatically determine if web pages pose a risk to users. This system has proven to be highly accurate, but we've noted that it can sometimes be difficult for webmasters and users to verify our results, as attackers often use sophisticated obfuscation techniques or inject malicious payloads only under certain conditions. With that in mind, we've developed a Safe Browsing diagnostic page that will provide detailed information about our automatic investigations and findings.
The Safe Browsing diagnostic page of a site is structured into four different categories:
What is the current listing status for [the site in question]?
We display the current listing status of a site and also information on how often a site or parts of it were listed in the past.
What happened when Google visited this site?
This section includes information on when we analyzed the page, when it was last malicious, what kind of malware we encountered, and so forth. To help webmasters clean up their sites, we also provide information about the sites that were serving malicious software to users and which sites might have served as intermediaries.
Has this site acted as an intermediary resulting in further distribution of malware?
Here we indicate whether this site has facilitated the distribution of malicious software in the past. This could be an advertising network or statistics site that accidentally participated in the distribution of malicious software.
Has this site hosted malware?
Here we indicate whether the site has hosted malicious software in the past. We also provide information on the victim sites that initiated the distribution of malicious software.
All information we show covers the last ninety days and does not go further into the past. Initially, we are making the Safe Browsing diagnostic page available in two ways: we are adding a link on the interstitial page a user sees after clicking on a search result with a warning label, and also an "additional information" link in Firefox 3's warning page. Of course, for anyone who wants to know more about how our detection system works, we also provide a detailed tech report [PDF] including an overview of the detection system and in-depth data analysis.
Contributing To Open Source Software Security
May 5, 2008
Written by Will Drewry
From operating systems to web browsers, open source software plays a critical role in the operation of the Internet. The security of open source software is therefore quite important, as it often interacts with personal information -- ranging from credit card numbers to medical records -- that needs to be kept safe. There has been a long-lived discussion on whether open source software is inherently more secure than closed source software. While popular opinion has begun to tilt in favor of openness, there are still arguments for both sides. Instead of diving into those treacherous waters (or giving weight to the idea of "inherent security"), I'd like to focus on the fruits of this extensive discussion. In particular, David A. Wheeler laid out a "bottom line" in his Secure Programming for Linux and Unix HOWTO which applies to both open and closed source software. It predicates real security in software on three actions:
people need to actually review the code
developers/reviewers need to know how to write secure code
once found, security problems need to be fixed quickly, and their fixes distributed quickly
While distilling anything down to three steps makes it seem easy, this isn't necessarily the case. Given how important open source software is to Google, we've attempted to contribute to this bottom line. As Chris said before, our engineers are encouraged to contribute both software and time to open source efforts. We regularly submit the results of our automated and manual security analysis of open source software back to the community, including related software engineering time. In addition, our engineering teams frequently release software under open source licenses. This software was written either with security in mind, such as with security testing tools, or by engineers well-versed in the security challenges of their project.
These efforts leave one area completely unaddressed -- getting security problems fixed quickly, and then getting those fixes distributed quickly. It has been unclear how to best resolve this issue. There is no centralized security authority for open source projects, and operating system distribution publishers are the best bet for getting updates to the highest number of users. Even if users can get updates in this manner, how should a security researcher contact a particular project's author? If there's a potential, security-related issue, who can help evaluate the risk for a project? What resources are there for projects that have been compromised, but have no operational security background?
I'm proud to announce that Google has sponsored participation in oCERT, the open source computer emergency response team. oCERT is a volunteer workforce of security professionals from the open source community with the goal of providing security vulnerability mediation and incident response services to open source projects. It will strive to contact software authors with all security reports and aid in debugging and patching, especially in cases where the author, or the reporter, doesn't have a background in security. Reliable contacts for projects, publishers, and vendors will be maintained where possible and used for notification when issues arise and fixes are available for mediated issues. Additionally, oCERT will aid projects of any size with responses to security incidents, such as server compromises.
It is my hope that this initiative will not only aid in remediating security issues in a timely fashion, but also provide a means for additional security contributions to the open source community.
All Your iFrame Are Point to Us
February 11, 2008
Written by Niels Provos, Anti-Malware Team
It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed. Our research paper is currently under peer review, but we are making a technical report [PDF] available now. Although our technical report contains a lot more detail, we present some high-level findings here:
Search Results Containing a URL Labeled as Harmful
The above graph shows the percentage of daily queries that contain at least one search result labeled as harmful. In the past few months, more than 1% of all search results contained at least one result that we believe to point to malicious content and the trend seems to be increasing.
Browsing Habits
Good computer hygiene, such as running automatic updates for the operating system and third-party applications, as well as installing anti-virus products, goes a long way in protecting your home computer. However, we have been wondering if users' browsing habits impact the likelihood of encountering malicious web pages. To study this aspect, we took a sample of ~7 million URLs and mapped them to DMOZ categories. Although we found that adult web pages may increase the risk of exploitation, each DMOZ category was affected.
Malicious Content Injection
To understand if malicious content on a web server is due to poor web server security, we analyzed the version numbers reported by web servers on which we found malicious pages. Specifically, we looked at the Apache and PHP versions exported as part of a server's response. We found that over 38% of both Apache and PHP versions were outdated, increasing the risk of remote content injection on these servers.
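The version data in question comes from ordinary response headers. As a rough sketch of that kind of check (the "current version" floors below are invented for illustration, not the study's criteria, and the site is a placeholder):

```python
import re
from urllib.request import urlopen

# Illustrative only: the version floors are made up, not the study's.
MIN_APACHE = (2, 2)
MIN_PHP = (5, 2)

def parse_version(header_value, product):
    # e.g. "Apache/2.0.52 (Red Hat)" -> (2, 0)
    m = re.search(product + r"/(\d+)\.(\d+)", header_value or "")
    return (int(m.group(1)), int(m.group(2))) if m else None

headers = urlopen("http://example.com/").headers        # placeholder site
apache = parse_version(headers.get("Server"), "Apache")
php = parse_version(headers.get("X-Powered-By"), "PHP")
for name, found, floor in (("Apache", apache, MIN_APACHE),
                           ("PHP", php, MIN_PHP)):
    if found and found < floor:
        print(f"{name} {found[0]}.{found[1]} looks outdated")
```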
Our "
Ghost In the Browser [PDF]
" paper highlighted third-party content as one potential vector of malicious content. Today, a lot of third-party content is due to advertising. To assess the extent to which advertising contributes to drive-by downloads, we analyze the distribution chain of malware, i.e. all the intermediary URLs a browser downloads before reaching a malware payload. We inspected each distribution chain for membership in about 2,000 known advertising networks. If any URL in the distribution chain corresponds to a known advertising network, we count the whole page as being infectious due to Ads. In our analysis, we found that on average 2% of malicious web sites were delivering malware via advertising. The underlying problem is that advertising space is often syndicated to other parties who are not known to the web site owner. Although non-syndicated advertising networks such as Google Adwords are not affected, any advertising networks practicing syndication needs to carefully study this problem. Our
technical report [PDF]
contains more detail including an analysis based on the popularity of web sites.
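The advertising analysis boils down to a set-membership test over each distribution chain; a toy sketch with made-up network domains:

```python
from urllib.parse import urlparse

# Toy version of the chain analysis; the network list is illustrative.
KNOWN_AD_NETWORKS = {"ads.example-network.com", "syndication.example.net"}

def infectious_via_ads(distribution_chain: list[str]) -> bool:
    """True if any intermediary URL belongs to a known ad network."""
    return any(urlparse(u).hostname in KNOWN_AD_NETWORKS
               for u in distribution_chain)

chain = ["http://victim.example/",
         "http://ads.example-network.com/serve",
         "http://payload.example/exploit"]
print(infectious_via_ads(chain))   # True
```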
Structural Properties of Malware Distribution
Finally, we also investigated the structural properties of malware distribution sites. Some malware distribution sites had as many as 21,000 regular web sites pointing to them. We also found that the majority of malware was hosted on web servers located in China. Interestingly, Chinese malware distribution sites are mostly pointed to by Chinese web servers.
We hope that an analysis such as this will help us to better understand the malware problem in the future and allow us to protect users all over the Internet from malicious web sites as best as we can. One thing is clear - we have a lot of work ahead of us.