A grateful season

The holidays are a time for giving, and Googlers across the globe have found some creative ways to give back to their communities this season. From raising money and crafting greeting cards to building gingerbread houses and giving blood, Googlers from east to west have been busy spreading good cheer. We've highlighted just a few of these efforts here, and we're looking forward to many more opportunities to give back in the new year.

London
The UK engineering recruitment team started to plan its annual Secret Santa gift exchange. But as they began thinking about last year, they realized that hardly anyone on the team could remember what they'd received, let alone given. Instead of spending 10 pounds on gag gifts, they decided to use the money to make a difference. After discovering that a local children's hospital was in desperate need of gifts, they quickly raised enough money to buy a Nintendo Wii gaming console for one of the wards.


Mexico City
In the past, Google has held a "Doodle 4 Google" contest in the US, the UK, and Australia, inviting kids K-12 to submit a homepage doodle inspired by a particular theme. This year Mexico held its first such contest (theme: "the Mexico we want"). For each doodle submitted, Google made a donation to a non-profit that works to eradicate childhood malnutrition in Mexico. In total, more than 70,000 kilos (154,000 pounds) of food and aid were donated. The winner, Ana Karen Villagómez, was recently recognized at a ceremony in Mexico City; her doodle (pictured below) will appear on the Google homepage on January 6.


Boston and beyond
Boston Googlers delivered gifts to some very grateful students at a local school and spent the morning reading and playing with the children. The Chicago office held its first-ever holiday blood drive, donating 36 units of blood. And the Ann Arbor office held a "CANstruction" competition, creating sculptures out of canned food, personal items and baby items, which were all later donated.



We hope that your holiday season is filled with plenty of time to slow down and reflect on what's important to you, and that you too feel inspired to find ways to give back to your own community in the new year.

Tracking Santa: the backstory

When I look back on four years of tracking Old St. Nick on Christmas Eve, I can't help but smile. The Santa tracker has really come a long way. I always thought NORAD's Santa Tracker was a great holiday tradition, but I felt like it could have been even better if people could visualize exactly where Santa was on Christmas Eve. So in 2004, shortly after Keyhole was acquired by Google, we followed Santa in the "Keyhole Earth Viewer" — Google Earth's original name — and we called it the "Keyhole Santa Radar." The audience was relatively small since Keyhole was still a for-pay service at that point, and we hosted everything on a single machine shared with the Keyhole Community BBS server. We probably should have had three separate servers to host the Santa tracker — that first year, we had only a portion of a single machine. That night, about 25,000 people kept tabs on Santa and, needless to say, wreaked some havoc on our servers!

Over the next two years, our Santa-tracking efforts improved dramatically. By December 2005, Keyhole had become Google Earth and our audience had become much, much larger. Our "Santa Radar" team also grew: we used greatly improved icons from Dennis Hwang, the Google Doodler, and set up 20 machines to serve the tracking information. My colleague Michael Ashbridge took over the software and more than 250,000 people tracked Santa on Google Earth that Christmas Eve. In 2006, Google acquired SketchUp, a 3D modeling software that enabled us to include models of Santa's North Pole workshop and sleigh. We also incorporated a tracking feed directly from NORAD's headquarters, and we were now displaying NORAD's information in Google Earth. That year, more than a million people tracked Santa.

In 2007, Google became NORAD's official Santa Tracking technology partner and hosted www.noradsanta.org. In addition to tracking Santa in Google Earth, we added a Google Maps tracker and integrated YouTube videos into the journey as well. Now, we had Santa on the map and on "Santa Cam" arriving in several different locations around the world, with commentary in six different languages. The heavy traffic — several millions of users — put Google's infrastructure to the test, but with some heroic work by our system reliability engineers, the Santa Tracker worked continuously.

This year, Googler Bruno Bowden is in charge of the Santa software, and we have further upgraded our server capacity. We're hoping this version of the tracker will be the best yet. In addition to our "Santa Cam" footage, geo-located photos from Panoramio will be viewable in Google Maps for each of Santa's stops that don't include video. We've also included a few new ways to track Santa. With Google Maps for mobile, anyone can keep tabs on him from their mobile phones (just activate GMM and search for "norad santa"). You can also receive updates from "Bitz the Elf" on Twitter by following @noradsanta. And of course, be sure to visit www.noradsanta.org tomorrow morning starting at 6:00 am EST when Santa's journey begins. Enjoy, and see you in 2009!

New search-by-style options for Google Image Search

Many of us use Google Image Search to find imagery of people, clip art for presentations, diagrams for reports, and of course symbols and patterns for artistic inspiration. Unfortunately, searching for the perfect image can be challenging if the search results match the meaning of your query but aren't in a style that's useful to you. So some time ago we launched face search, which lets you limit your search results to only images containing faces (see a search without and with this option). More recently we also rolled out photo search, which limits results to images that contain photographic elements, ignoring many cartoons and drawings which may not be useful to you (see a search without and with this option).

Today we're pleased to extend this capability to clip art and line drawings. To see the effect of these new options, let's take a look at the first few results for "Christmas," one of our most popular queries on Image Search right now.

Photo content

Clip art

Line drawing


All of these options can be selected from the "Any content" drop-down in the blue title bar on any search results page, or by selecting one of the "Content types" on the Advanced Image Search page. The good news: no extra typing! In all these examples our query remained exactly the same; we just restricted our results to different visual styles. So whether you're interested in holiday wreaths, Celtic patterns, or office clip art, it just became a lot easier to find the images you're looking for.
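
If you prefer to script your image queries, the same restriction can also be expressed as a URL parameter. The short sketch below builds such a query URL; the imgtype parameter name and its values are assumptions modeled on how the earlier face-search option was surfaced, so treat this as illustrative rather than a documented interface.

    # Hypothetical sketch: build an Image Search URL restricted to one visual style.
    # NOTE: the "imgtype" parameter and its values are assumptions, not a documented API.
    from urllib.parse import urlencode

    def image_search_url(query, content_type=None):
        params = {"q": query}
        if content_type:  # assumed values: "face", "photo", "clipart", "lineart"
            params["imgtype"] = content_type
        return "http://images.google.com/images?" + urlencode(params)

    print(image_search_url("Christmas", "clipart"))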

Black Googlers Network: building community

We believe great ideas can come from anywhere and everyone. And we aspire to be an organization that reflects global diversity, because we know that a world's worth of perspectives, ideas and cultures leads to the creation of better products and services. We have more than a dozen employee-driven resource groups, from Gayglers to GWE (Google Women Engineers), that actively participate around the world in building community and driving policy at Google. This is the next post in our Interface series, which takes a look at valuing people's similarities and differences in the workplace. For more information on how Google fosters an inclusive work environment, visit Life at Google on our Jobs site. – Ed.

It's been a busy few months for the Black Googlers Network (BGN). One of our group's core goals is to build a community that keeps us connected, facilitates the sharing of ideas, and participates in community outreach. We sponsored a variety of events this fall across many of our offices, giving us the opportunity to give back and have some fun while doing it.

To kick things off, a group of us from the Mountain View, New York, Ann Arbor, Chicago and Atlanta offices, to name a few, rolled up our sleeves for our first annual service trip. We headed to New Orleans in September to aid in the Hurricane Katrina rebuilding efforts. Undeterred by Hurricane Gustav, which unexpectedly hit the coast the week before we arrived, we managed to make some adjustments to flights and itineraries and were some of the first volunteers back into the city.

We partnered with the St. Bernard Project, learning everything from laying flooring to installing drywall as we worked on three homes. Additionally, we joined a strategy session with The Idea Village, helping them kick off their newest initiative, the 504ward Project. The opportunity to serve the community in such a meaningful way while getting to know BGN members was unique. We each put our minds, bodies, and souls into the city and the experience.



Next, BGN participated in the United Negro College Fund's annual Walk-a-thon in Oakland, CA. Our Google-UNCF partnership also includes an annual scholarship for college students pursuing a degree in engineering or computer science, and we're continuing to explore different ways to support and encourage underrepresented students.

This month, we're coming together in many of our offices for the holidays, giving ourselves a chance to catch up and take stock of the work we've done over the past year. We won't slow down for long, though: soon we'll begin the exciting process of planning our new initiatives for 2009.

Blog gadget 2.0

Back in September we introduced an iGoogle gadget that makes it possible to read recent posts from all of our corporate blogs, right on your dashboard. With the help of developer Ben Lisbakken, we're ready to roll out the next version of the gadget, which translates posts into 34 languages. Using Google Translate, the gadget gives people all over the world access to posts they might otherwise be unable to read. The default setting translates posts into the language in which your browser is set, but you can also choose from any of our supported languages by going into the "Edit" setting (found in the "Menu" arrow in the right-hand corner). If you want to learn more about Google in Latin America or AdWords in Russia, for example, but haven't had the chance to learn Spanish or Russian, give the gadget a spin. While machine translation is not exact, and we're constantly working to improve the quality, hopefully this new feature lets you get the gist of the post.

Here's a list of the supported languages:
Arabic, Bulgarian, Catalan, Chinese, Chinese (simplified), Chinese (traditional), Croatian, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Ukrainian, Vietnamese


Just choose the category of blogs you would like to read and click the "Translate" button.


The gadget will translate the posts and give you the option to "Revert" back to the original language. And to read the entire blog in translation, just click on the blog title beneath the post.


We hope you have fun exploring the entire Google blogosphere.

Jean Bartik: the untold story of a remarkable ENIAC programmer

This guest post was written by Kathy Kleiman, who discovered the ENIAC Programmers 20 years ago and founded the ENIAC Programmers Project to record their stories and produce the first feature documentary about their work. More at www.eniacprogrammers.org. – Ed.

"For many years in the computing industry, the hardware was it, the software was considered an auxiliary thing."
– Jean Bartik

For more than 50 years, the women of the Electronic Numerical Integrator And Computer (ENIAC) were forgotten, and their role in programming the first all-electronic programmable computer and creating the software industry was lost. But this fall, old met young, and a great computer pioneer met today's Internet pioneers. It happened in Silicon Valley and it happened at Google.

A little over a month ago, the Computer History Museum (CHM) in Mountain View honored Jean Bartik with its Fellows Award. This lifetime achievement award recognized her work as a programmer of the ENIAC and leader of the team to convert ENIAC to a stored program machine.

The Fellows Award was a rousing celebration of Bartik, Bob Metcalfe and Linus Torvalds. The next night, Bartik returned to CHM to discuss her life story in An Evening with Jean Jennings Bartik, ENIAC Pioneer. More than 400 people attended. They laughed at Bartik's descriptions of the ENIAC Programmers' exploits and enjoyed her stories of “Technical Camelot,” Bartik's description of her days at Eckert and Mauchly Computer Corporation in the 1950s. This video captures the evening:





During the Q&A session, one audience member asked: “If you were working today, where would you want to work?” Without hesitation, Bartik replied “Google!” with a huge smile. Googlers in the audience cheered.

Two days later, Bartik and I went to Google. We were met by our hosts, Ellen Spertus, Robin Jeffries, Peter Toole and Stephanie Williams, and whisked onto the campus past scrolling screens of Google searches and beach volleyball courts.

In the cafeteria, two dozen Google Women Engineers joined us. They pushed their chairs close to Bartik and leaned in to catch every word. Bartik regaled them with stories of computing's pioneers – the genius of John Mauchly and J. Presper Eckert, co-inventors of the computer, and the ingenuity of Betty Holberton and Kay Mauchly Antonelli, fellow programmers and software creators. She shared the joys and struggles of those who created the computer industry.



After lunch we toured the campus. Bartik enjoyed seeing where Googlers work and the videoconferencing equipment they use to talk with colleagues around the world.

It is a visit we will never forget, and for me, its own moment in history. Twenty years ago, I discovered the ENIAC Programmers and learned their untold story. I founded the ENIAC Programmers Project to record their histories, seek recognition for them and produce the first feature documentary of their story. Our website provides more information about the documentary, WWII-era pictures and an opportunity to help change history. The stories Bartik shared with Googlers that day belong to the world.

Picasa 3 (and name tags) go global

A few months back, we announced some pretty big upgrades to Picasa and Picasa Web Albums for English-speaking users in the U.S. On the PC side, we rolled out a brand-new version of Picasa, with a slew of new tools like effortless web sync, movie editing, and photo-retouching capabilities. On the web, we launched "name tags," a new feature that automatically helps organize your photo collection based on who's in each of your pictures.

Today, just in time for your holiday snapshots, these changes (and more!) are available in all of the 38 languages we currently support. If you've been waiting to try the new photo-collage feature in Picasa, or been curious to see how clustering technology can automatically find similar faces across your photo collection, now's the time to download Picasa 3.1 or opt in to name tags on Picasa Web Albums.

Of course, having a truly global audience sharing and commenting on photos is one of the things that makes Picasa special. The people and places you'll spot on our Explore page attest to this, as do the multilingual comments users receive on their most popular public albums. That's why we just launched automatic comment translation on Picasa Web Albums, which harnesses Google Translate to make sure you know that "美麗的落日" means "Beautiful sunset!"

In fact, if you look closely, you'll see that we've recently rolled out a number of other small but meaningful changes across Picasa Web Albums, in all 38 languages -- ranging from improved sharing to better video playback. Swing by the Google Photos blog to learn more about what's new.

(Or, if you speak British or American English, Dutch, French, German, Italian, Japanese, Korean, Polish, Brazilian or European Portuguese, Russian, Spanish, Simplified Chinese, Traditional Chinese, Turkish, Danish, Finnish, Norwegian, Swedish, Bulgarian, Catalan, Croatian, Czech, Greek, Hindi, Hungarian, Indonesian, Latvian, Lithuanian, Romanian, Serbian, Slovakian, Slovenian, Tagalog, Thai, Vietnamese, or Ukrainian, just visit Picasa Web Albums and see for yourself!)



Gingerbread architecture for all

(Cross-posted from the Google SketchUp Blog)

Chilly weather, hot chocolate, holiday celebrations... I'm proud to kick off our sweetest SketchUp modeling challenge ever: the first annual Google SketchUp Gingerbread House Design Competition. To make it a little easier to design the gingerbread house of your dreams, I modeled a blank house to get you started. Go ahead and download it from the 3D Warehouse, then follow the instructions in the file.



I also built a selection of decorations (candy canes, gumdrops, wafer roof tiles) that you can use to spiff up your model. Of course, you're welcome to do anything you like; it's your masterpiece. When you're finished, don't forget to label your gingerbread house with the tag "gingerbread2009" and upload it to the 3D Warehouse. The competition deadline is January 4th at midnight, Pacific Standard Time.

This undertaking is all about having fun with SketchUp, so the prizes will be glory-based. (What did you expect: a gingerbread flat-screen TV?) We'll award the following prizes, and announce the winners here and on the SketchUpdate about a week after the competition closes on January 4th.
  • 1st, 2nd and 3rd place – for the best overall gingerbread houses in the collection
  • The 'Sprinkles' Prize – for the best additions to the base model (the crazier, the better)
  • The 'Swirl' Prize – for the best use of Dynamic Components in the model
  • The 'Sweet-tooth' Prize – for the most creative use of a single candy ingredient in a model
If you're looking for inspiration, take a gander at what folks did with Santa's sleigh last year. Have fun, and happy holidays.

Net neutrality and the benefits of caching

(Cross-posted from the Google Public Policy Blog)

One of the first posts I wrote for this blog last summer tried to define what we at Google mean when we talk about the concept of net neutrality.

Broadband providers -- the on-ramps to the Internet -- should not be allowed to prioritize traffic based on the source, ownership or destination of the content. As I noted in that post, broadband providers should have the flexibility to employ network upgrades, such as edge caching. However, they shouldn't be able to leverage their unilateral control over consumers' broadband connections to hamper user choice, competition, and innovation. Our commitment to that principle of net neutrality remains as strong as ever.

Some critics have questioned whether improving Web performance through edge caching -- temporary storage of frequently accessed data on servers that are located close to end users -- violates the concept of network neutrality. As I said last summer, this myth -- which unfortunately underlies a confused story in Monday's Wall Street Journal -- is based on a misunderstanding of the way in which the open Internet works.

Edge caching is a common practice used by ISPs and application and content providers in order to improve the end user experience. Companies like Akamai, Limelight, and Amazon's CloudFront provide local caching services, and broadband providers typically utilize caching as part of what are known as content distribution networks (CDNs). Google and many other Internet companies also deploy servers of their own around the world.

By bringing YouTube videos and other content physically closer to end users, site operators can improve page load times for videos and Web pages. In addition, these solutions help broadband providers by minimizing the need to send traffic outside of their networks and reducing congestion on the Internet's backbones. In fact, caching represents one type of innovative network practice encouraged by the open Internet.

Google has offered to "colocate" caching servers within broadband providers' own facilities; this reduces the provider's bandwidth costs since the same video wouldn't have to be transmitted multiple times. We've always said that broadband providers can engage in activities like colocation and caching, so long as they do so on a non-discriminatory basis.

All of Google's colocation agreements with ISPs -- which we've done through projects called OpenEdge and Google Global Cache -- are non-exclusive, meaning any other entity could employ similar arrangements. Also, none of them require (or encourage) that Google traffic be treated with higher priority than other traffic. In contrast, if broadband providers were to leverage their unilateral control over consumers' connections and offer colocation or caching services in an anti-competitive fashion, that would threaten the open Internet and the innovation it enables.

Despite the hyperbolic tone and confused claims in Monday's Journal story, I want to be perfectly clear about one thing: Google remains strongly committed to the principle of net neutrality, and we will continue to work with policymakers in the years ahead to keep the Internet free and open.

P.S.: The Journal story also quoted me as characterizing President-elect Obama's net neutrality policies as "much less specific than they were before." For what it's worth, I don't recall making such a comment, and it seems especially odd given that President-elect Obama's supportive stance on network neutrality hasn't changed at all.

Update: Larry Lessig, Save the Internet, Public Knowledge, David Isenberg, Wired and others all found fault with today's piece too.

@Twitter: Welcome to Google Friend Connect

We know many of you enjoy using Twitter to see what people are talking about and to let others know what you've been up to, whether it's sharing a YouTube video or checking in on your friends' tweets. To help you and your Twitter network stay connected no matter where you are on the web, we're excited to announce that Google Friend Connect has integrated with Twitter. This means that when you join a friend connected site, you can choose to use your Twitter profile, discover people you follow on Twitter who are also members of the site, and quickly tweet that you have found a cool website.



To send a tweet about a site you have joined, click the invite link in the members gadget, then click the Twitter icon on the share tab. The next time your followers sign in to Twitter, they'll see your tweet containing a link to the interesting site you've found.

This integration with Twitter is an example of how we want to continue improving Friend Connect, extending the open social web and bringing social features to more places on the web.

Announcing "Browser Security Handbook"



Many people view the task of writing secure web applications as a very complex challenge - in part because of the inherent shortcomings of technologies such as HTTP, HTML, or JavaScript, and in part because of the subtle differences and unexpected interactions between various browser security mechanisms.

Through the years, we found that having a full understanding of browser-specific quirks is critical to making sound security design decisions in modern Web 2.0 applications. For example, the same user-supplied link may appear to one browser as a harmless relative address, while another could interpret it as a potentially malicious JavaScript payload. In another case, an application may rely on a particular HTTP request being impossible to spoof from within the browser in order to defend the security of its users, yet an attacker might easily subvert the safeguard by crafting the same request from within commonly installed browser extensions. If not accounted for, these differences can lead to trouble.
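
As a concrete illustration (not taken from the handbook itself), here is a minimal sketch of one conservative way an application can sidestep the link-parsing ambiguity described above: rather than guessing how each browser will interpret a user-supplied link, accept only relative addresses or schemes on an explicit allow list.

    # Minimal sketch: a conservative allow-list check for user-supplied links.
    # The normalization below (dropping non-printable characters) mimics the kind
    # of cleanup some browsers perform before deciding how to treat a link.
    from urllib.parse import urlparse

    ALLOWED_SCHEMES = {"http", "https"}

    def is_safe_link(url: str) -> bool:
        cleaned = "".join(ch for ch in url if ch.isprintable()).strip()
        scheme = urlparse(cleaned).scheme.lower()
        # Relative links have no scheme; anything else must be explicitly allowed.
        return scheme == "" or scheme in ALLOWED_SCHEMES

    print(is_safe_link("/images/logo.png"))        # True: harmless relative address
    print(is_safe_link("java\tscript:alert(1)"))   # False: collapses to a script payload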

In hopes of helping to make the Web a safer place, we decided to release our Browser Security Handbook to the general public. This 60-page document provides a comprehensive comparison of a broad set of security features and characteristics in commonly used browsers, along with (hopefully) useful commentary and implementation tips for application developers who need to rely on these mechanisms, as well as engineering teams working on future browser-side security enhancements.

Please note that given the sheer number of characteristics covered, we expect some kinks in the initial version of the handbook; feedback from browser vendors and security researchers is greatly appreciated.

Native Client: A Technology for Running Native Code on the Web



Most native applications can access everything on your computer – including your files. This access means that you have to make decisions about which apps you trust enough to install, because a malicious or buggy application might harm your machine. Here at Google we believe you shouldn't have to choose between powerful applications and security. That's why we're working on Native Client, a technology that seeks to give Web developers the opportunity to make safer and more dynamic applications that can run on any OS and any browser. Today, we're sharing our technology with the research and security communities for their feedback to help make this technology more useful and more secure.

Our approach is built around a software containment system called the inner-sandbox that is designed to prevent unintended interactions between a native code module and the host system. The inner-sandbox uses static analysis to detect security defects in untrusted x86 code. Previously, such analysis has been challenging due to such practices as self-modifying code and overlapping instructions. In our work, we disallow such practices through a set of alignment and structural rules that, when observed, enable the native code module to be disassembled reliably and all reachable instructions to be identified during disassembly. With reliable disassembly as a tool, it's then feasible for the validator to determine whether the executable includes unsafe x86 instructions. For example, the validator can determine whether the executable includes instructions that directly invoke the operating system that could read or write files or subvert the containment system itself.
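
To make the idea of reliable disassembly plus structural rules more tangible, here is a toy sketch of the kind of checks such a validator performs. It operates on pre-decoded pseudo-instructions rather than raw x86 bytes, and the 32-byte bundle size and banned-opcode list are simplifications of the rules described above, not the actual Native Client validator.

    # Toy sketch of a Native-Client-style static validator (NOT the real thing).
    # Each instruction is a pre-decoded (offset, length, mnemonic, target) tuple.
    BUNDLE = 32
    BANNED = {"int", "syscall", "sysenter"}  # direct OS invocation is disallowed

    def validate(instructions):
        starts = {off for off, _, _, _ in instructions}
        for off, length, mnemonic, target in instructions:
            if mnemonic in BANNED:
                return False, f"banned instruction {mnemonic!r} at {off:#x}"
            if off // BUNDLE != (off + length - 1) // BUNDLE:
                return False, f"instruction at {off:#x} straddles a {BUNDLE}-byte boundary"
            if mnemonic == "jmp_direct" and target not in starts:
                return False, f"direct jump at {off:#x} targets an unidentified instruction"
            if mnemonic == "jmp_indirect" and target != "masked":
                return False, f"unmasked indirect jump at {off:#x}"
        return True, "module passes the (toy) static checks"

    print(validate([(0, 2, "mov", None), (2, 2, "jmp_direct", 0)]))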

To learn more and help test Native Client, check out our post on the Google Code blog as well as our developer site. Our developer site includes our research paper and of course the source for the project under the BSD license.

We look forward to hearing what you think!

User Experience in the Identity Community

Eric Sachs & Ben Laurie, Google Security

One of the major conferences on Internet identity standards is the Internet Identity Workshop (IIW), a semiannual 'un-conference' where the sessions are not determined ahead of time. It is attended by a large set of people who work on Internet security and identity standards such as OAuth, OpenID, SAML, InfoCards, etc. A major theme within the identity community this year has been improving the user experience and growing the adoption of these technologies. The OpenID community is making great progress on user experience, with Yahoo, AOL, and Google quickly improving the support they provide (read a summary from Joseph Smarr of Plaxo). Similarly, the InfoCard community has been working on simplifying the user experience of InfoCard technology, including the updated CardSpace selector from Microsoft.

Another hot topic at IIW centered on how to improve the user experience when testing alternatives and enhancements to passwords that make them less susceptible to phishing attacks. Many websites and enterprises have tried these password enhancements and alternatives, but found that people complained they were hard to use, or that they weren't portable enough for people who use multiple computers, including web cafes and smart phones. We have published an article summarizing some of the community's current ideas for how to deploy these new authentication mechanisms using a multi-layered approach that minimizes the additional work required of users. We have also pulled together a set of videos showing how a number of these different approaches work with both web-based and desktop applications. We hope this information will be helpful to other websites and enterprises that are concerned about phishing.

Gmail security and recent phishing activity



We've seen some speculation recently about a purported security vulnerability in Gmail and the theft of several website owners' domains by unauthorized third parties. At Google we're committed to providing secure products, and we mounted an immediate investigation. Our results indicate no evidence of a Gmail vulnerability.

With help from affected users, we determined that the cause was a phishing scheme, a common method used by malicious actors to trick people into sharing their sensitive information. Attackers sent customized e-mails encouraging web domain owners to visit fraudulent websites such as "google-hosts.com" that they set up purely to harvest usernames and passwords. These fake sites had no affiliation with Google, and the ones we've seen are now offline. Once attackers gained the user credentials, they were free to modify the affected accounts as they desired. In this case, the attacker set up mail filters specifically designed to forward messages from web domain providers.

Several news stories referenced a domain theft from December 2007 that was incorrectly linked to a Gmail CSRF vulnerability. We did have a Gmail CSRF bug reported to us in September 2007 that we fixed worldwide within 24 hours of private disclosure of the bug details. Neither this bug nor any other Gmail bug was involved in the December 2007 domain theft.

We recognize how many people depend on Gmail, and we strive to make it as secure as possible. At this time, we'd like to thank the wider security community for working with us to achieve this goal. We're always looking at new ways to enhance Gmail security. For example, we recently gave users the option to always run their entire session using https.

To keep your Google account secure online, we recommend you only ever enter your Gmail sign-in credentials at web addresses starting with https://www.google.com/accounts, and never click through any warnings your browser may raise about certificates. For more information on how to stay safe from phishing attacks, see our blog post here.

OAuth for Secure Mashups



A year ago, a number of large and small websites announced a new open standard called OAuth. This standard is designed to provide a secure and privacy-preserving technique for enabling specific private data on one site to be accessed by another site. One popular reason for that type of cross-site access is data portability in areas such as personal health records (such as Google Health or Microsoft HealthVault), as well as social networks (such as OpenSocial-enabled sites). I originally became involved in this space in the summer of 2005, when Google started developing a feature called AuthSub, which was one of the precursors of OAuth. That was a proprietary protocol, but one that has been used by hundreds of websites to provide add-on services to Google Account users by getting permission from users to access data in their Google Accounts. In fact, that was the key feature that a few of us used to start the Google Health portability effort back when it was only a prototype project with a few dedicated Googlers.

However, with the development of a common Internet standard in OAuth, we see much greater potential for data portability and secure mash-ups. Today we announced that the gadget platform now supports OAuth, and the interoperability of this standard was demonstrated by new iGoogle gadgets that AOL and MySpace both built to enable users to see their respective AOL or MySpace mailboxes (and other information) while on iGoogle. However, to ensure the user's privacy, this only works after the user has authorized AOL or MySpace to make their data available to the gadget running on iGoogle. We also previously announced that third-party developers can build their own iGoogle gadgets that access the OAuth-enabled APIs for Google applications such as Calendar, Picasa, and Docs. In fact, since both the gadget platform and OAuth technology are open standards, we are working to help other companies who run services similar to iGoogle to enhance them with support for these standards. Once that is in place, these new OAuth-powered gadgets that are available on iGoogle will also work on those other sites, including many of the gadgets that Google offers for its own applications. This provides a platform for some interesting mash-ups. For example, a third-party developer could create a single gadget that uses OAuth to access both Google OAuth-enabled APIs (such as a Gmail user's address book) and MySpace OAuth-enabled APIs (such as a user's friend list) and display a mashup of the combination.

While the combination of OAuth with gadgets is an exciting new use of the technology, most of the use of OAuth is between websites, such as to enable a user of Google Health to allow a clinical trial matching site to access his or her health profile. I previously mentioned that one privacy control provided by OAuth is that it defines a standard way for users to authorize one website to make their data accessible to another website. In addition, OAuth provides a way to do this without the first site needing to reveal the identity of the user -- it simply provides a different opaque security token to each additional website the user wants to share his or her data with. It would allow a mutual fund, for example, to provide an iGoogle gadget to their customers that would run on iGoogle and show the user the value of his or her mutual fund, but without giving Google any unique information about the user, such as a social security number or account number. In the future, maybe we will even see industries like banks use standards such as OAuth to allow their customers to authorize utility companies to perform direct debit from the user's bank account without that person having to actually share his or her bank account number with the utility vendor.
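
For readers curious what an OAuth-protected call actually looks like on the wire, here is a minimal sketch of how an OAuth 1.0 consumer signs a request with HMAC-SHA1. The consumer key, token, secrets, and resource URL are made-up example values, and a real deployment would normally use an OAuth library plus the token-exchange steps rather than hand-rolled signing.

    # Minimal sketch of OAuth 1.0 HMAC-SHA1 request signing (example values only).
    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote

    def percent_encode(value):
        return quote(str(value), safe="-._~")  # RFC 3986 unreserved characters only

    def sign(method, url, params, consumer_secret, token_secret=""):
        normalized = "&".join(f"{percent_encode(k)}={percent_encode(v)}"
                              for k, v in sorted(params.items()))
        base_string = "&".join([method.upper(), percent_encode(url), percent_encode(normalized)])
        key = f"{percent_encode(consumer_secret)}&{percent_encode(token_secret)}"
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    params = {
        "oauth_consumer_key": "example-gadget",          # assumed example identifiers
        "oauth_token": "opaque-per-site-user-token",
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_timestamp": str(int(time.time())),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_version": "1.0",
    }
    params["oauth_signature"] = sign("GET", "https://example.com/feeds/private",
                                     params, "consumer-secret", "token-secret")
    print(params["oauth_signature"])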

The OAuth community is continuing to enhance this standard and is very interested in having more companies engaged with its development. The OAuth.net website has more details about the current standard, and I maintain a website with advanced information about Google's use of OAuth, including work on integrating OAuth with desktop apps, and integrating with federation standards such as OpenID and SAML. If you're interested in engaging with the OAuth community, please get in touch with us.

Malware? We don't need no stinking malware!



"This site may harm your computer"
You may have seen those words in Google search results — but what do they mean? If you click the search result link you get another warning page instead of the website you were expecting. But if the web page was your grandmother's baking blog, you're still confused. Surely your grandmother hasn't been secretly honing her l33t computer hacking skills at night school. Google must have made a mistake and your grandmother's web page is just fine...



I work with the team that helps put the warning in Google's search results, so let me try to explain. The good news is that your grandmother is still kind and loves turtles. She isn't trying to start a botnet or steal credit card numbers. The bad news is that her website, or the server that it runs on, probably has a security vulnerability, most likely from some out-of-date software. That vulnerability has been exploited, and malicious code has been added to your grandmother's website. It's most likely an invisible script or iframe that pulls content from another website, and that content tries to attack any computer that views the page. If the attack succeeds, then viruses, spyware, key loggers, botnets, and other nasty stuff will get installed.
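
To give a sense of what such an injection looks like, here is a rough sketch that scans a page for the classic telltale: an off-site iframe shrunk to an invisible size. The example markup and the regular expression are purely illustrative; real scanners (including Google's) do far more, such as rendering pages in instrumented browsers and watching what the content actually does.

    # Illustrative only: flag off-site iframes that are hidden or zero-sized.
    import re

    HIDDEN_IFRAME = re.compile(
        r'<iframe[^>]+src=["\'](?P<src>https?://[^"\']+)["\'][^>]*'
        r'(?:width=["\']?0|height=["\']?0|display:\s*none)',
        re.IGNORECASE,
    )

    page = """
    <h1>Grandma's snickerdoodles</h1>
    <iframe src="http://badsite.example/exploit.html" width="0" height="0"></iframe>
    """

    for match in HIDDEN_IFRAME.finditer(page):
        print("suspicious hidden iframe pointing at", match.group("src"))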

If you see the warning on a site in Google's search results, it's a good idea to pay attention to it. Google has automatic scanners that are constantly looking for these sorts of web pages. I help build the scanners and continue to be surprised by how accurate they are. There is almost certainly something wrong with the website even if it is run by someone you trust. The automatic scanners make unbiased decisions based on the malicious content of the pages, not the reputation of the webmaster.

Servers are just like your home computer and need constant updating. There are lots of tools that make building a website easy, but each one adds some risk of being exploited. Even if you're diligent and keep all your website components updated, your web host may not be. They control your website's server and may not have installed the most recent OS patches. And it's not just innocent grandmothers that this happens to. There have been warnings on the websites of banks, sports teams, and corporate and government websites.

Uh-oh... I need help!
Now that we understand what the malware label means in search results, what do you do if you're a webmaster and Google's scanners have found malware on your site?

There are some resources to help clean things up. The Google Webmaster Central blog has some tips and a quick security checklist for webmasters. Stopbadware.org has great information, and their forums have a number of helpful and knowledgeable volunteers who may be able to help (sometimes I'm one of them). You can also use the Google SafeBrowsing diagnostics page for your site (http://www.google.com/safebrowsing/diagnostic?site=<site-name-here>) to see specific information about what Google's automatic scanners have found. If your site has been flagged, Google's Webmaster Tools lists some of the URLs that were scanned and found to be infected.
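
If you look after several sites, a few lines of scripting can build those diagnostic links for you. A tiny sketch, using placeholder hostnames, since the diagnostic page itself is meant to be read in a browser:

    # Build Safe Browsing diagnostic URLs for a list of sites (hostnames are placeholders).
    from urllib.parse import quote

    DIAGNOSTIC = "http://www.google.com/safebrowsing/diagnostic?site="

    for site in ["example.com", "blog.example.com"]:
        print(site, "->", DIAGNOSTIC + quote(site, safe=""))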

Once you've cleaned up your website, use Google's Webmaster Tools to request a malware review. The automatic systems will rescan your website and the warning will be removed if the malware is gone.

Advance warning
I often hear webmasters asking Google for advance warning before a malware label is put on their website. When the label is applied, Google usually emails the website owners and then posts a warning in Google's Webmaster Tools. But no warning is given ahead of time, so a webmaster can't clean up the site before the label appears.

But, look at the situation from the user's point of view. As a user, I'd be pretty annoyed if Google sent me to a site it knew was dangerous. Even a short delay would expose some users to that risk, and it doesn't seem justified. I know it's frustrating for a webmaster to see a malware label on their website. But, ultimately, protecting users against malware makes the internet a safer place and everyone benefits, both webmasters and users.

Google's Webmaster Tools has started a test to provide warnings to webmasters that their server software may be vulnerable. Responding to that warning and updating server software can prevent your website from being compromised with malware. The best way to avoid a malware label is to never have any malware on the site!

Reviews
You can request a review via Google's Webmaster Tools and you can see the status of the review there. If you think the review is taking too long, make sure to check the status. Finding all the malware on a site is difficult and the automated scanners are far more accurate than humans. The scanners may have found something you've missed and the review may have failed. If your site has a malware label, Google's Webmaster Tools will also list some sample URLs that have problems. This is not a full list of all of the problem URLs (because that's often very, very long), but it should get you started.

Finally, don't confuse a malware review with a request for reconsideration. If Google's automated scanners find malware on your website, the site will usually not be removed from search results. There is also a different process that removes spammy websites from Google search results. If that's happened and you disagree with Google, you should submit a reconsideration request. But if your site has a malware label, a reconsideration request won't do any good — for malware you need to file a malware review from the Overview page.



How long will a review take?
Webmasters are eager to have a Google malware label removed from their site and often ask how long a review of the site will take. Both the original scanning and the review process are fully automated. The systems analyze large portions of the internet, which is a big place, so the review may not happen immediately. Ideally, the label will be removed within a few hours. At its longest, the process should take a day or so.

New spam and virus trends from Enterprise



The Google Apps Security & Compliance team, which provides email and web security for more than 40,000 companies, regularly tracks trends in spam, viruses, and other threats. Check out some of our latest findings over on the Enterprise blog. Also, on Friday, August 15, at 10:00 am PT, we'll be hosting a webinar on keeping your business safe from web and email threats -- tune in if you'd like to learn more.

Keyczar: Safe and Simple Cryptography



Cryptography is notoriously hard to get right and if improperly used, can create serious security holes. Common mistakes include using the wrong cipher modes or obsolete algorithms, composing primitives in an unsafe manner, hard-coding keys in source code, or failing to anticipate the need for future key rotation. With these risks in mind, we're pleased to announce the open-source release of Keyczar.

Keyczar is a cryptographic toolkit that supports encryption and authentication for both symmetric and public-key algorithms. It addresses some of the aforementioned issues by choosing safe defaults, tagging outputs with key version information, and providing a simple application programming interface. Keyczar's key versioning system makes it easy to rotate and revoke keys, without worrying about backward compatibility or making any changes to source code.
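
As a taste of how simple this is meant to be, here is a minimal sketch based on Keyczar's documented Python interface at the time of release. The keyset paths are placeholders; a keyset would first be created and given a primary key with the keyczart command-line tool, and the exact invocations below should be checked against the current Keyczar documentation.

    # Minimal sketch of encrypting and signing with Keyczar's Python API.
    # "/path/to/crypt-keys" and "/path/to/sign-keys" are placeholder keyset
    # directories, assumed to have been created beforehand with the keyczart tool.
    from keyczar import keyczar

    crypter = keyczar.Crypter.Read("/path/to/crypt-keys")
    ciphertext = crypter.Encrypt("Attack at dawn")       # output embeds the key version
    print(crypter.Decrypt(ciphertext))

    signer = keyczar.Signer.Read("/path/to/sign-keys")
    signature = signer.Sign("Attack at dawn")
    print(signer.Verify("Attack at dawn", signature))    # True if the signature checks out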

We look forward to working with the open source community and continuing to make cryptography safer and easier to use. To download Keyczar or for more information, please visit our Google Code project and discussion group.

Are you using the latest web browser?



In view of mass defacements of hundreds of thousands of web pages - with the intent to misuse them to launch drive-by download attacks - security researchers from ETH Zurich, Google, and IBM Internet Security Systems were interested in looking at the other side of the attack: the web browser. By analyzing the web browser versions seen in visits to Google websites, they have shown that more than 600 million Internet users don't use the latest version of their browser.

Slow migration to latest browser version
The researchers' paper, entitled "Understanding the Web Browser Threat", shows that as of June 2008, only 59.1% of Internet users worldwide use the latest major version of their preferred web browser. Firefox users are the most attentive: 92.2% of them surfed with Firefox 2, the latest major version before the recently released 3.0. Only 52.5% of Microsoft Internet Explorer users have updated to version 7, which is the most secure according to multiple publicly-cited Microsoft experts (among them Sandi Hardmeier). The study revealed that 637 million Internet users worldwide who use web browsers are either not running the latest version of their preferred browser or have not installed the latest patches. These users are vulnerable to exploitation due to their web browser's "built-in" vulnerabilities and the lack of more recent security mechanisms such as improved phishing protection.
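
The measurement itself is conceptually straightforward: group User-Agent strings by browser family and count what share report the latest major version. The sketch below illustrates the idea with made-up log lines and illustrative "latest" version numbers; the study's actual pipeline and thresholds are described in the paper.

    # Illustrative sketch: share of users on each browser's latest major version,
    # computed from User-Agent strings. Version cut-offs here are placeholders.
    import re
    from collections import Counter

    LATEST_MAJOR = {"Firefox": 2, "MSIE": 7, "Opera": 9, "Safari": 3}
    PATTERNS = {
        "Firefox": re.compile(r"Firefox/(\d+)"),
        "MSIE": re.compile(r"MSIE (\d+)"),
        "Opera": re.compile(r"Opera[/ ](\d+)"),
        "Safari": re.compile(r"Version/(\d+).*Safari"),
    }

    def latest_version_share(user_agents):
        seen, current = Counter(), Counter()
        for ua in user_agents:
            for browser, pattern in PATTERNS.items():
                if match := pattern.search(ua):
                    seen[browser] += 1
                    if int(match.group(1)) >= LATEST_MAJOR[browser]:
                        current[browser] += 1
                    break
        return {browser: current[browser] / seen[browser] for browser in seen}

    logs = ["Mozilla/5.0 (Windows; U) Gecko/20070725 Firefox/2.0.0.6",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"]
    print(latest_version_share(logs))   # {'Firefox': 1.0, 'MSIE': 0.0}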

Neglected security patches
Over the past 18 months, the study also shows, a maximum of 83.3% of Firefox users were using the latest major version of the web browser and also had all current patches installed (i.e. latest minor version). Only 56.1% and 47.6% of Opera and Internet Explorer users, respectively, were similarly utilizing fully-patched web browsers. Apple users are no better: since the public release of Safari 3, only 65.3% of users operate the latest Safari version.


Maximum measured share of users surfing the web with the most secure versions of Firefox, Safari, Opera and Internet Explorer in June 2008 as seen on Google websites.


Obsolete browser warning
The study's most important finding is that technical measures now in place do not sufficiently guarantee browser security, and that users' security awareness must be further developed. The problem is that most users are unaware that they are not using their browser's latest version. It must be made clear to web browser users that outdated software carries significantly higher risk. The researchers therefore suggest that, as a critical component of web software, a visible warning be instituted that alerts the user to missing security patches, analogous to the 'best before' date in the perishable food industry. Software updates must also be made easier to find. The resulting transparency would go far in contributing to end user awareness of software weaknesses, and allow users to better evaluate risks.


Example "best before" implementation on a Web browser


As a side effect, having users migrate faster to the latest browser version would not only increase security but also make the lives of webmasters easier, as they would need to test and optimize websites for fewer older versions of web browsers.

Meet ratproxy, our passive web security assessment tool



We're happy to announce that we've just open-sourced ratproxy, a passive web application security assessment tool that we've been using internally at Google. This utility, developed by our information security engineering team, is designed to transparently analyze legitimate, browser-driven interactions with a tested web property and automatically pinpoint, annotate, and prioritize potential flaws or areas of concern.

The proxy analyzes problems such as cross-site script inclusion threats, insufficient cross-site request forgery defenses, caching issues, cross-site scripting candidates, potentially unsafe cross-domain code inclusion schemes and information leakage scenarios, and much more. (A more-detailed discussion of these features and information on securing vulnerable applications is provided here.) Compared with more-traditional active crawlers, or with fully manual request inspection and modification frameworks, this approach offers several significant advantages in terms of minimized overhead; marginalized risk of site disruptions; high coverage of complex, client-driven application states in web 2.0 solutions; and insight into dynamic cross-domain trust models.

We decided to make this tool freely available as open source because we feel it will be a valuable contribution to the information security community, helping advance the community's understanding of security challenges associated with contemporary web technologies. We believe that responsible security research brings a net overall benefit to the safety of the Web as a whole, and have released this tool explicitly to support that kind of research.

To download the proxy, please visit this page. Also, please keep in mind that the proxy is designed solely to highlight interesting patterns in web applications, and a further analysis by a security professional is often required to interpret the results and their significance for the tested platform.

Safe Browsing Diagnostic To The Rescue



We've been protecting Google users from malicious web pages since 2006 by showing warning labels in Google's search results and by publishing the data via the Safe Browsing API to client programs such as Firefox and Google Desktop Search. To create our data, we've built a large-scale infrastructure to automatically determine if web pages pose a risk to users. This system has proven to be highly accurate, but we've noted that it can sometimes be difficult for webmasters and users to verify our results, as attackers often use sophisticated obfuscation techniques or inject malicious payloads only under certain conditions. With that in mind, we've developed a Safe Browsing diagnostic page that will provide detailed information about our automatic investigations and findings.

The Safe Browsing diagnostic page of a site is structured into four different categories:

  1. What is the current listing status for [the site in question]?

    We display the current listing status of a site and also information on how often a site or parts of it were listed in the past.

  2. What happened when Google visited this site?

    This section includes information on when we analyzed the page, when it was last malicious, what kind of malware we encountered, and so forth. To help webmasters clean up their site, we also provide information about the sites that were serving malicious software to users and which sites might have served as intermediaries.

  3. Has this site acted as an intermediary resulting in further distribution of malware?

    Here we provide information on whether this site has facilitated the distribution of malicious software in the past. This could be an advertising network or statistics site that accidentally participated in the distribution of malicious software.

  4. Has this site hosted malware?

    Here we provide information on whether the site has hosted malicious software in the past. We also provide information on the victim sites that initiated the distribution of malicious software.


All the information we show is historical, covering the last ninety days; it does not go further into the past. Initially, we are making the Safe Browsing diagnostic page available in two ways. We are adding a link on the interstitial page a user sees after clicking on a search result with a warning label, and also an "additional information" link in Firefox 3's warning page. Of course, for anyone who wants to know more about how our detection system works, we also provide a detailed tech report [PDF] including an overview of the detection system and an in-depth data analysis.

Contributing To Open Source Software Security



From operating systems to web browsers, open source software plays a critical role in the operation of the Internet. The security of open source software is therefore quite important, as it often interacts with personal information -- ranging from credit card numbers to medical records -- that needs to be kept safe. There has been a long-lived discussion on whether open source software is inherently more secure than closed source software. While popular opinion has begun to tilt in favor of openness, there are still arguments for both sides. Instead of diving into those treacherous waters (or giving weight to the idea of "inherent security"), I'd like to focus on the fruits of this extensive discussion. In particular, David A. Wheeler laid out a "bottom line" in his Secure Programming for Linux and Unix HOWTO which applies to both open and closed source software. It predicates real security in software on three actions:

  1. people need to actually review the code

  2. developers/reviewers need to know how to write secure code

  3. once found, security problems need to be fixed quickly, and their fixes distributed quickly


While distilling anything down to three steps makes it seem easy, this isn't necessarily the case. Given how important open source software is to Google, we've attempted to contribute to this bottom line. As Chris said before, our engineers are encouraged to contribute both software and time to open source efforts. We regularly submit the results of our automated and manual security analysis of open source software back to the community, including related software engineering time. In addition, our engineering teams frequently release software under open source licenses. This software was written either with security in mind, such as with security testing tools, or by engineers well-versed in the security challenges of their project.

These efforts leave one area completely unaddressed -- getting security problems fixed quickly, and then getting those fixes distributed quickly. It has been unclear how to best resolve this issue. There is no centralized security authority for open source projects, and operating system distribution publishers are the best bet for getting updates to the highest number of users. Even if users can get updates in this manner, how should a security researcher contact a particular project's author? If there's a potential, security-related issue, who can help evaluate the risk for a project? What resources are there for projects that have been compromised, but have no operational security background?

I'm proud to announce that Google has sponsored participation in oCERT, the open source computer emergency response team. oCERT is a volunteer workforce of security professionals from the open source community with the goal of providing security vulnerability mediation and incident response services to open source projects. It will strive to contact software authors with all security reports and aid in debugging and patching, especially in cases where the author, or the reporter, doesn't have a background in security. Reliable contacts for projects, publishers, and vendors will be maintained where possible and used for notification when issues arise and fixes are available for mediated issues. Additionally, oCERT will aid projects of any size with responses to security incidents, such as server compromises.

It is my hope that this initiative will not only aid in remediating security issues in a timely fashion, but also provide a means for additional security contributions to the open source community.

All Your iFrame Are Point to Us



It has been over a year and a half since we started to identify web pages that infect vulnerable hosts via drive-by downloads, i.e. web pages that attempt to exploit their visitors by installing and running malware automatically. During that time we have investigated billions of URLs and found more than three million unique URLs on over 180,000 web sites automatically installing malware. During the course of our research, we have investigated not only the prevalence of drive-by downloads but also how users are being exposed to malware and how it is being distributed. Our research paper is currently under peer review, but we are making a technical report [PDF] available now. Although our technical report contains a lot more detail, we present some high-level findings here:

Search Results Containing a URL Labeled as Harmful


The above graph shows the percentage of daily queries that contain at least one search result labeled as harmful. In the past few months, more than 1% of all search results contained at least one result that we believe to point to malicious content and the trend seems to be increasing.

Browsing Habits

Good computer hygiene, such as running automatic updates for the operating system and third-party applications, as well as installing anti-virus products goes a long way in protecting your home computer. However, we have been wondering if users' browsing habits impact the likelihood of encountering malicious web pages. To study this aspect, we took a sample of ~7 million URLs and mapped them to DMOZ categories. Although we found that adult web pages may increase the risk of exploitation, each DMOZ category was affected.

Malicious Content Injection

To understand whether malicious content on a web server is due to poor web server security, we analyzed the version numbers reported by web servers on which we found malicious pages. Specifically, we looked at the Apache and PHP versions exported as part of a server's response. We found that over 38% of both Apache and PHP versions were outdated, increasing the risk of remote content injection on these servers.
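
The check itself can be approximated with a few lines of code: fetch a page, read the Server and X-Powered-By headers, and compare the advertised versions against whatever you consider current. The cut-off versions below are placeholders rather than the ones used in the study, and many servers legitimately hide or omit these headers.

    # Simplified sketch: flag outdated Apache/PHP based on advertised version headers.
    import re
    from urllib.request import urlopen

    MIN_VERSIONS = {"Apache": (2, 2, 9), "PHP": (5, 2, 6)}  # illustrative cut-offs

    def parse_version(banner, product):
        match = re.search(rf"{product}/(\d+)\.(\d+)(?:\.(\d+))?", banner or "")
        return tuple(int(part or 0) for part in match.groups()) if match else None

    def outdated_software(url):
        headers = urlopen(url).headers
        banner = " ".join(filter(None, [headers.get("Server"), headers.get("X-Powered-By")]))
        return [product for product, minimum in MIN_VERSIONS.items()
                if (version := parse_version(banner, product)) and version < minimum]

    # Example (hypothetical host): print(outdated_software("http://www.example.com/"))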

Our "Ghost In the Browser [PDF]" paper highlighted third-party content as one potential vector of malicious content. Today, a lot of third-party content is due to advertising. To assess the extent to which advertising contributes to drive-by downloads, we analyze the distribution chain of malware, i.e. all the intermediary URLs a browser downloads before reaching a malware payload. We inspected each distribution chain for membership in about 2,000 known advertising networks. If any URL in the distribution chain corresponds to a known advertising network, we count the whole page as being infectious due to Ads. In our analysis, we found that on average 2% of malicious web sites were delivering malware via advertising. The underlying problem is that advertising space is often syndicated to other parties who are not known to the web site owner. Although non-syndicated advertising networks such as Google Adwords are not affected, any advertising networks practicing syndication needs to carefully study this problem. Our technical report [PDF] contains more detail including an analysis based on the popularity of web sites.

Structural Properties of Malware Distribution


Finally, we also investigated the structural properties of malware distribution sites. Some malware distribution sites had as many as 21,000 regular web sites pointing to them. We also found that the majority of malware was hosted on web servers located in China. Interestingly, Chinese malware distribution sites are mostly pointed to by Chinese web servers.

We hope that an analysis such as this will help us to better understand the malware problem in the future and allow us to protect users all over the Internet from malicious web sites as best as we can. One thing is clear - we have a lot of work ahead of us.