In June 2007, Privacy International, a U.K.-based privacy rights watchdog, cited Google as the worst privacy offender among 23 online companies, ranking the “Don’t Be Evil” people below Microsoft, Apple, Amazon, eBay, LinkedIn, Facebook and AOL. According to the report, no other company was “coming close to achieving [Google’s] status as an endemic threat to privacy.” What most disturbed the authors was Google’s “increasing ability to deep-drill into the minutiae of a user’s life and lifestyle choices.” The result: “the most onerous privacy environment on the Internet.” Indeed, Google now controls an estimated 70 percent of the online search engine market, but its deep-drilling of user information — where we surf, whom we e-mail, what blogs we post, what pictures we share, what maps we look at, what news we read — extends far beyond the search feature to encompass the kind of “total information awareness” that privacy activists feared at the hands of the George W. Bush administration’s much-maligned Total Information Awareness program.

Kevin Bankston, a privacy expert and attorney at the Electronic Frontier Foundation, a nonprofit advocacy group engaged in questions of privacy, free speech, and intellectual property in the digital age, warns of the possibilities. “In all of human history,” he says, “few if any single entities, other than the National Security Agency, have ever possessed such a hoard of sensitive data about so many people.” This is the sort of thing that should make the intelligence agencies, says Bankston, “drool with anticipation.” And drooling they are. Stephen Arnold, an IT expert who formerly worked at the defense and intelligence contractor Booz Allen Hamilton Inc. and who once consulted for Google, addressed this in a speech before a conference of current and former intelligence officials in Washington, D.C., in January 2006. According to an audio recording in our possession, he reported Google was increasingly sought out by the U.S. intelligence services because click-stream data — and everything else Google archives — “is a tremendous opportunity for the intelligence community.” Google, he said, “has figured out everything there is to know about data-collection.” The relationship with the government had become intimate enough, Arnold said, that at least three officers from “an unnamed intelligence agency” had been posted at Google’s headquarters in Mountain View, Calif. What they are doing there, Arnold did not reveal.

“We don’t comment on rumor or speculation,” said Google spokesperson Christine Chen. When asked separately how many former intelligence agency officials work at Google, she responded, “We don’t release personnel information.”

The conference, under the aegis of the Open Source Solutions Network, was hosted and organized by Robert David Steele, a former Central Intelligence Agency officer who left the agency 20 years ago and is now the founder and CEO of Open Source Solutions Network Inc., otherwise known as OSS.Net, an educational corporation that has worked with more than 50 governments to “advance the use of open source intelligence.” Steele considered Arnold’s disclosure a bombshell. U.S. intel was now seated in the heart of the “Googleplex,” learning all it could from the masters in the private sector. Among Google’s critics, Steele, who has spent the 20 years since leaving the CIA promoting the digital commons, is about as fierce as they come. “Google would have been an absolutely precious gift to humanity,” he says. “But Google is positioning itself to take over the digital commons. I personally have resolved that unless Google comes clean with the public, the company is now evil.” The question today is whether Google, in fact, will be forced to change its ways — and whether Congress and the intelligence agencies want it to.

Google’s powers of data-collection depend on consumer choice — how much of your computing you put in Google’s hands. The more you choose Google applications, the more Google can know about you. At the extreme end of the spectrum, your every move can be tracked by some feature of Google. When you use the Google search box, as tens of millions of people do daily (with Google handling roughly 11,000 searches per second), the company can track all your search queries and the websites you visit as a result of those queries. If you use the Google Toolbar, the company can track how long you linger on a website — the three minutes or three hours you spend on every page of it. With Google’s acquisition of YouTube in 2006, viewing habits can be tracked. Google’s FriendConnect and Orkut archive your social networks. Google News, Books, FeedBurner or Blogger log your reading habits. The writing you produce is stored on Google Docs, and your purchase habits and credit card numbers are captured by Google Checkout. Also gathered are voiceprint and call habits, through Google Voice; travel interests, patterns and place associations, through Google Maps, Google Earth and Google StreetView; medical conditions, medical history and prescription drug use, through Google Health; photos of friends and family, through Google’s Picasa; and general activities, through Google Calendar. Then, there’s Google Desktop, which, at one point, offered what appeared to be an innocuous feature called “Search Across Computers.” This allowed Google to scan your computer and archive copies of your text documents. In other words, just about everything on your PC — love letters, tax returns, business records, bad poetry — was duplicated on a remote Google server. (This function was discontinued on all platforms in January of this year.)

Taken alone, the Google search box is an exquisitely intimate repository of user information. “People treat the search box like their most trusted advisors,” says Kevin Bankston, the Electronic Frontier Foundation (EFF) attorney. “They tell the Google search box what they wouldn’t tell their own mother, spouse, shrink or priest.” Think about your most recent queries, say, about your “anal warts” or “inability to love in marriage,” or “self-hatred,” or your interest in the mechanics of “making a pipe bomb.” The search box is as good a place as any to understand how the Googleplex keeps tabs on its users. When you do a search, a “cookie” installed on your computer carries a unique identifier, which Google’s servers log alongside your IP address (a series of unique numbers that may be used to identify your computer) and the query itself, so Google can, in many contexts, identify a user. And it can do so with any of its applications.
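What that linkage looks like is easy to sketch. The following TypeScript fragment is a minimal illustration, not a description of Google’s actual systems; the log format, the field names and the profileByCookie function are hypothetical, but they show how a persistent cookie identifier lets scattered queries be reassembled into a single profile.

```typescript
// Hypothetical, simplified search-log entry. Real log formats are not public.
interface LogEntry {
  timestamp: string;   // e.g. "2009-11-03T14:22:07Z"
  cookieId: string;    // unique identifier stored in the browser cookie
  ipAddress: string;   // logged server-side with each request
  query: string;       // the search terms typed into the box
}

// Group entries by cookie ID to reconstruct one pseudonymous user's search history.
function profileByCookie(log: LogEntry[]): Map<string, LogEntry[]> {
  const profiles = new Map<string, LogEntry[]>();
  for (const entry of log) {
    const history = profiles.get(entry.cookieId) ?? [];
    history.push(entry);
    profiles.set(entry.cookieId, history);
  }
  return profiles;
}
```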

All this, one would think, ceases once your PC is shut down and you leave home. However, Google released a “geolocation” application in 2008, Gears Geolocation API, that can “obtain the user’s current position,” “watch the user’s position as it changes over time,” and “quickly and cheaply obtain the user’s last known position.” According to a Google tech blog, the Gears application “can determine your location using nearby cell-towers or GPS for your mobile device or your computer’s IP address for your laptop.” A 2006 Technology Review article reports that Google’s director of research, Peter Norvig, even proposed the use of built-in microphones on PCs to identify television shows playing in the room, in order to display related advertising. Such data, it seems, could be processed as an audio fingerprint, which might aid in geolocation and profiling of users. (“Google had no plans to develop this,” Google spokesperson Christine Chen responded by e-mail. “And we haven’t.”)
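Gears has since been discontinued, but the standard browser Geolocation API that succeeded it exposes essentially the same operations the Gears documentation described. The TypeScript sketch below is an illustration of the capability under that assumption, not Google’s code; it presumes a browser environment in which the user has granted location permission.

```typescript
// "Obtain the user's current position" (a one-shot fix).
navigator.geolocation.getCurrentPosition(
  (pos) => console.log("current:", pos.coords.latitude, pos.coords.longitude),
  (err) => console.error("position unavailable:", err.message)
);

// "Watch the user's position as it changes over time" (continuous updates).
const watchId = navigator.geolocation.watchPosition((pos) =>
  console.log("moved to:", pos.coords.latitude, pos.coords.longitude)
);

// "Quickly and cheaply obtain the user's last known position": the standard API
// approximates this by accepting an arbitrarily old cached fix.
navigator.geolocation.getCurrentPosition(
  (pos) => console.log("last known:", pos.coords.latitude, pos.coords.longitude),
  (err) => console.error("no cached position:", err.message),
  { maximumAge: Infinity, timeout: 0 }
);

// Stop the continuous watch when it is no longer needed.
navigator.geolocation.clearWatch(watchId);
```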

Google’s data-mining interests go even deeper, to the core of our physical and mental being. Google co-founder Sergey Brin and his biotech specialist wife, Anne Wojcicki, according to The Economist, have “brainstormed” with at least one prominent human genome researcher and approach genetics as a “database and computing problem.” This would tie in nicely with Google Health, launched in 2008 to take advantage of the growing trend of storing health records online, for easier access among diverse health care providers. Google has invested $3.9 million in Wojcicki’s biotech firm, 23andMe, whose “mission is to be the world’s trusted source of personal genetic information,” and which offers a basket of genetic tests to allow its customers to uncover ancestry, disease risks, and drug responses. Speaking before a Google “Zeitgeist” conference in 2008, Brin revealed that he carried a gene mutation associated with Parkinson’s disease and then advocated the recording of individual genetic codes to enhance health maintenance and medical research. Taken to its logical conclusion, this suggests the prospect of your body’s blueprint registered with an eventual “Google Genome,” perhaps with the help of the databases gathered at 23andMe. To drill further into the mind, Google has teamed up with marketing giant WPP to fund $4.6 million in research into online advertising, including one grant in the emerging field of “neuromarketing”: tracking everything from online navigation behavior to biofeedback metrics like heart rate, eye movement and brain wave activity in response to advertising stimuli. Google’s Chen points out that the results of this research will be available to industry as a whole and that “Google has no special right over, nor plans to use, any of the research funded by these grants.”

From Google’s standpoint, marketing — not surveillance — is the purpose of the informational harvest, as advertising generates most of Google’s $23 billion in annual revenue. The company is driving the evolution of the behavioral advertising model: more personal information gathered on consumers means more effectively targeted ads, thus higher ad rates and profits. (Gmail users often note how advertising, directly related to the subject matter of recently sent e-mails or searches, pops up in their browsers.) The unsurprising offshoot of the behavioral advertising model is political advertising, a new market being pursued by Google’s Elections and Issue Advocacy Team in Washington, D.C. Campaigning online has become as important as dominating the broadcast networks for candidates and advocacy groups, and this will require broader profiling of political behavior — an area of compelling interest for the intelligence agencies.

EFF attorney Bankston had his own personal run-in with the company in 2007. He was walking outside his office on 19th Street in San Francisco, when one of Google’s StreetView photography crews — who gather surreptitious pictures of practically every street in the world — caught him smoking a cigarette. Wired magazine editor Kevin Poulsen tracked down Bankston at a Silicon Alley party, and pulled out his laptop. “Take a look at this,” said Poulsen. Bankston was not pleased. “At the time,” he says, “I had represented to my family and other people that I had cut down on my smoking and even stopped smoking.” When Bankston emailed Google requesting to have his face blurred, the spokesperson from the legal department told him he needed to fax in his driver’s license and a sworn statement to prove his identity. “I had to give up my privacy,” says Bankston, “to protect my privacy.” Finally, after a week of prodding — and a pair of articles in Wired — Google removed the photos that showed Bankston puffing away on 19th Street. In the summer of 2008, Google instituted face-blurring for all its StreetView shots.

But what most concerned Bankston as a privacy lawyer was that he had no clear legal protections against Google. “In legal terms, Google is in the Wild West,” says Bankston. “The law hasn’t kept up.” The lag is due to a revolution in how Google stores information.

Google has developed into a software provider rivaling Microsoft, with this major distinction: almost all Google software is server-side, residing on massive Google computer banks rather than your local PC, which means Google, not you, holds the content. This is the paradigm shift of “cloud computing,” and the atmospheric analogy is apt. Information evaporates from your desktop or laptop and condenses in the “cloud,” held there for whenever and wherever the user wants it to rain down. For the customer, the advantages over desktop computing are appealing. You get software that is mostly free or relatively cheap; automatic upgrades; data backed up on redundant remote servers, thus crash proof (unless, of course, Google crashes); accessibility from any computer or wireless device; and there’s less strain on your desktop or laptop, as most of the computation is handled by remote processors and data drives. Greasing the transition to the cloud, a new wave of inexpensive hardware — compact netbooks specifically designed to work with the cloud — is capturing a growing share of the PC market. Most ship with Windows, but, in a direct challenge to Microsoft, Google has announced the development of its own operating system, Chrome OS, built to work more efficiently with its Chrome browser, both optimized for connecting online.

But one of the big problems with the cloud, and the danger it presents, is that the Fourth Amendment’s protections against search and seizure do not apply. The caveats are buried deep in the terms of service that users usually skip over, clicking “I agree” to install a new application. But the consequences are huge, says Bankston. “When private data is held by a third party like Google, the Supreme Court has ruled that you ‘assume the risk’ of disclosure of that data.” When you store e-mail at Gmail — or, similarly, in the cloud at Yahoo or Hotmail — “you lose your constitutional protections immediately.” To search and seize the information on your desktop, a law enforcement or intelligence agency requires a warrant or grand jury subpoena, after demonstrating probable cause before a judge or magistrate; or an order from the Foreign Intelligence Surveillance Court (authorized by FISA); or a National Security Letter issued by the FBI, Department of Defense or CIA. But to obtain that same information stored on Google’s servers, there is a shortcut: Google, like a telecom provider, may supply the information voluntarily as long as the government can argue the information is needed as part of an “emergency.”

“Your data is less legally protected in the cloud,” says Bankston. “That’s a big issue when you have companies like Google that are soliciting more and more data into the cloud.” Take, for example, those records cached at Google Health: The Health Insurance Portability and Accountability Act protects the privacy of medical records stored with health care providers and insurance companies — but the law does not apply to the privacy of those records stored with third parties. Or, take Bankston’s smoking episode: Bankston had no recourse against public exposure. Google removed the photos simply as a sop to an outspoken privacy activist who raised an outcry in the media.

As for search queries, we have no idea how the law applies. Presumably, the stipulations of the Electronic Communications Privacy Act of 1986 would come into play. The ECPA mandates different standards for privacy of information stored with a third party, depending on how old the information is. “But this has never been litigated,” says Bankston. “I think it’s very important for Congress to amend this 23-year-old law, so that it’s clear how it applies. To the extent that the law is uncertain, it benefits the government.”

Certainly, the government has a variety of means for getting at Google’s data, but, again, this is shrouded in the unknown: the processing of national security letters, for example, is entirely conducted in secret, with gag orders on all parties involved. In other words, the determinant of your privacy is what Google and the government decide behind closed doors. “The threat is real that the government is accessing more Google information than it should,” says Bankston. The company’s privacy policy is not exactly reassuring: “In some cases, we may process personal information on behalf of and according to the instructions of a third party, such as our advertising partners … We restrict access to personal information to Google employees, contractors and agents who need to know that information in order to operate, develop or improve our services.”

Google’s links with the intelligence community may stretch back to 2004. In 1999, the CIA founded an IT venture capital firm called In-Q-Tel to research and invest in new digital technologies focused on intelligence gathering. An In-Q-Tel-funded company, Keyhole, Inc., developed the satellite mapping technology that would be acquired in 2004 to become Google Earth. In-Q-Tel’s former director of technology assessment, Rob Painter, joined Google as a senior manager of Google Federal, his focus the “evangelizing and implementing [of] Google Enterprise solutions for a host of users across the Intelligence and Defense Communities.”

In turn, Google has sold versions of its technology, especially Google Earth, to many government bodies, including the U.S. Coast Guard, the National Oceanic and Atmospheric Administration, the National Highway Traffic Safety Administration, the state of Alabama, and Washington, D.C. For the CIA, Google provided servers to support Intellipedia, a Wikipedia-like intranet for sharing intelligence. For the NSA, it supplied four “search appliances” and a maintenance contract, according to a FOIA investigation by the San Francisco Chronicle in 2008. (When asked whether Google had supplied any other products or services to the National Security Agency or other intelligence agencies, Google’s Christine Chen wrote by email, “We don’t comment on any discussions we may or may not have had with any national intelligence agency.”)

According to Christopher Soghoian, a former CNet blogger and a doctoral candidate studying privacy and computing at Indiana University who has researched Google, the intelligence services would be particularly interested in Google’s “backdoor” programs for surveillance. Soghoian notes that Google applications launch without telling users that the processing and data storage are conducted on remote servers, as long as an Internet connection is maintained — easy enough, given the ubiquity of wireless broadband. Even with no connection, software such as Google’s Gears enables “offline” access to the cloud, running applications and storing data on a PC (again, no cost, no fuss) until a connection is re-established and the new data can be uploaded to Google. Thus the naive user transmits information to a third party unwittingly — a modus operandi close to the definition of covert surveillance.
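The offline-then-sync behavior Soghoian describes follows a familiar pattern: queue changes locally while disconnected, then upload them the moment a connection returns. The TypeScript sketch below is a generic illustration of that pattern, not Gears’ actual API; the names saveOffline, syncToCloud and uploadUrl are invented for the example.

```typescript
// Edits made while offline are held in a local queue on the user's own machine.
interface PendingEdit {
  docId: string;
  content: string;
  savedAt: number;
}

const localQueue: PendingEdit[] = [];

// While offline, changes are stored locally with no user-visible side effects.
function saveOffline(docId: string, content: string): void {
  localQueue.push({ docId, content, savedAt: Date.now() });
}

// When connectivity returns, everything queued is transmitted to the remote servers:
// the step that happens without any further action by the user.
async function syncToCloud(uploadUrl: string): Promise<void> {
  while (localQueue.length > 0) {
    const edit = localQueue.shift()!;
    await fetch(uploadUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(edit),
    });
  }
}
```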

Soghoian notes that Google likely receives thousands of subpoenas and warrants every year from law enforcement and government agencies demanding information (AOL gets approximately 1,000 requests a month related to civil and criminal cases), and it has hired former DOJ officials and U.S. intelligence officers as corporate legal compliance officers handling the traffic. “The government gets somebody on the other end of the line who’s from the intelligence or law enforcement community,” says Soghoian, “who knows how they work, and maybe is sympathetic to their cause. Google doesn’t put former ACLU lawyers in charge of its compliance team.” According to Google’s Chen, such numbers are not publicly available. “Obviously, we follow the law like any other company,” she says. “When we receive a subpoena or court order, we check to see if it meets both the letter and the spirit of the law before complying. And if it doesn’t, we can object or ask that the request is narrowed.” She points out that, in 2006, Google went to court to fight a Department of Justice subpoena for millions of search queries on the grounds that it invaded user privacy. The judge ruled in Google’s favor.

Soghoian, however, suggests a perverse incentive for cooperation: by law, Google and the telecoms must be compensated for their time and effort. Thus, the feeding of information to spooks and cops can become a profitable enterprise.

Google also works with some of the top players in the surveillance industry, notably Lockheed Martin and SRA International. SRA is listed as a Google “enterprise partner” — more than a hundred such partners are listed on the Google website. Both companies, Lockheed and SRA, have engineered and sold data-mining software to the intelligence agencies. SRA’s NetOwl program, for example, has been described by a blogger at Pennsylvania State University, who watched the application in action at a corporate recruiter forum, as “searching all kinds of documents using Google for a certain person.” In response to our inquiries about these programs and how they might have been developed in cooperation with Google, a Lockheed Martin spokesperson told us, “The work we do with Google is exclusively related to their Google Earth system.” SRA International’s vice president for public affairs, Sheila Blackwell, said, “We don’t discuss the specifics of our intelligence clients’ business.”

Former CIA officer Robert Steele says that the CIA’s Office of Research and Development had, at one point, provided funding for Google. According to its literature, ORD has a charter to push beyond the state of the art, developing and applying technologies and equipment more advanced than anything commercially available, including communications, sensors, semiconductors, high-speed computing, artificial intelligence, image recognition and database management. Steele says that Google’s liaison at ORD is Dr. Rick Steinheiser, a counterterrorism data-mining expert and a long-time CIA analyst. (No CIA response about Steinheiser’s work was forthcoming.)

Then, there are the intelligence officials allegedly working at Google’s Mountain View headquarters. After tech guru Stephen Arnold first revealed this information at the 2006 OSS conference, Anthony Kimery, a veteran intelligence reporter at Homeland Security Today, followed up with a report alleging a “secret relationship” between Google and U.S. intelligence. Google was “cooperat[ing] with U.S. intelligence agencies to provide national and homeland security-related user information from its vast databases,” with the intelligence agencies “working to ‘leverage Google’s [user] data monitoring’ capability as part of an effort to glean from this data information of ‘national security intelligence interest’ in the war on terror.” In other words, Google’s databases — or some targeted portion — may have been dumped straight into the maw of U.S. intelligence agencies.

Like the giants of the surveillance-industrial complex, Google has backed its federal sales force in Reston, Virginia, with a D.C. lobbying operation — spending $2.9 million on lobbying in 2009 — to make sure that privacy is not a priority in the Obama administration. It also works with several industry-supported interest groups: the Interactive Advertising Bureau, the Technology Policy Institute, and the Progress & Freedom Foundation, whose mission statement espouses “an appreciation for the positive impacts of technology with a classically conservative view of the proper role of government… Those opportunities can only be realized if governments resist the temptation to regulate, tax and control.” All these groups are funded by Google, along with a who’s-who of communications behemoths. Their mission: subvert any congressional legislation extending Fourth Amendment-style prohibitions to the data-mining private sector. Their argument, per the Technology Policy Institute: “More privacy … would mean less information, less valuable advertising, and thus fewer resources available for producing new low-priced services” — in other words, privacy is a threat to the economy.

Google has also managed to install favorites in the White House. Andrew McLaughlin, formerly chief of Google’s Global Public Policy and Government Affairs division while also serving as assistant treasurer for Google’s NetPAC lobby, has been appointed as Obama’s deputy chief technology officer for Internet policy, despite protests from privacy advocates. Vivek Kundra, now posted as the Obama administration’s chief information officer at the Office of Management and Budget, formerly served as the chief technology officer for the city of Washington, D.C., where he ditched the use of Microsoft programs for municipal operations in favor of Google products. Concerns were heightened last spring by an administration initiative, proposed in Senate Bill 773, to grant the executive branch authority to disconnect and assume some measure of control over private networks in a declared “cybersecurity emergency.” That could be a quarantine operation to isolate and defeat a viral attack. It could also be an excuse for censorship of certain sites — or for the cybersecurity agencies to data-mine where they have been hitherto forbidden. Google could be declared “critical infrastructure” in such an emergency, and its management temporarily assumed by federally certified “cybersecurity professionals,” as defined in S.773. It is not inconceivable that Google’s massive and much-coveted behavioral profiles could then be fed into the NSA’s computers. And even without S.773, a long accumulation of executive orders over three decades has likely laid the groundwork for executive authority to take over critical communications networks in a “national emergency.”

But long before such an emergency comes to pass, if ever, the government and the regiments of data-mining companies it contracts with are seeing eye to eye. The commercial surveillance complex and the security surveillance complex have many common interests and methods: the ad gurus’ neuromarketing research complemented by the intel agencies’ longstanding research into mind control, from the CIA’s MK-ULTRA to the NSA’s current “cognitive neuroscience research”; the profiling of political behavior for campaign advertising complemented by the DHS’s elastic definitions of dissidents and “potential terrorists.”

Google is now anonymizing IP addresses from search logs after nine months, down from its previous eighteen-month retention policy. Company spokesperson Chen states, “We’re committed to using data both to improve our services and our security measures for our users and to protect their privacy, and we remain convinced that our current logs retention policy represents a responsible balance.” This is in contrast to Microsoft, which after six months throws out the search query data altogether. “Remember that totally anonymized search queries can be linked together to build an identity,” says Bankston. “Why does Google need to store our data perpetually? They’re very vague about it.”
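Google has said little about what its anonymization actually involves. One widely described technique for partial IP anonymization is to zero out the final octet of an IPv4 address; the TypeScript sketch below illustrates that technique as an assumption, not as Google’s documented method, and hints at why critics find it weak: the truncated address still places the user in a small network neighborhood.

```typescript
// Partial anonymization by zeroing the final octet of an IPv4 address.
// Offered as an illustration of the general technique, not Google's actual procedure.
function anonymizeIPv4(ip: string): string {
  const octets = ip.split(".");
  if (octets.length !== 4) {
    throw new Error(`not a valid IPv4 address: ${ip}`);
  }
  octets[3] = "0"; // drop the host-identifying portion
  return octets.join(".");
}

// Example: "203.0.113.57" becomes "203.0.113.0", which still narrows the user
// to a block of at most 256 addresses.
console.log(anonymizeIPv4("203.0.113.57"));
```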

Indeed, Google could, without violating the law, reveal a lot more about how it cooperates with the intelligence agencies — how many requests for information it receives, from what government entities, how many it complies with. “They could talk about all this, but they don’t,” says Bankston. “Google may not care a lot about your privacy, but they care a whole helluva lot about your perception of your privacy. To remind people of the risk of government access to your data is anathema.”

Research support for this article was provided by The Investigative Fund at The Nation Institute, now known as Type Investigations.