This Group Posed As Russian Trolls And Bought Political Ads On Google. It Was Easy.

Google says it's securing its ad platform against foreign meddlers, but for just $35 researchers posing as Russian trolls were able to run political ads without any hurdles.

In the summer of 2018, after months of public and legislator outcry over election interference, you might think it would be difficult for a Russian troll farm to purchase — with Russian currency, from a Russian zip code — racially and politically divisive ads through Google. And you might reasonably assume that if such a troll farm were able to do this, Google — which has said there is "no amount of interference that is acceptable" — would prevent it from successfully targeting those ads toward thousands of Americans on major news sites and YouTube channels.

But you’d be wrong.

Researchers from the advocacy group the Campaign for Accountability — which has frequently targeted Google with its “transparency project” investigations and has received funding from Google competitor Oracle — posed as Kremlin-linked trolls and successfully purchased divisive online ads using Google’s ad platform and targeted them toward Americans. In an attempt to trigger Google’s safeguards against such efforts, the researchers purchased the advertisements using the name and identifying details of the Internet Research Agency — a Kremlin-linked troll farm that’s been the subject of numerous congressional hearings. The advertisements appeared on the YouTube channels and websites of media brands like CNN, CBS This Morning, HuffPost, and the Daily Beast.

The Google ads units culled pictures from the original Russian-linked site, which looks like this:

The CFA then used language from the website to create the ads:

Despite assurances from Google last month that it has installed “robust systems” to “identify influence operations launched by foreign governments,” the company approved the CFA ads in less than 48 hours. The ads used language and images identical to those of Russia’s Internet Research Agency troll farm. The images had previously been identified by congressional investigators and major media outlets as part of the glut of Russian content used to sow political and racial discord during the 2016 presidential election in the United States. The organization also ran ads designed to direct users to sites identified by Congress as being run by Russian trolls.

All told, CFA spent just $35 on its test ads, which generated more than 20,000 impressions and some 200 click-throughs. Google never flagged them.

“I’m a little astounded something so flagrantly obvious could get through Google’s platform,” Daniel Stevens, the executive director for the Campaign for Accountability, told BuzzFeed News. “This should've been caught and it wasn’t and that’s a problem.”

In a statement to BuzzFeed News, Google declined to address any oversights in its AdWords platform directly, noting vaguely that it has "taken further appropriate action to upgrade our systems and processes." (According to a Google source, the CFA fake troll accounts reported by BuzzFeed News have been disabled by the company.) Instead, the company criticized the Campaign for Accountability and one of its reported backers — Oracle — for what it called "a stunt to impersonate Russian trolls." CFA declined to provide further information on its funding, noting, "This is only one minor project on which we work."

Here's Google's full statement:

We’ve built numerous controls, technical detection systems and a detailed mapping of foreign troll accounts. To date, largely because of this work, the abuse from foreign entities has been limited. Now that one of our US-based competitors is actively misrepresenting itself, as part of a stunt to impersonate Russian trolls, we have taken further appropriate action to upgrade our systems and processes. We’d encourage Oracle and its astroturf groups to work together with us to prevent real instances of foreign abuse — that’s how we work with other technology companies.

Reached for comment, Oracle's Senior Vice President Ken Glueck told BuzzFeed News it was unaware of CFA's research. “We have absolutely no idea what Google is talking about. This is the first we’ve heard of this. Wish we had a ruble for every time Google blamed their problems on us."

The experiment

To execute the ad buys, researchers from the Campaign for Accountability constructed fake online profiles explicitly designed to raise red flags inside Google’s AdWords program. Using burner phones purchased in South America, the organization created a Russian email account on the popular email client Yandex. Then, using a virtual private network, CFA changed its IP address to appear as if the account were based in St. Petersburg, the site of the Internet Research Agency.

Using the Russian IP address and email account, the organization created a Russian Google AdWords account. In an attempt to trigger Google’s safeguards, the organization registered the billing information to the Internet Research Agency, using what appeared to be the troll farm’s Russian taxpayer information, address, and other identifying information.

None of the information prompted scrutiny from Google, which approved the creation of the account. In fact, according to the researchers, once they entered information and text modeled after the Russian troll sites BlackMattersUS and Blacktivist, Google’s AdWords platform recommended its own image of a black woman crying to boost engagement.

The ad pointed to a 30-second YouTube video that CFA created, which was composed of Facebook memes from previously identified Kremlin-linked trolls. All the memes in the video were culled from a report by the House Intelligence Committee, which released the Russian ads. None of the content triggered scrutiny from YouTube.

The costs for the ads — which were targeted using the keywords “African American,” “politics,” and “scandals & investigations” (Google says it does not allow users to target based on race; however, content can be targeted using keywords like "African American") — were minor. According to CFA, the ad campaign, which cost just over $6, drove 5,787 impressions and 56 clicks to the YouTube video with the Russian troll memes. The ad also appeared on YouTube channels for CBS This Morning, CNN, HuffPost, and Essence.

“Running this campaign wasn’t rocket science,” Stevens said. “I’m not sure my grandmother could do it but anyone who wants to run ads could — any even remotely competent social media professional could do this in a heartbeat.”

CFA created two more campaigns that linked to troll sites, with similar success. The first AdWords buy directed users toward USAReally.com, a site that McClatchy recently reported was believed to be linked to Russian operatives. The campaign brought nearly 4,000 impressions, appearing on sites including AnnCoulter.com, the Young Turks, and the Daily Beast. CFA also ran an ad campaign directing users to the IRA’s BlackMattersUS site. Targeting with keywords like “brown matters” and “know your rights,” the researchers generated nearly 11,000 impressions.

Google’s safeguards fall short

Google’s inability to spot CFA's flagrant attempts to sow political discord via its massive ad platform stands in contrast with other recent successful efforts by social media platforms (including Google's work with FireEye in Iran) to stop interference. In recent weeks, Facebook flagged and pulled hundreds of pages and accounts originating from Russia and Iran for what it called “coordinated inauthentic behavior.” Twitter — and eventually YouTube — subsequently removed similar accounts as well.

Google, meanwhile, has publicly touted its abilities to safeguard its platforms from outside actors. In late August, it announced it had thwarted phishing emails from state actors and cracked down on nefarious activity coming from Iran. Additionally, the company has worked with its internal Jigsaw team to create a “Protect Your Election” hub designed to give journalists, candidates, campaigns, and others the tools to help prevent and spot “digital attacks.” And in May, the company announced new election ads rules and “greater advertising transparency” for political ads.

The political ads safeguard against foreign actors appeared to work — on June 28, CFA researchers attempted to run two overtly political ads using images of lawmakers like Ted Cruz. The campaigns were disapproved by Google for “misrepresentation.”

But the CFA’s latest experiment appears to illustrate that when it comes to less overtly political ads — the kind historically used by outside influence campaigns to sow political discord and racial and cultural tension — Google and its algorithms fall short. As the campaigns demonstrate, AdWords was unable to recognize previously reported state-sponsored content and websites. And, in some instances, it even assisted researchers in creating more effective ads to direct users to troll content.

“Google has admitted it’s trying to stop this activity when it comes to issue ads but it's clear there’s a huge gap in their policing of this content,” Stevens told BuzzFeed News. “It feels like a flagrant abdication of responsibility and it’s in-line with the trend we see from Google — they're very hesitant to crack down on things that are a threat to their business model.”

The security of Big Tech’s ad platforms will likely be the subject of testimony on Capitol Hill on Wednesday. But don’t expect much explanation from Google — though Facebook’s Sheryl Sandberg and Twitter's Jack Dorsey will appear to testify before the Senate Intelligence Committee, Google cofounder Larry Page has yet to accept the invitation to testify. In a written statement to the Senate Select Committee on Intelligence, released just hours before the hearing, Google executives noted that the company continues "to work to identify and remove actors from our products who mislead others regarding their identity, including the Internet Research Agency and other Russian- and Iranian-affiliated entities."
