If the 2016 election was proof-of-concept for platform-enabled election meddling, the 2018 midterms, just months away, are shaping up to be more of a large-scale clinical trial — and an absolute nightmare for Facebook.
For Facebook, which is still reeling from manipulation of its platform by fake news purveyors and Kremlin-linked trolls seeking to disrupt US politics, November's midterms are a chance to show ornery regulators and an increasingly distrustful public that it can effectively shut down malicious activity within its own platform. But they're a daunting challenge as well, one complicated by fast-evolving tools of misinformation, bad actors incentivized by Russian success, and a ferociously politicized environment — all contributing to a broad and ongoing pollution of the public discourse.
Despite Facebook's repeated reassurances, some on Capitol Hill fear the company — and country — may be sucker punched again come November. And a dysfunctional federal government waffling on plans to thwart future attacks on elections isn’t helping matters.
“Just as we were naive to not see a Russia attack coming last election, we would be just as naive if we expected only an attack from Russia this election,” Rep. Eric Swalwell, a member of the House Intelligence Committee, told BuzzFeed News. “There are other countries who are our adversaries who have similar capabilities who see our unwillingness to defend the democracy as blood in the water.”
Swalwell, whose attempt to create an independent commission to study Russia’s election manipulation did not go anywhere in the House, told BuzzFeed News that Facebook can’t easily anticipate all threats it’s facing without close cooperation from the federal government.
“It is hypocritical for me or anyone in government to even begin to suggest that I know what Facebook should do if we can’t get our act together and do something ourselves,” Swalwell said. He bemoaned National Security Agency director Adm. Mike Rogers’ recent declaration that he has yet to receive authority from the White House to disrupt Russian cyber attacks. “If we don’t change the dynamic here, this is going to continue and 2016 won’t be viewed as something isolated,” Rogers told Congress last week.
Across the political spectrum, there’s worry that an ineffective national security apparatus is putting Facebook in position to fail again. “There’s expectations that Facebook and the other platforms are supposed to solve problems that the government and national security apparatus missed and thus far haven’t engaged,” Zac Moffatt, digital director of Mitt Romney's 2012 presidential campaign, told BuzzFeed News. “Without the national apparatus, how could the platforms be successful?”
Meanwhile, concerns over the future weaponization of Facebook’s ad platform as a tool of micro-targeted voter suppression are growing. Jess Bahr, an advertising professional who has run local political advertising campaigns reaching millions of US voters, told BuzzFeed News that Facebook's ability to identify and deliver advertising to very specific audiences based on political affinity and Congressional district could easily be misused to suppress voting in battleground states. “I would be shocked if it didn’t happen,” she said. “Stopping the advertisement is only part of it. As soon as you put that information out there for people to share, Facebook loses control of it.”
Bryan DeSena, head of social at Saatchi & Saatchi, told BuzzFeed News that what we’re experiencing is already something of a worst case scenario. “It’s a terrible feeling to know that the industry you work in has laid the groundwork for this sort of messy spiral to begin,” he said. “I am not sure the platforms alone are responsible for stopping it. It’s a much larger issue, one that needs to impact our educational system, net neutrality legislation and other civic debates.”
Malicious foreign actors aren’t the only ones looking to exploit Facebook — mainstream politicians are getting in on it too, creating made-for-Facebook “news” sites that traffic in partisan and sometimes sensationalist messaging geared to spread on social platforms. In the months since the 2016 election, the Republican Governors Association, California Rep. Devin Nunes, ex-Hillary Clinton aide Peter Daou, and other politicians have sponsored these sites. Nunes’ site, the California Republican, is filled with headlines like “CNN busted for peddling fake news AGAIN!” and “New York Times is doing a GREAT job as Communism’s salesman.” It greets visitors with a pop-up asking them to like it on Facebook before they enter and features a “Like us at FB” banner that’s approximately twice the size of its own logo. The site’s only visible connection to Nunes is a tiny disclosure in its footer.
The Free Telegraph, a site sponsored by the RGA, also features prominent Facebook branding on its website and publishes native video to the social platform as well. Its Facebook page, followed by 31,000, promises to take readers “beyond the Beltway.” A recent quiz on the site poked at New Jersey’s wealthy Democratic governor. “Which one of Phil Murphy's Houses Are You?” it asked. There is no mention of the RGA link in its Facebook page About section. The Free Telegraph does disclose the connection at the bottom of its articles.
Democrats have tried their hand at creating their own news sites too. Last fall, Hillary Clinton endorsed Verrit, a made-for-social “news” site created by former aide Peter Daou. Verrit went quiet last month, promising to "reboot" in summer 2018.
The California Republican, Free Telegraph, Verrit, and other sites like them might not be peddling misinformation, but by using Facebook as an accelerant for partisan, inherently biased political commentary and passing it off as “news,” they are further muddying some already turbid waters. More troubling is the idea that mainstream politicians appear to be adopting tactics known to have sowed so much political discord during the 2016 election. “You can draw a direct line from the practices of fake news sites to the Republican Governors [Association's] strategy with these sites,” one angry senior Democratic operative told BuzzFeed News.
RGA communications director Jon Thompson disputed the notion that the Free Telegraph takes inspiration from the sensationalized news sites that have flourished on Facebook. “It's a project designed to help share information about how Republican governors are leading and getting results for their states,” he said.
With Facebook still struggling to contain misinformation via written stories, artificial intelligence breakthroughs are threatening to make 2016’s fake news epidemic look relatively benign, opening up fresh formats for bad actors to exploit. The breakthroughs are in video and audio, where new, accessible technology is on the cusp of giving people the tools to create realistic fake videos of people saying or doing just about anything. Already, faked videos of Barack Obama are making the rounds. And though they may not factor prominently in the 2018 midterms, it’s not hard to see around the corner, where these videos could give Facebook and other platforms a new, more devastating category of fake news to grapple with.
“This is one of the greatest threats we are about to face as a society,” Moffatt said. “It’s something we’re wholly unprepared for.”
Platform-propagated conspiracy theories will also factor in the 2018 midterms. Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism, told BuzzFeed News that he was particularly struck by the recent spread of conspiracy theories claiming that survivors of the school shooting in Parkland, Florida, were crisis actors. “The success that the conspiracy theorists had in pushing the debate away from gun control and onto crisis actors was unbelievable,” he said. Parkland, Albright said, “was a warm-up” for the 2018 midterms.
Facebook has promised fixes to the vulnerabilities it knows about. It will soon publicly display all the ads that run on its service, bringing so-called dark ads into the light. It’s forcing political advertisers to be more transparent, creating special labeling to make them stand out in News Feed. It's also in the process of adding 10,000 people, including reviewers and security experts, to deal with these threats. And, according to a Facebook spokesperson, it's working to proactively block the spread of fake news and creation of fake accounts earlier in the process.
“Facebook fully understands — and takes seriously — our role and responsibility when it comes to helping protect elections integrity on our platform," Samidh Chakrabarti, Facebook's product manager for elections integrity and civic engagement, told BuzzFeed News in an emailed statement. "People around the world use Facebook to make real, meaningful connections, and we have to do all we can to prevent abuse and misuse of our services. We give this considerable thought, learning lessons from the past while looking for new threats. We are making significant investments, both in products and in people, to further develop a comprehensive approach to tackle these important issues.”
Still, the company’s thus far uneven response to the 2016 crisis calls into question its ability to effectively mitigate the next one. Facebook has promised to tackle fake news for years, for instance, yet it’s still spreading on its platform. And despite the company’s declarations that it’s taking the problem seriously, there are still signs of denial inside its ranks. Last month, the company’s ads VP Rob Goldman publicly contradicted special counsel Robert Mueller's findings, saying that the Russians' primary goal was not to sway the election, while Mueller said it was. The true story, Goldman indicated, wasn’t being reported “because it doesn’t align with the main media narrative of Tump [sic] and the election."
Facebook's sprawling and continuously evolving platform makes crisis mitigation difficult, as urgent attention to one problem often allows new problems to emerge where the company is less vigilant. For instance, as the company pushed its Groups product hard in recent years, Kremlin-linked trolls took notice, building groups of their own and weaponizing them. And as Facebook focuses on its core product, digital malfeasance is spreading on its satellite apps. Hoaxes have thrived on WhatsApp in India, for instance, where misinformation spreading on the app has created panic and even led to a death. And on Messenger, child porn has gone viral. These products aren’t immune to political exploitation, either.
“Empowering anyone to share thoughts and ideas with the world will, of course, lead to things we do not like,” one former Facebook employee told BuzzFeed News. “These systems can be improved, but I'm not sure the problem can be ‘solved.’ A lot of this stuff is inherently going to exist in a user-generated content platform. Do we want more voices and the warts that come with it? Or less voices and less agency for the common person? It's an ideological trade-off that we must wrestle with as a society.”
But wrestling with such ideological questions while doing damage control on a platform that became an engine of discord during the last election is difficult. And it's even more difficult when you're doing it just a few months ahead of another crucial election. The timing is particularly inopportune. And even the top levels of Facebook aren’t under the illusion that they’re currently in position to stop what’s coming. As Sheryl Sandberg put it last Friday: “We’re definitely playing catch-up.” ●