There Was No Midterm Misinformation Crisis Because We've Democratized Propaganda

Years of algorithmically powered information warfare have already rewired our political discourse.

Despite months of foreboding warnings, transparency reports, and highly touted tech company “war rooms,” Election Day 2018 on the internet felt surprisingly quiet. According to an early accounting by the Department of Homeland Security, there was no sign of a viable infrastructure attack and no reports of coordinated hacking campaigns. The midterms weren’t thrown into tumult by a massive, hacked information dump or a believable deepfake or a viral disenfranchising meme. The gravest threat seemed to be familiar false news narratives and clumsy, easily debunkable hoaxes. The great viral misinformation epidemic feared by many — as far as we know right now — never came to pass.

And so today, platforms like Facebook, Twitter, and YouTube can breathe a sigh of relief, having, as one columnist put it, “won this round.” Perhaps. But Election Day is a single data point in a long electoral calendar. And while there was no catastrophic online event, the lead-up to the 2018 midterms proved that online platform manipulation — and our fear of it — has firmly embedded itself into our national politics.

The fake news apocalypse we feared for the midterms already happened — back in 2016. We’re now living in its aftermath. And the misinformation, propaganda, and hyper-partisan news that have defined this election news cycle reveal an unsettling truth: that years of algorithmically powered information warfare have drastically rewired our political discourse, turning it ever more toxic and blurring the lines of reality.

There’s mounting evidence that our increased reliance on online platforms for news and political debate has altered many people’s perceptions. “Our political conversations are now happening on an infrastructure built for viral advertising, on platforms that are purpose-built to generate engagement and amplify sensational content,” computational propaganda researcher Renee DiResta told BuzzFeed News. “A lot of the norms that exist in the real world — the way people talk to each other, the recognition that we are still talking to other human beings even if we disagree — they aren’t present on social networks.”

Since the 2016 elections, the feedback loop between obscure partisan internet communities and mainstream politicians has intensified and quickened. In the early days of the Trump administration, White House communications occasionally mined communities like Reddit’s /r/The_Donald forum for lowest-common-denominator viral memes to broadcast via the president’s Twitter feed. Today, the pathway from the online fringes to the mainstream is clear and powerful — a pipeline where the phrase “jobs not mobs” can go from an obscure viral tweet to a full-fledged party slogan in a matter of hours. The cycle is just as efficient in reverse, especially on the right, where Trump’s Twitter feed serves as an assignment editor for both a loyal press and the traditional media ecosystem, funneling catchphrases like “the media is the enemy of the people” into the cultural lexicon.

This polarized environment is a rich petri dish for increasingly sophisticated hyper-partisan operatives. For Geoff Golberg, a researcher who tracks political misinformation on Twitter, that means it’s getting harder to tell what’s authentic and what’s not. “People get hung up on bots, but it’s so much more than that,” Golberg told BuzzFeed News. “There’s all kinds of inauthentic accounts from automated spammers to sock puppets to human-run accounts that misrepresent themselves.” But it’s not just inauthentic accounts. Golberg’s network analyses of Twitter data reveal that, among many far-right and far-left influencer accounts, human and nonhuman users constantly interact with each other, often in hostile exchanges. And as inauthentic accounts grow more convincingly human, there’s evidence that human accounts have begun to adopt the mannerisms of automated Twitter users. “Real people are becoming more botlike, both in tweeting behavior and the way their profiles look, which only adds to the confusion,” Golberg said.
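To make the idea concrete, here is a minimal, purely illustrative sketch of what an interaction-graph analysis in the spirit of Golberg’s work might look like, assuming tweets arrive as simple records with author and mentions fields. The data format, the helper names, and the posting-rate threshold are all hypothetical assumptions for illustration; this is not his actual data or methodology.

```python
# Illustrative sketch only: field names and thresholds are hypothetical,
# not Golberg's actual methodology.
from collections import Counter

import networkx as nx


def build_interaction_graph(tweets):
    """Directed graph where an edge A -> B means account A mentioned B."""
    g = nx.DiGraph()
    for tweet in tweets:
        author = tweet["author"]
        for target in tweet.get("mentions", []):
            if g.has_edge(author, target):
                g[author][target]["weight"] += 1  # repeat interactions
            else:
                g.add_edge(author, target, weight=1)
    return g


def flag_botlike_authors(tweets, days_in_sample, max_daily_rate=100):
    """Crude automation heuristic: flag accounts averaging more than
    max_daily_rate tweets per day over the sampling window."""
    counts = Counter(tweet["author"] for tweet in tweets)
    return {a for a, n in counts.items() if n / days_in_sample > max_daily_rate}


# Toy data standing in for a collected one-day sample.
sample = (
    [{"author": "suspect_9", "mentions": ["human_1"]}] * 400
    + [{"author": "human_1", "mentions": ["suspect_9"]}] * 5
)

graph = build_interaction_graph(sample)
suspects = flag_botlike_authors(sample, days_in_sample=1)

# Human-run accounts that engage with suspected automated ones.
humans_near_bots = {
    n for s in suspects if s in graph for n in graph.predecessors(s)
} - suspects
print(suspects, humans_near_bots)  # {'suspect_9'} {'human_1'}
```

Crossing the two outputs — accounts flagged as botlike against the accounts that engage with them — is one crude way to surface the kind of constant human-bot interaction Golberg describes.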

Here’s what it looks like when others continue to do Twitter’s work Meet the Iranian American Community of (fill in the ____ State) Astroturf groups are all over Twitter (across the political spectrum/world & extending far beyond politics) Twitter counts these accounts as MAUs https://t.co/j2oRJgJJ4v

In October, Facebook took down hundreds of domestic pages for participating in “coordinated inauthentic activity,” including the hyper-partisan behemoth Right Wing News. Moves like that show that Facebook is grappling with its most manipulative propagandists. But they are also a reminder of the breadth and depth of the platform’s misinformation problem. As professor and disinformation researcher Jonathan Albright reported this week, the newly purged Right Wing News page wasn’t just big on Facebook; it was among the biggest pages on the internet. According to his analysis, the hyper-partisan page amassed over 1 billion interactions and “reported more engagement on Facebook over the past five years than the New York Times, The Washington Post, and Breitbart...combined.”

Even if pages like Right Wing News gamed Facebook to inflate their engagement numbers (as Albright suggests), the reach and influence of these deleted hyper-partisan pages (Facebook also purged the left-wing page, the Resistance, which had 240,000 followers) is significant. “It’s all a game,” Albright told BuzzFeed News. “It’s not necessarily all fake, but the design of Facebook’s platform is like a game in early beta that’s been running on a ‘real world’ mod.”

Meanwhile, organized misinformation efforts are increasingly moving behind closed doors. After the 2016 elections, Facebook CEO Mark Zuckerberg announced a recalibrated mission to “give people the power to build community and bring the world closer together” via a renewed focus on Groups. In theory, Groups would “give us that sense that we are part of something bigger than ourselves, that we are not alone, that we have something better ahead to work for.” In practice, it made organizing Pizzagate conspiracy groups as easy as starting a book club. As Albright recently observed, a number of coordinated influence groups that were once public have moved into private Facebook Groups.

“It’s the perfect storm,” Albright said of the “shadow organizing” communities. His analysis of the popular, baseless “Soros-funded caravan” meme shows that the earliest examples were found only in Facebook Groups, meaning that “sources of misinformation and origins of conspiracy seeding efforts on Facebook are becoming invisible to the public.” As a result, trolls and propagandists have more freedom to incubate and test narratives, making them stickier and more persuasive.

Meanwhile, some of the internet’s most malignant rhetoric has been spilling out into the real world with terrifying results. In late October, dozens of pro-Trump memes showed up plastered on the van of the man suspected of sending mail bombs to prominent Democrats, including the Clintons, Obamas, and George Soros. Mass shooters have left detailed internet histories documenting their online radicalization via the rank estuary of targeted harassment, bigotry, and polarization that’s developed online. Emboldened protesters and extremists are “stepping off the internet,” taking arguments and ideologies honed online to city streets and real-world clashes that are often violent and, in the case of Charlottesville, deadly.

The barrier between the online and physical worlds has grown porous. The lines are blurring — and not just between ideologies, but between truth and fiction. “One fascinating trend I’ve seen within the QAnon community, and with people who distrust the mainstream media generally, is the acceptance of disinformation as something that is good and useful,” Travis View, a researcher who follows the QAnon community closely, told BuzzFeed News. “Q believers expect to be lied to and to be used to spread lies. ... Many pro-QAnon and pro-Trump people don’t view social media as a debate platform, but as a battlefield for a war. And in war, disinformation is necessary for victory.”

This is very different from filter bubbles and siloed online communities stewing in their own realities. It’s not delusion, but something more insidious. “Increasingly, there’s a more fundamental disagreement going on,” View said. “Which is whether or not facts and falsehoods on social media matter.”

Our public square is being redefined by platforms that ruthlessly prioritize and reward sensationalism. The soapbox has been replaced by an advertising system — one that rewards our least dignified impulses and empowers anyone willing to embrace them. The result is a transformation of political discourse and the establishment of a new, insidious vernacular — of division, of deceit, of victory at all costs. And since there appears to be little desire to rethink the incentives that govern these platforms (attention!), it’s not hard to see the last two years as a trial run for the next cycle.

“We’ve democratized propaganda, made gaming distribution the key skill required to reach and influence people,” DiResta said of the realignment. “We have a powerful, still-young infrastructure for speech and persuasion, and I don’t think we’ve adapted yet.”
