Why Can Everyone Spot Fake News But YouTube, Facebook, And Google?

And beyond that, why is it — after multiple national tragedies politicized by hoaxes and misinformation — that such a question even needs to be asked?

In the first hours after last October's mass shooting in Las Vegas, my colleague Ryan Broderick noticed something peculiar: Google search queries for a man initially (and falsely) identified as a victim of the shooting were returning Google News links to hoaxes created on 4chan, a notorious message board whose members were working openly to politicize the tragedy. Two hours later, he found posts going viral on Facebook falsely claiming the shooter was a member of the self-described "antifa." An hour or so after that, a cursory YouTube search returned a handful of similarly minded conspiracy videos — all of them claiming crisis actors were posing as shooting victims to score political points. Each time, Broderick tweeted his findings.

Also, apparently Google is putting 4chan threads in their top story unit now? So, the number one hit for his name i… https://t.co/CuQx4w7dhn

Over the next two days, journalists and misinformation researchers uncovered and tweeted still more examples of fake news and conspiracy theories propagating in the aftermath of the tragedy. The New York Times' John Herrman found pages of conspiratorial YouTube videos with hundreds of thousands of views, many of them highly ranked in search returns. Cale Weissman at Fast Company noticed that Facebook's crisis response page was surfacing news stories from alt-right blogs and sites like End Time Headlines rife with false information. I tracked how YouTube’s recommendation engine allows users to stumble down an algorithm-powered conspiracy video rabbit hole. In each instance, the journalists reported their findings to the platforms. And in each instance, the platforms apologized, claimed they were unaware of the content, promised to improve, and removed it.

This cycle — in which journalists, researchers, and others spot hoaxes and fake news with the simplest of search queries long before the platforms themselves do — repeats itself after every major mass shooting and tragedy. Just a few hours after news broke of the mass shooting in Sutherland Springs, Texas, Justin Hendrix, a researcher and executive director of NYC Media Lab, spotted search results inside Google's "Popular on Twitter" widget rife with misinformation. Shortly after an Amtrak train crash involving GOP lawmakers in January, the Daily Beast's Ben Collins quickly checked Facebook and discovered a trove of conspiracy theories inside Facebook's trending news section, which is prominently positioned to be seen by millions of users.

Google's 'Popular On Twitter' news feature is a misinformation gutter. Search for Devin Patrick Kelley just now sur… https://t.co/8YxgZjljlv

By the time the Parkland school shooting occurred, the platforms had apologized for missteps during a national breaking news event three times in four months, in each instance promising to do better. But at their next opportunity to do better, they failed again. In the aftermath of the Parkland school shooting, journalists and researchers on Twitter were the first to spot dozens of hoaxes, trolls impersonating journalists, and viral Facebook posts and top "trending" YouTube videos smearing the victims and claiming they were crisis actors. In each instance, these individuals surfaced this content — most of which is a clear violation of the platforms' rules — well before YouTube, Facebook, and Twitter. The New York Times' Kevin Roose summed up the dynamic recently on Twitter, noting, "Half the job of being a tech reporter in 2018 is doing pro bono content moderation for giant companies."

Among those who pay close attention to big technology platforms and misinformation, the frustration over the platforms' repeated failures to do something that any remotely savvy news consumer can do with minimal effort is palpable: Despite countless articles, emails with links to violating content, and viral tweets, nothing changes. The tactics of YouTube shock jocks and Facebook conspiracy theorists hardly differ from those of their analog predecessors; crisis actor posts and videos, for example, have been a staple of misinformation peddlers for years.

This isn't some new phenomenon. Still, the platforms are proving themselves incompetent at addressing it — over and over and over again. In many cases, they appear surprised that such content sits on their sites at all. And even their public relations responses suggest they've been caught off guard, with no plan in place for messaging when they slip up.

To give you an idea how ill-equipped Facebook and Google were at handling this issue yesterday: I got two conflicti… https://t.co/FhlMkd1NRf

All of this raises a mind-bendingly simple question that YouTube, Google, Twitter, and Facebook have not yet answered: How is it that the average untrained human can do something that multibillion-dollar technology companies that pride themselves on innovation cannot? And beyond that, why is it that — after multiple national tragedies politicized by malicious hoaxes and misinformation — such a question even needs to be asked?

The task of moderating platforms as massive as Facebook, Google, and YouTube is dizzyingly complex. Hundreds of hours of video are uploaded to YouTube every minute; Facebook has 2 billion users and tens of millions of groups and pages to wrangle. Moderation is fraught with justifiable concerns over free speech and bias. The sheer breadth of malignant content on these platforms is daunting — foreign-sponsored ads and fake news on Facebook; rampant harassment on Twitter; child exploitation videos masquerading as family content on YouTube. The problem the platforms face is a tough one — a Gordian knot of engineering, policy, and even philosophical questions few have good answers to.

But while the platforms like to conflate these existential moderation problems with breaking-news, incident-specific ones, in reality they're not the same. The search queries that Broderick and others use to uncover event-specific misinformation that the platforms have so far failed to mitigate are absurdly simple — often they require nothing more than searching the full name of the shooter or the victims.

In battling misinformation, the big tech platforms face a steep uphill climb. And yet it's hard to imagine any companies or institutions better positioned to fight it. The Googles and Facebooks of the world are wildly profitable and employ some of the smartest minds and best engineering talent in the world. They're known for investing in expensive, crazy-sounding utopian ideas. Google has an employee whose title is Captain of Moonshots — he is helping teach cars to drive themselves — and succeeding!

Look, of course Google and Facebook and Twitter can't monitor all of the content on their platforms posted by their billions of users. Nor does anyone really expect them to. But policing what's taking off and trending as it relates to the news of the day is another matter. Clearly, it can be done because people are already doing it.

So why, then, can't these platforms find what an unaffiliated group of journalists, researchers, and concerned citizens manage to find with a laptop and a few visits to 4chan? Perhaps it's because the problem is more complicated than nonemployees can understand — and that's often the line the companies use. Reached for comment, Facebook reiterated that it relies on human and machine moderation as well as user reporting, and noted that moderation is nuanced and judging context is difficult. Twitter explained that it too relies on user reports and technology to enforce its rules, noting that because of its scale "context is crucial" and it errs on the side of protecting people's voices. And YouTube also noted that it uses machine learning to flag possibly violative content for human review; it said it doesn't hire humans to "find" such content because they aren't effective at scale.

The companies ask that we take them at their word: We're trying, but this is hard — we can't fix this overnight. OK, we get it. But if the tech giants aren't finding the same misinformation that observers armed with nothing more sophisticated than a search bar are turning up in the aftermath of these events, there's really only one explanation: if they can't see it, they aren't truly looking.

How hard would it be, for example, to have a team in place reserved exclusively for large-scale breaking news events to do what outside observers have been doing: scan and monitor the platforms' top searches and trending modules for clearly misleading, conspiratorial content?
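To make the suggestion concrete, here is a minimal sketch, in Python, of what such a breaking-news monitoring pass might look like. Everything in it is hypothetical: fetch_top_results stands in for whatever internal search or trending endpoint a platform team would actually query, and the phrase and domain lists are illustrative placeholders rather than a vetted blocklist. The point is simply that the first-pass check outside observers have been running by hand is easy to automate and route to human reviewers.

```python
"""A minimal, hypothetical sketch of a breaking-news monitoring pass.

Nothing here is a real platform API; `fetch_top_results` is a placeholder
for whatever internal search/trending endpoint a platform team would query.
"""
from dataclasses import dataclass
from typing import Callable, Iterable, List

# Phrases outside observers repeatedly flagged after these events.
SUSPECT_PHRASES = ("crisis actor", "false flag", "hoax")

# Domains tied to event-specific hoaxes in the article's examples
# (illustrative only, not a vetted blocklist).
SUSPECT_DOMAINS = ("endtimeheadlines.org", "4chan.org")


@dataclass
class Result:
    title: str
    url: str
    module: str  # e.g. "top_search", "trending", "crisis_response"


def looks_suspect(result: Result) -> bool:
    """Cheap first-pass check: does a top-ranked result match known hoax signals?"""
    title = result.title.lower()
    return any(p in title for p in SUSPECT_PHRASES) or any(
        d in result.url for d in SUSPECT_DOMAINS
    )


def monitor_event(
    event_terms: Iterable[str],
    fetch_top_results: Callable[[str], List[Result]],
) -> List[Result]:
    """Run the same simple queries observers run (shooter and victim names)
    and return anything in top/trending modules that a human should review."""
    flagged = []
    for term in event_terms:
        for result in fetch_top_results(term):
            if looks_suspect(result):
                flagged.append(result)
    return flagged


if __name__ == "__main__":
    # Stand-in fetcher with canned data, just to show the shape of the check.
    def fake_fetch(term: str) -> List[Result]:
        return [
            Result(f"{term} was a crisis actor", "https://example-hoax-blog.test/post", "trending"),
            Result(f"What we know about {term}", "https://example-news.test/story", "top_search"),
        ]

    for r in monitor_event(["Full Name Of Shooter"], fake_fetch):
        print("flag for human review:", r.module, r.url)
```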

It’s not a foolproof solution. But it’s something.

Got a tip? You can contact me at charlie.warzel@buzzfeed.com. You can reach me securely at cwarzel@protonmail.com or through BuzzFeed's confidential tipline, tips@buzzfeed.com. PGP fingerprint: B077 0E9F B742 ED17 B4EF 0CED 72A9 85C4 6203 F09C.

And if you want to read more about the future of the internet's information wars, subscribe to Infowarzel, a BuzzFeed News newsletter by the author of this piece, Charlie Warzel.

UPDATE

This post has been updated with responses from Twitter and YouTube.

