Facebook Must Either Innovate Or Admit Defeat At The Hands Of Fake News Hoaxsters

The company must either innovate or admit defeat at the hands of fake news hoaxsters and hyper-partisan click farmers.

It's taken nearly two years, but this week clearly showed how Facebook's approach to battling fake news has failed.

Only a few days after the company laid off the human editors who managed its Trending Topics and related news articles, a false story about Fox News host Megyn Kelly became a top Trending Topic on the platform for nearly 24 hours. Facebook's algorithm saw the story was being widely shared and talked about, so it added it to the trending sidebar, thereby promoting the false story to potentially millions more people.

It's not a surprise that a false story was trending in the first place. A research project I conducted in 2014 found that false rumors and hoaxes attract more engagement on Facebook than the related debunkings. This is particularly true of fake news items, which are hoaxes published by websites that look like real news sites but exist to fool people. The data showed that fake news could drive massive numbers of likes, shares, and comments on Facebook, while attempts at debunking it received a fraction of the engagement. (Remember that Facebook is now the biggest driver of traffic to news websites.)

For example, this October 2014 hoax claiming the Earth would experience six days of darkness had racked up nearly 900,000 likes, shares, and comments on Facebook. The combined Facebook interactions on seven different debunkings from places such as Snopes and the Huffington Post amounted to a little more than 136,000, less than a sixth of the hoax's total.

Nearly two years later, fake news sites continue to flourish on Facebook. Just look at the two Canadian teenagers I found who are making thousands of dollars from hoaxes about Justin Trudeau. Or this coordinated effort to seed fake articles about terrorist attacks in Facebook groups in order to drive people to malicious websites.

Fake news sites are now also joined by a new breed of hyper-partisan websites and Facebook pages that generate huge engagement on Facebook with content that's often false or deeply misleading. One of these sites, EndingTheFed.com, published the false Megyn Kelly article that Facebook made a Trending Topic.

All of this comes close to two years after Facebook announced it would enable people to flag content in their News Feed that was false or deliberately misleading. The goal was to blunt the impact of fake news. It failed.

Then, yesterday in Italy, a student asked Mark Zuckerberg if Facebook sees itself as an editor when it comes to news on its platform. "No, we are a tech company, not a media company," he said.

These three things — reliance on users to flag fake news, firing Trending Topics editors in favor of an algorithm, and Zuckerberg's insistence that Facebook is not a media company — together show how Facebook has been caught in a fake news trap of its own making.

The company is faced with a choice to either innovate or admit defeat at the hands of fake news hoaxsters and hyper-partisan click farmers.

Here's why: Facebook insists it's not a media company, which means it does not want editors making calls about what is and isn't news, or choosing which sources to highlight. So goodbye, human editors. People at Facebook have also told me they do not want to blacklist even the worst of the fake news websites, since that, in their view, is akin to editorial oversight and censorship. (The company is probably even more wary of blacklists after its Trending Topics editors were accused of suppressing conservative websites.) Meanwhile, Facebook is fine with users flagging fake content, but there's no evidence people are doing this to any useful degree — and in fact this feature can easily be used as a weapon to silence people.

All of this means Facebook's only option — and clear preference — is to develop an algorithm that uses signals gathered on its platform to determine which topics should be labeled Trending, and which articles within those topics should be highlighted. At the same time, that algorithm has to identify and weed out fake, false, or defamatory topics and stories.

The flaw in this approach is Facebook itself. The algorithm relies heavily on signals such as the likes, shares, comments, and reading time a posted article gets — and when it comes to news, Facebook is incredibly biased.
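To make the problem concrete, here is a minimal sketch of what an engagement-only trending score looks like. The signal names, weights, and numbers are my own illustrative assumptions, not Facebook's actual formula; the point is structural. Nothing in the inputs measures accuracy:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    likes: int
    shares: int
    comments: int
    avg_read_seconds: float

def trending_score(story: Story) -> float:
    """Engagement-only score with made-up weights. Note there is no
    input for accuracy: a viral hoax and a well-reported story are
    indistinguishable to this function."""
    return (
        1.0 * story.likes
        + 3.0 * story.shares            # shares spread content furthest
        + 2.0 * story.comments          # outrage drives comments, too
        + 0.5 * story.avg_read_seconds
    )

# Engagement numbers loosely echo the six-days-of-darkness example above.
hoax = Story("Earth to go dark for six days", 500_000, 250_000, 150_000, 45.0)
debunk = Story("No, the Earth will not go dark", 90_000, 30_000, 16_000, 60.0)

print(trending_score(hoax) > trending_score(debunk))  # True: the hoax trends
```

Run it and the hoax wins every time, because the only thing the function can see is how much people engaged.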

The false story about Kelly is a perfect example. Ending the Fed publishes highly partisan content that is often totally false or misleading. But it's precisely because of how partisan it is, and because of the way it writes headlines and packages its content for Facebook, that it gets huge engagement on the platform.

False content often gives off great signals on Facebook.

John Herrman wrote about the rise of these hyper-partisan political websites and Facebook pages for the New York Times Magazine. They all take advantage of the fact that people are more likely to positively engage with and share information that aligns with their beliefs. Echoing what the 16-year-old kid running a profitable fake news operation told me, these sites tell people what they want to hear. Or, more accurately, they aggressively and divisively cater to existing beliefs and keep people in a filter bubble, often thanks to information that is false or incredibly misleading.

This kind of content quite literally makes people feel good when they read it because it reinforces what they believe. And so they like it, and share it, and email it to friends, and post enthusiastic comments about it on Facebook. Then their friends with the same beliefs do the same.

Any Facebook algorithm looking for news stories with strong engagement signals is going to surface these stories, and they are going to look great. The algorithm might check how many other sites have published a story about the same thing — multiple sources, right? — and it will find lots of other articles, because hyper-partisan sites and fake news sites constantly republish (or steal) each other's content. There's a good chance the copycat stories are doing well on Facebook, too.
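As a sketch of why that check fails, consider a toy "corroboration" count built on headline similarity. The domains and headlines below are hypothetical, and the matching logic is deliberately crude:

```python
def count_corroborating_sources(headline: str, corpus: dict[str, str]) -> int:
    """Toy corroboration check: counts domains carrying a near-identical
    headline, using crude word overlap. Copycat republication inflates
    the count without adding any independent verification."""
    words = set(headline.lower().split())
    hits = 0
    for other_headline in corpus.values():
        overlap = words & set(other_headline.lower().split())
        if len(overlap) / max(len(words), 1) > 0.8:  # crude similarity bar
            hits += 1
    return hits

# Three "sources," all copies of the same false story.
copycats = {
    "site-a.example": "Megyn Kelly fired from Fox News",
    "site-b.example": "BREAKING: Megyn Kelly fired from Fox News",
    "site-c.example": "Megyn Kelly fired from Fox News, insiders say",
}
print(count_corroborating_sources("Megyn Kelly fired from Fox News", copycats))  # 3
```

Three domains agree and the story looks corroborated, yet every "source" is a copy of the same false article; the count measures republication, not verification.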

That's how total bullshit gets to the top of Trending Topics, courtesy of an algorithm. And why it will probably happen again and again.

Facebook's algorithm-only approach inevitably falls victim to the biases of the platform, which themselves are a reflection of our own human biases. It's ironic that one solution is to apply more humans to the problem; after all, the Trending Topics editors had proven fairly effective at keeping fake stories off the sidebar.

At this point, it's possible that Facebook may decide Trending Topics isn't worth the effort and controversy and will kill the product.

There is, however, one alternative, more positive scenario I could see unfolding as a result of Facebook's doubling down on the Trending Topics algorithm.

If the company is truly committed to offering a quality Trending Topics (and News Feed) experience, then its only option is to make massive strides in the detection and analysis of the factual quality of news articles. Developing what would likely be the world's first algorithm to do this job with accuracy and consistency will require significant engineering resources. But that is what's necessary to actually stop Facebook from being the world's biggest platform for false and fake news — and to get there without editors. Right now Facebook has no editors, a flawed algorithm, and a weak product.
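Mechanically, one way to picture that end state is a ranking function that gates engagement on an independent credibility estimate instead of treating engagement as the whole signal. The sketch below is a thought experiment, not a claim about how Facebook would build it; the `credibility` input stands in for the hard, unsolved part, an automated factual analysis of an arbitrary article:

```python
def gated_trending_score(engagement: float, credibility: float,
                         threshold: float = 0.5) -> float:
    """Thought experiment, not Facebook's method: engagement only counts
    once an article clears a minimum credibility bar, and is discounted
    below full credibility. `credibility` in [0, 1] is assumed to come
    from an upstream factual-analysis model, exactly the component that
    does not yet exist and would demand the heavy engineering."""
    if credibility < threshold:
        return 0.0                   # never trends, however viral it is
    return engagement * credibility  # damp borderline stories

print(gated_trending_score(engagement=900_000, credibility=0.10))  # 0.0
print(gated_trending_score(engagement=136_000, credibility=0.95))  # 129200.0
```

The design choice is that credibility acts as a gate rather than just another weighted signal: a story that flunks the factual check never trends, no matter how viral it gets.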

The past two years have seen Facebook articulate what it will and won't do when it comes to fake news. The question now is whether it actually cares enough to make its approach work.
