Facebook Knows That Adding Labels To Trump’s False Claims Does Little To Stop Their Spread

Internal data shows that labels on President Trump’s posts decrease reshares by about 8%. His posts nonetheless rank among the most-engaged content on the platform.

The labels Facebook has been putting on false election posts from President Donald Trump have failed to slow their spread across the platform, according to internal data seen by BuzzFeed News.

In the aftermath of the 2020 US presidential election, Trump has repeatedly spread false information questioning President-elect Joe Biden’s victory — and been rewarded with massive engagement on Facebook. The company has attempted to temper this by adding labels to the false claims directing people to accurate information about the election and its results.

But this has done little to prevent Trump’s false claims from going viral on Facebook, according to discussions on internal company message boards. After an employee asked last week whether Facebook has any data about the effectiveness of the labels, a data scientist revealed that the labels, referred to internally as “informs,” do very little to stop the posts from being shared.

“We have evidence that applying these informs to posts decreases their reshares by ~8%,” the data scientist said. “However given that Trump has SO many shares on any given post, the decrease is not going to change shares by orders of magnitude.”

The data scientist noted that adding the labels was not expected to reduce the spread of false content. Instead, they are used “to provide factual information in context to the post.”

"Ahead of this election we developed informational labels, which we applied to candidate posts with a goal of connecting people with reliable sources about the election," Facebook spokesperson Liz Bourgeois said in a statement, adding that labels were "just one piece of our larger election integrity efforts."


Discussions within Facebook about the efficacy of fact-check labels reveal the extent to which the world’s largest internet platforms have struggled to handle the unprecedented flood of lies from the outgoing president. Ahead of the election, both Facebook and Twitter clarified their content policies and practices, notifying the public that they would append labels to misleading posts to point people to more accurate information about the race.

Twitter has been more aggressive in limiting the spread of misleading election information, and in some cases prevented Trump’s tweets from being liked or retweeted. Last week, the company said it had labeled about 300,000 tweets for misleading information about the election, while restricting more than 450 from being liked or retweeted.

“We saw an estimated 29% decrease in Quote Tweets of these labeled Tweets due in part to a prompt that warned people prior to sharing,” the company wrote in a blog post, referring to a practice in which a user shares a tweet while adding their own commentary on top.

Facebook, on the other hand, did not implement measures to prevent users from engaging with Trump’s election-related posts. Even with a label, people were still able to share or like his posts.

Earlier this year, Facebook took down a post from the president, but only because it violated the company’s rules around COVID-19 misinformation.

“We have a responsibility to help maintain the integrity of elections, to clear up confusion, and to provide credible, authoritative information when we can,” Facebook CEO Mark Zuckerberg told employees during a companywide meeting on Oct. 15. While he discussed the use of labels during that talk, he made no mention of efforts to limit the spread of Trump’s election misinformation.

The 8% decrease in shares from an election label compares poorly with a similar Facebook effort to add context to false content. In 2017, the company claimed that the spread of a piece of content dropped by 80% once a fact-checker had labeled it false. For false election content from politicians, Facebook applies labels but does not reduce the content’s reach.

Facebook does not allow its fact-checking partners to evaluate content from politicians like Trump. Instead, it created a set of labels meant to refer people to credible election information rather than issue a direct fact-check.

[Image: A Facebook post from President Trump falsely claiming “I won the election!”]

The labels did little to deter Trump or slow the spread of his disinformation. On Sunday night and Monday morning, Trump twice posted, “I won the Election!” The two false posts attracted more than 1.7 million reactions, 350,000 comments, and 90,000 shares in total.

Those posts, along with another from Trump on Sunday that doubted the election outcome, accounted for the three most-engaged posts on all of Facebook in the past 24 hours, according to CrowdTangle, an analytics platform owned by Facebook.

Trump’s posts, and Facebook’s decision to leave them online, sparked public criticism and alarm among Facebook employees.

“Is there any indication that the ‘this post might not be true’ flags have continued to be effective at all in slowing misinformation spread?” asked a Facebook employee on the company’s internal message board. “I have a feeling people have quickly learned to ignore these flags at this point. Are we limiting reach of these posts at all or just hoping that people will do it organically?”

One employee pointed to the high number of shares on one of Trump’s posts falsely claiming he had won, and said “it doesn’t feel like people are being deterred all that much by our mild dosage of context.”

“The fact that we refuse to hold accounts with millions of followers to higher standards than everyone else (and often they get lower standards) is one of the most upsetting things about working here,” said another employee.

A Facebook researcher working on civic integrity said the company isn’t able to measure how people react to the labels, and pointed to the data scientist’s finding about their negligible effect on shares. The researcher also said that the company had no other options, given its policy of not fact-checking politicians.

“Will also flag that given company policy around not fact-checking politicians the alternative is nothing currently,” they said.

UPDATE: Added more information about Facebook’s fact-checking labels and the downranking of false content.

