On Thursday evening, Cyrus Massoumi published a post on his conservative news website, Mr. Conservative, that he knew could cause trouble for his business. The post, written as an open letter to his readers, warned that Facebook could censor conservative-leaning news as a result of its efforts to remove misinformation from the platform.
Massoumi shared it with the more than 2 million fans of his Facebook page, and within minutes his worst fears seemed to come true. Traffic began plummeting as referrals from his Facebook page fell off sharply. He believed Facebook was taking action against his page.
“This is an algorithmic execution,” he wrote in a Skype message to BuzzFeed News at the time.
Then he saw that three earlier posts on his Facebook page had disappeared without warning or explanation.
“The way to censor that article if you specifically wanted to without deleting it would be to delete the previous posts,” he said.
Massoumi’s experience is one of a series of recent content takedowns and bans imposed by Facebook and Google that have sparked concerns from publishers and activists about how these dominant platforms are applying their policies, and about the level of transparency they offer in explaining their decisions.
In response to a BuzzFeed News inquiry, a Facebook spokesperson said the company will aim “to do better.” They also emphasized that censorship played no role in the actions, and said the bans were in fact a result of automated systems meant to thwart spammers and other bad actors.
In Massoumi’s case, he did not receive a message from Facebook about the removed posts. At the time he assumed he was being censored, especially given the timing of the removals. Similarly, the Russian-government-funded broadcaster RT accused Facebook of taking politically motivated action when its largest Facebook page was given a temporary posting ban as a result of what the social network said was a copyright violation. Over the weekend, the Alt National Parks Service Facebook page, which sprang up in opposition to Trump’s policies, was handed a temporary ban on new likes. That also caused some to accuse Facebook of censorship.
A Facebook spokesperson told BuzzFeed News all of the above actions were taken by automated systems rather than by the community standards team that evaluates content for hate speech, graphic violence, and other violations.
“Facebook is a platform for all ideas,” said the spokesperson, who spoke on the condition that they not be named. “Our mission, and business, relies on giving people of all different voices and opinions a place to share. We're continuously working to improve how we serve everyone in our community — from better communication to more effective and accurate systems — and will learn from these experiences to do better.”
In the case of Google, the company announced last week that it had taken action against 340 websites on its AdSense platform after reviewing a total of 550 sites “suspected of misrepresenting content to users.” When asked to disclose the list of sites and/or publishers, a Google spokesperson said the company doesn’t comment on individual cases.
However, after BuzzFeed News exposed a network of more than 40 sites that published hundreds of fake news articles in 2016, a Google representative emailed to say they had removed those specific sites from AdSense. When asked why they commented on that instance but not others, the spokesperson said they reached out to correct the impression that the sites were still in AdSense.
Facebook has close to 2 billion global users, and Google powers an ad network that earns revenue for close to 2 million websites. Performance on one or both of their platforms can make or break a content business — which means each ban or removal is treated as a life-or-death scenario by publishers.
Scrutiny of Facebook's and Google’s actions is even more intense now that both companies have initiatives aimed at stopping the spread of online misinformation and deceptive content. American conservatives in particular have expressed concerns about the possibility of censorship. Massoumi raised the possibility of his site being targeted in a November interview with BuzzFeed News.
But even the liberals pushing Facebook and Google to crack down on what has often been pro-Trump fake news say they find the lack of transparency troubling. Angelo Carusone, president of the liberal watchdog group Media Matters, told BuzzFeed News it’s a struggle to get information from the companies. When his group sent Google a list of sites it considers fake news, he says the company did not follow up to say what, if any, action had been taken.
“There is a lack of transparency and unwillingness to at least identify the sites they take action against,” he said.
He said Google has been more of a challenge to deal with than Facebook.
“I think Google is doing worse in a weird way, even though they have better metrics to tout,” he said, citing its recent report of banning 340 sites. “I believe Facebook is genuinely committed [to fighting fake news] but has internal confusion and business concerns they are grappling with.”
Facebook and Google both say they are trying to balance a desire to act quickly and at a massive scale with the need to be transparent and communicate clearly with publishers and other partners.
“We’re talking about the scale of the internet here, and these bad actors move quickly,” a Google spokesperson said. “That’s what we’re trying to fight against.”
So just as algorithms decide which content rises to the top of the News Feed and Google search results, automated systems can also remove content, ban pages and ads, and take actions that keep content and revenue out of publishers’ hands. When this is done without notice or a clear explanation, people worry about censorship and malicious intent on the part of platforms.
On Jan. 18, the broadcaster RT received a message from Facebook saying a temporary posting ban was a result of a copyright claim made by Current Time, a Russian-language broadcaster funded by the US government. But Current Time publicly denied that it registered a complaint. Within roughly 24 hours, Facebook restored RT’s posting privileges, but it did not immediately explain why RT was banned. A Facebook spokesperson now says RT was not the only publisher affected by the system error at the time.
The spokesperson also said the ban on new likes for the Alt National Parks Service page was instituted because it received a significant number of new likes in a very short period of time. That can trigger a temporary ban, since malicious pages often use automated methods that cause a spike in likes, according to Facebook.
After the ban was lifted, the Alt National Parks Service posted to say it had been put in place because of “complaints” registered with Facebook. The spokesperson said this was not the case, but commenters on the page continue to speculate about who was making the complaints.
The three posts Massoumi said were removed from his page included one post about Matt Damon expressing hope that Trump would be successful, another about the Mexican president canceling a meeting with Trump, and a third about recent insults directed at Melania Trump. One was restored without notice the next day.
The Facebook spokesperson said the post that was removed and later reinstated was initially taken down in error by an automated system designed to thwart spammers. (The company said it did not have any record of action being taken on the other posts Massoumi said disappeared from his page.)
It wasn't the first time Massoumi had seen a post disappear and then reappear without explanation. On Jan. 20 he told BuzzFeed News via Skype that a Facebook post about the Trump inauguration was removed that day and later reinstated. He also said conservative-oriented posts about Muslims had recently been removed by Facebook’s community standards team due to what they said were hate speech violations. As a result, Massoumi, who was raised in a Muslim family, said he no longer posts about Muslims.
“I’m self-censoring, but my fans think they are getting everything authentically,” he said. “So, it’s worse than 1984, because you think you are getting real news, when in actuality I weigh everything against the risk of Facebook employees flagging it.”
In addition to concerns about humans reviewing his posts, Massoumi says he now has to worry about Facebook’s automated systems going awry and removing posts or imposing bans.
“I assumed it was much more nefarious,” he said. “In fact it’s entirely random.”