Facebook Is Fighting Terrorism With Artificial Intelligence, But Criticism Persists

"We want Facebook to be a hostile place for terrorists," Facebook says.

On Thursday Facebook responded to criticism from European leaders who, prompted by a recent series of terrorist attacks, have been demanding that social media companies do more to fight terrorists who organize on their networks. In a lengthy blog post, Facebook offered some new information about how it combats terrorist activity online, but it stopped short of providing specifics.

The company said it’s using artificial intelligence to preemptively block images and videos containing terrorist content from appearing on its service, and it’s working on systems that will help it take cohesive action against terrorists operating across its family of apps, including Instagram and WhatsApp. Facebook also employs 4,500 people on its community operations team, who review reports of terrorist activity on the platform and take action. It’s planning to hire another 3,000 this year. But how exactly Facebook’s anti-terrorism systems operate remains a mystery.
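Facebook has not disclosed how its image and video matching works. As a rough illustration of one common approach to this kind of problem, the sketch below compares a perceptual hash of an uploaded image against a database of hashes for previously removed content; the function names and the example hash database are hypothetical, and this is not a description of Facebook’s actual system.

```python
# Minimal sketch of hash-based image matching, assuming a perceptual
# "average hash" and a pre-built set of hashes for known banned images.
# Illustrative only; Facebook has not published how its matching works.
from PIL import Image

def average_hash(path, size=8):
    """Compute a 64-bit perceptual hash: downscale to an 8x8 grayscale
    thumbnail, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count how many bits differ between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes for previously removed images.
KNOWN_BANNED_HASHES = {0x8F3C0000FFFF0001}

def matches_known_content(path, threshold=5):
    """Flag an upload whose hash lies within a few bits of a known image,
    so near-duplicates (recompressed or resized copies) are still caught."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold
               for known in KNOWN_BANNED_HASHES)
```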

“Our stance is simple: There’s no place on Facebook for terrorism,” the company said. “We remove terrorists and posts that support terrorism whenever we become aware of them.”

The forceful response was clearly meant to ward off criticism from politicians such as British Prime Minister Theresa May and French President Emmanuel Macron, who met earlier this week to discuss “a joint campaign to ensure that the internet cannot be used as a safe space for terrorists and criminals.”

Facebook has said it’s already doing what May wants — she’s demanded internet companies “deprive the extremists of their safe spaces online” — so it’s unclear whether Thursday’s response will satisfy her. May has in the past opposed freely available end-to-end encryption, which makes communications essentially inaccessible to third parties and is a key feature of Facebook-owned WhatsApp. End-to-end encryption is also available in Facebook Messenger. In its post, Facebook gave no indication that it was reconsidering its use of encryption.
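The property at issue is that only the two endpoints hold the keys, so the platform relaying a message sees only ciphertext. WhatsApp uses the Signal protocol; the sketch below instead uses the unrelated PyNaCl library purely to illustrate that property, and is not how WhatsApp or Messenger actually implement encryption.

```python
# Illustrative sketch of the end-to-end property using PyNaCl (not the
# Signal protocol WhatsApp uses): messages are encrypted with keys that
# only the two endpoints hold, so a relaying server cannot read them.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A server relaying `ciphertext` holds neither private key and cannot
# decrypt it. Only Bob can recover the plaintext.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```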

Facebook did not immediately respond to a BuzzFeed News request for more detail on its anti-terrorism systems.

The details Facebook offered on its approach to fighting terrorism — executed by a combination of AI, humans, and partnerships with tech companies, governments, and NGOs — provided a window into the magnitude of the problem it faces. Terrorists banned from Facebook regularly reappear by creating fake accounts, the post said, and they update their tactics to evade detection, which makes fighting them difficult. “This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too,” the post said. “We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly.”

Those fake accounts are coming down faster than before, Facebook said. But the company wasn’t ready to declare victory yet. Not even close. “We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion every month, posting and commenting in more than 80 languages in every corner of the globe,” Facebook said. “There is much more for us to do.”
