France Has Recruited Facebook To Help Solve Its Anti-Semitism Problem

“Don’t ask the fox how to take care of the chickens,” one privacy advocate said about Facebook’s cooperation with the French government.

PARIS — When French President Emmanuel Macron decided to combat hate speech online, he turned to the very person many blame for its rapid spread — Facebook CEO Mark Zuckerberg.

On Monday, a lawmaker from Macron’s party said she would present a first draft of a bill to punish social media platforms that fail to quickly take down hate speech. France — home to the world's third-largest Jewish population — is in the middle of what Macron has described as the worst period of anti-Semitism since World War II, and much of it is playing out online.

Calling for a “collegial approach” to regulation last November, Macron announced what he called an “unprecedented field experiment” with Zuckerberg to take French officials behind the scenes of Facebook's content moderation process. After reviewing the tools Facebook uses to evaluate and take down offensive and violent content, Macron said, policymakers would work with the company to “jointly develop specific and concrete proposals to fight offensive and hate content.”

The proposed law will put France on the front lines of a global struggle to define who regulates the internet — local lawmakers or the giant American tech companies. It could influence the rest of Europe, where an avalanche of new rules for social media platforms is in the works. The UK is already considering a proposal along similar lines, and EU policymakers are working on rules to govern everything from copyright to content promoting terrorism.

In interviews with BuzzFeed News, Facebook's vice president of public policy, France's technology minister, and the French lawmaker who drafted the proposal revealed the struggle to decide how much power the social media platforms will have to censor speech in France — which, with more than 30 million users, is expected to become Facebook’s largest market inside the EU after Brexit.

These rules all wrestle with fundamental questions about who governs speech in a time when technology has made borders essentially meaningless. How can a country like France — where hate speech is a crime — enforce local laws through giant companies based in the US, where there are much broader protections for freedom of speech? Platforms are also under pressure to get better at taking down terrorist content, but — at a time when authoritarian regimes regularly accuse their political opponents of supporting terrorism — who decides what that is? Local judges, Facebook employees, or an algorithm?

Many policymakers believe they have no choice but to rely on companies like Facebook, where the volume of content is so great they couldn’t regulate it directly even if they wanted to. But with Facebook accused of being complacent over hate speech during a genocide in Myanmar, and the manipulation of the US elections, some free speech, anti-racism, and equality advocates are questioning Macron’s decision to invite Facebook to help craft policies intended to police it.

Arthur Messaud, a lawyer with the French internet freedom group Quadrature du Net, told BuzzFeed News that Facebook has demonstrated it cannot be trusted.

“Don’t ask the fox how to take care of the chickens,” he said.

Just after Macron announced his partnership with Zuckerberg last November, the Yellow Vest protests exploded onto streets across France. The movement tapped into deep-seated anger at economic stagnation among the French middle class, but it transformed into a political force through Facebook groups that were also hotbeds of conspiracy theories, anti-Semitic abuse, and anti-vax misinformation.

As BuzzFeed News has reported, a 2018 change in the algorithm for Facebook’s News Feed likely played a key role in galvanizing the protests by prioritizing news from local publishers and posts from friends and family.

The Yellow Vest movement helped unleash a new wave of anti-Semitic abuse. President Macron was called a “whore of the Jews” by commenters who seized on his work for the Rothschild Bank. People in the Yellow Vest groups also shared videos by Dieudonné, a comedian who has been convicted of many crimes over the past 15 years, including advocating terrorism and denying the Holocaust, yet still has a page on Facebook. Dieudonné has become close to figures around France’s historic far-right party, the National Front — recently renamed the National Rally — many of whose supporters joined the Yellow Vest protests.

Macron’s government was already at work on rewriting rules for online speech before the Yellow Vest protests began. And, like many world leaders, Macron had a personal interest in tackling the problem. Just before the vote that put him in office, a cache of 20,000 hacked emails was dumped online as part of a misinformation campaign that WikiLeaks and alt-right circles in the US then tried to promote on Facebook and other social media sites.

Macron won the 2017 election comfortably anyway, and his government swiftly adopted a “fake news” law that allowed courts to ban information judged “manipulative” and forced social media platforms to disclose who was paying to advertise political content. Macron also created a commission to study online hate speech, with cochairs including Gil Taieb, vice president of France’s most influential Jewish organization, known as the CRIF. The panel recommended an overhaul of French laws as well as a push for EU-wide hate speech regulations.

France has experienced a growing problem with hate incidents in recent years, including a 74% jump in anti-Semitic incidents reported between 2017 and 2018. They include the murder of an elderly Jewish woman in Paris by a neighbor who was heard shouting “Allahu Akbar” after the crime, and the vandalism of a Jewish cemetery in Alsace last month in which dozens of headstones were covered with swastikas.

Mounir Mahjoubi, France’s technology minister, told BuzzFeed News that French anti-Semitism has “multiple roots.” On the one hand, radical Islamist ideology has found some support in communities of immigrant descent. On the other, the French far right has become the strongest challenger to Macron’s party in the upcoming elections for the European Parliament. Its leader, Marine Le Pen, has disavowed anti-Semitism, but the party was long led by her father, Jean-Marie Le Pen, who was convicted of questioning the Holocaust.

“They support each other, the extreme right, plus radicalized Muslims,” said Mahjoubi, who is of Moroccan descent and was elected to France’s national assembly from a district with one of the country’s largest Jewish populations.

Mahjoubi, who is also running for Paris mayor, found out firsthand in early January how bizarre the Yellow Vest online world had become. A YouTuber who goes by the name Isadora Duncan and bills himself as a “Yellow Vest journalist” ambushed him on a Paris sidewalk and tried to ask Mahjoubi whether the protests were the “goy cattle rebelling,” an apparent reference to a conspiracy theory that says Jewish scripture calls for non-Jews to be slaughtered like animals.

Zuckerberg wanted to project a new image to the world when he visited Europe last year. He wanted to appear contrite and cooperative, suddenly eager to partner with governments to safeguard the internet.

The company was scrambling to contain fallout on the continent from the news that it had allowed the political consulting firm Cambridge Analytica to steal data from millions of users. The EU had just implemented new privacy rules that targeted breaches just like this, and Facebook had bitterly fought their adoption.

In November, Zuckerberg published a long manifesto titled “A Blueprint for Content Governance and Enforcement.” It actively urged lawmakers to regulate social media, declaring, “I do not believe individual companies can or should be handling so many of these issues of free expression and public safety on their own.”

The Experimental Group developed with Macron, Zuckerberg said, would test the company’s new commitment to a smarter regulatory process. Soon, he said, “we'll also work with other governments as well,” including the EU, to craft what he called “the right regulations.” But it became immediately clear that there were certain types of regulation that Zuckerberg couldn't accept: ones that were “overly prescriptive about how we must technically execute our content enforcement,” which he said would stop the platform from “doing our most effective work.”

So when the series of anti-Semitic incidents made hate speech a top priority for Macron this February, members of his government saw Facebook as an eager partner to fight it.

“I think Facebook is really the most willing player at the moment,” said Frederic Potier, a member of the Experimental Group working with Facebook, in an interview with BuzzFeed News.

Potier directs the French government’s nondiscrimination agency, known as DILCRAH. He said the team had just visited Facebook’s content moderation center for Southern Europe, which a Facebook official told BuzzFeed News is run by an Austrian contractor called CCC out of a facility in Barcelona and managed from a Facebook office in Dublin. The place he described sounded like a world away from the facilities in other reports on Facebook content moderation, such as the horrific conditions detailed in a recent article by The Verge.

“I was very impressed by the moderation system,” Potier said. “They have really strong guidelines, training programs, French[-speaking] moderators.”

Macron and Zuckerberg said the Experimental Group would be granted unprecedented access to Facebook data. Some news reports said French officials would be “embedded” inside content teams, and a few outlets reported that government officials would have access to Facebook’s most closely guarded secret: the News Feed algorithm, the formula that determines what content is promoted to users.

But Richard Allan, Facebook’s vice president of public policy, described the project as more like an extended seminar than a proper audit in a phone interview this week with BuzzFeed News.

“Usually you’d have a one-hour meeting [with lawmakers] and you explain what you do ... and then off you go, and here we’ve spent several days together,” Allan said. He said Facebook has “actually shared with them more data than we’ve shared with any similar group of policy members before,” but it did not include the News Feed algorithm.

In fact, he said, French officials have “never asked for access to the News Feed algorithm and that’s not been a focus of the discussion.” He also said Facebook did not make any changes to the algorithm in response to the Yellow Vest protests, though he said any anti-Semitic or other content that violates Facebook rules would have been taken down.

“The focus of the discussion has been 'You receive reports about hate speech, how do you process them, how do you use automated systems to try and detect hate speech?'” Allan said.

But Allan was emphatic that this partnership was not just a clever new lobbying strategy intended to charm French officials into giving Facebook a specific kind of regulation.

“This is all open, above board, entirely transparent, been announced publicly. I actually think it’s laudable that a government that intends to regulate something goes and finds out about the thing that they’re going to regulate,” Allan said. “There isn’t any particular outcome [Facebook is seeking] other than we would like their regulation to be effective in reducing hate speech and practical.”

Mahjoubi, the technology minister, echoed this point, telling BuzzFeed News that Facebook is not getting a “special seat at the table for negotiating the law to come.” The member of France’s national assembly drafting the law is also not directly involved in the Experimental Group.

“Facebook has not been invited alone in the conversation. The conversation is open to all actors … Facebook stepped forward and said they were ready to be even more transparent,” Mahjoubi said.

By contrast, he said, Twitter ignored invitations to attend public consultations on the proposal, and only came to “knock at the door when we started the experimentation with Facebook.” Mahjoubi said he was pleased by the conversation he had after his first meeting with a senior Twitter representative — but that took place only last week. Mahjoubi repeated the sentiment of many officials who said Twitter was lagging behind the other big tech companies on moderation, which he conceded may be partly because it is not as well resourced as Facebook and Google.

“That is one of the issues of Twitter, to have really small HR worldwide to tackle these issues,” Mahjoubi said. “Twitter in France is nearly [just] one person and a half — they don’t have the capacity to talk with us at all levels.”

Le Figaro reported Twitter had 32 employees in France as of 2017, while Facebook had 108.

Twitter spokesperson Amy Rose Harte responded to Mahjoubi’s comments in an email to BuzzFeed News: “Twitter’s public policy team has been highly engaged with French government and French civil society since we began working in Europe seven years ago. In fact our experiences in France and those conversations have helped inform the 70-plus product and policy changes we have made in the last two years alone.”

Many of France’s antidiscrimination NGOs are supportive of the law’s goals, and some are pleased with Facebook’s engagement.

Taieb, the CRIF official who cochaired the government’s online hate speech commission, told BuzzFeed News that he was satisfied that some companies “showed a lot of goodwill, and we note today that Facebook makes efforts.”

But some groups are frustrated that the Macron administration hasn’t moved faster on hate speech. Some advocates also find it hard to believe that Facebook is truly sincere about wanting to help lawmakers tackle the problem.

“They are working together with Facebook, so I hope the [company’s] lobby[ing] won’t be strong enough to slow down the law,” said Sacha Ghozlan of the National Association of Jewish Students. “We got used to see on the virtual world people saying, ‘kill the Jew,’ ‘death to the Jew,’ ‘evil Jew,’ for many years — now we see it on the street … We didn’t level enough pressure [on] social media … they are responsible for what is happening in our society.”

The cultural gap between US-based social media giants and European lawmakers on hate speech remains vast. Hate speech is a crime in many European countries, as are statements like claiming that the Holocaust didn’t happen. The US has a more absolute commitment to freedom of speech, so its laws generally only punish bias accompanied by an action other than speech — like refusing to hire a member of a specific race for a job, or assaulting someone for belonging to a particular group.

As French lawmakers see it, Facebook has been getting away with letting people break French law. Allowing platforms to write their own “community standards” and set up their own enforcement procedures has made a joke of hate speech policies set by the French government.

“These community rules should be the law,” Mahjoubi said. But he said it was important to understand Facebook’s internal processes because he ultimately still wants the company to be responsible for how those laws are interpreted by the people who review individual posts. He wants to require that humans — not a computer program — make the final decision over what’s allowed, people who “understand in a French context and a cultural context.”

This has to be done by the companies, Mahjoubi said, because courts are just too slow.

“What we expect is [action] within minutes and hours, and for that you have to put the responsibility on the platforms,” he said. Courts just can’t move that fast.

“People who hate other people evolve a lot, the way they express the hate changes a lot,” he said. Even the fastest court proceedings regularly take several months, he said, and that’s “not the speed of the internet.”

Macron has given the job of writing a first draft of the legislation to a legislator from Paris named Laetitia Avia.

Avia told BuzzFeed News that the law will be based on legislation adopted by Germany in 2017 known as NetzDG. Under that law, fines of up to $60 million can be imposed on platforms that fail to take down hate speech or other illegal content within 24 hours.

A UN official charged with defending freedom of speech criticized the German law before it was adopted, arguing that ordering censorship without court review threatened fundamental rights. He also warned that platforms could take down politically protected speech in the scramble to comply with the tight time frame.

So to guard against these problems in France, Avia said, her proposal would require platforms to act within 24 hours only for “manifestly illegal” content that they should have no trouble concluding violates the law, while allowing more time for posts that fall in a “gray zone” requiring a closer evaluation of the context or the law.

The law is designed to target what Avia called “easy hate speech,” content that is so clearly offensive the platforms should have no trouble spotting it. To illustrate, she told a story familiar to many Twitter users. She said she flagged a tweet calling her a “negress owned by Jews who protects faggots.” She was astonished to get a notice from Twitter saying the tweet didn’t violate community rules. Even a company official later told her this was a “mistake,” Avia said, but she’s had this experience many times.

“It’s always a mistake,” she said. “It wasn’t the first time, and not the last. After [the law is voted on in] May it’s going to be the last time.”

This kind of law works in Germany, she said. No fines have been levied since NetzDG came into effect in 2018, and there have been few appeals to restore content that was taken down, which she believes shows there was no threat to free speech.

But no one is checking that Facebook is removing the right speech — most content is being taken down under Facebook’s community standards, said Alexander Fanta of the Berlin-based internet freedom group Netzpolitik.

“We must assume that a lot of content is being removed that could be a freedom of speech violation,” Fanta said. “You have no way of legal recourse [when] your free speech is being curtailed by platforms under terms of service.”

The French officials working on the law are adamant that their proposal does not hand the job of judges over to the social media companies. But current French law means that the platforms may have a very difficult time deciding what is considered “manifestly illegal.” And they’ll be asked to implement this law at a time when there’s also a fierce debate about what anti-Semitism actually is.

President Macron recently said he believes “anti-Zionism is one of the modern forms of anti-Semitism.” What that means is far from clear, however. Justice Ministry spokesperson Youssef Badr said that French law already forbids certain kinds of criticism of Israel as a form of hate speech, citing a 2015 court ruling that punished a group calling for a boycott of Israeli products. The case involved the group BDS France (BDS stands for Boycott, Divestment, Sanctions), part of a growing international movement that says it wants to use economic pressure to fight for the rights of Palestinians, but that some critics accuse of being anti-Semitic.

The French law could therefore have a big impact on how people talk about Israel on Facebook.

Calling for boycotts of Israeli products is not currently a violation of Facebook’s community standards, Facebook’s Allan told BuzzFeed News. If the company’s lawyers agreed with the Ministry of Justice's interpretation of the existing law, they might have to start removing that content in France.

Allan said Facebook remains “uncomfortable” with the framework of NetzDG for the same reason as free speech advocates.

“It transfers the responsibility to private companies to make what can often be very complex legal decisions ... We don’t think that is, in principle, a rational approach,” he said. “Most of the hate speech we do [remove] on the platform is under the community standards track. We expect that to continue. We think that’s the only workable system at scale.”

But some industry observers believe Facebook may see a silver lining to laws like these. The growing number of content requirements is actually helping turn giants like Facebook into unchallengeable monopolies. Content moderation takes a lot of resources — human reviewers, programmers, and lawyers — and it may be harder for smaller players to keep up if the laws are applied across the board.

And more content moderation requirements are coming in Europe — fast.

The UK is also about to take up its own law targeting hate speech and other harmful content. And the EU is on the verge of adopting a sweeping rule formally known as the Copyright Directive, which opponents have nicknamed the “meme ban” because they worry the platforms will block any image users post from movies or TV shows rather than try to evaluate whether each one violates copyright rules.

Most alarming for free speech activists may be an EU proposal requiring platforms to take down “terrorist content” within one hour of being notified by any “competent” government body — which doesn’t have to be a judge. Under such a rule, what would stop an increasingly authoritarian government like Hungary, which has already accused George Soros and EU politicians of making Europe vulnerable to terrorism by supporting refugee rights, from trying to ban all posts criticizing its anti-immigrant policies?

Each new regulation like this advances a massive power shift from courts to tech companies, free speech advocates worry. The platforms may fight specific regulations — sometimes aggressively — but critics say the rules actually leave them more powerful in the end. And as the number of rules continues to grow, the companies will push to automate censorship as much as possible.

“There is this extremely worrying tendency by politicians to try to delegate very complex laws on the internet to private companies,” said Julia Reda, a member of the European Parliament from Germany who is a leading critic of content moderation proposals now pending before the EU.

She continued, “I think the largest players on the platforms market actually have an interest in this being passed because it puts all the power in their hands.” ●

