The European Union, along with some of the most influential communication platforms in the world, unveiled a “code of conduct” Tuesday, designed to combat hate speech and violence as terrorist organizations continue to exploit wildly popular social media tools. But critics of the policy fear it leaves open the possibility for government censorship carried out in the name of counterterrorism.
Facebook, Twitter, YouTube, and Microsoft were involved in the creation of the policy, with each company pledging to remove the majority of abusive posts and videos within 24 hours of being notified.
U.S. technology firms have come under increased pressure to aggressively police the content hosted on their platforms. Social media companies including Facebook and Twitter have faced heightened scrutiny following the terror attacks in Paris, Brussels, and San Bernardino, as law enforcement officials point to these communication hubs as new-age recruitment centers and gateways to radicalization.
"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech,” said Vera Jourová, the EU’s commissioner for justice. “Social media is unfortunately one of the tools that terrorist groups use to radicalise young people.”
The code of conduct states that companies will raise awareness with customers about “the types of content not permitted under their rules,” an issue that has plagued social media networks, which have struggled in recent years to address complaints of abuse raised by women and minority groups. The policy also encourages web companies to continue “identifying and promoting independent counter-narratives” to racism and xenophobia.
“This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected,” Jourová said.
But some critics in Europe see the policy as a dangerous step toward censorship, with the whims of tech companies and their terms of service dictating what can and cannot be said online. European Digital Rights, an association of civil liberties and human rights groups, described the code of conduct as “ill considered,” and said that civil society groups were “systematically excluded” from the EU’s negotiations on the plan.
The code of conduct “creates serious risks for freedom of expression as legal but controversial content may well be deleted as a result of this voluntary and unaccountable take down mechanism,” according to the group.
The EU’s policy defines hate speech as “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, color, religion, descent or national or ethnic origin.”
Elizabeth Goitein, the co-director of the liberty and national security program at the Brennan Center for Justice at New York University School of Law, told BuzzFeed News that, in the U.S. context, the Constitution would limit this kind of policy, since the First Amendment grants only narrow exceptions to freedom of speech.
“One person’s hate speech is another person’s political grievance,” Goitein said, describing the risks attached to government-supported censorship, which may suppress contentious but legally protected discussion. “In an atmosphere of fear of terrorist attacks, prohibitions against inciting hatred are going to be used to shut down controversial speech,” she said.
Beyond the legal gray area, Goitein also cast doubt on the efficacy of the code of conduct. “There is no reason to think that policing speech is an effective way of getting at the problem of violence,” she said.
When platforms are made responsible for determining what speech is illegal, those intermediaries tend to over-remove content out of an abundance of caution, Daphne Keller, the director of intermediary liability at the Stanford Center for Internet and Society and a former associate general counsel at Google, told BuzzFeed News. “They take down perfectly legal content out of concern that otherwise they themselves could get in trouble,” Keller said. “Moving that determination out of the court system, out of the public eye, and into the hands of private companies is pretty much a recipe for legal content getting deleted.”
Last year, when California Sen. Dianne Feinstein proposed that tech companies be required to report “terrorist activity” on their networks to federal law enforcement, critics rebuffed her plan. Sen. Ron Wyden, a vocal opponent of the surveillance state, argued that the proposal would enlist private businesses as government informants, armed with the undue power to determine acceptable speech.
But even as more aggressive speech-policing policies are met with resistance in the U.S., Silicon Valley appears eager to cooperate with the Obama administration to counter terrorist propaganda online. In January, White House officials met with tech industry heavyweights including Apple CEO Tim Cook and Facebook Chief Operating Officer Sheryl Sandberg to discuss ways to promote anti-ISIS web content and neutralize the radicalization and recruitment efforts of extremist groups.
Twitter, in a widely read blog post published in February, announced that it had suspended 125,000 ISIS-related accounts since the middle of last year, emphasizing its commitment to limiting the spread of ISIS propaganda. But in the same post, the social messaging company acknowledged the difficulty of running a global platform that strives to be both open and free: “There is no ‘magic algorithm’ for identifying terrorist content on the internet.”