Facebook Failed To Delete 93% Of Posts Containing Speech Violating Its Own Rules In India

A yearlong study concluded that Facebook had made little progress in moderating non-English-language content in India, the company’s largest market by users.

TUNIS, Tunisia — Facebook failed to delete hundreds of memes, images, and posts targeting caste, LGBT, and religious minorities in India that human rights researchers reported over a yearlong period.

Facebook has come under heavy scrutiny in the US and Europe for the way it has handled political misinformation and user privacy. But elsewhere in the world, it has faced even tougher criticism for doing too little to moderate non-English-language content that has demonized minority groups and, in many cases, fanned the flames of communal violence. In Myanmar, Facebook admitted shortcomings — as it has elsewhere — after its platform was cited as having exacerbated ethnic cleansing, and promised to reform its processes, including by hiring more content moderators.

But Equality Labs, a South Asian American advocacy group focused on technology and human rights, said Facebook had made little progress on these issues in India — home to some 300 million Facebook users — including during the country’s 2019 general election. The report’s authors, who studied 1,000 posts over the past year, also found widespread doxxing of activists and journalists on the platform.

“Without urgent intervention, we fear we will see hate speech weaponized into a trigger for large-scale communal violence,” the report, being launched here at the RightsCon conference, says. “After a year of advocacy with Facebook, we are deeply concerned that there has been little to no response from the company.”

In a statement, a spokesperson for Facebook said the company respects and seeks to protect the rights of marginalized communities in India and elsewhere, and pointed to its rules against hate speech.

“We take this extremely seriously and remove this content as soon as we become aware of it,” the spokesperson said. “To do this, we have invested in staff in India, including content reviewers, with local language capabilities and an understanding of the country’s longstanding historical and social tensions.” The company has made “significant progress” in proactively detecting hate speech on its platform before it’s reported, the spokesperson added.

At the heart of the issue is Facebook’s approach to policing problematic content on its site, especially targeted harassment and calls for violence against minority groups. Civil society groups have repeatedly called for the company to invest in more moderators proficient in local languages and to be more transparent about its processes. Despite months of criticism, activists say reporting problematic content on Facebook remains clunky and difficult, and it is usually unclear why some posts are deleted while others are left up.

India is the largest market for Facebook in the world by number of users, and the social network serves as a primary source of news and information for many there.

The report highlights a meme featuring Pepe the Frog depicted as a Hindu nationalist and standing approvingly in front of a centuries-old mosque demolished by a Hindu mob in 1992, as well as posts containing anti-Muslim and anti-Dalit slurs. Dalits are at the bottom of the Hindu caste system and face heavy discrimination in India despite laws intended to protect their rights. Another post, on an Indian meme-swapping Facebook group, called a baseball bat an “educational tool” for wives.

Still more posts demonized Rohingya Muslims — the minority group that has been targeted in Myanmar.

Equality Labs found that 93% of the posts it reported to Facebook that contained speech violating Facebook’s own rules remained on the platform.

Facebook said that, in the areas identified in the report, it proactively deleted almost all problematic content before anyone reported it, but that hate speech is harder to detect because it depends on linguistic and cultural context. “But we’re making good progress,” the company said.

Equality Labs is calling for an independent audit of Facebook’s impact on human rights in India, in the same vein as the civil rights audit the company undertook last year in the United States. Asked whether it would be open to such an audit in India, Facebook said it regularly conducts human rights due diligence and, when problems are found or new products and features are launched, more in-depth human rights assessments. Equality Labs noted that it is calling for an outside audit — like the one carried out in the US — rather than an internal assessment done by Facebook’s own staff.

Facebook said it has tripled the number of people working on safety and security issues to 30,000 globally, including 15,000 content reviewers. But the company did not directly answer a question about how many of those reviewers focus on India in particular or are proficient in Indian languages, saying only that its team supports the majority of India’s official languages. In addition to English, India has 22 constitutionally recognized languages, including Hindi, almost all of which have millions of speakers.

Equality Labs also says Facebook’s staff lacks the diversity that would enable it to moderate hate speech targeting minority groups.

“We have a right to know what the numbers are, what’s the linguistic breakdown, what’s the caste and religious diversity,” said Thenmozhi Soundararajan, executive director of Equality Labs, in an interview. “Facebook not including the ability to report casteist hate speech is just so negligent,” she added, pointing out that caste minority groups in India and overseas include some 300 million people.

Equality Labs found that it took Facebook a median of 48 hours to respond to a reported post — a long delay, Soundararajan said, considering that posts containing targeted attacks can lead to real-world violence. Facebook said its goal is to review and act on reports within 24 hours.

“Facebook must act to mitigate specific risks, and get more aggressive about the platform’s impact on the physical safety and fundamental human rights of India’s most vulnerable communities,” the report says.

