NEW DELHI — A tweet calling for Muslims and journalists to be lined up and shot stayed on Twitter for nearly a day before the platform permanently suspended the account that tweeted it.
The tweet, which falsely blamed Muslims for spreading the coronavirus in India, was posted from the verified account of Rangoli Chandel, manager and sister of popular Bollywood actor Kangana Ranaut, on Wednesday morning.
“f***k the history they may call us nazis who cares,” the tweet said. It was retweeted more than 2,000 times and received more than 8,000 likes.
“At Twitter no one is above our rules,” a Twitter spokesperson said in a statement to BuzzFeed News. “The referenced account was permanently suspended for repeated violations of the Twitter Rules including Hateful Conduct Policy. If we receive reports of potential rule violations, we will take action, as appropriate and pursuant with our enforcement approach, as detailed here.”
Chandel, who had nearly 100,000 followers on the platform before her account was taken down, is popular among the Indian far right for her controversial opinions that often target the country’s minorities. Earlier this week, she suggested that India scrap its next general elections scheduled to take place in 2024 and let the country’s Hindu nationalist Prime Minister Narendra Modi continue for another term in office unopposed.
Social media companies have been struggling to contain abuse and misinformation on their platforms ever since the coronavirus outbreak started sweeping the planet. In countries like India, gaps in content moderation have amplified a wave of anti-Muslim sentiment after a cluster of COVID-19 cases was traced back to a religious gathering of Muslims in New Delhi last month.
Most social media companies use a combination of automated technology, such as artificial intelligence and machine learning, and human oversight to keep content that violates their policies off their platforms. But the pandemic has forced companies to rely increasingly on automation rather than human reviewers to moderate their platforms.
People who report tweets and accounts, for instance, are now shown a message explaining that responses will be delayed.
In a blog post published last month, Twitter’s policy head Vijaya Gadde and customers lead Matt Derella detailed the steps the company is taking to moderate its platform during the pandemic and said it would take “longer than normal” for the company to get back to people about accounts and tweets they have reported. “We appreciate your patience as we continue to make adjustments,” the post said.
Facebook, the world’s largest social network used by more than 2 billion people around the world, said in a blog post last month that its moderation capabilities would take a hit.
“With fewer people available for human review we’ll continue to prioritize imminent harm and increase our reliance on proactive detection in other areas to remove violating content,” the blog post said. “We don’t expect this to impact people using our platform in any noticeable way. That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”