It's 2019 And Twitter's Moderation Team Is Still Struggling With Swastika Photoshops

How is it that moderators tasked with parsing abusive behavior miss a poorly photoshopped image of an infant with a bright red swastika on their forehead?

It was around 10 a.m. Wednesday when Tablet magazine senior writer Yair Rosenberg noticed the swastika-laden photoshop of a baby. The disturbing image belonged to a Twitter account claiming to be his infant child (it was not). "Account for my son Yair Jr controlled by @yair_rosenberg," the account's bio read; the account's handle also included Rosenberg's full name.

No stranger to Twitter threats and abuse, Rosenberg — who has written extensively about neo-Nazis and online trolls — quickly reported the account, which was in clear violation of Twitter's policies on abusive behavior, hateful conduct, and impersonation. Just 36 minutes later, Rosenberg received a familiar, dispiriting form email from Twitter: "We reviewed your report carefully and found that there was no violation of the Twitter rules against abusive behavior."

As has become custom when Twitter's moderation team fails to do its job, Rosenberg tweeted the disturbing photo along with a screenshot of Twitter's abuse report rejection. The tweet went viral. Within an hour, Twitter reversed its decision and took down the account.

It's unclear why such an egregious violation of Twitter's rules was dismissed when Rosenberg reported it; Twitter has not yet responded to a request for comment. Rosenberg, who deals with frequent harassment, suggested it was likely an oversight. "I do not actually think this represents 'Twitter policy,'" he tweeted Wednesday morning. "I do think it shows how poorly informed their moderators are about basic abuse issues on the platform, such that they can miss obvious instances like this."

Since 2017, Twitter has devoted considerable resources to curbing its rampant abuse and harassment problems. It has written a slew of new rules, expanded its options for reporting abusive behavior, and removed rule-breaking accounts more frequently. Many people — including Rosenberg — have suggested Twitter has gotten a better handle on harassment. And yet, two years after the site redoubled its efforts, a concerning number of reports of clear-cut harassment seem to slip through the cracks.

Despite Twitter's repeated pledges to increase transparency, its abuse report infrastructure remains opaque and sometimes confounding. Online harassment can be tricky to parse: language barriers, varying cultural norms, and the few moments moderators have to weigh each report mean that subtler examples of harassment can be missed (which is precisely why culturally specific training remains crucial). But Twitter's most publicized examples of dismissed reports are often not shades of gray but clear black-and-white violations. This case's dismissal raises the question: In 2019, if a crudely photoshopped image of an infant with a bright red swastika plastered on their forehead isn't an open-and-shut case, what is?

Rosenberg's tweet went viral quickly not just because it showed an egregious example of abuse but because dismissals like this one are such a common occurrence. On Twitter, the dismissed abuse report has become its own trope — an outrage meme of sorts that signals the deep frustration that somehow, this keeps happening.

Like when these 70 rape threats against a programmer were dismissed.

Or when 2,700 Twitter users told BuzzFeed News about their struggles with abuse on the social network.

Or when 90% of respondents to a BuzzFeed News survey in 2016 said Twitter didn't do anything when they reported abuse.

Or when this ISIS beheading photo didn't qualify as abuse.

Or when Twitter didn't initially block attempts to disenfranchise voters on its service in 2016.

Or when it only blocked these false voting information claims after reports from BuzzFeed News.

Or when it allowed a promoted tweet from a Nazi website.

Or when Twitter suspended a woman's account after she tweeted the anti-Semitic images trolls had sent her.

There were also the confusing suspensions and reinstatements of white supremacists David Duke and Richard Spencer.

Or these 89 instances in 2017 of users alleging that they received at least one improper dismissal of their harassment claim.

Or when Twitter restricted actor Rose McGowan's account instead of just deleting one tweet.

Or when it had to pause its verification system after its decision to verify a white supremacist who organized the "Unite the Right" rally in Charlottesville.

And the list goes on...unless you're a bitcoin scammer, in which case it's OK!
