The Anti-Defamation League Has New Demands For Twitter And Facebook

After documenting a "significant uptick" in anti-Semitic harassment toward journalists, the ADL is calling for better tools and more transparency.

This week, Twitter began taking steps to address the harassment problem that has plagued its platform for years. On Tuesday it rolled out new anti-abuse tools, including an expanded mute feature and a more streamlined system of abuse reporting; later in the day, it controversially banned a number of notable alt-right accounts from the service.

But as Twitter navigates the precarious territory of trying to police its platform while holding true to its free speech roots, one of the country's best-known civil rights groups is pushing for it to do more. Today, roughly one month after its initial report on the rise of anti-Semitism on Twitter, the Anti-Defamation League (ADL) published its recommendations for addressing internet harassment. Its conclusion: Social platforms need better harassment reporting options; they need to improve their response times to those reports; they need to invest in new tools to curb harassment; and they need to be more transparent about their abuse and harassment review processes.

Last month's ADL report found that between August 2015 and July 2016 roughly 2.6 million anti-Semitic tweets were broadcast on Twitter, creating more than 10 billion impressions across the web. Of those tweets, 19,253 were directed at journalists. Among its concerns, the report suggested hate speech targeting journalists was creating a chilling effect that could hurt their freedom to report and investigate.

“We’re already seeing this spread into the real world and mainstreaming in a way we’ve never seen in our over–100-year history as an organization,” ADL CEO Jonathan Greenblatt told BuzzFeed News.

For social media platforms — the ADL singles out Twitter in particular — the new report says the mechanisms for reporting must be more efficient and clear for users. This includes cultural context training to allow reviewers to keep up with the ever-changing tactics of trolls, and better reporting systems that allow users to flag offensive content once, rather than every time it pops up. To Twitter's credit, it tackled both these issues in its most recent anti-abuse update, though critics still feel the mute filter is primitive and mostly cosmetic.

The report also recommends an appeals process that would allow users to protest a denied hate speech or harassment report. Such a feature would be welcomed: In September BuzzFeed News surveyed more than 2,700 users, and 90% of respondents said Twitter didn’t do anything when they reported abuse.

The ADL report says platforms like Twitter should invest in research and innovation, and it stresses the need for "natural language processing and machine learning" tools that strictly police and flag language believed to violate terms of service. Failing that, the ADL suggests the platforms, most notably Twitter, "privilege" verified identities. This approach, which has been floated before, would allow Twitter to expand verification and create a two-tiered system: verified accounts with accountability alongside a more lawless, most likely anonymous, group of accounts.
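(For illustration only: the sketch below shows, in rough terms, the kind of machine-learning text filter the report alludes to. It is a hypothetical toy example — the sample tweets, the 0.8 confidence threshold, and the flag_for_review helper are invented here and do not describe the ADL's recommendations or Twitter's actual systems.)

```python
# Toy illustration of an NLP/ML filter that flags likely policy-violating text
# for human review. Training examples and threshold are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labeled sample: 1 = abusive, 0 = not abusive.
texts = [
    "you people should all disappear",      # abusive (placeholder)
    "go back to where you came from",       # abusive (placeholder)
    "great reporting on the election",      # benign (placeholder)
    "looking forward to the new feature",   # benign (placeholder)
]
labels = [1, 1, 0, 0]

# Fit a simple bag-of-words classifier over the toy sample.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(tweet: str, threshold: float = 0.8) -> bool:
    """Queue a tweet for human review if the model scores it as likely abusive."""
    prob_abusive = model.predict_proba([tweet])[0][1]
    return prob_abusive >= threshold

print(flag_for_review("great reporting on the election"))
```

Note the design point implicit in the report: a filter like this would surface content for reviewers, not replace them — which is why the ADL pairs this recommendation with calls for better-trained human moderation.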

Running throughout the report is a call for more transparency from platforms, including better explanation of internal abuse review processes and of moderation guidelines, so that users "can better participate in the system." Twitter, for example, has come under sharp criticism for its opaque reporting processes and policies, and while the company has clear and specific rules, users complain they are haphazardly enforced.

The company has been criticized for being slow to respond to cases unless they go viral or are flagged by celebrities, public figures, or journalists. And in an alarming number of cases, reports of clear rules violations are met with responses from Twitter suggesting the reported tweet or account was not in violation. This week, when Twitter user Ariana Lenarsky found what appeared to be a promoted tweet in her timeline from a prominent neo-Nazi account with the hashtag #WhiteAmerica, Twitter banned the account but refused to clarify whether the tweet was paid for by the account, suggesting instead that it may have been "photoshopped." The confusion surrounding the promoted tweet, as well as Twitter’s reluctance to clarify the event, only complicates reporting processes for users. And as Twitter potentially begins the fraught process of banning accounts associated with a particular political movement (in this case the alt-right as well as white nationalists), a lack of transparency may only serve to stoke the fires of hate speech and abuse on the platform.

For lawmakers, the ADL report suggests more research and new laws covering emergent forms of cyberabuse such as "doxxing" (publicly posting someone's personal information without their consent) and "swatting" (falsely reporting an emergency to send police to someone else's address). The report calls on policymakers to establish funding for better training of state and local law enforcement to handle online harassment, and urges funding of a "single national reporting center (along the lines of the Internet Crime Complaint Center)" that could collect serious reports and route them to the proper authorities.

This second half of the ADL's investigation comes at a pivotal moment in the tech industry's handling of online harassment. Just days after Trump’s victory, alt-right trolls across the internet are gearing up for an ideological war, fought with false information and aggressive harassment on Twitter. "We must do everything we can to ensure that the internet remains a medium of free and open communication for all people," Greenblatt said of the new report. "We look forward to working with the social media platforms, policymakers, and others to implement these recommendations as quickly as possible.”

You can read the full report here.
