When actor Rose McGowan doxxed someone by tweeting a private phone number last week, Twitter acted quickly to restrict her account until she deleted the tweet, which violated the platform’s terms of service. But a BuzzFeed News analysis of thousands of tweets in the same time frame, as well as thousands more a week later, shows that Twitter’s enforcement of its doxxing ban is inconsistent at best.
Although the company was swift to crack down on McGowan’s account as she was discussing film producer Harvey Weinstein’s alleged sexual misconduct, it’s slower to act on less prominent users who break the same rule, which prohibits publicly tweeting someone’s private phone number.
Using Twitter’s Search API, we collected 10,000 tweets between Oct. 9 and Oct. 13 and found five that included people’s private phone numbers. All are still up. We also used Twitter’s Streaming API to collect tweets for eight hours on Oct. 14, and for nine hours on Oct. 16. Over that time period, we found 32 tweets containing phone numbers belonging to people other than the tweeter. On average, that’s roughly two private phone numbers per hour, and about 45 per day. Of these tweets, only five had been deleted by Oct. 17, and only six had been deleted by Oct. 19.
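BuzzFeed News does not describe how it identified phone numbers in the collected tweets. A minimal sketch of one plausible approach, using a simple regular expression for US-style numbers, applied to the text of already-collected tweets (the pattern, function, and sample tweets here are our own illustration, not the methodology actually used):

```python
import re

# Illustrative pattern for US-style phone numbers, e.g. "(555) 867-5309",
# "555-123-4567", or "5551234567". A real analysis would need a broader
# pattern and manual review to rule out false positives.
PHONE_RE = re.compile(
    r"(?<!\d)"              # not preceded by another digit
    r"(?:\+?1[\s.-]?)?"     # optional country code
    r"(?:\(\d{3}\)|\d{3})"  # area code, with or without parentheses
    r"[\s.-]?\d{3}"         # exchange
    r"[\s.-]?\d{4}"         # subscriber number
    r"(?!\d)"               # not followed by another digit
)

def find_phone_numbers(tweets):
    """Return (tweet_text, matches) pairs for tweets whose text
    contains something shaped like a phone number."""
    results = []
    for text in tweets:
        matches = PHONE_RE.findall(text)
        if matches:
            results.append((text, matches))
    return results

# Hypothetical sample tweets for demonstration
sample = [
    "dm me at (555) 867-5309 lol",
    "just saw the game, what a finish",
    "call this guy and tell him what you think: 555.123.4567",
]
for text, matches in find_phone_numbers(sample):
    print(matches, "<-", text)
```

Candidate tweets flagged this way would still require human review, since a regex cannot tell whether a number belongs to the tweeter or to someone else — the distinction Twitter’s rule turns on.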
The examples we collected are just a small sample of tweets that violate Twitter’s rules but still slip past the company’s enforcement tools. While BuzzFeed News focused its search on tweets containing private phone numbers, quick searches turned up tweets containing additional personally identifiable information for people other than the tweeter, including addresses and email addresses. Often, this information is posted on Twitter along with explicit calls to doxx the targets.
When BuzzFeed News reached out to three Twitter users whose numbers had been made public, two said they hadn’t reported the tweets because they wanted to give the doxxers time to take them down themselves, and the third said they hadn’t seen the tweets yet. We reported three doxxing tweets containing private numbers to Twitter on Monday, Oct. 16. As of this article’s publication, two remain online.
We asked Twitter how it responds when users post other people’s private information on its platform. A Twitter spokesperson stated, “We are constantly looking for opportunities to use Machine Learning to help make Twitter safer and will continue to leverage more ML/AI to improve the detection of content that violates our terms of service. As we announced last week, we are taking a more aggressive stance with our rules and how we enforce them. We're moving quickly to make these updates and we will share more soon.”
Twitter has struggled with harassment for a decade, and it has long relied on algorithms and automated systems to enforce its rules. But those systems have a tendency to overlook abuse, and Twitter often takes action only when the media publicly calls out an issue, or when prominent figures like Leslie Jones or McGowan are involved.
Over the past year, as Twitter has faced increasing public pressure to quash abuse on its platform, it has rolled out a series of harassment-combating tools and efforts, including more ways to report misconduct, keyword filters, muting abilities, and timeline tweaks that bury abusive tweets. Still, detecting abuse is not the same as effectively stopping it. Reporting by BuzzFeed News in the past year has uncovered hundreds of examples of harassment on Twitter; in many cases, when victims report the abusive tweets, Twitter dismisses the reports because it doesn’t consider them to be in violation of its rules.
But the company seems to be doubling down on these tools, recently telling BuzzFeed News that it’s “focusing more on improving its abuse-filtering algorithms rather than hiring more humans.” And this week, Twitter CEO Jack Dorsey publicly shared the company’s internal safety work streams and shipping calendar in an attempt to be more transparent with users about Twitter’s push to root out bad behavior.
In a blog post on Thursday, the company wrote, “We’re updating our approach to make Twitter a safer place. This won’t be a quick or easy fix, but we’re committed to getting it right. Far too often in the past we’ve said we’d do better and promised transparency but have fallen short in our efforts. Starting today, you can expect regular, real-time updates about our progress.” Upcoming efforts include plans to immediately suspend accounts that post nonconsensual nude images and videos, an updated account suspension process, and bans on accounts that promote violence.