Tech Platforms Obliterated ISIS Online. They Could Use The Same Tools On White Nationalism.

Christchurch could be the moment Silicon Valley decides to finally treat white nationalism the way it's been treating ISIS for years.

Before killing 50 people during Friday prayers at two mosques in Christchurch, New Zealand, and injuring 40 more, the gunman apparently decided to fully exploit social media by releasing a manifesto, posting a Twitter thread showing off his weapons, and going live on Facebook as he launched the attack.

The gunman’s coordinated social media strategy wasn’t unique, though. The way he manipulated social media for maximum impact is almost identical to how ISIS, at its peak, was using those very same platforms.

While most mainstream social networks have become aggressive about removing pro-ISIS content from the average user’s feed, far-right extremism and white nationalism continue to thrive. Only the most egregious nodes in the radicalization network have been removed from every platform. The question now is: Will Christchurch change anything?

A 2016 study by George Washington University’s Program on Extremism shows that white nationalists and neo-Nazi supporters had a much larger impact on Twitter than ISIS members and supporters did at the time. Looking at about 4,000 accounts in each category, the study found that white nationalists and neo-Nazis outperformed ISIS in number of tweets and followers, with an average follower count 22 times greater than that of ISIS-affiliated Twitter accounts. The study concluded that by 2016, ISIS had become a target of “large-scale efforts” by Twitter to drive supporters off the platform, like using AI-based technology to automatically flag militant Muslim extremist content, while white nationalists and neo-Nazi supporters were given much more leeway, in large part because their networks were far less cohesive.

Google and Facebook have also invested heavily in AI-based programs that scan their platforms for ISIS activity. Google’s parent company, Alphabet, created a program called the Redirect Method that uses AdWords and YouTube video content to target people at risk of radicalization. Facebook said it used a combination of artificial intelligence and machine learning to remove more than 3 million pieces of ISIS and al-Qaeda propaganda in the third quarter of 2018.
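
Neither company has published the details of those systems, but the general shape they describe — a model trained on posts human moderators have already removed, which then scores new posts for review — can be sketched in a few lines. The sketch below is purely illustrative: the placeholder training data, the model choice, the threshold, and the function name are assumptions, not anything the platforms have disclosed.

```python
# Purely illustrative sketch of automated propaganda flagging.
# This is NOT any platform's actual system; the placeholder data,
# model choice, and threshold are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in training data: in practice this would be a large corpus of
# posts that human moderators have already reviewed and labeled.
train_texts = [
    "placeholder text of a post moderators previously removed",
    "placeholder text of an ordinary, benign post",
]
train_labels = [1, 0]  # 1 = removed as propaganda, 0 = benign

# A small TF-IDF + logistic regression pipeline stands in for the far
# larger models the platforms describe only in general terms.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_for_review(post_text: str, threshold: float = 0.9) -> bool:
    """Queue a post for human review if the model's score clears the threshold."""
    score = model.predict_proba([post_text])[0][1]
    return score >= threshold
```

In practice, a score above the threshold would typically route the post to a human reviewer rather than trigger automatic removal; the companies have said humans remain part of the loop.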

These AI tools appear to be working. The pages and groups of ISIS members and supporters have been almost completely scrubbed from Facebook. Beheading videos are pulled down from YouTube within hours. The terror group’s formerly vast network of Twitter accounts has been almost completely erased. Even the slick propaganda videos, once broadcast on multiple platforms within minutes of publication, have been relegated to private groups on apps like Telegram and WhatsApp.

The Christchurch attack is the first big instance of white nationalist extremism being treated — across these three big online platforms — with the same severity as pro-ISIS content. Facebook announced 1.5 million versions of the Christchurch livestream were removed from the platform within the first 24 hours. YouTube said in a statement that "Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it," though the video does continue to appear on the site — a copy of it was being uploaded every second in the first 24 hours. Twitter also said it had taken down the account of the suspected gunman and was working to remove all versions of the video.

The answer to why this kind of cross-network deplatforming hasn’t happened with white nationalist extremism may be found in a 2018 VOX-Pol report written by the same researcher who conducted the George Washington University study cited above: “The task of crafting a response to the alt-right is considerably more complex and fraught with landmines, largely as a result of the movement’s inherently political nature and its proximity to political power.”

But Silicon Valley’s road to accepting that a group like ISIS could use its technology to radicalize, recruit, and terrorize was a long one. After years of denial and foot-dragging, it took the beheading of American journalist James Foley, quickly followed by videos of the deaths of other foreign journalists and a British aid worker, and the viral chaos that followed for tech companies to finally take the moderation of ISIS content seriously. The US and other governments also began putting pressure on Silicon Valley to start moderating terror content. Tech companies formed joint task forces to share information, working in conjunction with governments and the United Nations and establishing more robust information-sharing systems.

But tech companies and governments can easily agree on removing violent terrorist content; they’ve been less inclined to do so with white nationalist content, which cloaks itself in free speech arguments and which a new wave of populist world leaders are loath to criticize. Christchurch could be another moment for platforms to draw a line in the sand between what is and is not acceptable.

Moderating white nationalist extremism is hard because it’s drenched in irony and largely spread online via memes, obscure symbols, and references. The Christchurch gunman ironically told the viewers of his livestream to “Subscribe to PewDiePie.” His alleged announcement post on 8chan was full of trollish in-jokes from the web’s darkest communities. And the cover of his manifesto featured a Sonnenrad, a sunwheel symbol commonly used by neo-Nazis.

And far-right extremism isn’t as centralized as ISIS. The Christchurch gunman and Christopher Hasson, the white nationalist Coast Guard officer who was arrested last month for allegedly plotting to assassinate politicians and media figures and carry out large-scale terror attacks using biological weapons, were both inspired by Norwegian terrorist Anders Breivik. Cesar Sayoc, also known as the “MAGA Bomber,” and the Tree of Life synagogue shooter both appear to have been partially radicalized via 4chan and Facebook memes.

It may now be genuinely impossible to disentangle anti-Muslim hate speech on Facebook and YouTube from the more coordinated racist 4chan meme pages or white nationalist communities growing on these platforms. “Islamophobia happens to be something that made these companies lots and lots of money,” Whitney Phillips, an assistant professor at Syracuse University whose research includes online harassment, recently told BuzzFeed News. She said this type of content leads to engagement, which keeps people using the platform, which generates ad revenue.

YouTube has community guidelines that prohibit all content that encourages or condones violence to achieve ideological goals. For foreign terrorist organizations such as ISIS, it works with law enforcement internet referral units, like Europol’s, to ensure the quick removal of terrorist content from the platform. When asked to comment specifically on whether neo-Nazi or white nationalist video content was moderated in a similar fashion to content from foreign terrorist organizations, a spokesperson told BuzzFeed News that hate speech and content that promotes violence have no place on the platform.

“Over the last few years we have heavily invested in human review teams and smart technology that helps us quickly detect, review, and remove this type of content. We have thousands of people around the world who review and counter abuse of our platforms and we encourage users to flag any videos that they believe violate our guidelines,” the spokesperson said.

A Twitter spokesperson provided BuzzFeed News with a copy of the policy on extremism the company uses to moderate ISIS-related content. “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people,” the policy reads. “This includes, but is not limited to, threatening or promoting terrorism.” The spokesperson would not comment specifically on whether using neo-Nazi or white nationalist iconography on Twitter also counts as threatening or promoting terrorism.

Facebook did not respond to a request for comment on whether white nationalism and neo-Nazism are moderated using the same image matching and language understanding that the platform uses to police ISIS-related content.
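
Facebook has described that image matching only in broad strokes, but the underlying industry technique — fingerprinting known extremist images and checking new uploads against a database of those fingerprints, which companies share through efforts like the GIFCT — can be illustrated with off-the-shelf perceptual hashing. Everything in the sketch below (the library choice, the distance threshold, the file name) is an assumption for illustration, not a description of Facebook's system.

```python
# Illustrative sketch of matching uploads against known, already-removed images.
# Not Facebook's implementation; the library, threshold, and file name are
# assumptions. Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical database of perceptual fingerprints for images moderators
# have already removed; real hash databases are shared across platforms.
known_hashes = [imagehash.phash(Image.open("previously_removed_example.png"))]

def matches_known_content(upload_path: str, max_distance: int = 8) -> bool:
    """Return True if an upload is a near-duplicate of known removed content.

    Perceptual hashes change little under re-encoding, resizing, or small
    edits, so altered copies still land within a small Hamming distance.
    """
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= max_distance for known in known_hashes)
```

A scheme like this is what lets a platform catch re-uploads of a known video or image even after cropping or re-encoding, which is why the same clip can be blocked millions of times in a day once it has been fingerprinted.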

ISIS’s social media activity before the large-scale crackdown in 2015 had tentpoles much like those of today’s far-right extremism: the hardcore white nationalist and neo-Nazi iconography used by the Christchurch gunman, the more entry-level memes that likely radicalized the MAGA bomber, and the pipeline from mainstream social networks to more private clusters of extremist thought described by the Tree of Life shooter. The group organized around hashtags, distributed propaganda in multiple languages, transmitted coded language and iconography, and siphoned possible recruits from larger mainstream social networks into smaller private messaging platforms.

Its members and supporters were able to post official propaganda materials across platforms with relatively few immediate repercussions. A 2015 analysis of the group’s social media activity found that ISIS released an average of 38 propaganda items a day — most of which did not contain graphic material or content that specifically violated these platforms’ terms of service at the time.

ISIS’s use of Twitter hashtags to effectively spread material in multiple languages went relatively unpoliced for years, as did its practice of sharing propaganda in popular trending tags, known as “hashtag spamming.” As one of many examples, during the 2014 World Cup, ISIS supporters used the Arabic World Cup hashtag to share images of Iraqi soldiers being executed. They also tweeted propaganda and threats against the US and then-president Barack Obama into the #Ferguson tag during the protests after the death of Michael Brown.

Accounts that weren’t flagged by outsiders for sharing graphic or threatening content often went undetected because of the insulated nature of the communities and the number of languages ISIS members used. The group also regularly employed coded language, much of it rooted in a fundamentalist interpretation of the Qur’an and difficult for non-Muslims to parse. As one example, fighters killed in battle or while carrying out terrorist attacks were referred to as “green birds,” a reference to the belief that martyrs of Islam are carried to heaven in the hearts of green birds.

ISIS’s digital free-for-all started to end on Aug. 19, 2014. A YouTube account that claimed to be the official channel of the so-called Islamic State uploaded a video titled “A Message to America.” The video opened with a clip of Obama announcing airstrikes against ISIS forces and then cut away to a masked ISIS member standing next to Foley, who was kneeling on the ground in an orange jumpsuit. Foley had been captured by insurgent forces in November 2012 while covering the Syrian Civil War. The 4-minute, 40-second video showed his execution by beheading, followed by a shot of his decapitated head atop his body.

Within minutes of the Foley video being uploaded to YouTube, it began spreading across social media. #ISIS, #JamesFoley, and #IslamicState started trending on Twitter. Users launched the hashtag #ISISMediaBlackout, urging people not to share the video or screenshots from it.

Then a ripple effect began, similar to the one that deplatformed Alex Jones last year. In Jones’ case, he was first kicked off Apple’s iTunes and Podcasts apps, then YouTube and Facebook removed him from their platforms, then Twitter, and finally his app was removed from Apple’s App Store.

In 2014, YouTube was the first platform to pull down the James Foley video for violating the site’s policy against videos that “promote terrorism.”

“YouTube has clear policies that prohibit content like gratuitous violence, hate speech and incitement to commit violent acts, and we remove videos violating these policies when flagged by our users,” the company said in a statement at the time. “We also terminate any account registered by a member of a designated foreign terrorist organisation and used in an official capacity to further its interests.”

Dick Costolo, then the CEO of Twitter, followed YouTube’s lead, tweeting, “We have been and are actively suspending accounts as we discover them related to this graphic imagery. Thank you.” Twitter then went a step further, agreeing to remove screenshots of the video from its platform.

Foley’s execution also forced Facebook to become more aggressive about moderating terror-related content across its family of apps.

It wasn’t just tech companies that came out against the distribution of the Foley execution video. There was a concerted push from the Obama administration to work with tech companies to eliminate ISIS from mainstream social networks. After years of government-facilitated discussions, the Global Internet Forum to Counter Terrorism was formed by YouTube, Facebook, Microsoft, and Twitter in 2017. DHS Secretary Kirstjen Nielsen has repeatedly highlighted the department’s anti-ISIS collaboration with the GIFCT as one of the key ways the Trump administration is combating terrorism on the internet.

In a certain sense, there is now an online movement similar to #ISISMediaBlackout: a genuine pushback against using the name or sharing pictures of the Christchurch gunman. The House Judiciary Committee announced that it will hold a hearing this month on the rise of white nationalism and has invited the heads of all the major tech platforms to testify. New Zealand Prime Minister Jacinda Ardern has vowed never to say the name of the alleged gunman and continues to call on social media platforms to take more responsibility for the dissemination of his video and manifesto.

But we are a long way from global joint task forces focusing specifically on the spread of white nationalism. To some extent, the Trump administration has continued with the precedent set by its predecessor. But as outlined in the Trump White House’s October 2018 official national strategy for counterterrorism, the administration’s online efforts are focused solely on terrorist ideology rooted in “radical Islamist terrorism.” And President Trump has publicly downplayed the role of white nationalism in last week’s attack, saying he doesn’t view far-right extremism as a growing threat in the US. “I think it’s a small group of people that have very, very serious problems, I guess,” the president said.

Some major tech companies are beginning to crack down on specific instances of white nationalist content, but that won’t eliminate it from the internet altogether. On Thursday, the GIFCT released a statement saying its members were sharing information with one another to remove the Christchurch video in the wake of the attack, but the group did not respond to a request for comment from BuzzFeed News about whether it would take specific steps to combat white nationalist and neo-Nazi content.

As we’ve already seen with sites like Gab, new websites and platforms will spring up. The toxic message board Kiwi Farms is currently refusing to hand over posts and video links uploaded to the site by the Christchurch gunman.

While deplatforming has dramatically curtailed ISIS’s ability to get its message out, the group hasn’t been completely eliminated from the internet either. Propaganda videos are still uploaded to file-sharing platforms and distributed among supporters. Archive.org, in particular, is rife with ISIS content. But it’s now far harder to stumble upon that content, and harder for influencers to maintain a presence long enough to attract a following or form relationships with potential recruits.

When social media platforms cracked down on ISIS, they were cracking down not just on members of the group but on supporters who espoused its ideology: the establishment of a caliphate and the implementation of its radical agenda. And although ISIS claims Islam as the center of its mission, its version of the faith was and is a corrupted one, which the vast majority of Muslims worldwide have risen up to condemn.

While there is a distinct overlap between those who espouse white nationalist ideology and far-right political parties in countries across the world, the two are not the same. There is a clear line between political thought and the practice of a faith — even if you vehemently disagree with the politics or tenets of that faith — and an ideology that requires subjugating — or murdering — whole groups of people.
