The Histories Of Today's Wars Are Being Written On Facebook And YouTube. But What Happens When They Get Taken Down?

Investigators depend on photos and videos posted online from war zones. But takedowns by Facebook and YouTube are putting the war crimes prosecutions of the future at risk.

Dressed in orange jumpsuits, the group of 20 men kneeled in the dirt, hoods covering their faces. Behind them, men wearing black T-shirts and camouflage pants pointed guns at the backs of their heads. Their commander paced beside them.

Then he gave the order to fire.

As recently as a decade ago, a summary execution like this might have been lost in a sea of other wartime atrocities. But this nauseating scene happened to be filmed, and posted on social media.

Like thousands of Facebook and YouTube users, officials at the International Criminal Court, an international tribunal charged with prosecuting war crimes, genocide, and crimes against humanity, saw the video and several others like it, and they issued an arrest warrant for the man who allegedly gave the order to fire, a Libyan military commander named Mahmoud Mustafa Busayf al-Werfalli. It was a landmark moment — the first time in the history of the ICC that images or videos posted on social media had formed the bulk of the evidence cited in a warrant.

In some ways, the development seemed natural. Through a decade of conflict in countries like Syria and Iraq, we’ve almost come to expect videos of militants engaging in brutal acts to surface on social media platforms. But the Libyan conflict, which started in 2011, was the first major war that took place entirely during the social media era, when both militants and ordinary Libyans began documenting the conflict in real time with their cellphones, and posting thousands of photos and videos online. Academics, activists, and war crimes investigators were looking on and saw something more than brutality — a windfall of potential evidence.

“We are just now seeing the coming to fruition of cases reflecting atrocities perpetrated in the age of social media,” said Alexa Koenig, executive director of the Human Rights Center at University of California, Berkeley. “Video grabbed from Facebook is at the very heart of the [Werfalli] case. Without that, the case would fall apart.”

It was all the more ironic that in some cases, militants and their associates were incriminating themselves, filming videos of executions and other atrocities as pieces of propaganda.

The way investigators document human rights abuses is undergoing a fundamental shift. Researchers once depended heavily on diaries, physical records, and interviews with witnesses to atrocities, sometimes conducted years after the fact. Now, investigators at international bodies like the United Nations and the ICC are also cataloguing and analyzing millions of photos, posts, and videos from social media in an effort to hold human rights abusers accountable in court, working alongside nongovernmental organizations, researchers, and digital detectives. That accountability, researchers say, increasingly depends on access to content posted on social media platforms.

But this shift in how war crimes are being investigated comes at the same time that social media companies are facing unprecedented criticism for failing to police their platforms, allowing neo-Nazis and other extremist groups to spread their messages online.

In this debate, companies have been caught between people who want platforms to guard free speech rights and those who say it’s imperative that companies be tough on hate speech — but one thing almost no one wants on social media platforms is content that could promote terrorism. In some countries, the platforms are forbidden by law from hosting this kind of content, and they’ve also been sued over it in the United States. And because videos of executions and other extremely graphic violence are clear-cut violations of the platforms’ rules, they’re low-hanging fruit for the platforms to take down.

“We are committed to ensuring human rights activists and citizen journalists have a voice on YouTube and we are proud of how our service has been used to expose what is happening across the globe,” a spokesperson for YouTube told BuzzFeed News. “We have to carefully balance that commitment with our commitment to keep violent propaganda and incitement to violence off of YouTube. A video of a terrorist attack may be informative news reporting if uploaded by a news outlet or citizen journalist, but that same video clip can be glorification of violence if uploaded in a different context by a different user.”

But the removal of this kind of content is posing a major problem for researchers who are using it for documentation.

What’s clear is that a handful of tech companies in Silicon Valley now hold the keys to a growing treasure trove of evidence posted on social media that is increasingly crucial to building cases, and they have the power to aid — or silence — investigations.

Facebook and YouTube say they are taking a much more proactive approach to removing problematic content, hiring more moderators around the world, particularly people proficient in local languages, and developing algorithms to automatically flag violent posts.

"We've started to use technology to proactively detect potentially violating content," a spokesperson for Facebook told BuzzFeed News. Facebook founder Mark Zuckerberg has called artificial intelligence tools "the scaleable way to identify and root out most of this harmful content.”

With billions of users to police, algorithms play an ever-greater role, leading to some posts being removed without human involvement. (Twitter is not as widely used as Facebook or YouTube in much of the world, and because it’s a real-time platform, researchers said it’s often easier to find and preserve content there than on other social media sites.)

We are approaching the end of a decade dominated by horrific violence that has become so normal in some parts of the world that it is routinely documented on Facebook and YouTube. Now victims and their representatives in the Middle East, North Africa, and Southeast Asia are seeking to hold the perpetrators of war crimes accountable with prosecutions and indictments like those that followed atrocities in Yugoslavia and Rwanda.

Months after investigators found the video of the execution allegedly under Werfalli’s command, it vanished from YouTube and Facebook without warning. If it had disappeared earlier, the ICC might never have seen it at all.

Alison Cole, a former ICC investigator, came to digital forensics relatively late in life. Growing up poor in a tiny New Zealand hamlet, she could barely use a computer until she was in college. When she was in grad school, she was tasked with analyzing cases of mass rape and sexual assault that had taken place during the Rwandan genocide. With few sources of information, she ended up spending hours watching b-roll footage filmed by television reporters working in the country. She sat close to the screen, painstakingly examining the images of dead bodies shown on the VHS tapes for signs of sexual assault.

Years later, after Libyan leader Muammar al-Qaddafi fell and hundreds of photos and videos began surfacing on social media, a lightbulb went off.

“It’s horrible to say, but I thought, this conflict is being livestreamed. This is evidence,” she said. The last time video evidence had had that impact on her was those VHS tapes.

Libya was the first major conflict, she realized, that had taken place fully in the social media era. By that time, Cole had already worked for every ongoing international war crimes tribunal. She had good contacts at the ICC, and she and a group of other researchers and technologists began putting together discussions and training sessions with staff at the court, encouraging them to take digital research seriously. It wasn’t always easy — after Cole gave one presentation to UN judicial officials in 2011, a skeptical judge responded, “I still don’t understand what you are doing with the tube.”

But many at the court understood that knowledge of social media research was something worth investing in. Video evidence, which was far less widely available before cellphones became commonplace, could be particularly valuable to ICC prosecutors.

“There’s something very powerful about video evidence,” Julian Nicholls, a senior trial lawyer at the ICC, told BuzzFeed News. “It can’t be cross-examined. It can’t forget. It can’t be bribed. It can’t make a mistake and say the light was red or green. [Once authenticated] it can be very reliable evidence.”

In a strategic plan dated October 2013, the ICC prosecutor’s office said it would work on improving its ability to carry out investigations on the internet, noting that the court needed alternatives to evidence from witnesses, who often faced intimidation or pressure. The court also recruited a board of outside experts to advise staff on new technology.

“There is a big movement to train their staff of all kinds, from judges down to the lowest level, on the basics of this kind of work, so they can understand what an open-source investigation is and what this analysis means,” said Eliot Higgins, the founder of Bellingcat, who sits on the board.

At the UN, the General Assembly voted in December 2016 to establish a body to bring accountability for human rights atrocities in Syria by preserving evidence in preparation for future legal action. The body, called the International, Impartial and Independent Mechanism, focuses on international crimes committed in Syria and works with civil society groups that are saving millions of photos, videos, and posts from the Syrian civil war.

"The mechanism hopes that policymakers and technology companies will take measures to ensure that their efforts to remove extremist content from social media platforms do not unduly limit the collection and preservation of evidence of serious violations of international law," said Catherine Marchi-Uhel, who heads the body.

The ICC is tight-lipped about ongoing investigations. But the documentation of the Werfalli case, both by independent investigators and by the court itself, sheds light on how investigative processes have changed, and the role played by social media platforms.

By the time videos in the Werfalli case surfaced on Facebook and YouTube, analysts at the ICC were monitoring social media activity in the country in real time, the ICC’s Nicholls said. He declined to comment on the specifics of how the case was brought. But the videos are so shocking that they would have been difficult for investigators to miss.

After the court issued a warrant for Werfalli’s arrest last year, Higgins and his colleagues at Bellingcat took notice. They decided to figure out how the evidence in the warrant had come together, and began searching for some of the videos it listed. They quickly found the video of Werfalli’s associates gunning down the men in orange jumpsuits. To their surprise, it was still online, apparently posted on Facebook by an associate of Werfalli.

The next step was to figure out where the shooting took place. Swapping information via Slack and Twitter, the Bellingcat crew and their followers pinpointed the part of Benghazi where the incident occurred by examining fleeting shots of high-rises built by Chinese workers. In satellite images they found on Google Earth, they could even see the bloodstains the men’s bodies left on the sand after they were gunned down.

“It was so exciting that an international body like the ICC has used that evidence, taking a similar approach that we do, and to see that this arrest warrant is based on this video and that video,” said Christiaan Triebert, an investigator for Bellingcat who worked on the project.

Triebert, who’s 27 and only a couple of years out of college, loved investigations like this. It was like solving a puzzle, but one that could bring justice to some of the world’s most vulnerable people, he said. He poured himself into the research, scouring Facebook and YouTube for clues late into the night.

But it wasn’t long before that got tougher.

In late 2017, in the months after Bellingcat began investigating the Werfalli case, videos started disappearing from YouTube. First it was the original videos of executions. Then, after Triebert and his colleagues found other copies and uploaded them to the Bellingcat YouTube account, with added context, those were removed too.

YouTube says it encourages people who post videos to include context that makes it clear they’re not being posted as propaganda for a militant group. But generally speaking, a lack of this kind of context or a high level of graphic violence means videos are taken down, a YouTube spokesperson said. It’s unclear why Bellingcat, an organization known for research, had videos removed from its own YouTube account, though Higgins said he had been in touch with the company since 2013.

Social media platforms don’t want to be havens for terrorist propaganda. But Triebert and his colleagues wished YouTube had a way to provide them and other investigators access to content that could be important, rather than suddenly making it disappear. The problem is compounded for the ICC, which sometimes only starts gathering information about possible war crimes months or even years after the fact.

Last summer, Hadi al-Khatib was trying to finish a report on medical facilities in Syria that had been targeted by airstrikes when he noticed something odd. Many of the YouTube videos his team had used to research the report over the previous few months had suddenly disappeared.

“That’s how we understood we had a big problem and that we should speak to YouTube,” said Khatib, the founder of the Syrian Archive.

Khatib’s NGO, which preserves and analyzes social media content documenting atrocities in Syria, is a small organization based in Berlin. It had relied on videos that ordinary Syrians had been posting to YouTube since 2011 to document what the conflict looked like on the ground, providing vital information about parts of the country that few outsiders could access. Many of the videos were posted not by militants like Werfalli in Libya but by ordinary Syrians who had risked their lives to get images of what was happening in their country out into the world. These citizens were exactly who Khatib wanted to empower. It’s also likely that the disappearance of the videos hampered the work not only of researchers like Khatib but also of investigators at the UN.

At first Khatib had no idea who to reach out to at Google, which owns YouTube, but through layers of friends-of-friends, he got a response from the company and found out — to his surprise — that it was an algorithm, not a human, that had flagged those videos and thousands of others like them. A spokesperson for YouTube told BuzzFeed News that despite the use of the algorithm, the real issue was that its human content moderators, who make the ultimate decision to take down content in most cases, were not properly trained. By the Syrian Archive’s count, hundreds of accounts were taken down from the site, including those of researchers and ordinary Syrians who had, in some cases, posted the footage at great risk to themselves and their families.

“Inevitably, both humans and machines make mistakes,” a spokesperson for YouTube said. “We use errors to make our enforcement guidelines better and retrain our teams.”

The issue attracted a wave of press coverage. YouTube admitted it had made a mistake and began restoring a portion of the videos. Khatib and his colleagues have spent more than a year working with YouTube to correct the mistake, painstakingly advocating for deleted Syrian accounts. The company is not paying them. It has been, at times, a fraught process, Khatib said.

For one thing, the Syrian Archive and other NGOs don’t know about every single Syrian publishing videos on YouTube, meaning many videos from legitimate sources are likely gone forever. For another, it was tough for Syrian activists to believe any algorithm could parse the mind-boggling complexity of their conflict and its many factions, or reliably distinguish a video documenting violence for accountability from one that was simply propaganda.

“The process is quite messy. When YouTube channels are credible and they’ve been removed, we ask them to reinstate them again. But then they remove it again,” Khatib said. “We have cases where accounts have been removed three or four times at least after being reinstated.”

“We are asking YouTube to reinstate content because it might be the only evidence left for us as Syrians living abroad about human rights violations that are happening there,” he added.

A spokesperson for YouTube said the company was working on a feature that would, in some cases, give users the option of making videos private (meaning they don’t show up in search results and can be viewed only by invitation) rather than having them removed, though it’s unclear how widely the tool is being rolled out. In any case, the spokesperson said, both humans and machines make mistakes, but both can be trained to get better with time.

“It’s that classic question of who do you test your technologies on,” said Sam Gregory, the program director at Witness, an NGO that supports people using photos and videos to document human rights violations. “They’ve been testing their technologies in places where it’s created real societal damage where it could be avoided. AI is improving and it can play a role — but you don’t move fast and break things when you’re breaking people.”

Part of the problem is that no one outside the companies knows exactly how the content moderation algorithms employed by platforms like Google and Facebook work. The algorithms affect billions of people around the world, but no independent body has evaluated them. Several researchers and NGOs who had held meetings with officials from Facebook and Google said they repeatedly asked for more transparency but came away disappointed. The companies wanted to address the problems, they said, but it was tough to help them improve without deeper insight into how their systems work.

The major social media platforms also share a database of banned terrorism-associated photos and videos — but no one but the companies knows precisely what goes into the database or how the information is shared. Photos and videos from that database can be taken down without a human ever being involved.
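To make the mechanics concrete, here is a minimal sketch of how hash-based matching of this kind can work, built on the open-source imagehash library; the banned-hash list, distance threshold, and file path are illustrative assumptions, not a description of the platforms’ actual shared database.

```python
# Minimal sketch of hash-based image matching, the rough mechanism behind
# shared takedown databases. The banned hashes, threshold, and file path
# below are illustrative assumptions, not real platform data.
from PIL import Image
import imagehash

# Hypothetical perceptual hashes of previously banned images.
BANNED_HASHES = [imagehash.hex_to_hash("d1d1b1a1c1e1f101")]

MAX_DISTANCE = 8  # Hamming distance (in bits) below which we call it a match

def matches_banned(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a banned one."""
    h = imagehash.phash(Image.open(path))
    # imagehash overloads subtraction to return the Hamming distance.
    return any(h - banned <= MAX_DISTANCE for banned in BANNED_HASHES)

if matches_banned("upload.jpg"):
    print("flagged for removal, no human review required")
```

Matching new uploads against a precomputed hash list is what allows a re-uploaded copy to be pulled the moment it appears, which is exactly the behavior researchers describe.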

“We’re seeing these platforms shifting from being private actors to assuming both public and private roles, facilitating freedom of expression, services, and other things we associate with how we live our lives,” said Emma Irving, a scholar of international law at Leiden University in the Netherlands. “We’re seeing a shift in how we approach them, talking about how they approach human rights obligations — but we are still dealing with private companies motivated by profit. There’s not a financial incentive for them to be sharing too much.”

And despite the attention to the Syria videos, the problem has not gone away. Now, more than a year later, the Syrian Archive is analyzing a new batch of content about medical facilities allegedly targeted by Russian and Syrian aircraft. Its original data set included nearly 2,000 videos posted on YouTube. At least a quarter of them are gone.

Unlike law enforcement, international bodies like the UN and the ICC don’t have the power to subpoena information directly from tech companies. Like researchers and NGOs, they depend fundamentally on the largesse of the tech companies. Asked about the issue, a spokesperson from YouTube said discussions with the ICC had taken place but were ongoing.

Still, there are some signals that social media companies may cooperate in some cases. In a little-noticed line in a statement about its presence in Myanmar, for instance, Facebook said: “we are committed to working with and providing information to the relevant authorities as they investigate international human rights violations in Myanmar, and we are preserving data for this purpose, including content on the accounts and Pages we removed in August and October.” It was a clear signal that the platform preserves at least some content even after it’s taken down. The statement also came shortly after the ICC found it had jurisdiction in the genocide of Rohingya Muslims in Myanmar. Asked to elaborate, a spokesperson for Facebook said, “upon receipt of legal process, we will assess and respond in accordance with applicable law and our terms of service.” A spokesperson for YouTube confirmed that the platform also preserves content even after it’s taken down.

Researchers focused on Syria are consolidating and saving huge volumes of digital data that could be valuable for a hypothetical future war crimes tribunal holding perpetrators accountable. But downloaded images and posts are decontextualized, missing the metadata that proved so valuable to Triebert and his colleagues — information about who posted the images, when, and who they were linked with, for instance.
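To illustrate what preserving that context involves, here is a minimal archiving sketch using the open-source yt-dlp downloader, which can write a sidecar file of metadata (uploader, upload date, description) alongside the media itself; the URL and output paths are hypothetical, and projects like the Syrian Archive run far more elaborate pipelines.

```python
# Minimal archiving sketch: save the video *and* its surrounding metadata
# (uploader, upload date, description), not just the media file.
# Uses the open-source yt-dlp library; the URL and paths are hypothetical.
from yt_dlp import YoutubeDL

opts = {
    "outtmpl": "archive/%(id)s.%(ext)s",  # where the media file is stored
    "writeinfojson": True,   # sidecar JSON: uploader, upload date, description
    "writethumbnail": True,  # keep the thumbnail as originally posted
}

with YoutubeDL(opts) as ydl:
    # Hypothetical URL standing in for a video flagged for preservation.
    ydl.download(["https://www.youtube.com/watch?v=EXAMPLE_ID"])
```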

“It’s especially an issue for us as investigators who are not monitoring in real time,” Nicholls said. “If it’s gone by the time we start looking, then it’s gone … The evidence could be lost.”

For now, at least, content takedowns on Facebook and YouTube are here to stay.

“The activist groups are trying to gather and preserve this information, creating shadow Facebooks and shadow YouTubes,” said Koenig of UC Berkeley. “But how do you send out a bat signal to all these groups to gather content before it can be destroyed?” ●


Topics in this article

Skip to footer