How Facebook Handled A Fake Photo Of Mark Zuckerberg In A Nazi Uniform

Amid an unprecedented content moderation effort filled with unclear choices, Facebook is figuring out where to draw the line between being proactive and limiting speech.

Earlier this year, a simple Facebook search for Mark Zuckerberg’s name returned an unexpected result: an image of the Facebook founder in a Nazi uniform presented at the top of his photos, directly underneath his verified profile. The photoshopped picture left Facebook with two undesirable choices: It could either delete the inflammatory photo and risk accusations of censorship, or willingly host an image of its CEO in a Nazi outfit.

Neither option was particularly appealing for Facebook, but the situation was not unfamiliar. Though some of the company’s content moderation decisions have clear rationales — removing child pornography, for example — many of those it faces are similar to its fake Zuckerberg picture problem: dilemmas with no easy answers and potentially fraught consequences.

These tough choices, and the philosophy with which Facebook approaches them, are becoming increasingly important now that the company is in the midst of an unprecedented escalation of its content moderation efforts. The 2 billion–user platform is in the process of hiring 4,000 new moderators following intense public scrutiny over its bungled handling of violent content, fake news, and a Kremlin-backed effort to sow discord in the US during an election year. Facebook’s human moderators will now be bringing the company’s values to bear on more decisions about content that falls in gray areas, a fact often lost in the discussion of the need for the company to do better.

For an increasingly interventionist Facebook, now comes the hard part: figuring out just how to wrangle the most difficult content problems on its vast platform — racism and hate speech, misinformation and propaganda, Mark Zuckerberg photoshopped into a Nazi uniform.

“This is really a globally diverse population and people across the world are going to have really different ideas about what is appropriate to share online, and where we should draw those lines,” Facebook policy head Monika Bickert told BuzzFeed News.

Of the many complex issues Facebook faces, some of the thorniest emerged during the November hearings in Washington when Facebook, Google, and Twitter were called into Congress to discuss Russia’s manipulation of their platforms during the 2016 presidential election. Amid intense grilling from lawmakers, the platforms’ lawyers repeatedly promised to do better.

But just how Facebook should deliver on that promise remains a major question. “There are going to be innumerable dilemmas which will not have easy answers,” Rep. Adam Schiff, the ranking member on the House Intelligence Committee, told BuzzFeed News following the hearings. “Even with an outfit like [Russian television network] RT, the questions are going to be difficult.”

RT has been called “the Kremlin’s principal international propaganda outlet” by the US intelligence community, making it a hot potato for Facebook, which was upbraided by Congress for enabling the Russians’ efforts to disrupt the 2016 election. Facebook does have the power to limit the spread of RT content on its platform, but a decision to do so is fraught. Allowing RT’s content to move through its network effectively turns Facebook into something of a propaganda delivery mechanism. But silencing RT would have consequences too, dealing a blow to free expression on a platform that hosts a great deal of public discussion.

Kyle Langvardt, an associate professor at University of Detroit Mercy Law who studies the First Amendment, warned of the danger of removing political content from a platform like Facebook. “If private companies are deciding what materials can be censored or not, and they’re controlling more and more of the public sphere, then we essentially have private companies regulating the public sphere in a way we would never accept from the state,” he said.

"The fact that these companies aren’t the government makes what they’re doing even more disturbing."

As Langvardt indicated, the problems that might arise from Facebook regulating political speech are troubling indeed. The platform is used by 2.07 billion people each month, and its News Feed has become a de facto town square, a place where 45% of Americans say they get news, according to the Pew Research Center. There are few checks on Facebook’s power to remove content; it posts no public record of the content it removes, making it nearly impossible for third parties to hold it accountable for moderation decisions. “In a lot of ways, the fact that these companies aren’t the government makes what they’re doing even more disturbing,” Langvardt said. “If they were the government at least they’d be accountable to political process.”

Facebook does at least appear interested in more transparency measures. "We want to be more transparent about the ways we enforce against problematic content on Facebook, and we're looking at ways to do that going forward," a spokesperson said.

Currently, Facebook isn't treating RT differently from other content on its platform, Bickert said. “Their relationship with their government is not a disqualifier to us,” she said. “If they were to publish something that violated our policies, we would remove it.” Facebook is also continuing to allow RT to advertise, unlike Twitter, which banned the publication from its ad platform after offering it 15% of its 2016 US elections ad inventory for $3 million.

Fake news — another topic discussed in those November hearings in DC — may be an even trickier area for Facebook, especially now that critics have alleged false information on the platform may have contributed to the killing of thousands of members of Myanmar's Rohingya ethnic minority. When news reports of the events surfaced, people accused Facebook of facilitating genocide; some called for the company to set up emergency teams to deal with the issue. But here too, a proper approach isn’t immediately apparent. Had Facebook deployed an emergency team to delete false content about Myanmar, it would have had to make blunt judgment calls on a conflict thousands of miles away from its headquarters. And indeed, when Facebook did finally take action and removed some posts documenting military activity in Rohingya villages, Rohingya activists accused it of erasing evidence of an ethnic cleansing.

Philosophically, Facebook doesn’t particularly want to remove false content. “I don’t think anyone wants a private company to be the arbiter of truth,” Bickert said. “People come to Facebook because they want to connect with one another. The content that they see is a function of their choices. We write guidelines to make sure we’re keeping people safe. We want our community to determine the content that they interact with.” Facebook has recently introduced a number of products to limit the spread of fake news, but the early results are incomplete.

“I don’t think anyone wants a private company to be the arbiter of truth.”

Even with clear guidelines, moderation can be fraught. How does a platform like Facebook police harassment conducted via codeword? How does it handle racial slurs used in an educational or historical context? How does it handle a swastika in a satirical context? And though Facebook has a clearly defined position against regulating political speech, that speech can still be subject to the moderation team should it run up against other Facebook rules, such as those prohibiting offensive speech.

Given the level of discretion involved, there’s a fine line between proactively removing misinformation, hate speech, and abusive content, and censorship. We’re already seeing signs of what a more aggressive platform looks like. Twitter, with rules similar to Facebook’s, is taking a more interventionist approach to speech following a long history of ignoring harassment. Its current campaign of de-verifications, account locks, and account suspensions is starting to have a noticeable secondary impact — leaving more than a few people on both sides of the aisle claiming to be unfairly silenced. “Twitter locked my account for 12 hours for calling out that racist girl,” one user wrote in October in a statement no longer surprising for the platform.

Facebook has used a lighter touch than Twitter, but its missteps over the past year have pushed it further under the microscope and relentlessly held it there. “Not just Facebook, but all mainstream social media platforms' practices are under scrutiny in a way that they never have been before,” Sarah Roberts, an assistant professor at UCLA who’s been studying content moderation for seven years, told BuzzFeed News. “Facebook, whether justified or not, seems to bear the brunt of that scrutiny.”

And as Facebook tries to find the right content moderation balance, it also faces another challenge: keeping its thousands of moderators on the same page so they apply its rules consistently. David Wilner, an early Facebook employee who helped set up the company’s original content moderation effort, told BuzzFeed News this is the hardest part of the job. “For the most part, common sense isn’t a real thing. Sure, there is a set of fundamentals that most people will agree are good — helping a crying baby, for instance. But once you get beyond broad strokes about very, very basic flesh-and-blood questions, there’s no natural consensus,” Wilner said. “Everyone brings their history to each decision — their childhood, race, religion, nationality, political views, all of it. If you take 10 of your friends and separately give them 50 difficult examples of content to make decisions on — without any discussion — they will disagree a lot. We know this because we tried it.”

In the past, Facebook has shown a tendency to attack its problems with brute force, an approach that might prove disastrous were it applied to this new content moderation push, which is driven in part by intense public pressure. Facebook has already demonstrated the problems this approach can lead to: Earlier this year it broadly took down posts containing the word “dyke,” even when it was used in self-referential rather than hateful ways. Upon receiving more context, Facebook restored many of the posts.

But if the way Facebook handled the Zuckerberg Nazi photoshop is an indication, the company appears to be taking a more nuanced approach to delicate content issues. Facebook did not delete the image. Instead, it left it up on the platform. The image no longer appears in the top results for searches of "Mark Zuckerberg," and a spokesperson said it was pushed down in Zuckerberg's search results in a sitewide update meant to improve relevance.
