The Comment Moderator Is the Most Important Job in the World Right Now

Online platforms continue to absorb more and more of our society, while the companies in charge neglect the human beings tasked with cleaning up the mess that’s left behind.

Last week, YouTube did something unprecedented. Awash in criticism over the discovery of a network of child predators using the platform’s comment sections to share timestamps and screenshots of underage users from implicitly sexual angles, the company disabled comments on almost all videos featuring minors. Only a small number of channels featuring minors would be able to stay monetized — as long as they “actively moderate their comments.” The decision, made by a company that has long stressed the importance of algorithms, seems a tacit acknowledgement that human moderation is currently the best solution for policing harmful content.

Moderating content and comments is one of the most vital responsibilities on the internet. It’s where free speech, community interests, censorship, harassment, spam, and overt criminality all butt up against each other. It has to account for a wide variety of always-evolving cultural norms and acceptable behaviors. As someone who has done the job, I can tell you that it can be a grim and disturbing task. And yet the big tech platforms seem to place little value on it: The pay is poor, workers are often contractors, and it’s frequently described as something that’s best left to the machines.

For instance, last week, The Verge published an explosive look inside the facilities of Cognizant, a Facebook contractor that currently oversees some of the platform’s content moderation efforts. In the story, employees who requested anonymity for fear of losing their jobs described the emotional and psychological trauma of their work. Some smoked weed during breaks to calm their nerves. Others described being radicalized by the very content they were charged with policing. Most made just $28,000 a year.

Facebook moderators in developing countries like India are even worse off, according to a recent Reuters report. Contractors at Genpact, an outsourcing firm with offices in the southern Indian city of Hyderabad, each view about 2,000 posts over the course of an eight-hour shift. They make about $1,400 a year; that’s roughly 75 cents an hour.

It’s clear that human moderators are something that platforms like Facebook or YouTube believe they can eventually optimize away. Last spring, Mark Zuckerberg, while being questioned in front of Congress, referenced artificially intelligent moderation more than 30 times. In the meantime, though, the human moderators at Facebook or YouTube spend their days getting high to numb themselves so they can keep scrubbing suicides from our News Feeds. These companies keep telling us to ignore the trash in the streets, saying it’ll all get better once they can figure out how to get the garbage trucks to drive themselves.

The mega-platforms that have turned the world into one giant comment section have shown time and time again that they have little interest in hiring real people to moderate it. Every week some new scandal flares, and we watch as Facebook, YouTube, and Twitter play whack-a-mole, promising us this sort of thing won’t happen again. Meanwhile, that thin layer separating platforms like Facebook and YouTube from complete 4chan-style chaos is cracking.

When the barbarians are already inside the gates, you don’t tell the villagers to stay tuned for an algorithmic solution.

That said, hiring human beings to moderate your comment sections, your news feeds, and your video sites is expensive and hard. Moderation requires a realistic sense of scale. There are only so many comments, posts, and videos that a human being can watch in a day. It also requires a concrete set of rules that can be consistently enforced. Right now, Facebook isn’t sure how it defines hate speech. YouTube can’t figure out what constitutes a conspiracy theory or how to properly fact-check it. Twitter won’t say whether it’d ban powerful users like President Trump for violating its user guidelines.

Whether it’s fake news, child exploitation, Russian chaos agents, marketing scams, white nationalism, anti-vaxxers, yellow vests, coordinated harassment, catfishing, doxing, or the looming possibility of an information war between India and Pakistan escalating into a nuclear one, all of it comes down to one very simple issue: sites like Facebook, YouTube, and Twitter have failed for years to establish and enforce clear, repeatable moderation guidelines, while their platforms have absorbed more and more of our basic social functions.

Older online communities like Slashdot, MetaFilter, or even Fark proved that human moderation alongside a clear set of user guidelines can work. Sure, the scale was much smaller than that of today’s platforms, but the method was relatively effective. Right now, we don’t have enough human beings looking at what gets put on the internet, and the few who are don’t have the resources or support to do it well. When something goes wrong, like a mass panic over a maybe-real but almost assuredly not-real suicide game like the Momo Challenge, YouTube drags its feet and then finally tries to erase the whole thing by demonetizing all Momo-related videos. And it only gets around to that long after Kim Kardashian West and law enforcement agencies have started freaking out about it.

Maybe one day AI will be able to instantly and effectively police the whole internet, but in the meantime, we’re all trapped in an endless comment thread from Reddit’s /r/The_Donald without a mod in sight. Community moderators, content moderators, audience development editors — they’re all shades of the same extremely important role, one that has existed since the birth of the internet. It’s the person who looks at what’s being posted to a website and decides whether a piece of content or a user should stay or be taken down. It’s like combining a sheriff and a librarian.

Usenet, one of the first real online communities, had moderators for some of its newsgroups. Subreddits have them. Discord servers have them. Online communities have always been defined, in some way, by their moderators. Comedy website Something Awful famously had a huge list of every banned user and why they were banned. The feminist humor site the Toast used to have one of the nicest, most uplifting comment sections on the internet. One of the strictest, most heavily censored communities on the internet for a while was the Neopets message board, with mod drama kicking off weekly.

For about nine months, I worked as BuzzFeed’s comment moderator. Every day, I’d come in and pull up a feed showing every comment on the website. The feed held about 100 comments at a time. When I asked how many I should clear in a day, I was told that about nine refreshes was normal. BuzzFeed’s system in 2012 was actually a lot more intuitive than other moderation feeds I had seen. There was a toolbox that hovered to the right of the screen, allowing me to block comments as I saw them or ban users outright. My favorite tactic was a moderation technique called “shadow banning,” where users don’t know that they’re the only ones who can see their own comments.
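Shadow banning works because the banned user’s comments are stored and shown back to them as usual; they’re simply filtered out of everyone else’s view. Below is a minimal Python sketch of that filtering logic. It’s a generic illustration, not BuzzFeed’s actual moderation system, and every name in it is hypothetical.

```python
# A minimal sketch of shadow banning, assuming a simple in-memory store.
# The names (Comment, shadow_banned, visible_comments) are hypothetical;
# this is not BuzzFeed's real moderation tooling.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

comments: list[Comment] = []
shadow_banned: set[str] = set()  # usernames whose comments only they can see

def post_comment(author: str, text: str) -> None:
    # The comment is always stored, so the banned user notices nothing unusual.
    comments.append(Comment(author, text))

def visible_comments(viewer: str) -> list[Comment]:
    # Everyone sees normal comments; a shadow-banned author also sees their own.
    return [
        c for c in comments
        if c.author not in shadow_banned or c.author == viewer
    ]

shadow_banned.add("troll")
post_comment("troll", "buy my mixtape")
post_comment("reader", "great post")

assert len(visible_comments("reader")) == 1  # the shadow-banned comment is hidden
assert len(visible_comments("troll")) == 2   # but the troll still sees it
```

The appeal of the technique is that the banned user keeps posting into the void instead of immediately registering a new account to get around a visible ban.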

For the most part, I would delete spam, give fun stickers to funny commenters, and scan for hate speech. At the time, BuzzFeed was still pretty small, so it wasn’t a particularly difficult job.

The hardest days, though, were when we’d get attacked by another online community. The tactic is called “astroturfing,” and usually a community like Reddit or 4chan, or the neo-Nazi message board Stormfront, would flood our comment sections with gore, pornography, and hate speech. If this sort of thing happened overnight — which it usually did — I’d end up working through lunch to clean things up. After days like that, I’d usually spend my nights silently staring off into space, not because I was particularly traumatized, but because there’s really only so much vitriol and toxicity a person can absorb before it all stops meaning anything.

Those astroturf days are what every day is like now.

This might be surprising, but even 4chan has moderators. They’re appropriately called janitors. The site is, of course, nearly impossible to moderate. It’s an anonymous free-for-all. The only way to really ban a user is to block their IP address. Janitors’ quixotic quest to clean the internet’s deepest pit of misery is a meme there. Users will wait until the middle of the night and flood the boards with grotesque images, writing, “mods are asleep.”
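On an anonymous board there’s no account to suspend, so “banning a user” collapses into banning an address. Here is a rough, purely hypothetical Python sketch of what that check amounts to; the addresses come from reserved documentation ranges, and none of this is 4chan’s real tooling.

```python
# A rough illustration of IP-based banning on an anonymous board, where there
# are no accounts to suspend. Hypothetical; not 4chan's actual tooling.
banned_ips: set[str] = {"203.0.113.7"}  # example address from a documentation range

def can_post(ip_address: str) -> bool:
    # Anonymous posters are identified only by IP, so a ban is just an IP lookup.
    return ip_address not in banned_ips

print(can_post("203.0.113.7"))   # False: banned
print(can_post("198.51.100.2"))  # True: free to post
```

It’s also part of why the janitors’ quest is quixotic: a banned poster behind a VPN or a dynamic IP can usually just come back.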

For the first decade of 4chan’s lifespan, the site was the exception: the anarchic alternative to a world of mods and admins and forum etiquette. Most of its early users were people who had been banned from Something Awful for being terrible. But now, it seems like the only difference between 4chan and Facebook or YouTube is that the big platforms have a legion of underpaid contractors working with an algorithm to quickly push the toxicity under the rug if enough people make a fuss about it.

In 2009, one of 4chan’s janitors wrote about the job in a Reddit AMA. In response to a question about whether 4chan is really that bad of a community, the janitor wrote something that will sound familiar to anyone who has spent any time on Facebook, YouTube, or Twitter since 2015.

“Everyone else seems to be acting like a racist crazy sociopath, so people who are even borderline act up because they can,” the janitor wrote. “If your neighbour has set his house on fire and then tried to piss it out and the other just shoots at passersby out the window, you’re not going to worry too much if your lawn starts to die.”
