Facebook Issues New Experiment Guidelines, But It's Still Lousy Science

The social network doubles down on the results of its controversial — and flawed — study.

Following the intense backlash to its recent user experiment, Facebook yesterday announced improved guidelines for research at the company, saying it would subject new studies involving special groups of people, or topics considered deeply personal (think emotions), to more extensive internal review. Further, the company will improve training for employees and keep its research site up to date with its latest academic publications. These aren't particularly bold steps, but they are welcome ones.

But lost amid the furor over the study's ethics was why, exactly, the company conducted the experiment in the first place. As yesterday's statement makes clear, Facebook wanted to investigate the notion that it makes us unhappy. Could the ostensibly happy world of Facebook have a negative effect on our psyche? Might positive posts from friends actually make us feel bad? It's certainly plausible, and many studies suggest as much. Sometimes we feel left out, or not nearly as popular or accomplished or well-traveled as our friends. So Facebook decided to put these notions to the test, hoping to put them to rest.

And, indeed, as yesterday's statement notes, the experiment finds that "people respond positively to positive posts from their friends," not negatively. The lead author of the experiment, Adam Kramer, put it more bluntly: "We found the exact opposite to … the conventional wisdom. Seeing a certain kind of emotion (positive) encourages it rather than suppresses" it.

Unfortunately for us readers, the study seems almost designed to produce that very conclusion. Rather than rely on users' private reports of how Facebook makes them feel, the company used their status updates as an indicator of how they feel. But as common sense (and much research, including Facebook's own) would indicate, status updates are a biased representation of our feelings. In particular, the social science suggests that when positive posts on Facebook make us feel bad, we don't broadcast those feelings to our friends; we keep them to ourselves.

Some of the basic science of status updates should have given Facebook's researchers pause about their methods. We tend to post when we're more emotionally aroused, so high-arousal emotions like anger, anxiety, and excitement will tend to be shared (and spread). But we tend not to post low-arousal emotions, like feeling sad, lonely, left out, or peaceful. This means using status updates as a measure of how we feel will undercount these low-arousal emotions. The sentiment analysis algorithm Facebook used may also be biased against these low-arousal negative emotions.
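To see how this skews the measurement, consider a minimal toy simulation (the emotions, the valence and arousal numbers, and the posting rule below are invented for illustration; none of this is Facebook's data or method): if the chance of posting rises with how emotionally aroused we feel, then the sentiment visible in status updates will come out rosier than the sentiment people actually feel.

```python
# Toy sketch of arousal-driven posting bias (hypothetical numbers).
import random

random.seed(0)

# Hypothetical emotional states: (name, valence, arousal)
# valence: -1 negative .. +1 positive; arousal: 0 calm .. 1 agitated
STATES = [
    ("excited",  +0.8, 0.9),
    ("angry",    -0.8, 0.9),
    ("anxious",  -0.6, 0.8),
    ("content",  +0.6, 0.3),
    ("sad",      -0.7, 0.2),
    ("lonely",   -0.6, 0.2),
    ("left out", -0.5, 0.2),
]

def simulate(n_users=100_000):
    felt, posted = [], []
    for _ in range(n_users):
        name, valence, arousal = random.choice(STATES)
        felt.append(valence)
        # Key assumption: arousal drives the decision to post.
        if random.random() < arousal:
            posted.append(valence)
    return sum(felt) / len(felt), sum(posted) / len(posted)

true_mean, observed_mean = simulate()
print(f"mean valence actually felt:    {true_mean:+.3f}")
print(f"mean valence visible in posts: {observed_mean:+.3f}")
# The posted sample skews toward high-arousal states, so low-arousal
# negative feelings (sad, lonely, left out) are underrepresented.
```

With these toy numbers, the average feeling comes out clearly negative, while the average visible in posts comes out noticeably less so, because the calm, unhappy states rarely make it into the feed. The same logic applies to any measurement, human or algorithmic, that only ever sees what gets posted.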

The problem is that sadness, loneliness, and feeling left out are precisely the emotional consequences Facebook set out to rebut, which means the social network's study may have preordained its own conclusions.

In the end, it's hard to know what to make of the methodological holes in Facebook's study; you'd hope that Facebook, of all companies, would understand the limits of status updates as a window into the human experience. There's no reason to believe Facebook meant to publish biased research, but it should have bent over backwards to convince us, the consumers of that research, that the study was truly and fully impartial. This, Facebook did not do, and as a result it has lost trust, not gained it.

Galen Panger is a Ph.D. candidate in the School of Information at the University of California, Berkeley. He recently wrote, "Why the Facebook Experiment is Lousy Social Science." Send him an email or tweet with your thoughts and comments.