Hundreds Of Researchers Are Trying To Replicate High-Profile Psychology Studies
Tired of flashy research that fails to hold up, psychologists around the world are taking it upon themselves to redo influential studies.
More than 400 psychologists worldwide are teaming up to fight a looming problem in their field: headline-making research that doesn’t hold up.
As part of a new network called the Psychological Science Accelerator, the researchers are trying to fix the so-called replication crisis that’s punctured splashy findings, from Diederik Stapel’s fabricated claims that messy environments lead to discrimination, to Brian Wansink’s retracted studies about eating behavior. Small sample sizes, one-off experiments, and flawed data-crunching are just some of the problems to blame.
So at the Accelerator, scientists will select a handful of influential studies, attempt to redo them, and share their results with the public, whether or not they’re able to reproduce the original finding. By collecting lots of data at lots of different sites — more than 230 labs across six continents are involved — the researchers hope to verify how robust, and how broadly applicable to different kinds of people, these discoveries actually are.
First under the microscope is a widely cited 2008 study out of Princeton University about how people make judgments about others based on their faces. More than 120 labs will try to replicate it.
“I think we just haven’t collected enough data on enough of our questions to know how reliable some of these phenomena are,” Christopher Chartier, director of the Psychological Science Accelerator, told BuzzFeed News. This week, the group will circulate a paper that explains the project to the academic community.
The Accelerator grew out of a blog post that Chartier, an associate psychology professor at Ashland University in Ohio, penned last year. It isn’t the first effort of its kind. In 2015, the Center for Open Science's Reproducibility Project sought to replicate 100 psychology experiments — and reproduced less than half of the original findings.
But while the Reproducibility Project sought to provide an overview of the field, the Accelerator is assigning many researchers to verify just a handful of studies.
Brian Nosek, a University of Virginia psychologist who led the Reproducibility Project and informally advises the Accelerator, said that both approaches have merits. The Accelerator has the potential to be “a demonstration that we can make more progress a lot faster if we work collectively,” he said.
Harvard University psychologist Daniel Gilbert, who has argued that the Reproducibility Project’s evidence of a “crisis” was overblown based on how it was carried out, said he welcomes the Accelerator’s large-scale approach.
“This is something many researchers will be excited to do because knowing when, where, and under what circumstances an effect occurs provides valuable information about both its causes and its consequences,” he said by email. “Alas, this has always been difficult for individual researchers to do because it requires extraordinary amounts of coordination, collaboration, and cooperation between far-flung laboratories.”
The Accelerator invites anyone to submit suggestions for studies, which smaller groups then whittle down based on their influence and how easily they can be redone with limited resources. (Chartier says they are applying for grants, but so far all researchers are participating voluntarily on their own dime.) The two other studies to be replicated are about language comprehension and racial and gender biases.
Scientists can submit their own work — or others’. All the researchers contacted so far have been game, Chartier said, although that may not always be the case.
“I’m actually quite glad that they selected my research on the dimensions of social perception of faces,” Alexander Todorov, one of the Princeton psychologists behind the face-judgment study, said by email. “I always wanted to know whether it generalizes to other cultures.” (He added that he himself has replicated elements of the original paper.)
Chartier said he’s encountered some pushback from those who worry that, when dozens or hundreds of authors get involved, individual contributions will get overlooked.
But he doesn’t see why psychologists can’t take inspiration from physicists: The first direct evidence of gravitational waves, published in a landmark 2016 report, was achieved by a team of more than 1,000 scientists.
“Other fields have made this transition to really large-scale science and show it’s a sustainable model, and I think we have a lot to learn from them,” he said.