What can psychological researchers do to help solve the replication crisis?

The Psychological Science Accelerator could be the future of the field around the globe — if its members can sustain it.

By Brian Resnick | Updated Apr 7, 2021, 6:03pm EDT

The replication crisis devastated psychology. This group is looking to rebuild it.



Brian Resnick is Vox’s science and health editor, and is the co-creator of Unexplainable, Vox's podcast about unanswered questions in science. Previously, Brian was a reporter at Vox and at National Journal.

The 2017 Great American Solar Eclipse left Chris Chartier feeling, well, a little jealous.

Chartier, like so many Americans, was awed by the whole country coming together to celebrate a force of nature. But Chartier is a psychologist, and he also started to think about how precise the eclipse forecast was. Astronomers knew, down to the second, when the moon would cross the path of the sun; where, precisely, its shadow would land; and for how many seconds the sun would appear to be blocked out for those on the ground.

Chartier’s field — social psychology — just doesn’t have that type of accuracy. “Things are really messy,” says Chartier, who’s an associate professor at Ashland University in Ohio. Psychology “is nowhere near being at the level of precision of astronomers or physicists.”

Things in psychology are more than messy — the field has been going through a very public, and painful, crisis of confidence in many of its findings. So he began to wonder: How could psychology one day wow the world with precise science of its own?

His idea was audacious: psychologists all around the world, working together to rigorously push the science forward. But it quickly became real: The Psychological Science Accelerator was born in 2017.

This year, the group published its first major paper on the snap judgments people make of others’ faces, and it has several other exciting large-scale projects in the works. Its early success suggests the accelerator could be a model for the future of psychology — if the scientists involved can sustain it.

The Psychological Science Accelerator, explained

For the past 10 years, psychology has been struggling through what’s called the “replication crisis.”

In summary: About a decade ago, many scientists realized that their standard research methods were delivering false, unreliable results.

When famous, textbook psychological studies were retested with more rigorous methods, many failed. Other results simply looked less impressive upon reinspection. It's possible around 50 percent of the published psychological literature fails upon retesting, but no one knows precisely the extent of the instability in the foundations of psychological science. The realization provoked a painful period of introspection and revision.

For more on the origins of the replication crisis, check out this week’s episode of Unexplainable.

Chartier’s idea for the accelerator was inspired by global, massive projects in physics, like CERN’s Large Hadron Collider or the LIGO gravitational wave observatory. The accelerator is a global network of psychologists who work together on answering some of the field’s toughest questions, with methodological rigor.

There’s an old model for conducting psychological research: done in small labs, run by one big-name professor, probing the brains of American college undergrads. The incentives built into this model have favored publishing as many papers with positive results as possible (those showing statistically significant effects, as opposed to those that turned up bupkis) over rigorous inquiry. This old model has produced a mountain of scientific literature — but a lot of it has failed upon closer inspection.

Under this structure, researchers had arguably too much freedom: freedom to report positive findings but keep negative findings in a file drawer; to stop conducting an experiment as soon as desired results were obtained; to make 100 predictions but only report the ones that panned out. That freedom led researchers — often unwittingly, and without malicious intent (a lot of the practices were to make best use of scant resources) — to flimsy results.
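The "make 100 predictions but only report the ones that panned out" problem can be made concrete with a quick simulation (a sketch of the statistical point, not anything from the accelerator's own work; all the numbers here are illustrative). When there is no real effect at all, testing many hypotheses at the conventional p < 0.05 threshold still produces a handful of "significant" findings by pure chance:

```python
import math
import random

random.seed(42)

def two_sided_p(group_a, group_b):
    """Two-sided p-value for a difference in means, using a normal
    approximation (reasonable for the large samples used here)."""
    n = len(group_a)
    mean_a = sum(group_a) / n
    mean_b = sum(group_b) / n
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    # p = 2 * (1 - Phi(|z|)), with the normal CDF built from math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 100 "studies," each comparing two groups drawn from the SAME population,
# so every true effect is zero and every significant result is a false positive.
false_positives = 0
for _ in range(100):
    a = [random.gauss(0, 1) for _ in range(100)]
    b = [random.gauss(0, 1) for _ in range(100)]
    if two_sided_p(a, b) < 0.05:
        false_positives += 1

print(f"{false_positives} of 100 null studies came out 'significant'")
```

With a 5 percent threshold, roughly five of the 100 null studies cross it by chance alone. Report only those five and file-drawer the rest, and the published record looks full of effects that do not exist.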

“Given that the vast majority of research in psychology is done in the individual lab model, we need other models to have a diversity of process and see how that affects the quality of work that’s produced,” Simine Vazire, a personality psychologist at the University of Melbourne who is not involved in the accelerator, says.

Chartier dreamed of a distributed lab network, with researchers in outposts all around the world, who could work together, democratically, on choosing topics to study and recruiting a truly global, diverse participant pool to use in experiments. They’d preregister their study designs, meaning they promise to stick to a particular recipe in running and analyzing an experiment, which staves off the cherry-picking and p-hacking (a variety of practices to get data to yield a false positive) that was rampant before the replication crisis became apparent.
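One of the practices preregistration rules out — stopping data collection as soon as the numbers look good — can also be simulated (again an illustrative sketch with made-up batch sizes, not the accelerator's methodology). Peeking at the data after every batch and stopping at the first p < 0.05 inflates the false-positive rate well above the nominal 5 percent:

```python
import math
import random

random.seed(0)

def two_sided_p(group_a, group_b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    n = len(group_a)
    mean_a = sum(group_a) / n
    mean_b = sum(group_b) / n
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(peeking, batch=20, max_n=200):
    """One null study: add `batch` observations per group at a time.
    With peeking=True, stop and declare success at the first p < 0.05;
    otherwise, test only once, at the preregistered final sample size."""
    a, b = [], []
    while len(a) < max_n:
        a += [random.gauss(0, 1) for _ in range(batch)]
        b += [random.gauss(0, 1) for _ in range(batch)]
        if peeking and two_sided_p(a, b) < 0.05:
            return True
    return two_sided_p(a, b) < 0.05

trials = 1000
peek_rate = sum(run_study(True) for _ in range(trials)) / trials
honest_rate = sum(run_study(False) for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {peek_rate:.1%}")
print(f"false-positive rate without:      {honest_rate:.1%}")
```

Even though every simulated effect is zero, the peeking strategy "finds" an effect several times more often than the fixed-sample plan does — which is exactly why a preregistered recipe locks in the sample size before any data arrive.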

They’d keep everything transparent and accessible, and foster a culture of accountability to produce rigorous, meaningful work. The payoff would be to deeply study human psychology on a global scale, and to see in which ways human psychology varies around the world, and which ways it does not.

As soon as this idea crystallized in his mind, Chartier got to his computer and wrote up a manifesto on his blog, headlined “Building a CERN for Psychological Science.”

He then posted the piece to Twitter, and emails started pouring in. Researchers all around the world wanted to sign up.

Researchers like Hannah Moshontz, a psychologist at the University of Wisconsin Madison, saw the post and immediately wanted to contribute. “I just jumped at the chance,” Moshontz says. “It just felt like this is the cutting edge, this is what we should be doing.”

Today, the Psychological Science Accelerator is made up of over 500 laboratories, representing more than 1,000 researchers, in 70 countries around the world.

They’ve all committed to remaining transparent, being rigorous, and making decisions collectively about what to study. “They have, I think, a lot more accountability built into the process,” says Vazire.

Though accountability can sometimes lead to friction.

The accelerator’s first challenge was testing an influential theory of how we judge faces around the world

This past January, the Psychological Science Accelerator published its first major findings in the journal Nature Human Behaviour. The study put the influential theory of how we make snap judgments of people’s faces to a huge international test.

The theory is called the valence-dominance model, and it suggests we evaluate people’s faces on two broad dimensions: how dominant their face appears, and how generally negative or positive they seem. Most of the research done on this model has taken place in the US or Europe. So the accelerator simply wanted to know: Does this model explain how people all around the world judge the faces of others?

The final paper included more than 11,000 participants (huge for a psychology study) in 41 countries. And there are 241 co-authors listed on the paper.

The results? Broadly speaking, this influential model replicates around the world. But the accelerator also included a new type of analysis of the data, which reveals some slight fissures. Outside of Western contexts, this analysis finds, “there may be a third dimension that emerges,” Chartier says, suggesting an interesting way people around the world might vary in how they perceive faces. “In other world regions, people just don’t really seem to have good agreement about who looks dominant,” Chartier says. It’s a wrinkle that wouldn’t have arisen if this collaboration had only been conducted in the United States or Europe.

But this conclusion wasn’t reached without some tension.

Alexander Todorov, the psychologist who originally co-authored this model of face perception in the 2000s, was brought on to advise and consult on the study design. Todorov originally signed off on the experimental design and analysis plan for the study, which was then preregistered, meaning the team was locking in their recipe for the experiment and couldn’t change it based on the results.

But after this study design was registered and the data started pouring in, Todorov started to think the recipe for the new, cutting-edge analysis needed to be tweaked.


Todorov argues that the inflexibility of the preregistration — the plan is submitted to the journal before any data is collected — is problematic. “Imagine you’re a brain surgeon and you preregister all of the steps of your brain surgery,” Todorov says. “And then you started poking in the brain of your patients. And you said, ‘Oops, if I don’t do this, you’re going to kill him or her.’ Would you change the procedures?”

The accelerator, journal editors, and outside experts reviewed the analysis plan. The journal ended up adjudicating the dispute, and in the end, the accelerator went forward with the original plan.

The specifics of Todorov’s and the accelerator’s arguments about the data analysis here get technical. But there’s an important broader point this bit of friction makes clear.

In the past, someone with Todorov’s standing would have had a lot of leeway to tweak an experimental analysis after data started to come in. But that sort of freedom to deviate from the experimental plan is part of why psychology fell into crisis. It’s too easy to make these small tweaks and subtly — often without conscious intent — nudge the results of a study toward a desired outcome. Whether Todorov is right or wrong about the analytic plan in this case is beside the bigger point: the episode shows how determined this new collaborative approach is to stick to its preregistered plans.

Ultimately, Todorov is supportive of the accelerator and its mission. “I think it’s a great way forward,” he says of the group. “We did have some disagreements. But that’s okay.” There are a lot of strong elements to the projects, he notes, such as the transparency of the research. “Everybody can go and analyze the data and make a judgment for themselves.”

In the end, the paper was published, but Chartier said it was an exhausting process. Not just in dealing with Todorov’s objections — coordinating hundreds of people is also tough work.

The accelerator’s plans for the future — and what could get in the way

So far, the accelerator has only published the face perception research. But there are more projects in the works.

In light of the pandemic, the participants have turned their global network to studying coping mechanisms during stressful times. For example, one research effort is testing whether a technique used to reduce stress and anxiety (called cognitive reappraisal) works around the world.

Additionally, the group is looking into whether studies on how people answer the philosophical “trolley problem” replicate around the world, and how gender prejudice rears its head globally.

Beyond individual studies and replications, the team also hopes to just generate lots of good, scientifically sound psychological data on people around the world, for other researchers to use as reference.

The slate of research projects is ambitious and promising, but it faces many challenges. The accelerator is potentially a model for the future, but it still has to operate in the existing status quo of academia, including very limited funding and a lack of incentives from institutions for researchers — especially more junior faculty — to sign on to these large projects.

Under the current status quo, researchers get ahead and make progress in their careers by being the primary author on a big-idea study, not by being one of hundreds of authors playing a bit role in a huge project.

Its members are also largely volunteers, and mostly from North America and Europe.

“We wanted it to be much more diverse, and we’re still struggling with that,” says Dana Basnight-Brown, a cognitive psychologist at United States International University-Africa in Nairobi, Kenya. “We certainly do have members in South America. Southeast Asia has quite a vibrant community, and we have a lot of individuals from Indonesia, the Philippines, Taiwan. But Africa, [there’s] very low representation.”

Why psychologists need to get psychology right

Despite the challenges, the work continues. The members of the Psychological Science Accelerator still believe in the value of psychological research, even though — and perhaps because — the recent history of the replication crisis is upsetting to them.

“Psychology matters, and getting it right matters, because this is the science of the human experience,” Chartier says. “If you can just marginally improve the way we collect and analyze our data and draw conclusions from them, there are untold future human beings that can benefit from that tiny advance.”

Good science is a gift we give to the future. Today, we have the gift of eclipse predictions from scientists of the past. We don’t yet know what specific gifts a more scientifically sound and globally equitable field of psychology could give us. But whatever they might be, they now have the potential to be durable and powerful for the entire world.

Byrd Pinkerton contributed reporting.



