Registered Replication Reports are open for submissions!

Science is broken; let’s fix it. This has been my mantra for some years now, and today we are launching an initiative aimed squarely at one of science’s biggest problems. The problem is called publication bias or the file-drawer problem and it’s resulted in what some have called a replicability crisis.

When researchers do a study and get negative or inconclusive results, those results usually end up in file drawers rather than being published. When this happens to studies attempting to replicate already-published findings, we end up with a replicability crisis: people don’t know which published findings can be trusted.

To address the problem, Dan Simons and I are introducing a new article format at the journal Perspectives on Psychological Science (PoPS): the Registered Replication Report (RRR). The process begins with a psychological scientist interested in replicating an already-published finding. They explain to us editors why they think replicating the study would be worthwhile (perhaps it has been widely influential but has had few or no published replications). If we agree, they will be invited to submit a methods section and analysis plan to us. The submission will be sent to reviewers, preferably the authors of the original article proposed for replication. These reviewers will be asked to help the replicating authors ensure their method is nearly identical to that of the original study. The submission will at that point be accepted or rejected, and the authors will be told to report back when the data come in. The methods will also be made public, and other laboratories will be invited to join the replication attempt. All the results will be posted in the end, with a meta-analytic estimate of the effect size combining all the data sets (including the original study’s data, if available). Some of these materials will be posted on the Open Science Framework website. The press release is here, and the details can be found at the PoPS website.

Professor Daniel J. Simons (University of Illinois) and I are co-editors for the RRRs. The chief editor of Perspectives on Psychological Science is Barbara A. Spellman (University of Virginia), and leadership and staff at the Association for Psychological Science, especially Eric Eich and Aime Ballard, have also played an important role (see their press release).

Three features make RRRs very different from the usual way that science gets published:

1. Preregistration of the replication study’s design, analysis plan, and statistical tests BEFORE the data are collected.

  • Normally researchers have a disincentive to do replication studies because they are usually difficult to publish. Here we circumvent the usual obstacles by guaranteeing researchers, before they do the study, that their replication will be published (provided they meet the conditions agreed during the review process).
  • There will be no experimenter degrees of freedom to analyse the data in multiple ways until a significant but likely spurious result is found. This is particularly important for complex designs or multiple outcome variables, where those degrees of freedom allow one to almost always achieve a significant result. Not here.

2. The study is sent for review to the original author on the basis of the plan, BEFORE the data come in.

  • Unlike standard replication attempts, where the author of the original published study sees the replication only after the results come in, we will involve the original author at an early stage. Many will provide constructive feedback to help refine the planned protocol so it has the best chance of reproducing the already-published target effect.

3. The results will not be presented as a “successful replication” or “failed replication”. Rarely is any one data set definitive by itself, so we will concentrate on making a cumulative estimate of the relevant effect’s size, together with a confidence interval or credibility interval.

  • This will encourage people to make more quantitative theories aimed at predicting a particular effect size, rather than only worrying about whether the null hypothesis can be rejected (as we know, the null hypothesis is almost never true, so it can almost always be rejected if one collects enough data).
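To make the cumulative-estimation idea concrete, here is a minimal sketch of the kind of pooled estimate an RRR report would produce: an inverse-variance (fixed-effect) meta-analysis combining per-lab effect sizes into one estimate with a 95% confidence interval. The function name and all the numbers are made up for illustration; a real RRR analysis would likely use a random-effects model and established meta-analysis software.

```python
import math

def pooled_effect(effects, ses):
    """Combine per-lab effect sizes (e.g. Cohen's d), weighting each
    lab by the inverse of its squared standard error."""
    weights = [1.0 / se**2 for se in ses]
    est = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (est - 1.96 * se_pooled, est + 1.96 * se_pooled)  # 95% CI
    return est, ci

# Hypothetical results from four replicating labs:
effects = [0.42, 0.18, 0.25, 0.05]   # effect size per lab
ses     = [0.20, 0.15, 0.25, 0.10]   # standard error per lab

est, (lo, hi) = pooled_effect(effects, ses)
print(f"pooled d = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Note that the question the output answers is “how big is the effect, and how precisely do we know it?”, not “did the replication succeed or fail?”.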

This initiative is the latest step in a long journey for me. Ten years ago, thinking that allowing comments on published papers would bring flaws and missed connections to light much earlier, David Eagleman and I published a letter to that effect in Nature and campaigned (unsuccessfully) for commenting to be allowed on PubMed abstracts.

Since then, we’ve seen that even where comments are allowed, few scientists make them, probably because there is little incentive to do so and doing it would risk antagonising their colleagues. In 2007 I became an academic editor and advisory board member for PLoS ONE, which poses fewer obstacles to publishing replication studies than do most journals. I’m lucky to have gone along on the ride as PLoS ONE rapidly became the largest journal in the world (I resigned my positions at PLoS ONE to make time for the gig at PoPS). But despite the general success of PLoS ONE, replication studies were still few and far between.

In 2011, Hal Pashler, Bobbie Spellman, Sean Kang and I started PsychFileDrawer, a website for researchers to post notices about replication studies. This has enjoyed some success, but it seems without the carrot of a published journal article, few researchers will upload results, or perhaps even conduct replication studies.

Finally, with this Perspectives on Psychological Science initiative, a number of things have come together to overcome the main obstacles to replication studies: fear of antagonising other researchers and the uphill battle to get the study published. Some other worthy efforts to encourage replication studies are happening at Cortex and BMC Psychology.

If you’re interested in proposing to conduct a replication study for eventual publication, check out the instructions and then drop us a line at replicationseditor @!

7 thoughts on “Registered Replication Reports are open for submissions!”

  1. Reblogged this on …not that kind of psychologist and commented:
    Just more on the new. From Alex Holcombe’s blog (he is involved in more things to fix science). Go read!

  2. Alex – This is great news! Can you say a little bit more about the how-to of submitting the initial proposal? The directions on the PoPS website say to “submit the completed pre-submission inquiry form through the Perspectives submission portal as a new manuscript,” but the dropdown menus and so on in the submission portal do not seem to have any correspondence to the new submission type. Should we just ignore the irrelevant fields in the submission portal and submit there anyway? Or is an update to the portal coming?

  3. Hi Katherine, thanks for your interest! I’ll get back to you about the website but for now please download the inquiry form from and send it to replicationseditor @ .

  4. In the submission website, you can now choose “Replication Proposal” and then upload the presubmission inquiry form. Thanks to Amy Drew of APS for modifying the Sage system to accommodate this new article type.

  5. Have you considered conducting/publishing meta-analyses or subgroup analyses, funnel plots, etc., to find out which specific areas, i.e. methods, in individual studies are off the mark? That way the offending studies can be replicated, which would make things more systematic and efficient. Here’s a gentle intro:

    some articles:

    • I love meta-analyses and funnel plots, but unfortunately we don’t have the resources to be running them in-house at the journal. But I hope meta-analysis experts will be doing more and that proposers of RRRs will use their results to motivate their RRR proposals.

  6. You can plot it in RevMan offline to get an idea of where certain biases are occurring, and potentially save resources by targeting specific areas. That might be better than doing an experiment from absolute scratch. Even with a full replication the problem may not be sussed out, due to issues of heterogeneity, low power, small sample sizes, lack of blinding, etc., which I’ve noticed are common in psych experiments. It might be more practical to detect the studies that are far off the true effect, comparing each against the pooled summary. For instance, before conducting an RCT (Randomized Controlled Trial), a quick meta-analysis is done to see where the trend is and what weaknesses to watch for before registering a trial. It takes less than 15 minutes if you’re from the field. It’s easy to see on a forest/funnel plot where things are off. It’s not definitive, though; there are controversies. At least the trend can indicate, to some extent, some effect, be it true or due to chance or other factors. The “off” studies in medicine are usually the ones replicated, and the drugs taken off the shelf. There are also biases concerning trial registrations, such as surrogate endpoints. Hiring or collaborating with a meta-analysis expert in psychology might be worthwhile.
