Alex Holcombe's blog

open science, open access, meta-science, perception, neuroscience, …

Protect yourself during the replicability crisis of science


Scientists of all sorts increasingly recognize that science has systemic problems and that, as a consequence, we cannot trust the results we read in journal articles. One of the biggest problems is the file-drawer problem. Indeed, it is largely because of the file-drawer problem that in many areas most published findings are false.

Consider preclinical cancer bench research, just as an example. The head of cancer research at Amgen tried to replicate the findings of 53 landmark papers. He could not replicate 47 of them.

In experimental psychology, a rash of articles has pointed out the causes of false findings, and a replication project that will dwarf Amgen’s is well underway. The drumbeat of bad news will only get louder.

What will be the consequences for you as an individual scientist? Field-wide reforms will certainly come, partly through changes in journal and grant-funder policies. Some of these reforms will be effective, but they will not arrive fast enough to halt the continued decline of the reputation of many areas.

In the interim, more and more results will be viewed with suspicion. This will affect individual scientists directly, including those without sin. There will be:

  • increased suspicion by reviewers and editors of results in submitted manuscripts (“Given the history of results in this area, shouldn’t we require an additional experiment?”)
  • lower evaluation of job applicants for faculty and postdoctoral positions (“I’ve just seen too many unreliable findings in that area”)
  • lower scores for grant applications (“I don’t think they should be building on that paper without more pilot data replicating it”)

These effects will be unevenly distributed. They will often manifest as exaggerations of existing biases. If a senior scientist already had a dim view of social psychology, for example, then the continuing replicability crisis will likely magnify his bias, whereas fields he already “trusts” will not be as affected by the whiff of scandal, at least for a while; people have a way of making excuses for themselves and their friends.

But there are some things you can do to protect yourself. These practices will eventually become widespread, but get a head start and you will look good by comparison.

  • Preregister your study hypotheses, methods, and analysis plan. If you go on record with your plan before you do the study, this will allay the suspicion that your result is not robust, that you fished around with techniques and statistics until you got a statistically significant result. Journals will increasingly endorse a policy of favoring submitted manuscripts that have preregistered their plan in this way. Although websites set up to take these plans may not yet be available in your field, they are coming, and in the meantime you can post something on your own website, on FigShare perhaps, or in your university's publicly accessible e-repository.
  • Post your raw data (where ethically possible), experiment code, and analysis code to the web. This says you’ve got nothing to hide: no dodgy analyses, and you welcome the contributions of others to improve your statistical practices. (A sketch of what such a shared analysis script might look like follows this list.)
  • Post all pilot data, interim results, and everything you do to the web, as the data come in. This is the ultimate in open science. You can link to your “electronic laboratory notebooks” in your grants and papers. Your reviewers will have no excuse to harbor dark thoughts about how your results came about, when they can go through the whole record.
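For instance, a shared analysis might be nothing more than a single self-contained script that anyone can rerun against the posted raw data. This is only a minimal sketch; the file name, column names, and statistical test below are hypothetical placeholders, not a prescription:

    # analysis.py -- hypothetical example of a shareable, self-contained analysis script.
    # It reads the raw data file posted alongside it and reproduces the key statistic,
    # so readers can rerun the analysis or suggest better statistical practices.

    import pandas as pd
    from scipy import stats

    # Load the raw data exactly as posted (placeholder file and column names).
    data = pd.read_csv("data.csv")  # expected columns: "condition", "score"

    # Split scores by experimental condition.
    control = data.loc[data["condition"] == "control", "score"]
    treatment = data.loc[data["condition"] == "treatment", "score"]

    # The planned comparison (the kind of test one would have preregistered):
    # Welch's t-test between the two groups.
    t, p = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"Welch's t = {t:.2f}, p = {p:.4f}")

Even a script this small, posted next to the data, makes the path from raw data to reported statistic auditable end to end.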

The proponents of open science are sometimes accused of being naifs who don’t understand that secretive practices are necessary to avoid being scooped, or that sweeping inconvenient results under the rug is what you have to do to get your results into those high-impact-factor journals. But the lay of the land has begun to change.

Make way for the cynics! We are about to see people practice open science not out of idealism, but out of self-interest, as a defensive measure. So much the better for science.


Written by alexholcombe

August 29, 2012 at 9:29 pm

2 Responses


  1. This is a great explanation of what problems are looming. Thanks, Alex. I can’t help but mention one thing I’ve been working on with Elizabeth at Science Exchange, PLOS, and Figshare that could help researchers defend their field or technique from these effects. We recently launched the Reproducibility Initiative, which is designed to shift this trend by providing positive reinforcement for doing high-quality, robust research. We’re currently working with some disease foundations to selectively screen papers which have opted in to the initiative, but you can enroll your own work at http://reproducibilityinitiative.org

    mrgunn (@mrgunn)

    January 22, 2013 at 11:04 am

  2. this reads a bit like fear mongering

    bashir

    August 26, 2013 at 10:52 pm

