Protect yourself during the replicability crisis of science
Scientists of all sorts increasingly recognize that science has systemic problems, and that as a consequence we cannot trust the results we read in journal articles. One of the biggest problems is the file-drawer problem: studies with null or unwelcome results tend to stay unpublished, so the literature overrepresents positive findings. Indeed, it is largely because of the file-drawer problem that in many areas most published findings are false.
Consider preclinical cancer research, just as an example. The head of cancer research at Amgen tried to replicate 53 landmark papers. He could not replicate 47 of the 53 findings.
In experimental psychology, a rash of articles has pointed out the causes of false findings, and a replication project that will dwarf Amgen’s is well underway. The drumbeat of bad news will only get louder.
What will be the consequences for you as an individual scientist? Field-wide reforms will certainly come, partly because of changes in journal and grant funder policies. Some of these reforms will be effective, but they will not arrive fast enough to halt the continued decline of the reputation of many areas.
In the interim, more and more results will be viewed with suspicion. This will affect individual scientists directly, including those without sin. There will be:
- increased suspicion by reviewers and editors of results in submitted manuscripts (“Given the history of results in this area, shouldn’t we require an additional experiment?”)
- lower evaluation of job applicants for faculty and postdoctoral positions (“I’ve just seen too many unreliable findings in that area”)
- lower scores for grant applications (“I don’t think they should be building on that paper without more pilot data replicating it”)
These effects will be unevenly distributed. They will often manifest as exaggerations of existing biases. If a senior scientist already had a dim view of social psychology, for example, then the continuing replicability crisis will likely magnify his bias, whereas his view of other fields that he “trusts” will not be as affected by the whiff of scandal, at least for a while - people have a way of making excuses for themselves and their friends.
But there are some things you can do to protect yourself. These practices will eventually become widespread. But get a head start, and look good by comparison.
- Preregister your study hypotheses, methods, and analysis plan. If you go on record with your plan before you do the study, this will allay the suspicion that your result is not robust - that you fished around with techniques and statistics until you got a statistically significant result. Journals will increasingly endorse a policy of favoring submitted manuscripts that have preregistered their plan in this way. Although websites set up to take these plans may not yet be available in your field, they are coming, and in the meantime you can post something on your own website, on FigShare perhaps, or in your university’s publicly accessible e-repository.
- Post your raw data (where ethically possible), experiment code, and analysis code to the web. This says you’ve got nothing to hide: no dodgy analyses, and you welcome contributions from others to improve your statistical practices.
- Post all pilot data, interim results, and everything you do to the web, as the data come in. This is the ultimate in open science. You can link to your “electronic laboratory notebooks” in your grants and papers. Your reviewers will have no excuse to harbor dark thoughts about how your results came about, when they can go through the whole record.
The proponents of open science are sometimes accused of being naifs who don’t understand that secretive practices are necessary to avoid being scooped, or that sweeping inconvenient results under the rug is what you’ve got to do to get your results into those high-impact-factor journals. But the lay of the land has begun to change.
Make way for the cynics! We are about to see people practice open science not out of idealism, but rather out of self-interest, as a defensive measure. So much the better for science.