How do you get on top of the literature associated with a controversial scientific topic? For many empirical issues, the science gives a conflicted picture. Like the role of sleep in memory consolidation, the effect of caffeine on cognitive function, or the best theory of a particular visual illusion. To form your own opinion, you’ll need to become familiar with many studies in the area.
You might start by reading the latest review article on the topic. Review articles describe many relevant studies. They also usually provide a nice tidy story that seems to weave the whole literature into a common thread: that the author’s theory is correct! Because of this bias, a review article may not help you much in making an independent evaluation of the evidence. And the evidence usually isn’t all there. Review articles very rarely describe, or even cite, all the relevant studies. Unfortunately, if you’re just getting started, you can’t recognize which relevant studies the author didn’t cite. This omission problem is the focus of today’s blog post.
Here are five reasons why a particular study might not be cited in a review article, or in the literature review section of other articles. Possibly the author:
- considers the study not relevant, or not relevant to the particular point the author was most interested in pushing
- doesn’t believe the results of the study
- doesn’t think the methodology of the study was appropriate or good enough to support the claims
- has noticed that the study seems to go against her theory, and she is trying to sweep it under the rug
- considers the study relevant but had to leave it out to make room for other things (most journals impose word limits on reviews)
For any given omission, there’s no way to know the reason. This makes it difficult even for experts to evaluate the author’s overall conclusions. The author might have some good reason to doubt a study that seems to rebut the theory. The omission problem may be a necessary evil of the article format: an article that doesn’t omit many studies is likely to be extremely difficult to digest.
These problems no doubt have something to do with the irritated and frustrated feeling I have when I finish reading a review of a topic I know a lot about. Whereas if I’m not an expert in the topic, I have a different reaction. Wow, I think, somehow in these areas of science that I’m *not* in, everything gets sorted out so beautifully!
Conventional articles can be nice, but science needs new forms of communication. Here I’ve focused on the omission problem in articles. There are other problems, some of which may be intrinsic to the use of a series of prose paragraphs. A reader’s overall take on the view advanced by an article can depend a lot on the author’s skill in exposition and use of rhetorical devices.
Hal Pashler and I, together with Chris Simon of the Scotney Group (who did the actual programming), have created a tool that addresses these problems. It allows one to create a systematic review of a topic without writing many thousands of words, and without weaving all the studies together into a narrative unified by a single theory. You do it all in a tabular form called an ‘evidence chart’. Evidence charts are an old idea, closely related to the “analysis of competing hypotheses” technique. Our evidencechart.com website is fully functioning and free to all, but it’s in beta and we’d love any feedback.
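To give a rough sense of the structure (this is just an illustrative sketch, not the website’s actual data model — the study names, hypothesis names, and rating vocabulary below are all invented for the example), an evidence chart can be thought of as a matrix with studies as rows, competing hypotheses as columns, and each cell recording how a study’s evidence bears on a hypothesis:

```python
# Illustrative sketch of an evidence chart as a data structure.
# Rows are studies, columns are hypotheses; each cell records
# whether that study's evidence supports, contradicts, or is
# neutral toward that hypothesis.
from dataclasses import dataclass, field

RATINGS = {"supports", "contradicts", "neutral"}

@dataclass
class EvidenceChart:
    hypotheses: list
    studies: list = field(default_factory=list)
    cells: dict = field(default_factory=dict)  # (study, hypothesis) -> rating

    def rate(self, study, hypothesis, rating):
        """Record how one study bears on one hypothesis."""
        if rating not in RATINGS:
            raise ValueError(f"unknown rating: {rating}")
        if study not in self.studies:
            self.studies.append(study)
        self.cells[(study, hypothesis)] = rating

    def contradictions(self, hypothesis):
        """Studies whose evidence counts against the hypothesis."""
        return [s for s in self.studies
                if self.cells.get((s, hypothesis)) == "contradicts"]

# Hypothetical example: two competing accounts of sleep and memory.
chart = EvidenceChart(hypotheses=["active consolidation", "passive protection"])
chart.rate("Smith 2010", "active consolidation", "supports")
chart.rate("Smith 2010", "passive protection", "contradicts")
chart.rate("Lee 2012", "active consolidation", "neutral")
print(chart.contradictions("passive protection"))  # ['Smith 2010']
```

The point of the tabular form is visible even in this toy version: every study gets a cell under every hypothesis, so an omission shows up as a visible blank rather than silently disappearing from a narrative.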
*Part of an evidence chart by Denise Cai (created at evidencechart.com)*
I’ll explain more about the website in future posts, and lay out further advantages of the evidence chart format for both readers and writers.