Make evidence charts, not review papers

How do you get on top of the literature associated with a controversial scientific topic? For many empirical issues, the science gives a conflicted picture: the role of sleep in memory consolidation, the effect of caffeine on cognitive function, or the best theory of a particular visual illusion. To form your own opinion, you’ll need to become familiar with many studies in the area.

You might start by reading the latest review article on the topic. Review articles provide descriptions of many relevant studies. They also usually provide a nice tidy story that seems to bring the whole literature together into a common thread: that the author’s theory is correct! Because of this bias, a review article may not help you much to make an independent evaluation of the evidence. And the evidence usually isn’t all there. Review articles very rarely describe, or even cite, all the relevant studies. Unfortunately, if you’re just getting started, you can’t recognize which relevant studies the author didn’t cite. This omission problem is the focus of today’s blog post.

Here are five reasons why a particular study might not be cited in a review article, or in the literature review section of other articles. Possibly the author:

  • considers the study not relevant, or not relevant to the particular point the author was most interested in pushing
  • doesn’t believe the results of the study
  • doesn’t think the methodology of the study was appropriate or good enough to support the claims
  • has noticed that the study seems to go against her theory, and she is trying to sweep it under the rug
  • considers the study relevant but had to leave it out to make room for other things (most journals impose word limits on reviews)

For any given omission, there’s no way to know the reason. This makes it difficult even for experts to evaluate the author’s overall conclusions. The author might have some good reason to doubt that study which seems to rebut the theory. The omission problem may be a necessary evil of the article format: an article that doesn’t omit many studies is likely to be extremely difficult to digest.

These problems no doubt have something to do with the irritated and frustrated feeling I have when I finish reading a review of a topic I know a lot about. Whereas if I’m not an expert in the topic, I have a different reaction. Wow, I think, somehow in these areas of science that I’m *not* in, everything gets sorted out so beautifully!

Conventional articles can be nice, but science needs new forms of communication. Here I’ve focused on the omission problem in articles. There are other problems, some of which may be intrinsic to the use of a series of paragraphs of prose. A reader’s overall take on the view advanced by an article can depend a lot on the author’s skill in exposition and use of rhetorical devices.

EvidenceChart.com

Together with Chris Simon of the Scotney Group, who did the actual programming, Hal Pashler and I have created a tool that addresses these problems. It allows one to create a systematic review of a topic without having to write many thousands of words, and without having to weave all the studies together with a narrative unified by a single theory. You do it all in a tabular form called an ‘evidence chart’. Evidence charts are an old idea, closely related to the “analysis of competing hypotheses” technique. Our evidencechart.com website is fully functioning and free to all, but it’s in beta and we’d love any feedback.
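To make the tabular idea concrete, here is a minimal sketch in Python of the matrix at the heart of an evidence chart, in the spirit of the “analysis of competing hypotheses” technique: studies as rows, competing hypotheses as columns, and each cell recording whether a study’s result is consistent with, inconsistent with, or neutral toward each hypothesis. The study names, hypotheses, and ratings below are entirely hypothetical placeholders, not content from any real chart on the site.

```python
# Hypothetical evidence chart: rows are studies, columns are competing
# hypotheses, and each cell rates the study's result as Consistent (C),
# Inconsistent (I), or Neutral (N) with respect to that hypothesis.
studies = {
    "Study A (2008)": {"Hypothesis 1": "C", "Hypothesis 2": "I"},
    "Study B (2010)": {"Hypothesis 1": "N", "Hypothesis 2": "C"},
    "Study C (2011)": {"Hypothesis 1": "I", "Hypothesis 2": "C"},
}
hypotheses = ["Hypothesis 1", "Hypothesis 2"]

# Print the chart as a simple text table.
print("Study".ljust(18) + "".join(h.ljust(14) for h in hypotheses))
for study, ratings in studies.items():
    print(study.ljust(18) + "".join(ratings[h].ljust(14) for h in hypotheses))

# A quick tally of consistent cells shows, at a glance, which hypothesis
# the charted evidence currently favors -- no narrative required.
for h in hypotheses:
    consistent = sum(1 for r in studies.values() if r[h] == "C")
    print(f"{h}: consistent with {consistent} of {len(studies)} studies")
```

The point of the format is exactly this at-a-glance quality: every study gets a cell under every hypothesis, so an omission shows up as a visible gap rather than a silent absence.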

[Figure: part of a sleep-and-memory-consolidation evidence chart by Denise Cai, created at evidencechart.com]


I’ll explain more about the website in future posts, and lay out further advantages of the evidence chart format for both readers and writers.


7 thoughts on “Make evidence charts, not review papers”

  1. Nice. We just had a failure to replicate/interesting null result on autism & action perception. It was hard to publish (and I probably should have spent my time getting those millions in grants that I’m supposed to be getting, instead of publishing this sort of thing) but it is accepted (Plos One). A reviewer said the results were too unexpected to be true, but that was perfect for us to say: well, if reviewers keep saying it’s too unexpected, it’ll never become expected, will it? So can we get the data out, please? Anyway, in trying to make the paper contribute something other than a null result, we did a table summarizing published results in the field. Wish I had known about this when we were making that table. Will surely keep it in mind.

  2. hi Ayse,
    Much respect to you for your persistence in getting that autism null result published; those in the field should be thanking you and recommending acceptance, not rejection. Sadly, I think a lot of labs are sitting on null autism results; mine is one 😦
    That’s also a real shame about most people not publishing that sort of lit-review table. Every time someone makes that sort of table to guide their research and then doesn’t publish it or at least put it on a wiki or evidencechart website, it means the next new researcher who comes along will probably have to spend several dozen hours essentially re-creating it!

  3. This is fantastic, I think. At least I’m all for the general approach and agree with the need. In fact, I’ve created a spreadsheet of all the experiments in my field, along with their primary result, for a similar purpose.

    http://www.functionalneurogenesis.com/blog/2010/01/a-list-of-studies-that-relate-adult-hippocampal-neurogenesis-to-behavior/

    I don’t agree with all the studies I’ve included but I couldn’t really see any other way than to include them all and let the readers decide. In a similar vein I’ve created some comprehensive visual layouts of data:

    http://www.functionalneurogenesis.com/blog/2010/03/everything-you-always-wanted-to-know-about-neurogenesis-timecourses-but-were-afraid-to-ask/

    I think your evidencecharts could also provide a way to take in a lot of information at once. Will give it a shot!

    Jason

  4. Hi Alex, I came back to tell you that I let my students know about this website. They’re writing papers as part of a brain disorders and cognition class (upper division elective). I told them to let me know if they use a chart. We’ll see. I also advised one of my grad students to use a chart in his lit review. Will let you know if these efforts lead to any useful charts & also post them online.

    And I just saw your reply regarding autism. Thanks for your comments, really! Sometimes I believe persistence is the most important skill in this business. Certainly doesn’t come easily to me… Anyway, in case you are trying to publish your autism findings or other interesting null effects, here’s how I responded to the critique that this was an unexpected null effect (after pointing out that the same experiment does show effects in other groups, so it’s not that it’s simply a crappy experiment that does not measure anything): “It should be added that publication bias for positive results plays a role in how “expected” our results are. Since null data are more difficult to publish (cf. Dwan et al, 2008, Plos ONE; Fanelli, 2010, Plos ONE), our results may not be that unexpected after all. There are other studies that found no deficits for biological motion perception in ASD, as now presented more clearly in Table S1. We know of at least one other group who failed to find a difference (at present unpublished). This was one of the reasons we think it is important to publish our “null” result. If such data do not get communicated, the field will keep on expecting these results to be replicated.” Essentially, we cited the file drawer effect and produced a synopsis of previous findings (can be an evidence chart)… It worked! I am now the proud owner of a published null result. I probably shouldn’t be spending my time trying to publish null results that will not advance my career significantly… But I think it could be useful to put some of these out there.

  5. That’s all excellent, Ayse, thanks. Please let me know how it goes with the evidence charting- especially if it’s a failure! In the spirit of null results, we learn more from the failures than the successes 🙂
