Archive for the ‘evidencecharting’ Category
We (Hal Pashler, UCSD, hpashler at gmail and Alex Holcombe) are developing web-based software to help people interested in a scientific issue represent and inspect multiple competing hypotheses and the evidence which supports or fails to support each hypothesis.
One goal is to develop a tool that will help scientists wrap their minds around the state of a complex empirical debate more quickly and accurately than can be done by studying written review articles, commentaries, rebuttals, and so forth. Our software has been developed and tested in informal ways, and as our next step we want to get experts involved in real scientific debates to try it out and see how it works.
We are hoping to hire (probably in Southern California), part-time, a graduate student or similarly qualified person who is interested in scientific debate to oversee the implementation of some real debates. The person doesn’t have to be a software developer (we have one of those), but it would be good if they were generally tech-savvy and excited by internet tools. Clear communication and diplomatic skill will be essential in working with scientists trying out the software. Initial tasks will include writing documentation to guide real users in using the system, and helping to develop rules for structuring the use of the software by groups with very different views of controversial topics. We’d like to hire someone for six months at about 10-15 hours per week, to start. The project (currently funded mostly by NSF) could also potentially lead to publications, but our primary focus is on figuring out how to make the software maximally useful to people engaged in real debates. Anyone interested should email us.
The transmission of new scientific ideas and knowledge is needlessly slow:
| Obstacle | Possible remedy |
|---|---|
| Journal subscription fees | Open access mandates |
| Competition to be first-to-publish motivates secrecy | Open Science mandates |
| Jargon | Increase science communication; science blogging |
| Pressure to publish high quantity means no time for learning from other areas | Reform of incentives in academia |
| Inefficient format of journal articles (e.g. prose) | Evidence charts, ? |
| Long lag time until things are published | Peer review post-publication, not pre-publication |
| Difficulty publishing fragmentary criticisms | Open peer review; incentivize post-publication commenting |
| Information contained in peer reviewers' reviews is never published | Open peer review or publication of (possibly anonymous) reviews; incentivize online post-publication commenting |
| Difficulty publishing non-replications | Open Science |
UPDATE: Daniel Mietchen, in the true spirit of open science, has put up an editable version of this very incomplete table.
Our Evidence Charting project has received an Undergraduate Learning, Teaching, and Assessment Resource Commendation.
The commendation was received from the Australian Learning and Teaching Council (ALTC) in conjunction with the Teaching, Learning and Psychology Interest Group of the Australian Psychological Society (APS) and the Australian Psychology Educators Network (APEN).
Let me know if you’re interested in using evidence charts in your teaching.
For researchers, we’re using it as a research synthesis format, sort of like a very-concise review article. If you have a topic that you think it would work well for, drop me a line. We hope to publish some charts in a few scientific journals.
Bianca Hewes is a high-school English teacher and educational technology enthusiast whom I was fortunate to connect with. She’s tried a lot of different websites in her search for tools to increase student engagement and improve instruction. With our evidencecharting site, she saw some potential to push her students towards writing better essays.
We originally designed our evidence-charting site to help scientists and science students evaluate evidence and its import for hypotheses. We hadn’t given much thought to how it could work in humanities classes, and I didn’t really know whether it would help Bianca’s students. Nevertheless, she gave it a go and had them create an evidencechart. The purpose was to help them analyze poems for evidence that the poet had been exploring the concept of ‘belonging’. Bianca did the following things:
- Created a group on the site for her students, and sent them a link that would register them on the site and pop them into her “MsHewes” group.
- Asked me to make her an administrator of the “MsHewes” group, which allowed her to see her students’ charts.
- Created a short online presentation that helped explain to the students what they were supposed to do, and how it might improve their essays.
- Walked the students through the website a bit in the school computer lab, and told them to complete their chart at home on their own computers (by simply logging into the evidencechart.com website).
- After the evidencechart due date had passed, from inside the website, she went through the students’ evidencecharts and entered some feedback on their charts, telling the students where they had gone right and where they had gone wrong.
- The students then used their charts, plus the feedback Bianca had entered into them, to help plan and write their essay.
The results for the students’ essays were very encouraging; check out Bianca’s full report. Perhaps the most telling aspect is that she plans to use evidencecharts again for a future assignment, and some of the students themselves asked if they could use it for yet another project.
If Bianca’s experience inspires you to use the site, drop me a line. First, go right ahead and sign up for an account. If you think it could help you, I’ll set you up with an admin account and a group for your class so you can see your students’ charts all in one place inside the site.
Quodlibet is an obscure word that originally referred to a medieval event that included a debate. I haven’t been able to find much information about it, but here is a brief description from Graham (2007):
Beginning in the thirteenth century, quodlibets were a part of the academic program of the theology and philosophy faculties of universities… The first of the two days of the quodlibet was a day of debate presided over by a master who proposed a question of his own for discussion.
I’m interested in this because I rue the lack of debates in modern science. Perhaps it’s only an accident of history that real debates aren’t happening much nowadays.
Also during the quodlibet, according to Graham, the presiding master “accepted questions from anyone present on any subject and answers were suggested by the master and others.” Sounds like a blend of the unconference (1,2) and what we think of as a traditional debate.
We’ve created evidencechart.com as a way that debating might be revived in a compact online form. However, we’re still working on the adversarial form of evidence charts designed especially for debating. Let me know if you’re interested in participating.
Graham, BFH (2007). Review of Magistri Johannis Hus: Quodlibet, Disputationis de Quolibet Pragae in Facultate Artium Mense Ianuario anni 1411 habitae Enchiridion. The Catholic Historical Review, Volume 93, Number 3, pp. 639-640.
Richard Feynman, in his 1974 cargo-cult science commencement address:
If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it…
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
Unfortunately, the average scientific journal article doesn’t follow this principle. I wouldn’t go so far as to say that the average article is just a sales job, but the emphasis is really on giving the information that favors the author’s theory. I say this based on my experience as a journal editor (for PLoS ONE), a reviewer (for a few dozen journals), and as a reader and author absorbing the norms of my field.
It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
Again, I don’t think most scientists follow this principle. But evidence charts can yield more balanced scientific communication. Currently, formal scientific communication occurs almost entirely through articles: a long series of paragraphs. For someone to easily digest an article, there has to be a strong storyline running throughout, and the paper cannot be too long. Those requirements can tempt even a scientist of the highest integrity to omit some inconvenient truths, to use rhetorical devices to sweep objections under the rug, and to unashamedly tout the advantages of one’s own theory and that theory alone. If you don’t make a good sales job of it, the reader will just move on to a scientist who does, and you won’t have much impact. I don’t think the situation is universally that bad, but there is definitely a lot of this going on.
An evidence chart is more like a list of the pluses and minuses of various theories, and how the apparent minuses might be reconciled with the theory. There’s less room for rhetorical devices to obscure or manipulate things, and the form may be more suited for driving the reader to make up their mind for themselves. Of course, an individual scientist making a chart may still omit contrary evidence or make straw men of the opposing theories, but the evidence chart format may make this easier to recognize. And, we’re working on collaborative and adversarial evidence charts to bring the opposing views to the same table.
Email me if you’re interested in participating. I like to think that Feynman would be in favor of it.
Scientific theories are alive. They are debated and actively questioned. Scientists have differing views, strong and informed ones. However, the system of science tends to mask the debate. In the scientific literature, differences are aired, but rarely in a way that most people would recognize as a debate.
‘Debate’ evokes a vision of two parties concisely articulating their positions, disputing points, and rebutting each other. In the courtroom, in politics, and in school debating societies, one side will say or write something, and shortly after the other side will directly address the points raised. In science, this is not at all the norm. Occasionally something like a conventional debate does happen. A journal, after publishing an article attacking one group’s theory, will sometimes publish a reply from the advocates of the attacked theory. I relish such exchanges because in the usual course of things, it’s hard to make out the debate.
Typically, when one scientist publishes an article advocating a particular theory, those who read it and disagree won’t publish anything on the topic for six months or more. That’s just how the system works: to publish an article usually requires a massive effort involving work conducted over many months, if not years. Anything one does over that timescale is unlikely to be a focused rejoinder to another’s article. In any lab, there are many fish to fry, ordinarily something else was already on the boil, and the easiest meals are made by going after different fish than your peers. Most scientists are happy to skate by each other, perhaps after pausing for a potshot. The full debate is dodged.
Even when the work of two scientists directly clashes, the debate is sometimes stamped out, and frequently heavily massaged as it passes through the research-and-publish pipelines. Debating somebody through scientific journal articles is like having an exchange with someone on another continent using 17th-century bureaucratic dispatches. When and if you hear back a year later, your target may have moved on to something else, or twisted your words, or showily pulverized a man of straw who looks a bit like you. You’re further burdened by niggling editors, meddling reviewers, irksome word limits, and the more pressing business of communicating your latest data.
The scientific literature obscures and bores with its stately rhetoric and authors writing at cross purposes. I’d like to see unadulterated points and counterpoints. With evidencecharts, we’re enabling this with a format adapted from intelligence analysis at the CIA. Most scientists won’t reshape what they do until academic institutional incentives and attitudes change. However, having good formats available to wrangle in should encourage some more debating around the edges.
If you’re a scientist ready to debate, and you think you might be able to talk a worthy opponent into joining you, send me a note! An upcoming iteration of our free evidencechart.com website will support mano a mano adversarial evidencecharts.
How do you get on top of the literature associated with a controversial scientific topic? For many empirical issues, the science gives a conflicted picture: the role of sleep in memory consolidation, the effect of caffeine on cognitive function, or the best theory of a particular visual illusion. To form your own opinion, you’ll need to become familiar with many studies in the area.
You might start by reading the latest review article on the topic. Review articles provide descriptions of many relevant studies. Also, they usually provide a nice tidy story that seems to bring the literature all together into a common thread- that the author’s theory is correct! Because of this bias, a review article may not help you much to make an independent evaluation of the evidence. And the evidence usually isn’t all there. Review articles very rarely describe, or even cite, all the relevant studies. Unfortunately, if you’re just getting started, you can’t recognize which relevant studies the author didn’t cite. This omission problem is the focus of today’s blog post.
Here are five reasons why a particular study might not be cited in a review article, or in the literature review section of other articles. Possibly the author:
- considers the study not relevant, or not relevant to the particular point the author was most interested in pushing
- doesn’t believe the results of the study
- doesn’t think the methodology of the study was appropriate or good enough to support the claims
- has noticed that the study seems to go against her theory, and she is trying to sweep it under the rug
- considers the study relevant but had to leave it out to make room for other things (most journals impose word limits on reviews)
For any given omission, there’s no way to know the reason. This makes it difficult for even experts to evaluate the overall conclusions of the author. The author might have some good reason to doubt that study which seems to rebut the theory. The omission problem may be a necessary evil of the article format. If an article doesn’t omit many studies, then it’s likely to be extremely difficult to digest.
These problems no doubt have something to do with the irritated and frustrated feeling I have when I finish reading a review of a topic I know a lot about. Whereas if I’m not an expert in the topic, I have a different reaction. Wow, I think, somehow in these areas of science that I’m *not* in, everything gets sorted out so beautifully!
Conventional articles can be nice, but science needs new forms of communication. Here I’ve focused on the omission problem in articles. There are other problems, some of which may be intrinsic to use of a series of paragraphs of prose. A reader’s overall take on the view advanced by an article can depend a lot on the author’s skill in exposition and with the use of rhetorical devices.
Hal Pashler and I have created, together with Chris Simon of the Scotney Group who did the actual programming, a tool that addresses these problems. It allows one to create systematic reviews of a topic, without having to write many thousands of words, and without having to weave all the studies together with a narrative unified by a single theory. You do it all in a tabular form called an ‘evidence chart’. Evidence charts are an old idea, closely related to the “analysis of competing hypotheses” technique. Our evidencechart.com website is fully functioning and free to all, but it’s in beta and we’d love any feedback.
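To make the tabular idea concrete, here is a minimal sketch of what an evidence chart looks like as a data structure, in the spirit of the “analysis of competing hypotheses” technique: each piece of evidence is rated for how consistent it is with each competing hypothesis, and hypotheses are then compared by how much evidence tells against them. The class, names, and ratings below are purely illustrative assumptions for this post, not the actual data model of evidencechart.com.

```python
# Illustrative sketch only -- not the evidencechart.com implementation.
from dataclasses import dataclass, field

# Rating of how a piece of evidence bears on a hypothesis.
RATINGS = {"consistent": 1, "neutral": 0, "inconsistent": -1}

@dataclass
class EvidenceChart:
    hypotheses: list
    # Maps each piece of evidence to {hypothesis: rating}.
    rows: dict = field(default_factory=dict)

    def add_evidence(self, evidence, ratings):
        """Record how one piece of evidence bears on each hypothesis."""
        unknown = set(ratings) - set(self.hypotheses)
        if unknown:
            raise ValueError(f"unknown hypotheses: {unknown}")
        self.rows[evidence] = ratings

    def inconsistency_counts(self):
        """ACH-style scoring: weigh each hypothesis by how many pieces
        of evidence are inconsistent with it (fewer is better)."""
        return {
            h: sum(1 for r in self.rows.values()
                   if RATINGS[r.get(h, "neutral")] < 0)
            for h in self.hypotheses
        }

# Hypothetical chart with made-up studies and theories.
chart = EvidenceChart(["Theory A", "Theory B"])
chart.add_evidence("Study 1", {"Theory A": "consistent", "Theory B": "inconsistent"})
chart.add_evidence("Study 2", {"Theory A": "inconsistent", "Theory B": "consistent"})
chart.add_evidence("Study 3", {"Theory A": "consistent", "Theory B": "neutral"})
print(chart.inconsistency_counts())  # {'Theory A': 1, 'Theory B': 1}
```

The key property of the format shows up even in this toy version: every cell of the table is forced to exist, so silently omitting an inconvenient study, or a hypothesis’s relation to it, becomes conspicuous rather than invisible.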
I’ll explain more about the website in future posts, as well as laying out further advantages of the evidence chart format for both readers and writers.