The venerable history of “rhetorical experiments”

Daryl Bem was already rather infamous before he provided, just this week, this excellent quote:

If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made. I used data as a point of persuasion, and I never really worried about, ‘Will this replicate or will this not?’

The quote, from this piece on the history of the reproducibility crisis, has been posted and reposted, sometimes with an expression of anger, sometimes with a sad virtual head shake. The derision is well-deserved in the context of Bem’s final experiments, which attempted to show that ESP exists. But let’s examine what Bem was actually referring to – his earlier career as a social psychologist, a career in which he developed some influential theoretical ideas.

One could argue that Bem’s technique was no less scientific than Galileo’s. Yes, that Galileo, one of the first to use and to champion the experimental method. The following passage is from The Scientific Method in Galileo and Bacon:

[Screenshot of a passage from The Scientific Method in Galileo and Bacon]

The method described by Bem, then, is simply Galileo’s scientific method. Admittedly, Galileo was working at the beginning of the history of mechanics, meaning that there was much low-hanging fruit to be picked by generalizing from a few observations and theoretical insights. Bem was working nearly four hundred years later. And yet, much of Bem’s career is not far from the beginning of the history of the field of social psychology. Bem’s theory of attitude change was published less than two decades after Festinger first advanced the cognitive dissonance theory Bem apparently was reacting against.

I know next to nothing about Bem’s work, but I wouldn’t be surprised if he did gain good insights from intuition and theory, and was quite certain of the value of those insights entirely on that basis, and thus the data was indeed just an afterthought. Kahneman and Tversky too made some of their most important discoveries, I believe (e.g., loss aversion?), by a combination of introspection and reasoning.

I don’t think there’s much good to be said about using this “rhetorical experiments” approach in the effort to establish ESP as a real phenomenon, which Bem intended to be the capstone to his career (his work didn’t establish ESP, but ironically did help spark the reforms that are addressing the reproducibility crisis). I detest p-hacking, HARKing, and data fudging, and I continue to be involved (e.g. 1, 2) in several initiatives to combat these practices, because I know they have yielded more than one patch of seemingly solid empirical ground – good stone on which to build a theory – that subsequently turned into a cenote, a deep sinkhole. The cavalier attitude toward methodological rigor implied in Bem’s comments is what gets us into a reproducibility crisis.

Still, propounding a theory on the basis of shoddy evidence has a glorious history in science. Don’t forget it. I’m not sure I want us to lose this data-poor, declamatory tradition. There’s value in getting ahead of the data, even when you don’t have the resources or the skills to collect the data that could falsify your theory. If we can create appropriate space to publish that sort of stuff without the author having to pretend that they have impeccable data, perhaps the pressure to cook the books will lessen.


8 thoughts on “The venerable history of ‘rhetorical experiments’”

  1. Galileo and others like him gambled by saying “This is probably right, but if not, here’s the test you could run (or rerun) to falsify that claim”. That’s basically stage one of a replication report (https://cos.io/rr/). It sounds like you’re proposing publishing at that stage.
    But if that counts as a research contribution, then the question arises of who will invest the time and money into the second stage? And would people care about a second stage publication if it only confirmed the original prediction?
    What I can see happening is a disincentive towards actually verifying predictions because making them gets a person so much more notoriety.

    • I’m not sure how things should be arranged, but I like your idea. Something like a Study Swap could be set up to solicit experimentalists to actually run the experiments that passed the stage 1 threshold, who would be joint authors on the paper, or it would be published as part B of the stage 1 paper. Maybe there would be takers for this. One difficulty is that the journal might not want to go to the trouble of reviewing the stage 1 bit if they weren’t sure there would be an experimentalist to run it. Moreover, the theorist might not be the best person to detail the methodology that is needed to answer the question, but is required in stage 1 of an RR.

      Aside from that possibility, we do already have theory journals, but I suspect most experimentalists don’t read them. And maybe they’re right not to – who has time to sort the good theories from all those that are highly unlikely to be true? If theory-oriented types deliberately present bad data as good to get attention from the experimentalists, well that’s malpractice and we need to stop it somehow. I think many theory-heavy journals actually do have a bit of data in a lot of their articles (e.g., Consciousness & Cognition), so theorists do have those outlets for theories plus some data. Maybe nothing should be changed?

      • “Moreover, the theorist might not be the best person to detail the methodology that is needed to answer the question, but is required in stage 1 of an RR”

        That’s been my experience. Designing experiments takes a combination of formal education and experience. Even with experience, it’s tough and can require a couple of pilots to get right. That could lead to two options:
        (1) A journal of poorly thought-out experiment proposals
        (2) A journal where reviewers base their recommendations on whether they think it might work (totally subjective)

        I don’t see how either of those options would be beneficial to science.

  2. Great post. I definitely agree there’s tremendous value in making bold theoretical conjectures that are “ahead of the data”. The problem is when such conjectures are published based on shoddy evidence, and then are incorrectly presented and perceived as corroborated. This problem is then amplified in a hyper-competitive academic market whereby researchers can exploit this rhetorical sleight-of-hand. Indeed, this is precisely what has happened (and continues to happen) in social psychology where the main criterion for getting published in the so-called “top journals” (e.g., JPSP) is based on the “theoretical contribution” of a paper reporting a series of empirical studies “supporting” [sic] the theory.

    One way to resolve this dilemma, which I argued in an unpublished blog post from almost 3 years ago, is that as a profession, psychology needs to start distinguishing between “theoretical” versus “experimental psychologists”. This would mean having proper tenure-track “theoretical psychologist” positions where one’s main job is to creatively generate new falsifiable theories that can then be tested by sufficiently skilled meticulous experimentalists who actually do have “patience for experimental rigor”! Indeed, this is how the physics profession is arranged: one is either a theoretical physicist or experimental physicist, with very few exceptions, because to be competent in either of these roles requires so much specialized knowledge and skills. I’d say this is even more true in psychology.

    • I disagree that this is a good idea. I think that theoreticians benefit from direct contact with data. When designing a theory that is constrained by data, it’s necessary to understand how noisy the data are and what steps are involved in converting raw data into a finished product.

      • Interesting, but what’s a counter-argument to the position advanced above – that being competent as either a theoretical or an experimental physicist requires so much specialized knowledge and skill that you basically can’t do both? For example, physics departments hire different faculty, organized in different groups, to do theoretical vs. experimental work on the SAME phenomenon, e.g. “High energy theory” vs. “High energy experiment”, “Condensed matter theory” vs. “Condensed matter experiment”: https://www.princeton.edu/physics/research/

    • I like the idea of theoretical versus experimental psychologists (although I disagree that the skills overlap less in psychology than in physics – the technical know-how that goes into many modern physics experiments must be mind-boggling). It does seem like a good idea.

      Aside from the formal designation of jobs to reward theory, as I mentioned to Steve above, in the realm of journals we do already have theory journals, but I suspect most experimentalists don’t read them. And maybe they’re right not to – who has time to sort the good theories from all those that are highly unlikely to be true? If theory-oriented types deliberately present bad data as good to get attention from the experimentalists, well that’s malpractice and we need to stop it somehow. I think many theory-heavy journals actually do have a bit of data in a lot of their articles (e.g., Consciousness & Cognition), so theorists do have those outlets for theories plus some data. Maybe nothing in the journal space should be changed?

      • Ya I agree our journal space does already seem fairly well organized along theoretical vs. experimental lines. But I think we need to consider this a lot more in terms of our profession (professional societies), hiring, departmental organization (as mentioned in my Princeton department of physics example above where different faculty [theoretical vs. experimental] work in different areas on the SAME topic). Then, this might have implications for funding and funding agencies…
