A partial solution to the problem of predatory journals, and a new index of journal quality

On Twitter I floated a partial solution to the problem of predatory journals, which I’ll expand on here.

If you’ve been in a field for a couple of years, then you’re familiar with the journals that most of its research is published in. If you came across a journal that was new to you, you’d probably scrutinise its content and its editorial board before publishing in it, and you’d probably notice if something were a bit dodgy about that journal.

But many users of scientific research do not have much familiarity with the journals of particular specialties and their mores. Sadly, this includes some of the administrators who make decisions about the careers and grants of scholars. It also includes many in countries without a long tradition of being fully integrated with international scholarship, who are now trying to rapidly join the community of scholars publishing in English. Journalists, medical professionals, Wikipedia authors, and policymakers may not have the experience to distinguish good journals from illegitimate ones.

Unfortunately, there is no one-stop shop that scholars, administrators, journalists, or policymakers can consult for an indication of how legitimate a journal is. Predatory journals are common, charging researchers hundreds of dollars to publish an article with little to no vetting by reviewers and shoddy publishing services. Their victims may predominantly come from countries trying to jump into international publishing in English for the first time, some of whom receive monetary rewards from their universities for doing so. There are proprietary journal databases, like Thomson Reuters’ Journal Citation Reports, but they cost money and can take years to index new journals. Jeffrey Beall used to maintain an (arguably biased) free list of predatory journals, but for various reasons, including legal ones (see 1, 2), blacklists are probably a bad idea.

What follows is an automated way to create a list of legitimate journals – in other words, a whitelist for people to consult. It can’t be fully automated until the scholarly community institutes a few changes, but these are changes that are arguably also needed for other reasons.

Non-predatory, respected journals nearly universally have an editorial board of scholars who have published a significant amount of research in other respected journals. The whitelist would need to establish whether those scholars exist and have published in (other) reputable journals.

Journals have rapidly taken up the ORCiD system of unique researcher identifiers, asking authors who submit papers to enter their ORCiD number. They should also do this for their editorial board members – journals should add ORCiD numbers to their editorial board lists.

An organization (such as SHERPA, which maintains the SHERPA/RoMEO list of journals and their open access policies) could then pull the editors’ publication lists from ORCiD and compute a score, with a threshold indicating that a goodly proportion of the editors have published in other reputable journals. To bootstrap the process, existing whitelists of legitimate journals would be used to verify that the journals the editorial board members have published in are themselves legitimate.
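To make the scoring idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the whitelist entries, the ORCiD identifiers, the publication lists, and the thresholds are all made up for illustration; a real implementation would pull each editor’s publication list from the ORCiD API and seed the whitelist from existing lists like DOAJ.

```python
# Toy sketch: score a journal's editorial board by how many members have
# published in already-whitelisted journals. All data below is invented;
# a real version would query the ORCiD API for each editor's works.

WHITELIST = {"Journal of Vision", "Psychological Science", "Cognition"}

def editor_ok(publication_venues, min_whitelisted=3):
    """An editor 'counts' if enough of their papers appeared in whitelisted journals."""
    return sum(venue in WHITELIST for venue in publication_venues) >= min_whitelisted

def board_score(board):
    """Fraction of the editorial board meeting the publication criterion."""
    return sum(editor_ok(pubs) for pubs in board.values()) / len(board)

# Hypothetical board, keyed by (made-up) ORCiD iDs:
board = {
    "0000-0001-1111-2222": ["Journal of Vision"] * 5,             # established researcher
    "0000-0002-3333-4444": ["Psychological Science", "Cognition",
                            "Journal of Vision", "Cortex"],        # established researcher
    "0000-0003-5555-6666": ["Intl J of Advanced Everything"] * 9,  # no whitelisted venues
}

score = board_score(board)  # 2 of 3 editors qualify -> 0.666...
print(round(score, 2), score >= 0.5)  # 0.67 True, passing a (hypothetical) 0.5 threshold
```

A journal clearing the threshold would be added to the whitelist, so the list can grow iteratively from the initial seed.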

A new index of journal quality

The score could also be used as a new indicator of the esteem of journals – if a journal has only highly-cited researchers on its editorial board, it is probably a prestigious journal (science badly needs new indicators of quality, however flawed, to reduce reliance on citation metrics like the impact factor). Journals could thus be ranked by the scholarly impact of their editorial board members. This would allow new journals to have prestige immediately, without having to wait the years necessary to establish a strong citation record.

Currently, the reliance on impact factor and its long time lag imposes a high barrier to entry, preventing innovative publishers and journals from competing with older journals. This is also a key obstacle to getting editorial boards to decamp from publisher-owned subscription titles and create new open access journals because, when new, a journal has no citation metrics.

A remaining difficulty is that some predatory journal webpages list the names of researchers on their editorial board who never agreed to be listed. If ORCiD added a field for researchers’ digital-signature public keys, and researchers started using digital signatures, then journals could include on their webpages (and even in their article PDFs) a signed message from each editor certifying that they agreed to be on the editorial board.
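The sign-and-verify step can be sketched as follows. This toy uses textbook RSA with deliberately tiny primes, which is insecure and purely illustrative – a real deployment would use a standard scheme (e.g. Ed25519) via a vetted cryptography library, with the editor’s public key stored in the hypothetical ORCiD field. The editor’s name and statement below are invented.

```python
import hashlib

# Toy RSA keypair with tiny primes (p=61, q=53) -- insecure, for illustration only.
n = 61 * 53  # public modulus, 3233
e = 17       # public exponent (published, e.g., in the editor's ORCiD record)
d = 2753     # private exponent, kept secret by the editor ((17 * 2753) % 3120 == 1)

def digest(message: str) -> int:
    """Hash the message and reduce it into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    """The editor signs the certification with their private key."""
    return pow(digest(message), d, n)

def verify(message: str, signature: int) -> bool:
    """Anyone (e.g., a reader of the journal's webpage) checks with the public key."""
    return pow(signature, e, n) == digest(message)

statement = "I, Jane Doe, agree to serve on the editorial board of Journal X."
sig = sign(statement)
print(verify(statement, sig))           # True: genuine certification
print(verify(statement, (sig + 1) % n)) # False: a forged signature fails
```

A journal that falsely listed an editor could not produce a valid signature, because it would lack the editor’s private key.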

UPDATE 2 Feb 2018: ORCID has already been in the process of adding a field for editorial affiliations.


Psychonomics 2017, our presentations

Was that a shift of attention or binding in a buffer?

Charles J. H. Ludowici; Alex O. Holcombe (presented by Alex)

3:50-4:05 PM Friday 10 November, West Meeting Room 118-120


Cueing a stimulus can boost the rate of success in reporting it. This is usually thought to reflect a time-consuming attention shift to the stimulus location. Recent results support an additional process of “buffering and binding” – stimulus representations have persisting (buffered) activations and one is bound with the cue. Here, an RSVP stream of letters is presented, with one letter cued by a ring. The presentation times, relative to the cue, of the letters reported are aggregated across trials. The resulting distribution appears time-symmetric and includes more items before the cue than are predicted by guessing. When a central cue is used rather than the peripheral ring, the data no longer favor the symmetric model, suggesting an attention shift rather than buffering and binding. To explore the capacity of buffering in the peripheral-cue condition, we vary the number of streams, documenting changes in the temporal dispersion of the letters reported and the time of the letter most frequently reported.


Negotiating a capacity limit in visual processing: Are words prioritised in the direction of reading?

by Kim Ransley, Sally Andrews, and Alex Holcombe

poster #1208 [revised title] 6-7.30pm Thursday 9 November

Experiments using concurrent rapid serial visual presentation (RSVP) of letters have documented that the direction of reading affects which of two horizontally-displaced streams is prioritised – in English, the letters of the left stream are better reported but this is not the case in Arabic. Here, we present experiments investigating whether this left bias occurs when the stimuli are concurrently presented English words. The first experiment revealed a right bias for reporting one of two simultaneously and briefly-presented words (not embedded in an RSVP stream), when the location of one of the words was subsequently cued. An ongoing experiment directly compares spatial biases in dual RSVP of letters with those in dual RSVP of words in the same participants. These findings have implications for understanding the relative roles of hemispheric lateralisation for language, and attentional deployment during reading. UPDATE: THE RSVP EXPERIMENTS REPLICATE THE DIFFERENCE BETWEEN LETTERS AND WORDS BUT GO ON TO SHOW THE SAME BIAS (UPPER VISUAL FIELD) WHEN THE STIMULI ARE ARRAYED VERTICALLY RATHER THAN HORIZONTALLY, CONSISTENT WITH READING ORDER. Come by to hear our exciting conclusion!

Can Systems Factorial Technology Identify Whether Words are Processed
in Parallel or Serially?

Charles J. H. Ludowici; Alex O. Holcombe

6:00-7:30 PM Friday 10 November poster session


To determine the capacity, architecture (serial or parallel), and stopping rule of human processing of stimuli, researchers increasingly use Systems Factorial Technology (SFT) analysis techniques. The associated experiments typically use a small set of stimuli that vary little in their processing demands. However, many researchers are interested in how humans process kinds of stimuli that vary in processing demands, such as written words. To assess SFT’s performance with such stimuli, we tested its ability to identify processing characteristics from simulated response times derived from parallel limited-, unlimited- and super-capacity linear ballistic accumulator (LBA) models, which mimicked human response time patterns from a lexical decision task. SFT successfully identified system capacity with <600 trials per condition. However, for identifying architecture and stopping rule, even with 2000 trials per condition, the power of these tests did not exceed .6. To our knowledge, this is the first test of SFT’s ability to identify the characteristics of systems that generate RT variability similar to that found in human experiments using heterogeneous stimuli. The technique also constitutes a novel form of power analysis for SFT.

Ethics and IRB burden

Hoisted from the comments on Scott Alexander’s ethics/IRB nightmare, an insight I hadn’t seen before:
Most research admins are willing to admit the “winging it” factor among themselves. For obvious reasons, however, you want the faculty and/or researchers with whom you interact to respect your professional judgment…
So of course you’re not going to confess that you don’t really have a clue what you’re doing; you’re just puzzling over these regulations like so many tea leaves and trying to make a reasonable judgment based on your status as a reasonably well-educated and fair-minded human being.
What this means in practice is almost zero uniformity in the field. Your IRB from hell story wasn’t even remotely shocking to me. Other commenters’ IRB from just-fine-ville stories are also far from shocking. Since so few people really understand what the regulations mean or how to interpret them, let alone how to protect against government bogeymen yelling at you for failing to follow them, there is a wild profusion of institutional approaches to research administration, and this includes huge variations in concern for the more fine-grained regulatory details. It is really hard to find someone to lead a grants or research administration office who has expertise in all the varied fields of compliance now required. It’s hard to find someone with the expertise in any of the particular fields, to be honest.
And, to bring home again the absurdity:
Nobody expects any harm from asking your co-worker “How are you this morning?” in conversation. But if I were to turn this into a study – “Diurnal Variability In Well-Being Among Office Workers” – I would need to hire a whole team of managers just to get through the risk paperwork and the consent paperwork and the weekly reports and the team meetings. I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

A major math journal flips to Fair Open Access

Akihiro Munemasa, Christos Athanasiadis, Hugh Thomas, and Hendrik van Maldeghem share the chief editor role at a journal that’s like many others across mathematics and the sciences. The Journal of Algebraic Combinatorics is a subscription journal published by one of the big, highly-profitable publishers (Springer Nature). But they haven’t been happy with the fees Springer charges for people to read their articles.

At the end of the year, all four will resign, as will nearly everyone on the editorial board. They’re starting a new, open access, free-to-authors journal. The new journal is called Algebraic Combinatorics and will follow Fair Open Access principles. The model for this flip is the precedent of journals like Lingua, where after the editors and editorial board abandoned ship, the community of researchers followed, withdrawing many of their submitted manuscripts from the old journal and submitting them and their new manuscripts to the new journal, Glossa. The reason this happens is that the real value in a high-quality journal like the Journal of Algebraic Combinatorics and (formerly) Lingua  does not come from the journal’s publisher but rather the scholars who send the journal their work, review others’ work, and serve as editors.

The Centre Mersenne will provide publishing services, and the organisation MathOA has helped with the transition. MathOA is a sister organisation to PsyOA, which I am chair of. We hope that the information resources we’ve created at PsyOA, MathOA, and the umbrella site FairOA will help many more communities of scholars to switch to Fair Open Access.

One of the obstacles to flipping is fear of the unknown. A specific fear is that other journal management systems (JMS) might not be as full-featured or easy to use as the JMS provided by one’s current publisher. To this end, with a few scholars at PKP (creator of Open Journal Systems) and elsewhere, we would like to do a project comparing and contrasting the features and ease of use of different JMSes. This might be a good project for a master’s or PhD student in library sciences. If you have some relevant expertise and have such students, please get in touch.

The Fair Open Access principles

Mark Wilson and I wrote the below piece for the Australian Open Access Support Group. The principles we lay out guide our vision for working to create an open access future governed by the community of scholars, not publishers.

In March 2017 a group of researchers and librarians interested in journal reform formalized the Fair Open Access Principles.

The basic principles are:

  1. The journal has a transparent ownership structure, and is controlled by and responsive to the scholarly community.
  2. Authors of articles in the journal retain copyright.
  3. All articles are published open access and an explicit open access licence is used.
  4. Submission and publication is not conditional in any way on the payment of a fee from the author or its employing institution, or on membership of an institution or society.
  5. Any fees paid on behalf of the journal to publishers are low, transparent, and in proportion to the work carried out.

Detailed clarification and interpretation of the principles is provided at the site.

Here, instead, we put these principles into context and explain the motivation behind them.

Our basic thesis is that the current situation, in which commercial publishers own the titles to journals, is untenable. Many existing journals were begun by scholars but subsequently acquired by Elsevier, Springer, Wiley, Taylor & Francis, and other commercial publishers. These publishers now have a strong incentive to oppose any reform of a journal that would benefit the community of authors, editors, and readers but not serve the short-term interests of their own shareholders. We have seen several examples of this in recent years; the Wikipedia entry for Elsevier, for example, collects many examples of malfeasance.

The evidence is now overwhelming that the interests of large commercial publishers are not well aligned with the interests of the research community or the general public. Thus Principle 1 is key. Changing a journal to open access but allowing it to be bought easily by Elsevier, for example, would be a pointless exercise. We must decouple ownership of journals from publication services. This will allow editorial boards to shop around for publishers, who must compete on price and service quality rather than exploit a monopolistic position. In other words, a functioning market will arise. Also, journals will have more chance to innovate by not being locked into inflexible and outdated infrastructure.

Principle 2 (authors retaining copyright) seems obvious. Large publishers have claimed that having authors assign them copyright to articles protects the authors. We know of no case where this protection has materialized. Publishers have, however, prevented authors from reusing their own work!

Open access is of course the main goal and thus the associated principle (Principle 3) needs little explanation. Some authors appear to believe that posting occasional preprints/postprints on their own website is as good as true open access. This is not the case – some of the reasons are licence issues, confusion about the version of record, lack of machine readability, inconsistent searchability, and unreliable archiving.

APCs (Article Processing Charges) are a common feature of open access journals and a main source of income, particularly for “predatory” journals whose sole function is to make money for unscrupulous owners. Large commercial publishers have responded to pressure by offering OA if an APC is paid. These APCs are typically well over US$1000. The fact that over 60% of journals in DOAJ do not charge any APC, together with the low APCs of some high-quality newer full-service publishers (such as Ubiquity Press), shows that there is much room for improvement. In many fields there is considerable resistance to authors paying APCs directly. For example, in a recent survey of mathematicians that we undertook, published in the European Mathematical Society Newsletter, about a quarter of respondents declared APCs unacceptable in principle and another quarter said they should be paid by library consortia. We do not deny that there are costs associated with OA publishing, and we are not advocating that every journal run using self-hosted OJS and volunteer time (although there are many successful and long-lived journals of that type, like the Journal of Machine Learning Research or the Electronic Journal of Combinatorics, and we feel the model still has untapped potential). We aim to ensure that unnecessary barriers, in particular fees, are not erected for authors – Principle 4. Any payments on behalf of authors should be made in an automatic way – the idea is for consortia of institutions to fund reasonable operating costs of OA journals directly.

Principle 5 (reasonable and transparent costs) will automatically hold if the journal is sufficiently well run and independent as described by Principle 1, and is included in order to reinforce the point that a competitive market is our main goal rather than wasting public money to maintain the current profits of publishers. Recently, initiatives such as OA2020 have emphasized large-scale conversion of subscription journals to OA. We believe that if the ownership of the journals isn’t simultaneously changed, there will remain little incentive for publishers to keep prices down. If a researcher believes that a paper in Nature will make her career, will she be denied this by the APC-paying agency if Nature chooses to charge a premium APC? In addition, if journal ownership is not taken from the publishers, they can lock us into their existing technologies, which hinders innovation in scholarly communication.

We are working on disciplinary organizations aimed at helping journals flip from a subscription model to Fair OA, and have so far started LingOA,  MathOA and PsyOA. We plan a Fair Open Access Alliance which will include independent journals already practising FairOA principles, flipped journals, and other institutional members with a strong belief in FairOA. The idea is to share resources and harmonize journal practices. We hope that these activities will yield a way forward that avoids sterile debates about Green vs Gold OA. We welcome feedback and offers of help in our wider effort to convert the entire scholarly literature to Fair Open Access.


Mark C. Wilson is Senior Lecturer in Computer Science at University of Auckland, and founding member of MathOA Foundation.

Alex O. Holcombe is an Associate Professor of Psychology at The University of Sydney and is a founding member of PsyOA (PsyOA.org).

Publishers prioritize “self-plagiarism” policing over allowing new discoveries

Elsevier and other publishers’ ability to detect “self-plagiarism” is an instance of text mining the world’s scientific literature. Over at two vision researcher mailing lists, there is much irritation at being asked to remove sentences that duplicate sentences one wrote in previous papers to describe, for example, the methodology of a study.

Tom Wallis pointed out that the automated text-duplication checks can also be useful for detecting data duplication and fraud. Unfortunately, they cannot easily be used for that by others – Elsevier shuts down independent researchers who use their journal subscriptions to investigate fraud (http://onsnetwork.org/chartgerink/2015/11/16/elsevier-stopped-me-doing-my-research/; http://www.nature.com/news/text-mining-block-prompts-online-response-1.18819).

Text mining the scientific literature could yield thousands of discoveries, about both fraud and new connections between molecules, genes, and diseases, but it can’t be done when publishers like Elsevier own the content and are trying to monetize it all for themselves (https://blogs.ch.cam.ac.uk/pmr/2017/07/11/text-and-data-mining-overview). “Self-plagiarism” also puts publishers at legal risk, because they publish all our articles under restrictive copyright – it can be a copyright violation for them to publish text that is identical to text in an earlier paper by the same author that happens to have been published by a different publisher. In an email from a publisher to Professor Peter Tse, the issue was framed as protecting the author, but there was also this sentence: “Another issue to be borne in mind is the matter of copyright in extensive text duplication.”

Thus the traditional system of publishers owning the copyright to our work is both preventing new discoveries (which must wait until the publishers find a way to use text mining to maintain or increase their profits) and creating ridiculous busywork for us. Yesterday I attended a university press publishing conference where Kevin Stranack demo’ed Open Journal Systems version 3, which has already been released and looks significantly easier to use than ScholarOne/Manuscript Central, the system that expensive subscription journals use. The existence of OJS3 allows the creation of journals at very low cost (it already underpins thousands of journals, such as Glossa, which flipped from Elsevier). Unfortunately, I seem to be the only researcher at the conference, but I’m tweeting about it and will add some related information to FairOA.org.


What now? Some lessons from the APA take-downs

The APA’s take-down notices have reminded us that our published articles are owned by them.

While the APA has claimed that the initiative was simply to “to preserve the scientific integrity of the research we publish and provide a secure web environment to access the content”, the APA’s $10 million a year of subscription income might have more to do with it. Indeed, the APA may be reliant on this income. If so, the APA’s interests are in conflict with the interest of scientists, clinicians, and research funders. A top priority of these groups is maximizing the dissemination of knowledge.

By dissemination of knowledge, I don’t just mean individuals being able to read articles after downloading a PDF. To allow improvements to scholarly infrastructure,  including a future of automated error checking, fact mining, and meta-analysis, the authoritative version of scientific articles should not be locked behind paywalls.

On this front, let’s give APA a bit of credit – they have investigated a transition away from subscription journals. The APA has started a fully open access journal (which has waived the APC fees for the first year), and they do allow full open access for a fee in all of their journals. However, the fee is relatively high at $4,000. To enable sustainable open access, we need the cost to be lower. If $4,000 is an indication of APA’s costs, they are not where we should be putting our hopes for the future.

What should researchers do?

In a policy that is more liberal than that of many publishers, the APA allows posting author-formatted manuscripts that contain all the revisions made during the review process.

Posting author-formatted manuscripts is not the final solution to anything, but it can speed progress towards a solution. I refer to posting manuscripts to database-indexed repositories such as university repositories or PsyArXiv.org (disclosure: I am an [unpaid] member of PsyArXiv’s Steering Committee). In contrast, posting to private entities such as ResearchGate and Academia.edu may not be a good idea: they cannot be trusted to keep things completely open – like SSRN, they may be bought by Elsevier or start locking things down to monetize their content.

How does posting our manuscripts advance a long-term solution? First, as more and more researchers habitually post their manuscripts, more universities become comfortable cancelling their journal subscriptions, forcing publishers to move towards other models.

Second, the repositories to which researchers post their articles are themselves likely to become an integral part of the publishing future. The emerging overlay journals, for example, are simply webpages curated by editorial boards that link to articles in repositories. The editorial board solicits peer review of submitted articles, which needn’t be uploaded anywhere new because they are already in the repository as preprints. The authors then revise based on the reviews, and once the editor is happy, the revised version – still hosted by the repository – is “published” on the journal webpage. The Center for Open Science is currently working on a peer review module for OSF, PsyArXiv, and their other repositories to facilitate this.

Overlay journals are a viable solution to the low-cost open access publishing problem, and use of Open Journal Systems as the editorial submission and peer review management system is another. OJS is already used by thousands of journals, at low cost. However, low cost does not mean zero cost. The costs, both in hours of labor and in technology, are substantial under any model. If the money won’t be coming from subscriptions, where will it come from?

Charging fees directly to authors or their funders has worked for many open access journals, but this is not a comprehensive solution, as many authors do not have funding. This is one reason that in our Fair Open Access principles, we stipulate that authors should not be charged.

Universities and research funders should come together to pool resources to support scholarly communication infrastructure. This is already happening in certain initiatives such as the Open Library of the Humanities. More than 200 universities are members of OLH and provide funds to support the 14 journals they publish. Importantly, for OLH journals the publisher (Ubiquity) is a service provider. They do not own the journal.

Authors and editors can organize editorial boards to resign from publisher-owned journals and join an existing open access journal or create another, as has already happened many times. We provide some information resources for this at PsyOA. Just this year, the European Society for Cognitive Psychology abandoned their corporate subscription-based publisher and started Journal of Cognition, which uses Ubiquity and charges relatively low APCs.

Keep the conversation sparked by APA going and let’s create a fully open access and sustainable future.


The APA and publishing costs

This is a follow-up to my previous post, which was about the APA issuing take-down notices and how you can post preprints to keep your science open.

In a survey last year asking about vision researchers’ priorities for journals, the top responses included:

“open access”; “full academic or professional society control”; “low cost”; and “transparent financial accounts”.

Notably, APA is one of the few publishers in the area of perception that has not provided a response to the concerns highlighted by the survey results. Most articles they publish are available only by subscription but APA makes select articles fully open access for a $3,000 fee typically charged to the authors or their funders. That is a relatively high fee.

From their annual report we know that the APA receives $11 million per year in journal subscription revenue and $67 million in other licensing revenue, but the report does not break down the $17 million in “publication production” costs, so it is difficult to evaluate what the $3,000 open access fees pay for.

Some of us, and many of our funders, would like to see science transition to open access publishing, in which authors do not sign their copyright away. We’d also like to see low or no author fees. Changing existing journals, such as the APA journals, is particularly hard because often the publisher owns the journal, even though the editorial board and authors provide all the content that makes the journal what it is. PsyOA is an initiative staking out the principles that we call Fair Open Access, and it provides information to editors and scholarly societies interested in moving their journal from a subscription basis to open access. Another part of the solution is to use and support new infrastructure for scholarly communication that is not reliant on subscription publishers, such as PsyArXiv and bioRxiv. Some efforts are underway to create a peer review module to allow journals to use that infrastructure, which is expected to result in low-cost, modern open access journals.


Is the APA trying to take your science down?

Dear Psychologist,

If you have published in an APA (American Psychological Association) journal and posted the article PDF to a website, you may have already received an email from APA lawyers asking you to take that PDF down:

Dear Sir/Madam,

I write on behalf of the American Psychological Association (APA) to bring to your attention the unauthorized posting of final published journal articles to your website. Following the discussion below, a formal DMCA takedown request is included with URLs to the location of these articles.

The APA is likely within their legal rights here, but there is a way to continue making your work freely available to the world. Upload the final accepted version of your article (your final revised Word document, if you wrote your paper in Word) to your website or, better, to your university’s repository or to another repository such as PsyArXiv (I am on the Steering Committee of PsyArXiv). Your personal website is not the best option because personal websites tend to be transient, are not always properly indexed by the likes of Google Scholar, and some publishers don’t allow posting to personal websites but do allow posting to repositories.

The APA policy allowing upload to repositories says that you must add the following note to the version you post:

© 2016 American Psychological Association. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without author’s permission. The final article is available, upon publication, at: [ARTICLE DOI]

As the note says, the APA owns the copyright to your paper, not you. Many of us would like to see science transition to open access publishing, in which we do not sign our copyright away. You have probably noticed some success on this front in the domain of starting new journals (e.g., the open-access journal PLOS ONE rapidly became the largest journal in the world). Changing existing journals is harder because often the publisher owns the journal, even though the editorial board and authors provide all the content that makes the journal what it is. PsyOA is an initiative staking out the principles that we call Fair Open Access and provides information to editors and scholarly societies interested in moving their journal from subscription basis to open access.



The venerable history of “rhetorical experiments”

Daryl Bem was already rather infamous before he provided, just this week, this excellent quote:

If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made. I used data as a point of persuasion, and I never really worried about, ‘Will this replicate or will this not?’

The quote, from this piece on the history of the reproducibility crisis, has been posted and reposted, sometimes with an expression of anger, sometimes with a sad virtual head shake. The derision is well-deserved in the context of Bem’s final experiments, which attempted to show that ESP exists. But let’s examine what Bem was actually referring to – his earlier career as a social psychologist, a career in which he developed some influential theoretical ideas.

One could argue that Bem’s technique was no less scientific than Galileo’s. Yes, that Galileo, one of the first to use and to champion the experimental method. The following passage is from The Scientific Method in Galileo and Bacon:

[Image: excerpted passage from The Scientific Method in Galileo and Bacon]

The method described by Bem, then, is simply Galileo’s scientific method. Admittedly, Galileo was working at the beginning of the history of mechanics, meaning that there was much low-hanging fruit to be picked by generalizing from a few observations and theoretical insights. Bem was working nearly four hundred years later. And yet, much of Bem’s career is not far from the beginning of the history of the field of social psychology. Bem’s theory of attitude change was published less than two decades after Festinger first advanced the cognitive dissonance theory Bem apparently was reacting against.

I know next to nothing about Bem’s work, but I wouldn’t be surprised if he did gain good insights from intuition and theory, and was quite certain of the value of those insights entirely on that basis, and thus the data was indeed just an afterthought. Kahneman and Tversky too made some of their most important discoveries, I believe (e.g., loss aversion?), by a combination of introspection and reasoning.

I don’t think there’s much good to be said about using this “rhetorical experiments” approach for the effort to establish ESP as a real phenomenon, which Bem intended to be the capstone to his career (his work didn’t establish ESP, but ironically did help spark the reforms that are addressing the reproducibility crisis). I detest p-hacking, HARKing, and data fudging, and I continue to be involved (e.g. 1, 2) in several initiatives to combat these practices, because I know they have yielded more than one patch of empirical ground – seemingly good solid stone on which to build a theory – that subsequently turned into a cenote, a deep sinkhole. The cavalier attitude toward methodological rigor implied in Bem’s comments is what gets us into a reproducibility crisis.

Still, propounding a theory on the basis of shoddy evidence has a glorious history in science. Don’t forget it. I’m not sure I want us to lose this data-poor, declamatory tradition. There’s value in getting ahead of the data, even when you don’t have the resources or the skills to collect the data that could falsify your theory. If we can create appropriate space to publish that sort of stuff without the author having to pretend that they have impeccable data, perhaps the pressure to cook the books will lessen.