Is the APA trying to take your science down?

Dear Psychologist,

If you have published in an APA (American Psychological Association) journal and posted the article PDF to a website, you may have already received an email from APA lawyers asking you to take that PDF down:

Dear Sir/Madam,

I write on behalf of the American Psychological Association (APA) to bring to your attention the unauthorized posting of final published journal articles to your website. Following the discussion below, a formal DMCA takedown request is included with URLs to the location of these articles.

The APA is likely within their legal rights here, but there is a way to continue making your work freely available to the world. Upload the final accepted version of your article (your final revised Word document, if you wrote your paper in Word) to your website or, better, to your university repository or to another repository such as PsyArXiv (I am on the Steering Committee of PsyArXiv). Your personal website is not the best option: personal websites tend to be transient, are not always properly indexed by the likes of Google Scholar, and some publishers don’t allow posting to personal websites but do allow posting to repositories.

The APA policy allowing upload to repositories says that you must add the following note to the version you post:

© 2016 American Psychological Association. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without author’s permission. The final article is available, upon publication, at: [ARTICLE DOI]

As the note says, the APA owns the copyright to your paper, not you. Many of us would like to see science transition to open access publishing, in which we do not sign our copyright away. You have probably noticed some success on this front in the domain of starting new journals (e.g., the open-access journal PLOS ONE rapidly became the largest journal in the world). Changing existing journals is harder because often the publisher owns the journal, even though the editorial board and authors provide all the content that makes the journal what it is. PsyOA is an initiative that stakes out the principles we call Fair Open Access and provides information to editors and scholarly societies interested in moving their journals from a subscription basis to open access.



The venerable history of “rhetorical experiments”

Daryl Bem was already rather infamous before he provided, just this week, this excellent quote:

If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made. I used data as a point of persuasion, and I never really worried about, ‘Will this replicate or will this not?’

The quote, from this piece on the history of the reproducibility crisis, has been posted and reposted, sometimes with an expression of anger, sometimes with a sad virtual head shake. The derision is well-deserved in the context of Bem’s final experiments, which attempted to show that ESP exists. But let’s examine what Bem was actually referring to – his earlier career as a social psychologist, a career in which he developed some influential theoretical ideas.

One could argue that Bem’s technique was no less scientific than Galileo’s. Yes, that Galileo, one of the first to use and to champion the experimental method. The following passage is from The Scientific Method in Galileo and Bacon:

[Screenshot: passage from The Scientific Method in Galileo and Bacon]

The method described by Bem, then, is simply Galileo’s scientific method. Admittedly, Galileo was working at the beginning of the history of mechanics, meaning that there was much low-hanging fruit to be picked by generalizing from a few observations and theoretical insights. Bem was working nearly four hundred years later. And yet, much of Bem’s career is not far from the beginning of the history of the field of social psychology. Bem’s theory of attitude change was published less than two decades after Festinger first advanced the cognitive dissonance theory Bem apparently was reacting against.

I know next to nothing about Bem’s work, but I wouldn’t be surprised if he did gain good insights from intuition and theory, and was quite certain of the value of those insights entirely on that basis, and thus the data was indeed just an afterthought. Kahneman and Tversky too made some of their most important discoveries, I believe (e.g., loss aversion?), by a combination of introspection and reasoning.

I don’t think there’s much good to be said about using this “rhetorical experiments” approach in the effort to establish ESP as a real phenomenon, which Bem intended to be the capstone to his career (his work didn’t establish ESP, but ironically did help spark the reforms that are addressing the reproducibility crisis). I detest p-hacking, HARKing, and data fudging – I continue to be involved in several initiatives (e.g. 1, 2) to combat these practices – because I know they have yielded more than one patch of seemingly solid empirical ground, stone on which to build a theory, that has subsequently turned into a cenote, a deep sinkhole. The cavalier attitude toward methodological rigor implied in Bem’s comments is what gets us into a reproducibility crisis.

Still, propounding a theory on the basis of shoddy evidence has a glorious history in science. Don’t forget it. I’m not sure I want us to lose this data-poor, declamatory tradition. There’s value in getting ahead of the data, even when you don’t have the resources or the skills to collect the data that could falsify your theory. If we can create appropriate space to publish that sort of stuff without the author having to pretend that they have impeccable data, perhaps the pressure to cook the books will lessen.

The emerging future of journal publishing and perception preprints

[a message sent to the vision researcher community of CVnet and visionlist]

Our community and our journals should become more aware of the increasing importance of preprints, and in some cases our journals and our community need to act and change policy.

Preprints are manuscripts posted on the internet openly (ideally, to a preprint service or institutional repository), often prior to being submitted to a journal. Niko Kriegeskorte and others have previously described some benefits of posting preprints at CVnet/visionlist and here. JoV editors have expressed sympathy with posting preprints and talked about ARVO changing its policy, but unfortunately JoV currently still has a page in its submission guidelines prohibiting “double publication”, which rules out posting a preprint. Springer (AP&P; CRPI), Elsevier, the APA, Brill (Multisensory Research), and Sage (Perception, i-Perception), in contrast, allow preprint posting.

Preprint sharing in the biological sciences has been growing at a rapid rate, and preprint sharing in psychology is growing too, in part due to PsyArXiv, which launched late last year (I am on its Steering Committee). PsyArXiv currently hosts about 500 preprints. Its initial taxonomy did not include a separate category for perception, but I have been pushing for that in the hope that people can eventually subscribe to category updates to help them stay abreast of the newest developments in perception. Later I will circulate a request for feedback regarding what categories and subcategories people would like to see added (e.g. visual perception, auditory perception, tactile perception).

Preprint sharing was born free and is a longstanding practice (in fact, circulating preprints was an early use of the internet), but in the last few years traditional corporate publishers have moved to grab land in an attempt to monetize preprints and the resulting scholarly infrastructure and journals that will be building on preprint servers. The non-profit Center for Open Science that built PsyArXiv and its sister site is working on creating extensions for PsyArXiv and other preprint servers, such as peer review, to allow the creation of low-cost open-access journals that receive submissions directly from preprint servers. The preprint server will host the final article as well as the preprint, dressed up with a journal page window onto it, a bit like existing overlay journals.

We can only be assured that publication practices and policies are compatible with these and other developments if journals are owned by scholarly societies, libraries, grant funders, or universities, NOT corporate publishers. To prevent the lock-in that has contributed to sky-high subscription prices and slowed the shift to open access, publishers should be contracted as service providers to the scholarly community rather than owning our journals. Large research funders have recognized this: to reduce our reliance on publishers who own journals, the Wellcome Trust, the Max Planck Society, and HHMI (with eLife), and the Gates Foundation have over the last few years created their own open access journals.

We recently created an information resource to assist journal editors and scholarly societies in understanding what needs to be done to flip an existing journal from being publisher-owned to being scholar-owned, open access, and low cost. I’m interested in hearing people’s thoughts here. You can also contact me or Tom Wallis directly if you are interested in flipping a journal.

UPDATE: An earlier version erroneously stated that Springer does not allow preprint posting.

Creating a homework doc and its grading guide in one go

I write homework assignments for students. I also need to create a different version of the same document with all the answers and scoring guide for the tutors (teaching assistants). It is irritating to create two different versions of the document by hand. To avoid this, I’ve come up with the following imperfect solution:

  • Write the assignment in .Rmd. One side benefit of this is that one can automate adding up the points each question is worth.
  • Include all the information for grading the homework, such as the correct answers and partial-credit answers, in markdown (HTML) comments (<!-- ... -->) below each question.
  • Render the .Rmd to PDF with RStudio’s knitr and send it to the students.
  • Pass the .Rmd through a sequence of sed commands to replace the comment tags with tags for a code block, creating a “gradingGuide.Rmd”.
  • Render gradingGuide.Rmd to PDF or html, and send it to the tutors (teaching assistants).
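
The sed step can be sketched as follows. This is a hypothetical illustration rather than my exact script: it assumes the grading notes are wrapped in HTML comments whose <!-- and --> delimiters sit on lines of their own, and the file names are made up for the example.

```shell
# Toy homework file: one question, with grading notes hidden in an HTML comment.
printf '## Q1 (2 pts)\nDefine a p-value.\n<!--\nFull credit: mentions probability under the null hypothesis.\n-->\n' > homework.Rmd

# Turn the comment delimiters into code fences, so the grading notes
# become visible when gradingGuide.Rmd is rendered.
sed -e 's/^<!--$/```/' -e 's/^-->$/```/' homework.Rmd > gradingGuide.Rmd
```

Rendering gradingGuide.Rmd then shows the notes verbatim inside a code block. A more robust approach might use knitr’s own machinery (e.g. chunks made conditional on a “solutions” parameter), but the sed route keeps the student and tutor versions in a single source file.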

Any other solutions out there?

Our latest work: on attention, letter processing, memory

Below is what we’ll be presenting at EPC 2017 (the Experimental Psychology Conference of Australasia) near Newcastle, Australia. The topics are attention and letter processing, word processing, and visual working memory.

When do cues work by summoning attention to a target and when do they work by binding to it?

Alex Holcombe & Charles Ludowici

In exogenous cuing experiments, a cue such as a circle flashes at one of several locations, any of which might contain a target soon after. Accuracy is near chance when the cue is presented simultaneously with the target, but improves rapidly for longer lead times between the cue and the target. The curve tracing this out has positive skew, consistent with a rapid (~80 ms, with variability) shift of attention.

We will report evidence that exogenous cues can also facilitate performance by binding to a buffered representation of the target, obviating the need for attention to shift to the target’s location. We presented rapid streams of letters (RSVP) concurrently in multiple locations. A random letter in a single stream was briefly cued by a circle and participants tried to report the cued letter. Analysis of the errors reveals binding, as indicated by 1) participants reporting non-targets presented shortly before the cue nearly as often as items presented after the cue; 2) the distribution of the times of the reported non-targets being mirror-symmetric rather than positively skewed. Our results suggest that more than eight letters are activated and buffered simultaneously before the cue even appears.

Can SFT identify a model’s processing characteristics when faced with reaction time variability?

Charles Ludowici, Chris Donkin, Alex Holcombe

The Systems Factorial Technology (SFT) analysis technique, in conjunction with appropriately designed behavioural experiments, can reveal the architecture, stopping rule and capacity of information processing systems. Researchers have typically applied SFT to simple decisions with little variability in processing demands across stimuli. How effective is SFT when the stimuli vary in their processing demands from trial to trial? For instance, could it be used to investigate how humans process written words? To test SFT’s performance with variable stimuli, we modelled parallel limited-, unlimited- and super-capacity systems using linear ballistic accumulator (LBA) models. The LBA models’ parameters were estimated for individual participants using data from a lexical decision experiment – a task that involved a set of stimuli with highly variable, stimulus-specific response times. We then used these parameters to simulate experiments designed to allow SFT to identify the models’ capacities, architecture and stopping rule. SFT successfully identified system capacity with <600 trials per condition. The probability of correctly identifying the LBA’s architecture and stopping rule increased with the number of trials per condition. However, even with 2000 trials per condition (8000 trials in total), the power of these tests did not exceed .6. SFT appears promising for investigating the processing of stimulus sets with variable processing demands.

Capacity limits for processing concurrent briefly presented words

Kimbra Ransley, Sally Andrews, and Alex Holcombe

Humans have a limited capacity to identify concurrent briefly-presented targets. Recent experiments using concurrent rapid serial visual presentation (RSVP) of letters have documented that the direction of reading affects which of two horizontally-displaced streams is prioritised. Here, we investigate whether the same pattern of prioritisation occurs when participants are asked to identify two horizontally displaced words. Using a stimulus in which two words are briefly presented at the same time (not embedded in an RSVP stream), and the location of one of the words is subsequently cued, we do not find evidence of prioritisation in the direction of reading. Instead, we observed a right visual field advantage that was not affected by whether participants were told which word to report immediately or after a 200 ms delay. We compare these results with results from an experiment in which the two words are embedded in an RSVP stream. These experiments provide insight into the conditions in which hemispheric differences, rather than reading-related prioritisation, drive visual field differences, and may have implications for our understanding of the visual processes that operate when one must identify and remember multiple stimuli, such as when reading.

“Memory compression” in visual working memory depends on explicit awareness of statistical regularities.

William Ngiam, James Brissenden, and Edward Awh

Visual working memory (WM) is a core cognitive ability that predicts broader measures of cognitive ability. Thus, there has been much interest in the factors that can influence WM capacity. Brady, Konkle & Alvarez (2009) argued that statistical regularities may enable a larger number of items to be maintained in this online memory system. In a WM task that required recall of arrays of colours, they included a patterned condition in which specific colours were more likely to appear together. There was a robust improvement in recall in this condition relative to one without the regularities. However, this is inconsistent with multiple other studies that have found no benefit of exact repetitions of sample displays in similar working memory tasks (e.g., Olson and Jiang, 2004). We replicated the benefit Brady et al. observed in the patterned condition in two separate studies, but we obtained larger samples of subjects and included an explicit test of memory for the repeated colour pairs. Critically, memory compression effects were observed only in the subset of subjects who had perfect explicit recall of the colour pairings at the end of the study. This effect may be better understood as an example of paired-associate learning.

An open access fail

In this post I dissect the response by the editors of Cognition to a mass appeal for open access by the researcher community. I hope that my rather critical comments will improve understanding of the issues and help the community achieve better outcomes in the future.

Cognition is a scientific journal published by Elsevier that was traditionally available only by subscription. Some years ago, like most other Elsevier journals, Cognition became a “hybrid” journal: authors can make their particular paper open access, for a fee termed an APC, or article processing charge. In the case of Cognition the APC is very high – $2150. As for the subscription fees, most universities subscribe to Cognition as part of a larger “big deal” package, the very high fees for which help give Elsevier an operating profit of over 30% – well exceeding that of BMW, Google, and Apple – earned off the public taxpayer’s tit, not by developing new products or services.

Lingua was a prestigious linguistics journal published by Elsevier and in much the same situation as Cognition. Its editors, including the editor-in-chief, Johan Rooryck, told Elsevier that they’d like to transition to what they called “Fair Open Access” – making the journal open access rather than requiring an expensive subscription to read the journal, with an APC fee of 400 Euros or less, CC-BY licensing of articles with copyright remaining with the authors, and full editorial control of the journal.

Fair Open Access is how journals really should be set up, with the publisher in the role of a service provider, not in the role of owner of the research articles (the content of which is typically almost entirely funded by university or government funds). When Elsevier refused to agree to this model, Rooryck and the other editors walked. They started a new journal, Glossa, published by the non-profit Open Library of the Humanities (OLH). The OLH model easily exceeds the ambition of Fair Open Access: thanks to monetary contributions by over one hundred university libraries, authors are charged no APC fee – Glossa is free to publish in (although authors with open-access funds available to them are asked to optionally contribute 400 Euros) – and its content is free to read and re-use. Thanks to Rooryck’s leadership and no doubt the community rallying together, all of the editors and the editorial board and many or all of the authors moved over to Glossa, bringing their prestige along with them.

Wouldn’t it be great if other scientific journals followed suit? David Barner (UCSD) and Jesse Snedeker (Harvard) of the editorial board of Cognition thought so. They appealed to the main Cognition editors to investigate the possibility of Fair Open Access. And they started a petition, which was signed by more than 1500 members of the Cognition community, including many famous researchers (such as Noam Chomsky, Nancy Kanwisher, and Liz Spelke) as well as not-so-famous ones (like me) who publish in Cognition.

The response by the editors of Cognition appeared as an editorial in the journal. The editors say that in response to Barner and Snedeker’s appeal and the associated petition, they polled the editorial board about “their satisfaction with the journal and their attitudes about the journal’s role in the open dissemination of science” and got a response rate of 60%. Sixty percent? This looks like a failure of leadership. The editors asked critically important questions about the nature and future of the journal, and got responses from not much more than half of the editorial board. Presumably every member of the editorial board does serious work for the journal – editing the occasional manuscript at the editors’ request. If not, those editorial board members should be asked to resign. So there’s essentially a 100% (eventual) response rate to editing requests, which is far more work than answering a poll about satisfaction with the journal. Of course, I do not know how much of this is a failure of leadership by the editor in chief, real recalcitrance by the editorial board, or an intentionally weak effort by an editor in chief who doesn’t want to change anything.

The editorial continues:

While the editorial board expressed a range of opinions, most members were happy with the journal’s relationship with Elsevier.

I’d expect well-informed and public-minded, or even just university-minded, scholars to be less than happy. I’d expect them to resent how much money Elsevier sucks out of our universities as corporate profit, and to resent Elsevier’s ownership of the copyright to the research. Still, being “happy with the relationship” is an ambiguous statement; the editorial board members might still strongly support some of the planks of Fair Open Access.

After a list of the services that Elsevier provides (with no indication that those services couldn’t be provided by OLH or others), the editorial continues:

The poll also indicated striking consensus on the open access issue: The editorial board was happy with the journal’s mixed approach to dissemination, but it felt strongly that open access fees are too high. They felt that a substantial reduction in open access fees would make the option more attractive to authors, with the effect of increasing access around the world to scientific work published in the journal, work that is frequently publicly funded.

By now one can infer that the editors have already given up on (or never tried for) four out of five of the Fair Open Access points. More about that in my next post. But here, we do have a strong statement in support of the request for reasonable APCs.

the editors at Cognition approached Elsevier with a request to lower open access fees. A process of negotiation ensued with the result that Elsevier will start a fund to defray open-access costs for those authors with limited means of support.

OK, negotiation and compromise was to be expected. But what exactly is this fund? The editorial continues:


Authors whose articles are accepted after 1st May, 2016 can apply by requesting a form from the editorial office:

Decisions to grant discounts are at the discretion of the Editor-in-Chief, in consultation with the Publisher.

Accepted authors always have the choice to publish their article as a subscription article at no cost (even after requesting an APC fee reduction), and the subscription option includes Green Open Access; Cognition has an embargo period of 12 months.

APC discounts must be requested within one week of acceptance, and will have no impact on the decision made by an editor whether or not to accept the associated paper.


What does this amount to? There’s no information about how large the discounts will be or how many are available. Elsevier will continue to charge an outrageous $2150 APC fee to most, with a completely unknown discount for some. I have it on good authority that the actual cost (not counting the contributions from university libraries that bring the author cost to zero) to publish an article open access for OLH is much less than $1000.

Is this discount of variable amount and unknown total extent a decent outcome of the editors’ (ostensible) attempt to fight for scholars’ interests? Let’s set aside the point that only one of the original five requests was put to Elsevier. Even with the one remaining, there is no information provided about the value of the limited concession they got.

As a signatory to the Fair Open Access petition and a researcher who’s published in Cognition, I’m very upset by both the outcome and the process. I’d expect a large minority, if not a majority, of the other 1,650 signatories to be unhappy as well.

Barner, Levy, and Snedeker have described their reaction. Yes, they too are unhappy. They consider the “reasonable compromise” with Elsevier (in the words of the editorial) to be not only unreasonable but also unethical:

While paying APCs to Elsevier might make individual articles publicly available, this is neither necessary, since there exist FREE ways to accomplish this same goal (see below), nor ethical, because it spends even more taxpayer dollars without significantly affecting the global problem of access.

To top it all off,  as if to say who’s really the boss, the editors’ editorial is not legally their own. As it says at the bottom of the article, it is “copyright Elsevier B.V., 2016”.

In my next post, I’ll try to consider what we should learn from all this, with a view towards new efforts. If you want to jump to a specific new effort, see the end-run action around Elsevier being promoted by Barner, Levy, and Snedeker.


#academicNoir memes

When #AcademicNoir trended on Twitter, I had fun making a few memes about science publishing and the Registered Replication Reports that we started at Perspectives on Psychological Science.


“Who’s shaking you down?” I asked.

“Elsevier,” whispered the librarian.

I showed her the door.  I still have to work in this town. 


Any one of you go it alone, he’ll say you messed up. But if we first get him to approve the protocol, and then all run the replication together…


Well, Mr. P. Hacker, your luck is about to run out.

Down at the journals, they’re running a new game. They call it ‘preregistration’.

“How about we just delete these two data points?”

She looked shocked.

“Listen- you want to get this published, or not?”

They called him “Big Pharma”. Really shady character. Really knew how to make a data set go missing.

We know you’re in there. We’ve got your lab surrounded.

OK, I’ll come out.

Don’t move! Just email us the data. The *raw* data.

He had a good run, for a while. But then he got an email from . Asking for the raw data. His time was up.

“But you haven’t even seen my numbers yet!” she said.

“Just give Dr. Hacker a little time alone with them,” I told her.