What now? Some lessons from the APA take-downs

The APA’s take-down notices have reminded us that the APA, not we authors, owns our published articles.

While the APA has claimed that the initiative was simply “to preserve the scientific integrity of the research we publish and provide a secure web environment to access the content”, the APA’s $10 million a year of subscription income might have more to do with it. Indeed, the APA may be reliant on this income. If so, the APA’s interests are in conflict with the interests of scientists, clinicians, and research funders. A top priority of these groups is maximizing the dissemination of knowledge.

By dissemination of knowledge, I don’t just mean individuals being able to read articles after downloading a PDF. To allow improvements to scholarly infrastructure, including a future of automated error checking, fact mining, and meta-analysis, the authoritative versions of scientific articles should not be locked behind paywalls.

On this front, let’s give the APA a bit of credit – they have investigated a transition away from subscription journals. The APA has started a fully open access journal (which has waived APC fees for its first year), and they do allow full open access for a fee in all of their journals. However, the fee is relatively high at $4,000. To enable sustainable open access, we need the cost to be lower. If $4,000 is an indication of the APA’s costs, the APA is not where we should be putting our hopes for the future.

What should researchers do?

In a policy that is more liberal than that of many publishers, the APA allows posting author-formatted manuscripts that contain all the revisions made during the review process.

Posting author-formatted manuscripts is not the final solution to anything, but it can speed progress towards a solution. I refer to posting manuscripts to database-indexed repositories such as university repositories or PsyArXiv.org (disclosure: I am an [unpaid] member of PsyArXiv’s Steering Committee). In contrast, posting to private entities such as ResearchGate and Academia.edu may not be a good idea: they cannot be trusted to keep things completely open – like SSRN, they may be bought by Elsevier or start locking things down to monetize their content.

How does posting our manuscripts advance a long-term solution? First, as more and more researchers habitually post their manuscripts, more universities become comfortable cancelling their journal subscriptions, forcing publishers to move towards other models.

Second, the repositories to which researchers post their articles are themselves likely to become an integral part of the publishing future. The emerging overlay journals, for example, are simply webpages curated by editorial boards that link to articles in repositories. The editorial board solicits peer review of submitted articles (which needn’t be uploaded anywhere new, since they are already in the repository as preprints). The authors revise based on the reviews, and once the editor is happy, the revised version – still hosted by the repository – is “published” on the journal webpage. The Center for Open Science is currently working on creating a peer review module for OSF, PsyArXiv, and their other repositories to facilitate this.

Overlay journals are one viable route to low-cost open access publishing; using Open Journal Systems (OJS) as the editorial submission and peer review management system is another. OJS is already used by thousands of journals, at low cost. However, low cost does not mean zero cost: the costs, both in hours of labor and in technology, are substantial under any model. If the money won’t be coming from subscriptions, where will it come from?

Charging fees directly to authors or their funders has worked for many open access journals, but this is not a comprehensive solution, as many authors do not have funding. This is one reason that in our Fair Open Access principles, we stipulate that authors should not be charged.

Universities and research funders should come together to pool resources to support scholarly communication infrastructure. This is already happening in certain initiatives such as the Open Library of Humanities. More than 200 universities are members of OLH and provide funds to support the 14 journals it publishes. Importantly, for OLH journals the publisher (Ubiquity) is a service provider; it does not own the journals.

Authors and editors can organize for editorial boards to resign from publisher-owned journals and join an existing open access journal or create a new one, as has already happened many times. We provide some information resources for this at PsyOA. Just this year, the European Society for Cognitive Psychology abandoned its corporate subscription-based publisher and started the Journal of Cognition, which uses Ubiquity and charges relatively low APCs.

Let’s keep the conversation sparked by the APA going and create a fully open access and sustainable future.


The APA and publishing costs

This is a follow-up to my previous post, which was about the APA issuing take-down notices and how you can post preprints to keep your science open.

In a survey last year asking vision researchers about their priorities for journals, the top responses included:

“open access”; “full academic or professional society control”; “low cost”; and “transparent financial accounts”.

Notably, the APA is one of the few publishers in the area of perception that has not responded to the concerns highlighted by the survey results. Most articles they publish are available only by subscription, but the APA makes select articles fully open access for a $3,000 fee, typically charged to the authors or their funders. That is a relatively high fee.

From their annual report we know the APA receives $11 million/year in journal subscription revenue and $67 million in other licensing revenue, but the report does not break down the $17 million in “publication production” costs, so it is difficult to evaluate what the $3,000 open access fees pay for.

Some of us, and many of our funders, would like to see science transition to open access publishing, in which authors do not sign their copyright away. We’d also like to see low or no author fees. Changing existing journals, such as the APA journals, is particularly hard because often the publisher owns the journal, even though the editorial board and authors provide all the content that makes the journal what it is. PsyOA is an initiative staking out the principles that we call Fair Open Access; it provides information to editors and scholarly societies interested in moving their journal from a subscription basis to open access. Another part of the solution is to use and support new infrastructure for scholarly communication that is not reliant on subscription publishers, such as PsyArXiv and bioRxiv. Some efforts are underway to create a peer review module to allow journals to use that infrastructure, which is expected to result in low-cost and modern open access journals.


Is the APA trying to take your science down?

Dear Psychologist,

If you have published in an APA (American Psychological Association) journal and posted the article PDF to a website, you may have already received an email from APA lawyers asking you to take that PDF down:

Dear Sir/Madam,

I write on behalf of the American Psychological Association (APA) to bring to your attention the unauthorized posting of final published journal articles to your website. Following the discussion below, a formal DMCA takedown request is included with URLs to the location of these articles.

The APA is likely within their legal rights here, but there is a way to continue making your work freely available to the world. Upload the final accepted version of your article (your final revised Word document, if you wrote your paper in Word) to your website or, better, to your university repository or to another repository such as PsyArXiv (I am on the Steering Committee of PsyArXiv). Your personal website is not the best option because personal websites tend to be transient and are not always properly indexed by the likes of Google Scholar, and some publishers don’t allow posting to personal websites but do allow posting to repositories.

The APA policy allowing upload to repositories says that you must add the following note to the version you post:

© 2016 American Psychological Association. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without author’s permission. The final article is available, upon publication, at: [ARTICLE DOI]

As the note says, the APA owns the copyright to your paper, not you. Many of us would like to see science transition to open access publishing, in which we do not sign our copyright away. You have probably noticed some success on this front with new journals (e.g., the open-access journal PLOS ONE rapidly became the largest journal in the world). Changing existing journals is harder because often the publisher owns the journal, even though the editorial board and authors provide all the content that makes the journal what it is. PsyOA is an initiative staking out the principles that we call Fair Open Access; it provides information to editors and scholarly societies interested in moving their journal from a subscription basis to open access.



The venerable history of “rhetorical experiments”

Daryl Bem was already rather infamous before he provided, just this week, this excellent quote:

If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made. I used data as a point of persuasion, and I never really worried about, ‘Will this replicate or will this not?’

The quote, from this piece on the history of the reproducibility crisis, has been posted and reposted, sometimes with an expression of anger, sometimes with a sad virtual head shake. The derision is well-deserved in the context of Bem’s final experiments, which attempted to show that ESP exists. But let’s examine what Bem was actually referring to – his earlier career as a social psychologist, a career in which he developed some influential theoretical ideas.

One could argue that Bem’s technique was no less scientific than Galileo’s. Yes, that Galileo, one of the first to use and to champion the experimental method. The following passage is from The Scientific Method in Galileo and Bacon:

[screenshot of a passage from The Scientific Method in Galileo and Bacon]

The method described by Bem, then, is simply Galileo’s scientific method. Admittedly, Galileo was working at the beginning of the history of mechanics, meaning that there was much low-hanging fruit to be picked by generalizing from a few observations and theoretical insights. Bem was working nearly four hundred years later. And yet, much of Bem’s career is not far from the beginning of the history of the field of social psychology. Bem’s theory of attitude change was published less than two decades after Festinger first advanced the cognitive dissonance theory Bem apparently was reacting against.

I know next to nothing about Bem’s work, but I wouldn’t be surprised if he did gain good insights from intuition and theory, and was quite certain of the value of those insights entirely on that basis, and thus the data was indeed just an afterthought. Kahneman and Tversky too made some of their most important discoveries, I believe (e.g., loss aversion?), by a combination of introspection and reasoning.

I don’t think there’s much good to be said about using this “rhetorical experiments” approach in the effort to establish ESP as a real phenomenon, which Bem intended to be the capstone to his career (his work didn’t establish ESP, but ironically did help spark the reforms that are addressing the reproducibility crisis). I detest p-hacking, HARKing, and data fudging – I continue to be involved (e.g. 1, 2) in several initiatives to combat these practices – because I know they have yielded more than one patch of empirical ground that looked like good solid stone on which to build a theory but subsequently turned into a cenote, a deep sinkhole. The cavalier attitude toward methodological rigor implied in Bem’s comments is what gets us into a reproducibility crisis.

Still, propounding a theory on the basis of shoddy evidence has a glorious history in science. Don’t forget it. I’m not sure I want us to lose this data-poor, declamatory tradition. There’s value in getting ahead of the data, even when you don’t have the resources or the skills to collect the data that could falsify your theory. If we can create appropriate space to publish that sort of stuff without the author having to pretend that they have impeccable data, perhaps the pressure to cook the books will lessen.

The emerging future of journal publishing and perception preprints

[a message sent to the vision researcher community of CVnet and visionlist]

Our community and our journals should become more aware of the increasing importance of preprints, and in some cases our journals and our community need to act and change policy.

Preprints are manuscripts posted on the internet openly (ideally, to a preprint service or institutional repository), often prior to being submitted to a journal. Niko Kriegeskorte and others have previously described some benefits of posting preprints at CVnet/visionlist and here. JoV editors have expressed sympathy with posting preprints and talked about ARVO changing its policy, but unfortunately JoV currently still has a page in its submission guidelines prohibiting “double publication”, which rules out posting a preprint. Springer (AP&P; CRPI), Elsevier, the APA, Brill (Multisensory Research), and Sage (Perception, i-Perception), in contrast, allow preprint posting.

Preprint sharing in the biological sciences has been growing at a rapid rate, and it is growing in psychology too, in part due to PsyArXiv, which launched late last year (I am on its Steering Committee). PsyArXiv currently hosts about 500 preprints. Its initial taxonomy did not include a separate category for perception, but I have been pushing for that in the hope that people can eventually subscribe to category updates to help them stay abreast of the newest developments in perception. Later I will circulate a request for feedback regarding what categories and subcategories people would like to see added (e.g. visual perception, auditory perception, tactile perception).

Preprint sharing was born free and is a longstanding practice (in fact, circulating preprints was an early use of the internet), but in the last few years traditional corporate publishers have moved to grab land in an attempt to monetize preprints and the resulting scholarly infrastructure and journals that will be building on preprint servers. The non-profit Center for Open Science that built PsyArXiv and its sister site OSF.io is working on creating extensions for PsyArXiv and other preprint servers, such as peer review, to allow the creation of low-cost open-access journals that receive submissions directly from preprint servers. The preprint server will host the final article as well as the preprint, dressed up with a journal page window onto it, a bit like existing overlay journals.

We can only be assured that publication practices and policies are compatible with these and other developments if journals are owned by scholarly societies, libraries, grant funders, or universities, NOT corporate publishers. To prevent the lock-in that has contributed to sky-high subscription prices and slowed the shift to open access, publishers should be contracted as service providers to the scholarly community rather than owning our journals. Large research funders have recognized this: to reduce our reliance on publishers who own journals, the Wellcome Trust, the Max Planck Society, and HHMI (together behind eLife), and the Gates Foundation have over the last few years created their own open access journals.

We recently created an information resource, PsyOA.org, to assist journal editors and scholarly societies in understanding what needs to be done to flip an existing journal from being publisher-owned to being scholar-owned, open access, and low cost. I’m interested in hearing people’s thoughts here. You can also contact me or Tom Wallis directly if you are interested in flipping a journal.

UPDATE: An earlier version erroneously stated that Springer does not allow preprint posting.

Creating a homework doc and its grading guide in one go

I write homework assignments for students. I also need to create a different version of the same document with all the answers and scoring guide for the tutors (teaching assistants). It is irritating to create two different versions of the document by hand. To avoid this, I’ve come up with the following imperfect solution:

  • Write the assignment in .Rmd. One side benefit of this is one can automate adding up the points that each question is worth.
  • Include all the information for grading the homework, such as the correct answers and partial-credit answers, in markdown comments (<!-- … -->) below each question.
  • Render the .Rmd to PDF with RStudio’s knitr and send it to the students.
  • Pass the .Rmd through a sequence of sed commands to replace the comment tags with code-block tags, creating a “gradingGuide.Rmd” (a sketch of this step follows the list).
  • Render gradingGuide.Rmd to PDF or html, and send it to the tutors (teaching assistants).
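
To make the sed step concrete, here is a minimal sketch in shell. It assumes each grading note is an HTML comment whose delimiters sit on their own lines; the marker name (gradenote) and the file names are just illustrative choices, not necessarily what you’d use:

    #!/bin/sh
    # Build the tutors' grading guide from the students' assignment file.
    # Assumes grading notes are written as comments like:
    #   <!--gradenote
    #   Correct answer: (b). Award half credit for (c).
    #   -->
    # Replacing the comment delimiters with fenced code-block delimiters
    # makes the notes visible in the rendered grading guide.
    sed -e 's/^<!--gradenote$/```/' -e 's/^-->$/```/' assignment.Rmd > gradingGuide.Rmd

    # Render both versions (requires R with the rmarkdown package installed).
    Rscript -e 'rmarkdown::render("assignment.Rmd")'      # students' copy
    Rscript -e 'rmarkdown::render("gradingGuide.Rmd")'    # tutors' copy

Because pandoc drops HTML comments when rendering to PDF, the students’ copy never shows the notes, while the grading guide displays them verbatim inside code blocks.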

Any other solutions out there?

Our latest work on attention, letter processing, and memory

Below is what we’ll be presenting at EPC 2017 (the Experimental Psychology Conference of Australasia) near Newcastle, Australia. The topics are attention and letter processing, word processing, and visual working memory.

When do cues work by summoning attention to a target and when do they work by binding to it?

Alex Holcombe & Charles Ludowici

In exogenous cuing experiments, a cue such as a circle flashes at one of several locations, any of which might contain a target soon after. Accuracy is near chance when the cue is presented simultaneously with the target, but improves rapidly for longer lead times between the cue and the target. The curve tracing this out has positive skew, consistent with a rapid (~80 ms, with variability) shift of attention.

We will report evidence that exogenous cues can also facilitate performance by binding to a buffered representation of the target, obviating the need for attention to shift to the target’s location. We presented rapid streams of letters (RSVP) concurrently in multiple locations. A random letter in a single stream was briefly cued by a circle and participants tried to report the cued letter. Analysis of the errors reveals binding, as indicated by 1) participants reporting non-targets presented shortly before the cue nearly as often as items presented after the cue, and 2) a distribution of the times of the reported non-targets that was mirror-symmetric rather than positively skewed. Our results suggest that more than eight letters were activated and buffered simultaneously before the cue even appeared.

Can SFT identify a model’s processing characteristics when faced with reaction time variability?

Charles Ludowici, Chris Donkin, Alex Holcombe

The Systems Factorial Technology (SFT) analysis technique, in conjunction with appropriately designed behavioural experiments, can reveal the architecture, stopping rule, and capacity of information processing systems. Researchers have typically applied SFT to simple decisions with little variability in processing demands across stimuli. How effective is SFT when the stimuli vary in their processing demands from trial to trial? For instance, could it be used to investigate how humans process written words? To test SFT’s performance with variable stimuli, we modelled parallel limited-, unlimited- and super-capacity systems using linear ballistic accumulator (LBA) models. The LBA models’ parameters were estimated for individual participants using data from a lexical decision experiment – a task that involved a set of stimuli with highly variable, stimulus-specific response times. We then used these parameters to simulate experiments designed to allow SFT to identify the models’ capacities, architectures, and stopping rules. SFT successfully identified system capacity with fewer than 600 trials per condition. The probability of correctly identifying an LBA model’s architecture and stopping rule increased with the number of trials per condition. However, even with 2000 trials per condition (8000 trials in total), the power of these tests did not exceed .6. SFT appears promising for investigating the processing of stimulus sets with variable processing demands.

Capacity limits for processing concurrent briefly presented words

Kimbra Ransley, Sally Andrews, and Alex Holcombe

Humans have a limited capacity to identify concurrent, briefly presented targets. Recent experiments using concurrent rapid serial visual presentation (RSVP) of letters have documented that the direction of reading affects which of two horizontally displaced streams is prioritised. Here, we investigate whether the same pattern of prioritisation occurs when participants are asked to identify two horizontally displaced words. Using a stimulus in which two words are briefly presented at the same time (not embedded in an RSVP stream) and the location of one of the words is subsequently cued, we do not find evidence of prioritisation in the direction of reading. Instead, we observed a right visual field advantage that was not affected by whether participants were told which word to report immediately or after a 200 ms delay. We compare these results with results from an experiment in which the two words are embedded in an RSVP stream. These experiments provide insight into the conditions in which hemispheric differences rather than reading-related prioritisation drive visual field differences, and may have implications for our understanding of the visual processes that operate when one must identify and remember multiple stimuli, such as when reading.

“Memory compression” in visual working memory depends on explicit awareness of statistical regularities.

William Ngiam, James Brissenden, and Edward Awh

Visual working memory (WM) is a core cognitive ability that predicts broader measures of cognitive ability. Thus, there has been much interest in the factors that can influence WM capacity. Brady, Konkle & Alvarez (2009) argued that statistical regularities may enable a larger number of items to be maintained in this online memory system. In a WM task that required recall of arrays of colours, they included a patterned condition in which specific colours were more likely to appear together. There was a robust improvement in recall in this condition relative to one without the regularities. However, this is inconsistent with multiple other studies that have found no benefit of exact repetitions of sample displays in similar working memory tasks (e.g., Olson and Jiang, 2004). We replicated the benefit Brady et al. observed in the patterned condition in two separate studies, but with larger samples of subjects and an explicit test of memory for the repeated colour pairs. Critically, memory compression effects were observed only in the subset of subjects who had perfect explicit recall of the colour pairings at the end of the study. This effect may be better understood as an example of paired-associate learning.