Planning for Plan S

Several European funders have announced Plan S, which suggests that several European nations will ban grantees from publishing in paywalled journals. That includes hybrid journals (which allow one to make an individual article open access for a fee) and the many society journals that are published by large subscription-based publishers. This is to begin in 2020.

Many open access journals charge an APC, or article processing charge. Open access thus increases dissemination and readership, but APCs can shut out authors who cannot pay. This issue is one reason I favor the overlay journal model (here is an example), which can operate at very low cost. For overlay journals, manuscript uploading and hosting are off-loaded to servers such as PsyArxiv. Management of the peer review workflow can be handled for free by university-hosted OJS software (I think; I’m not aware of an overlay journal being managed by OJS – anyone know of any?), or by an external professional service such as Scholastica, which charges $10/article (here is Scholastica’s entry in our guide to low-cost publishers). At a cost that low, sufficient funds should be available from various sources, so that such authors don’t have to pay anything.
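To make the affordability point concrete, here is a rough back-of-the-envelope sketch in Python. Only the ~$10/article figure comes from above; the article volume and the fixed costs are illustrative assumptions of mine, not figures from any actual journal.

```python
# Back-of-the-envelope budget for a hypothetical overlay journal.
# All numbers except the per-article fee are illustrative assumptions.
articles_per_year = 100              # assumed volume for a mid-sized journal
review_management_per_article = 10   # e.g. Scholastica's ~$10/article, per above
hosting = 0                          # manuscripts hosted on a preprint server such as PsyArxiv
fixed_costs = 500                    # assumed: domain name, DOI registration, miscellany

annual_cost = articles_per_year * review_management_per_article + hosting + fixed_costs
print(f"Estimated annual cost: ${annual_cost}")                          # $1500
print(f"Per published article: ${annual_cost / articles_per_year:.2f}")  # $15.00
```

Even if these assumed overheads are off by a factor of two or three, the total remains small enough that a library, society, or small grant could cover it without charging authors.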

Unfortunately, many societies have become dependent on money that comes from restricting dissemination of their members’ research. One can anticipate a lot of resistance to open access from these societies for that reason. AAAS, which publishes Science, has already resisted. For societies that are less conservative (such as the Association for Psychological Science, for which I am an associate editor), how should they be lobbied? What realistic goal should we be pushing for? I’m still thinking through the path(s) that should be taken.

Plan S has yet to be fully spelled out. It is possible (and hoped by many) that researchers will be able to satisfy the mandate via green OA (uploading their manuscript to a server such as PsyArxiv, or to their institutional repository) rather than through a prohibition on publishing in certain types of journals. The money that was previously going to fat subscription publishers, sometimes in the form of APCs, would still be cut off, and the alternative publishing infrastructure associated with green OA would be boosted. This would hasten a transition away from dependence on journals for dissemination, and thereby lower the prices that journals can charge, whether as subscriptions or as APCs. We can anticipate that peer review facilities, both pre- and post-publication, will become more and more available for “preprint” servers, unleashing lower costs as well as new innovation in what a journal is.

FYI, PsyOA.org is a resource (together with LingOA and MathOA) that we created to help journals flip to open access. And Publishing Reform is an open forum for discussion of this and many other issues.


Posting manuscripts online

The letter below, sent to my department, is a brief primer for researchers on pre/postprint posting, with some comments on the evolving scholarly publication landscape.


Dear colleagues,

In last week’s School Research Committee meeting, the topic of publishing open access came up, and our Head of School asked me to send you an email about one solution in particular. 

In the brains of many academics, the phrase “open access” rapidly activates thoughts about the publication fees (APCs, or Article Processing Charges) that large open-access journals such as PLoS ONE (USD $1,595) and Nature Communications (USD $5,200) charge. Major funders such as the ARC and NHMRC do allow grant money to be spent on these fees, but even if your lab has the loot, you might be disinclined to splash out on journal APCs rather than, say, a bit more salary for lab personnel.

Fortunately, in most cases you need not pay a fee to make your research freely accessible. Typically you can post your manuscript to an open web repository such as PsyArxiv.org or BioRxiv.org, or to the University’s repository. This is always in compliance with copyright when done before submitting the manuscript to a journal. Most journals also allow posting the manuscript after submitting it to the journal, and even allow posting the manuscript that incorporates all the reviewers’ comments after acceptance by the journal. At the SHERPA/ROMEO site, you can look up the policy of nearly every journal. The policy typically includes a requirement to post a link to the official published version of the article.

Posting to a repository not only makes your particular articles freely available, it also hastens the growth of open-access infrastructure, reducing the world’s reliance on access to sometimes-expensive journals. Posting on one’s personal website is not as effective. As the NHMRC open access policy explains, your own site or sites like ResearchGate and Academia.edu “are not acceptable repositories … as they may not provide the appropriate support for long-term storage, curation and/or fulfilment of publisher copyright requirements.”

While few in our field posted to repositories a decade ago, repositories have seen rapid, near-exponential growth over the last several years, and as a consequence some of us now frequently find relevant research well before it appears in a journal (this also results in faster accumulation of citations).

As disillusionment with the expense and exclusivity of subscription journals grows, various institutions have refused to keep paying ever-increasing amounts for subscriptions. Consortia of German and Swedish universities sought to include open access publishing for their researchers in new contracts with publishers such as Elsevier, and, when the publisher would not agree to terms, cancelled the contracts and thus lost access to Elsevier journals.

Posting manuscripts in repositories helps these researchers continue to access new research. I have more on these topics here.

Finally, if you happen to be an editor of a subscription journal, or officer of a society associated with one, you might be interested in PsyOA, an organisation that has laid out what we call the Fair Open Access principles, and provides information and resources for those interested in moving a journal from a subscription basis to open access.

-Alex

Anne Treisman and feature integration theory

Perhaps the rumors of Anne Treisman’s passing are greatly exaggerated. I hope they are [UPDATE: I’ve gotten confirmation they sadly aren’t]. Regardless, in this era of lists of most-influential psychologists that do not include her, it is a good time to reflect on her influence.

Anne Treisman studied during what she described as “the cusp of the cognitive revolution”. Her tutor (the instructor leading her very small classes at Cambridge) was Richard Gregory, probably one of the greatest educators the field of perception has known, as well as an excellent researcher. Gregory, I imagine, would have embraced the cognitive approach to understanding the mind as a refreshing alternative to behaviorism, which ran so contrary to the tradition of how visual perception was understood. During her PhD studies, Treisman was influenced by Donald Broadbent’s book describing a filter model of selective attention.

Two decades after completing her PhD, Anne Treisman proposed the theory that was and is, by a wide margin I believe, the most influential theory of attention. I still struggle with its implications today. Just yesterday I submitted a conference abstract (pasted below) whose first sentence quotes Treisman’s 1980 paper on this “feature integration theory of attention”.

It is just astounding that such a specific theory (as opposed to a general framework, e.g. Bayesian approaches) has sparked so much interesting research while still remaining a live question itself and seeming to resist simple confirmation or disconfirmation. It eventually brought what is now known as “the binding problem” to the forefront of neuroscience, after more than a decade of work in the psychology of visual perception and visual cognition.

To understand the issues surrounding Treisman’s specific claim that visual attention binds features requires, I think, a richer view of what vision does than any of us may yet possess. I have been struggling with it myself for over twenty years.


The remarkable independence of visual features… delimited

Alex Holcombe, Xiaoqi Xu, & Kim Ransley

Visual features could “float free” if no binding process intervened between sensation and conscious perception (Treisman, 1980). If instead a binding process precedes conscious perception, it should introduce dependencies among the featural errors that one makes. For example, when multiple objects are presented close in space or in time, an erroneous report of one feature from a neighboring object should more often than chance be associated with a report of the other feature from that neighboring object. Yet researchers have repeatedly found this not to be true, for features such as color, orientation, and letter identity (Bundesen et al., 2003;  Kyllingsbæk and Bundesen, 2007; Holcombe & Cavanagh, 2008; Vul & Rich, 2010). These remarkable findings of free-floating independence raise difficult questions about when and how feature binding occurs. They have inspired surprising conclusions, such as that features are not bound until they enter memory (Rangelov & Zeki, 2014). In two experiments, we find independence of temporal errors when reporting simultaneous letters from two streams that are far apart, much like the independence observed in the literature for other stimuli. But when the streams were presented very close to each other, a positive correlation was found. Experiment 1 found this for English letters and Experiment 2 for Chinese character radicals tested with readers of Chinese. These findings suggest that, in this case at least, a distance-dependent visual process mediates binding and thus that binding is not post-perceptual. In discussion, a broader view of visual feature binding will be offered.

A partial solution to the problem of predatory journals, and a new index of journal quality

On Twitter I floated a partial solution to the problem of predatory journals, which I’ll expand on here.

If you’ve been in a field for a couple years, then you’re familiar with the journals that most of its research is published in. If you came across a journal that was new to you, you’d probably scrutinise its content and its editorial board before publishing in it, and you’d probably notice if something were a bit dodgy about that journal.

But many users of scientific research do not have much familiarity with the journals of particular specialties and their mores. Sadly, this includes some of the administrators who make decisions about the careers and grants of scholars. It also includes many researchers in countries without a long tradition of full integration with international scholarship, who are now trying to rapidly join the community of scholars publishing in English. Journalists, medical professionals, Wikipedia authors, and policymakers may not have the experience to distinguish good journals from illegitimate ones.

Unfortunately, there is no one-stop shop that scholars, administrators, journalists, or policymakers can consult for an indication of how legitimate a journal is. Predatory journals are common, charging researchers hundreds of dollars to publish an article with little to no vetting by reviewers and with shoddy publishing services. Their victims may predominantly come from countries trying to break into international publishing in English for the first time, some of whom receive monetary rewards from their universities for doing so. There are proprietary journal databases, such as the Journal Citation Reports from Thomson Reuters, but they cost money and can take years to index new journals. Jeffrey Beall used to maintain an (arguably biased) free list of predatory journals, but for various reasons, including legal ones (see 1, 2), blacklists are probably a bad idea.

What follows is an automated way to create a list of legitimate journals, in other words a whitelist for people to consult. It can’t be fully automated until the scholarly community institutes a few changes, but these are changes that arguably are also needed for other reasons.

Non-predatory, respected journals nearly universally have an editorial board of scholars who have published a significant amount of research in other respected journals. The whitelist would need to establish whether those scholars exist and have published in (other) reputable journals.

Journals have rapidly taken up the ORCID system of unique researcher identifiers, asking authors who submit papers to enter their ORCID iD. They should also do this for their editorial board members – journals should add ORCID iDs to their editorial board lists.

An organization (such as SHERPA, which maintains the SHERPA/ROMEO list of journals and their open access policies) could then pull the editors’ publication lists from ORCID and compute a score, with a threshold on that score indicating that a goodly proportion of the editors have published in other reputable journals. To get this started, existing whitelists of legitimate journals would be used to verify that the journals the editorial board members have published in are legitimate.
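As a sketch of how such a score might be computed, assuming the editors’ publication venues have already been pulled from their ORCID records (the function, the example data, and the 0.8 threshold below are hypothetical choices of mine, not part of any existing whitelist scheme):

```python
def editorial_board_score(editor_venues, whitelist):
    """Fraction of editors with at least one publication in a whitelisted journal.

    editor_venues: dict mapping each editor (e.g. an ORCID iD) to the set of
        journals they have published in, as retrieved from their ORCID record.
    whitelist: set of journals already known to be legitimate (the seed list).
    """
    if not editor_venues:
        return 0.0
    vetted = sum(1 for venues in editor_venues.values() if venues & whitelist)
    return vetted / len(editor_venues)


# Hypothetical example: the journal passes if, say, 80% of its board members
# have published in journals on the seed whitelist.
board = {
    "0000-0001-0000-0001": {"Journal A", "Journal B"},
    "0000-0001-0000-0002": {"Journal C"},
    "0000-0001-0000-0003": {"Unknown Venue"},
}
seed_whitelist = {"Journal A", "Journal C"}
score = editorial_board_score(board, seed_whitelist)
print(score, score >= 0.8)  # 0.67 False: this board would not (yet) qualify
```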

A new index of journal quality

The score could also be used as a new indicator of the esteem of journals – if a journal has only highly-cited researchers on its editorial board, it is probably a prestigious journal (science badly needs new indicators of quality, however flawed, to reduce reliance on citation metrics like the impact factor). Journals could thus be ranked by the scholarly impact of their editorial board members. This would allow new journals to have prestige immediately, without having to wait the years necessary to establish a strong citation record.
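One way such an esteem indicator might be computed, again only as a sketch with made-up numbers: summarise a citation metric across each journal’s editorial board (here the h-index, though any metric has its flaws) and rank journals by that summary.

```python
from statistics import median

def board_prestige_index(h_indices):
    """Median h-index of a journal's editorial board members (a crude esteem proxy)."""
    return median(h_indices) if h_indices else 0

# Illustrative comparison of two hypothetical journals' boards
boards = {
    "New Open Access Journal": [42, 35, 51, 28],
    "Obscure Journal": [3, 5, 2, 4],
}
ranking = sorted(boards, key=lambda j: board_prestige_index(boards[j]), reverse=True)
for journal in ranking:
    print(journal, board_prestige_index(boards[journal]))
# New Open Access Journal 38.5
# Obscure Journal 3.5
```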

Currently, the reliance on the impact factor and its long time lag imposes a high barrier to entry, preventing innovative publishers and journals from competing with older journals. This is also a key obstacle to getting editorial boards to decamp from publisher-owned subscription titles and create new open access journals, because a new journal has no citation metrics.

A remaining difficulty is that some predatory journal webpages list the names of researchers on their editorial board who never agreed to be listed. If ORCID added a field for researchers’ digital-signature public keys, and researchers started using digital signatures, then journals could include on their webpages (and even in their article PDFs) a signed message from each editor certifying that they agreed to be on the editorial board.
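As a sketch of what such a certification could look like, here is a sign-and-verify round trip using Ed25519 signatures from the Python cryptography library; the message format and workflow are my own assumptions, not an ORCID or publisher standard.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The editor generates a keypair once; the public key would sit in their ORCID record.
editor_private_key = Ed25519PrivateKey.generate()
editor_public_key = editor_private_key.public_key()

# The editor signs a short statement of board membership (the wording is hypothetical).
statement = b"I agree to serve on the editorial board of Journal X for 2018."
signature = editor_private_key.sign(statement)

# Anyone can fetch the public key (e.g. from ORCID) and check the journal's claim.
try:
    editor_public_key.verify(signature, statement)
    print("Signature valid: this editor really did agree to serve.")
except InvalidSignature:
    print("Signature invalid: the editorial board listing may be fraudulent.")
```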

UPDATE 2 Feb 2018: ORCID is already in the process of adding a field for editorial affiliations.

 

Psychonomics 2017, our presentations

Was that a shift of attention or binding in a buffer?

Charles J. H. Ludowici; Alex O. Holcombe (presented by Alex)

3:50-4:05 PM Friday 10 November, West Meeting Room 118-120

 

Cueing a stimulus can boost the rate of success in reporting it. This is usually thought to reflect a time-consuming attention shift to the stimulus location. Recent results support an additional process of “buffering and binding” – stimulus representations have persisting (buffered) activations, and one is bound with the cue. Here, an RSVP stream of letters is presented, with one letter cued by a ring. The presentation times of the reported letters, relative to the cue, are aggregated across trials. The resulting distribution appears time-symmetric and includes more items from before the cue than guessing predicts. When a central cue is used rather than the peripheral ring, the data no longer favor the symmetric model, suggesting an attention shift rather than buffering and binding. To explore the capacity of buffering in the peripheral-cue condition, we vary the number of streams, documenting changes in the temporal dispersion of the letters reported and in the time of the letter most frequently reported.

 

Negotiating a capacity limit in visual processing: Are words prioritised in the direction of reading?

by Kim Ransley, Sally Andrews, and Alex Holcombe

poster #1208 [revised title] 6-7.30pm Thursday 9 November

Experiments using concurrent rapid serial visual presentation (RSVP) of letters have documented that the direction of reading affects which of two horizontally-displaced streams is prioritised — in English, the letters of the left stream are better reported but this is not the case in Arabic. Here, we present experiments investigating whether this left bias occurs when the stimuli are concurrently presented English words. The first experiment revealed a right bias for reporting one of two simultaneously and briefly-presented words (not embedded in an RSVP stream), when the location of one of the words was subsequently cued. An ongoing experiment directly compares spatial biases in dual RSVP of letters with those in dual RSVP of words in the same participants. These findings have implications for understanding the relative roles of hemispheric lateralisation for language and attentional deployment during reading. UPDATE: THE RSVP EXPERIMENTS REPLICATE THE DIFFERENCE BETWEEN LETTERS AND WORDS BUT GO ON TO SHOW THE SAME BIAS (UPPER VISUAL FIELD) WHEN THE STIMULI ARE ARRAYED VERTICALLY RATHER THAN HORIZONTALLY, CONSISTENT WITH READING ORDER. Come by to hear our exciting conclusion!

Can Systems Factorial Technology Identify Whether Words are Processed in Parallel or Serially?

Charles J. H. Ludowici; Alex O. Holcombe

6:00-7:30 PM Friday 10 November poster session

 

To determine the capacity, architecture (serial or parallel), and stopping rule of human processing of stimuli, researchers increasingly use Systems Factorial Technology (SFT) analysis techniques. The associated experiments typically use a small set of stimuli that vary little in their processing demands. However, many researchers are interested in how humans process kinds of stimuli that vary in processing demands, such as written words. To assess SFT’s performance with such stimuli, we tested its ability to identify processing characteristics from simulated response times derived from parallel limited-, unlimited- and super-capacity linear ballistic accumulator (LBA) models, which mimicked human response time patterns from a lexical decision task. SFT successfully identified system capacity with <600 trials per condition. However, for identifying architecture and stopping rule, even with 2000 trials per condition, the power of these tests did not exceed .6. To our knowledge, this is the first test of SFT’s ability to identify the characteristics of systems that generate RT variability similar to that found in human experiments using heterogeneous stimuli. The technique also constitutes a novel form of power analysis for SFT.

Ethics and IRB burden

Hoisted from the comments on Scott Alexander’s ethics/IRB nightmare, an insight I hadn’t seen before:
Most research admins are willing to admit the “winging it” factor among themselves. For obvious reasons, however, you want the faculty and/or researchers with whom you interact to respect your professional judgment…
So of course you’re not going to confess that you don’t really have a clue what you’re doing; you’re just puzzling over these regulations like so many tea leaves and trying to make a reasonable judgment based on your status as a reasonably well-educated and fair-minded human being.
 
What this means in practice is almost zero uniformity in the field. Your IRB from hell story wasn’t even remotely shocking to me. Other commenters’ IRB from just-fine-ville stories are also far from shocking. Since so few people really understand what the regulations mean or how to interpret them, let alone how to protect against government bogeymen yelling at you failing to follow them, there is a wild profusion of institutional approaches to research administration, and this includes huge variations in concern for the more fine-grained regulatory details. It is really hard to find someone to lead a grants or research administration office who has expertise in all the varied fields of compliance now required. It’s hard to find someone with the expertise in any of the particular fields, to be honest.
 
And, to bring home again the absurdity:
 
Nobody expects any harm from asking your co-worker “How are you this morning?” in conversation. But if I were to turn this into a study – “Diurnal Variability In Well-Being Among Office Workers” – I would need to hire a whole team of managers just to get through the risk paperwork and the consent paperwork and the weekly reports and the team meetings. I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

A major math journal flips to Fair Open Access

Akihiro Munemasa, Christos Athanasiadis, Hugh Thomas, and Hendrik van Maldeghem share the chief editor role at a journal that’s like many others across mathematics and the sciences. The Journal of Algebraic Combinatorics is a subscription journal published by one of the big, highly-profitable publishers (Springer Nature). But they haven’t been happy with the fees Springer charges for people to read their articles.

At the end of the year, all four will resign, as will nearly everyone on the editorial board. They’re starting a new, open access, free-to-authors journal, called Algebraic Combinatorics, which will follow the Fair Open Access principles. The model for this flip is the precedent of journals like Lingua: after the editors and editorial board abandoned ship, the community of researchers followed, withdrawing many of their submitted manuscripts from the old journal and submitting them, along with their new manuscripts, to the new journal, Glossa. This happens because the real value in a high-quality journal like the Journal of Algebraic Combinatorics and (formerly) Lingua does not come from the journal’s publisher but from the scholars who send the journal their work, review others’ work, and serve as editors.

The Centre Mersenne will provide publishing services, and the organisation MathOA has helped with the transition. MathOA is a sister organisation to PsyOA, which I am chair of. We hope that the information resources we’ve created at PsyOA, MathOA, and the umbrella site FairOA will help many more communities of scholars to switch to Fair Open Access.

One of the obstacles to flipping is fear of the unknown. A specific fear is that other journal management systems (JMSes) might not be as full-featured or easy to use as the JMS provided by one’s current publisher. To address this, together with a few scholars at PKP (creator of Open Journal Systems) and elsewhere, we would like to do a project comparing and contrasting the features and ease of use of different JMSes. This might be a good project for a master’s or PhD student in library science. If you have relevant expertise and such students, please get in touch.