Anne Treisman and feature integration theory

Perhaps the rumors of Anne Treisman’s passing are greatly exaggerated. I hope they are [UPDATE: I’ve gotten confirmation they sadly aren’t]. Regardless, in this era of lists of most-influential psychologists that do not include her, it is a good time to reflect on her influence.

Anne Treisman studied during what she described as “the cusp of the cognitive revolution”. Her tutor (the instructor leading her very small classes at Cambridge) was Richard Gregory, probably one of the greatest educators of all time in the field of perception, as well as an excellent researcher. Gregory, I imagine, would have embraced the cognitive approach to understanding the mind as a refreshing alternative to behaviorism, which ran so contrary to the tradition of how visual perception was understood. During her PhD studies, Treisman was influenced by Donald Broadbent’s book describing a filter model of selective attention.

Two decades after completing her PhD, Anne Treisman proposed the theory that was and is, by a wide margin I believe, the most influential theory of attention. I still struggle with its implications today. Just yesterday I submitted a conference abstract (pasted below) whose first sentence quotes Treisman’s 1980 paper on this “feature integration theory of attention”.

It is just astounding that such a specific theory (as opposed to a general framework, e.g. Bayesian approaches) has sparked so much interesting research while still remaining a live question itself and seeming to resist simple confirmation or disconfirmation. It eventually brought what is now known as “the binding problem” to the forefront of neuroscience, after more than a decade of work in the psychology of visual perception and visual cognition.

To understand the issues surrounding Treisman’s specific claim that visual attention binds features requires, I think, a richer view of what vision does than any of us may yet possess. I have been struggling with it myself for over twenty years.


The remarkable independence of visual features… delimited

Alex Holcombe, Xiaoqi Xu, & Kim Ransley

Visual features could “float free” if no binding process intervened between sensation and conscious perception (Treisman, 1980). If instead a binding process precedes conscious perception, it should introduce dependencies among the featural errors that one makes. For example, when multiple objects are presented close in space or in time, an erroneous report of one feature from a neighboring object should be associated, more often than chance, with a report of the other feature from that neighboring object. Yet researchers have repeatedly found this not to be true, for features such as color, orientation, and letter identity (Bundesen et al., 2003; Kyllingsbæk and Bundesen, 2007; Holcombe & Cavanagh, 2008; Vul & Rich, 2010). These remarkable findings of free-floating independence raise difficult questions about when and how feature binding occurs. They have inspired surprising conclusions, such as that features are not bound until they enter memory (Rangelov & Zeki, 2014). In two experiments, we find independence of temporal errors when reporting simultaneous letters from two streams that are far apart, much like the independence observed in the literature for other stimuli. But when the streams were presented very close to each other, we found a positive correlation. Experiment 1 found this for English letters and Experiment 2 for Chinese character radicals tested with readers of Chinese. These findings suggest that, in this case at least, a distance-dependent visual process mediates binding and thus that binding is not post-perceptual. In discussion, a broader view of visual feature binding will be offered.
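As an aside for readers unfamiliar with this style of analysis, the logic of the independence test can be sketched with a toy simulation (this is only an illustration, not the analysis used in any of the studies above; the swap probability and feature labels are made up):

```python
import random

def simulate_trials(n_trials, p_swap, bound):
    """Simulate reports of two features (say, color and orientation) of a
    target that has one near neighbor. Each feature is reported from the
    target or, with probability p_swap, erroneously from the neighbor.
    If bound is True, a single swap decision applies to both features
    (the features travel together); if False, each swaps independently."""
    counts = {(f1, f2): 0
              for f1 in ("target", "neighbor") for f2 in ("target", "neighbor")}
    for _ in range(n_trials):
        if bound:
            src1 = src2 = "neighbor" if random.random() < p_swap else "target"
        else:
            src1 = "neighbor" if random.random() < p_swap else "target"
            src2 = "neighbor" if random.random() < p_swap else "target"
        counts[(src1, src2)] += 1
    return counts

def phi_coefficient(counts):
    """Phi correlation between the two features' error indicators
    (did feature 1 come from the neighbor? did feature 2?)."""
    a = counts[("neighbor", "neighbor")]
    b = counts[("neighbor", "target")]
    c = counts[("target", "neighbor")]
    d = counts[("target", "target")]
    num = a * d - b * c
    den = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return num / den if den else 0.0
```

With independent swaps, the phi correlation between the two features’ error indicators hovers near zero; with bound swaps it is strongly positive. A near-zero correlation in human data is the “free-floating independence” signature that the cited studies report.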


A partial solution to the problem of predatory journals, and a new index of journal quality

On Twitter I floated a partial solution to the problem of predatory journals, which I’ll add to here.

If you’ve been in a field for a couple of years, then you’re familiar with the journals most of its research is published in. If you came across a journal that was new to you, you’d probably scrutinise its content and its editorial board before publishing in it, and you’d probably notice if something were a bit dodgy about that journal.

But many users of scientific research do not have much familiarity with the journals of particular specialties and their mores. Sadly, this includes some of the administrators who make decisions about the careers and grants of scholars. It also includes many in countries without a long tradition of being fully integrated with international scholarship, who are now trying to rapidly join the community of scholars publishing in English. Journalists, medical professionals, Wikipedia authors, and policymakers may not have the experience to distinguish good journals from illegitimate ones.

Unfortunately, there is no one-stop shop that scholars, administrators, journalists, or policymakers can consult for an indication of how legitimate a journal is. Predatory journals are common, charging researchers hundreds of dollars to publish an article while providing little to no vetting by reviewers and shoddy publishing services. Their victims may predominantly come from countries trying to jump into international publishing in English for the first time, some of whom receive monetary rewards from their universities for doing so. There are proprietary journal databases, like Thomson Reuters’ Journal Citation Reports, but they cost money and can take years to index new journals. Jeffrey Beall used to maintain an (arguably biased) free list of predatory journals, but for various reasons, including legal ones (see 1, 2), blacklists are probably a bad idea.

What follows is an automated way to create a list of legitimate journals, in other words a whitelist for people to consult. It can’t be fully automated until the scholarly community institutes a few changes, but these are changes that arguably are also needed for other reasons.

Non-predatory, respected journals nearly universally have an editorial board of scholars who have published a significant amount of research in other respected journals. The whitelist would need to establish whether those scholars exist and have published in (other) reputable journals.

Journals have rapidly taken up the ORCiD system of unique researcher identities, asking authors who submit papers to enter their ORCiD number. They should do this for their editorial board members too – journals should add ORCiD numbers to their editorial board lists.

An organization (such as SHERPA, that maintains the SHERPA/ROMEO list of journals and their open access policies) could then pull the editors’ publication lists from ORCiD and create a score, with a threshold for the score indicating that a goodly proportion of the editors have published in other reputable journals. To get this started, existing whitelists of legitimate journals would be used to make sure the journals the editorial board members published in were legitimate.
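As a sketch of how such a score might be computed (the data shapes, function names, and thresholds here are all hypothetical; a real implementation would pull each editor’s publication list from the ORCID API):

```python
def editorial_board_score(editors, reputable_journals, min_pubs=2):
    """Fraction of a journal's editors who have at least `min_pubs`
    publications in journals already on the whitelist.
    `editors` maps editor name -> list of journals they have published in
    (in practice, pulled from each editor's ORCID record)."""
    if not editors:
        return 0.0
    vetted = sum(
        1 for pubs in editors.values()
        if sum(1 for journal in pubs if journal in reputable_journals) >= min_pubs
    )
    return vetted / len(editors)

def looks_legitimate(editors, reputable_journals, threshold=0.6):
    """Whitelist the journal if a goodly proportion of its board is vetted."""
    return editorial_board_score(editors, reputable_journals) >= threshold
```

The existing whitelists would seed `reputable_journals`, and journals passing the threshold would themselves be added, letting the list bootstrap over time.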

A new index of journal quality

The score could also be used as a new indicator of the esteem of journals – if a journal has only highly-cited researchers on its editorial board, it is probably a prestigious journal (science badly needs new indicators of quality, however flawed, to reduce reliance on citation metrics like impact factor). Journals could thus be ranked by the scholarly impact of their editorial board members. This would allow new journals to have prestige immediately, without having to wait the years necessary to establish a strong citation record.

Currently, the reliance on impact factor and its long time lag imposes a high barrier to entry, preventing innovative publishers and journals from competing with older journals. This is also a key obstacle to getting editorial boards to decamp from publisher-owned subscription titles and create new open access journals because, when new, a journal has no citation metrics.

A remaining difficulty is that some predatory journal webpages list the names of researchers on their editorial board who never agreed to be listed. If ORCiD added a field for researchers’ digital-signature public keys, and researchers started using digital signatures, then journals could include on their webpage (and even in their article PDFs) a signed message from each editor certifying that they agreed to be on the editorial board.
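To illustrate the sign-and-verify idea, here is a toy RSA sketch (the tiny textbook key is for demonstration only; a real system would use a proper cryptography library, and the public key would live in the editor’s ORCiD record):

```python
import hashlib

# Toy RSA keypair with textbook-sized primes -- for illustration only.
p, q = 61, 53
n = p * q          # 3233, the public modulus
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % 3120 == 1

def digest(message):
    """Hash the message and reduce it to fit the toy modulus."""
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message, private_exponent=d):
    """The editor signs the certification message with their private key."""
    return pow(digest(message), private_exponent, n)

def verify(message, signature, public_exponent=e):
    """Anyone can check the signature against the editor's public key."""
    return pow(signature, public_exponent, n) == digest(message)
```

An administrator or reader could then check the certification message on the journal’s webpage against the public key listed in the editor’s ORCiD record; a predatory journal listing an editor without consent could not produce a valid signature.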

UPDATE 2 Feb 2018: ORCID has already been in the process of adding a field for editorial affiliation.

 

Psychonomics 2017, our presentations

Was that a shift of attention or binding in a buffer?

Charles J. H. Ludowici; Alex O. Holcombe (presented by Alex)

3:50-4:05 PM Friday 10 November, West Meeting Room 118-120

 

Cueing a stimulus can boost the rate of successfully reporting it. This is usually thought to reflect a time-consuming attention shift to the stimulus location. Recent results support an additional process of “buffering and binding” – stimulus representations have persisting (buffered) activations, and one is bound with the cue. Here, an RSVP stream of letters is presented, with one letter cued by a ring. The presentation times of the reported letters, relative to the cue, are aggregated across trials. The resulting distribution appears time-symmetric and includes more items before the cue than are predicted by guessing. When a central cue is used rather than the peripheral ring, the data no longer favor the symmetric model, suggesting an attention shift rather than buffering and binding. To explore the capacity of buffering in the peripheral cue condition, we vary the number of streams, documenting changes in the temporal dispersion of letters reported and in the time of the letter most frequently reported.

 

Negotiating a capacity limit in visual processing: Are words prioritised in the direction of reading?

by Kim Ransley, Sally Andrews, and Alex Holcombe

poster #1208 [revised title] 6-7.30pm Thursday 9 November

Experiments using concurrent rapid serial visual presentation (RSVP) of letters have documented that the direction of reading affects which of two horizontally-displaced streams is prioritised — in English, the letters of the left stream are better reported, but this is not the case in Arabic. Here, we present experiments investigating whether this left bias occurs when the stimuli are concurrently presented English words. The first experiment revealed a right bias for reporting one of two simultaneously and briefly-presented words (not embedded in an RSVP stream), when the location of one of the words was subsequently cued. An ongoing experiment directly compares spatial biases in dual RSVP of letters with those in dual RSVP of words in the same participants. These findings have implications for understanding the relative roles of hemispheric lateralisation for language and attentional deployment during reading. UPDATE: THE RSVP EXPERIMENTS REPLICATE THE DIFFERENCE BETWEEN LETTERS AND WORDS BUT GO ON TO SHOW THE SAME BIAS (UPPER VISUAL FIELD) WHEN THE STIMULI ARE ARRAYED VERTICALLY RATHER THAN HORIZONTALLY, CONSISTENT WITH READING ORDER. Come by to hear our exciting conclusion!

Can Systems Factorial Technology Identify Whether Words are Processed in Parallel or Serially?

Charles J. H. Ludowici; Alex O. Holcombe

6:00-7:30 PM Friday 10 November poster session

 

To determine the capacity, architecture (serial or parallel), and stopping rule of human processing of stimuli, researchers increasingly use Systems Factorial Technology (SFT) analysis techniques. The associated experiments typically use a small set of stimuli that vary little in their processing demands. However, many researchers are interested in how humans process kinds of stimuli that vary in processing demands, such as written words. To assess SFT’s performance with such stimuli, we tested its ability to identify processing characteristics from simulated response times derived from parallel limited-, unlimited- and super-capacity linear ballistic accumulator (LBA) models, which mimicked human response time patterns from a lexical decision task. SFT successfully identified system capacity with <600 trials per condition. However, for identifying architecture and stopping rule, even with 2000 trials per condition, the power of these tests did not exceed .6. To our knowledge, this is the first test of SFT’s ability to identify the characteristics of systems that generate RT variability similar to that found in human experiments using heterogeneous stimuli. The technique also constitutes a novel form of power analysis for SFT.

Ethics and IRB burden

Hoisted from the comments on Scott Alexander’s ethics/IRB nightmare, an insight I hadn’t seen before:
Most research admins are willing to admit the “winging it” factor among themselves. For obvious reasons, however, you want the faculty and/or researchers with whom you interact to respect your professional judgment…
So of course you’re not going to confess that you don’t really have a clue what you’re doing; you’re just puzzling over these regulations like so many tea leaves and trying to make a reasonable judgment based on your status as a reasonably well-educated and fair-minded human being.
 
What this means in practice is almost zero uniformity in the field. Your IRB from hell story wasn’t even remotely shocking to me. Other commenters’ IRB from just-fine-ville stories are also far from shocking. Since so few people really understand what the regulations mean or how to interpret them, let alone how to protect against government bogeymen yelling at you failing to follow them, there is a wild profusion of institutional approaches to research administration, and this includes huge variations in concern for the more fine-grained regulatory details. It is really hard to find someone to lead a grants or research administration office who has expertise in all the varied fields of compliance now required. It’s hard to find someone with the expertise in any of the particular fields, to be honest.
 
And, to bring home again the absurdity:
 
Nobody expects any harm from asking your co-worker “How are you this morning?” in conversation. But if I were to turn this into a study – “Diurnal Variability In Well-Being Among Office Workers” – I would need to hire a whole team of managers just to get through the risk paperwork and the consent paperwork and the weekly reports and the team meetings. I can give a patient twice the standard dose of a dangerous medication without justifying myself to anyone. I can confine a patient involuntarily for weeks and face only the most perfunctory legal oversight. But if I want to ask them “How are you this morning?” and make a study out of it, I need to block off my calendar for the next ten years to do the relevant paperwork.

A major math journal flips to Fair Open Access

Akihiro Munemasa, Christos Athanasiadis, Hugh Thomas, and Hendrik van Maldeghem share the chief editor role at a journal that’s like many others across mathematics and the sciences. The Journal of Algebraic Combinatorics is a subscription journal published by one of the big, highly-profitable publishers (Springer Nature). But they haven’t been happy with the fees Springer charges for people to read their articles.

At the end of the year, all four will resign, as will nearly everyone on the editorial board. They’re starting a new, open access, free-to-authors journal. The new journal is called Algebraic Combinatorics and will follow Fair Open Access principles. The model for this flip is the precedent of journals like Lingua, where after the editors and editorial board abandoned ship, the community of researchers followed, withdrawing many of their submitted manuscripts from the old journal and submitting them and their new manuscripts to the new journal, Glossa. The reason this happens is that the real value in a high-quality journal like the Journal of Algebraic Combinatorics and (formerly) Lingua does not come from the journal’s publisher but rather from the scholars who send the journal their work, review others’ work, and serve as editors.

The Centre Mersenne will provide publishing services, and the organisation MathOA has helped with the transition. MathOA is a sister organisation to PsyOA, which I am chair of. We hope that the information resources we’ve created at PsyOA, MathOA, and the umbrella site FairOA will help many more communities of scholars to switch to Fair Open Access.

One of the obstacles to flipping is fear of the unknown. A specific fear is that other journal management systems (JMS) might not be as full-featured or easy to use as the JMS provided by one’s current publisher. To this end, with a few scholars at PKP (creator of Open Journal Systems) and elsewhere, we would like to do a project comparing and contrasting the features and ease of use of different JMSes. This might be a good project for a master’s or PhD student in library sciences. If you have some relevant expertise and have such students, please get in touch.

The Fair Open Access principles

Mark Wilson and I wrote the below piece for the Australian Open Access Support Group. The principles we lay out guide our vision for working to create an open access future governed by the community of scholars, not publishers.


In March 2017 a group of researchers and librarians interested in journal reform formalized the Fair Open Access Principles.

The basic principles are:

  1. The journal has a transparent ownership structure, and is controlled by and responsive to the scholarly community.
  2. Authors of articles in the journal retain copyright.
  3. All articles are published open access and an explicit open access licence is used.
  4. Submission and publication is not conditional in any way on the payment of a fee from the author or their employing institution, or on membership of an institution or society.
  5. Any fees paid on behalf of the journal to publishers are low, transparent, and in proportion to the work carried out.

Detailed clarification and interpretation of the principles is provided at the site.

Here, instead, we put these principles into context and explain the motivation behind them.

Our basic thesis is that the current situation, in which commercial publishers own the title to journals, is untenable. Many existing journals were begun by scholars but subsequently acquired by Elsevier, Springer, Wiley, Taylor & Francis and other commercial publishers. These publishers now have a strong incentive to oppose any reform of a journal that would benefit the community of authors, editors and readers but not help the short-term interests of its own shareholders. We have seen several examples of this in recent years; the Wikipedia entry for Elsevier, for example, collects many examples of malfeasance.

The evidence is now overwhelming that the interests of large commercial publishers are not well aligned with the interests of the research community or the general public. Thus Principle 1 is key. Changing a journal to open access but allowing it to be bought easily by Elsevier, for example, would be a pointless exercise. We must decouple ownership of journals from publication services. This will allow editorial boards to shop around for publishers, who must compete on price and service quality rather than exploit a monopolistic position. In other words, a functioning market will arise. Also, journals will have more chance to innovate by not being locked into inflexible and outdated infrastructure.

Principle 2 (authors retaining copyright) seems obvious. Large publishers have claimed that having authors assign them copyright to articles protects the authors. We know of no case where this has happened. However, publishers have prevented authors from reusing their own work!

Open access is of course the main goal and thus the associated principle (Principle 3) needs little explanation. Some authors appear to believe that posting occasional preprints/postprints on their own website is as good as true open access. This is not the case – some of the reasons are licence issues, confusion about the version of record, lack of machine readability, inconsistent searchability, and unreliable archiving.

APCs (Article Processing Charges) are a common feature of open access journals and a main source of income, particularly for “predatory” journals whose sole function is to make money for unscrupulous owners. Large commercial publishers have responded to pressure by offering OA if an APC is paid. These APCs are typically well over US$1000. The fact that over 60% of journals in DOAJ do not charge any APC, and the low APCs of some high-quality newer full-service publishers (such as Ubiquity Press), show that there is much room for improvement. In many fields there is considerable resistance to authors paying APCs directly. For example, in a recent survey of mathematicians that we undertook, published in the European Mathematical Society Newsletter, about a quarter of respondents declared APCs unacceptable in principle and another quarter said they should be paid by library consortia. We do not deny that there are costs associated with OA publishing, and we are not advocating that every journal run using self-hosted OJS and volunteer time (although there are many successful and long-lived journals of that type, like the Journal of Machine Learning Research or the Electronic Journal of Combinatorics, and we feel it still has untapped potential). We aim to ensure that unnecessary barriers are not erected for authors, in particular fees – Principle 4. Any payments on behalf of authors should be made in an automatic way – the idea is for consortia of institutions to fund reasonable operating costs of OA journals directly.

Principle 5 (reasonable and transparent costs) will automatically hold if the journal is sufficiently well run and independent as described by Principle 1, and is included in order to reinforce the point that a competitive market is our main goal, rather than wasting public money to maintain the current profits of publishers. Recently, initiatives such as OA2020 have emphasized large-scale conversion of subscription journals to OA. We believe that if the ownership of the journals isn’t simultaneously changed, there will remain little incentive for publishers to keep prices down. If a researcher believes that a paper in Nature will make her career, will she be denied this by the APC-paying agency if Nature chooses to charge a premium APC? In addition, if journal ownership is not taken from the publishers, they can lock us into their existing technologies, which hinders innovation in scholarly communication.

We are working on disciplinary organizations aimed at helping journals flip from a subscription model to Fair OA, and have so far started LingOA, MathOA and PsyOA. We plan a Fair Open Access Alliance which will include independent journals already practising FairOA principles, flipped journals, and other institutional members with a strong belief in FairOA. The idea is to share resources and harmonize journal practices. We hope that these activities will yield a way forward that avoids sterile debates about Green vs Gold OA. We welcome feedback and offers of help in our wider effort to convert the entire scholarly literature to Fair Open Access.

 

Mark C. Wilson is Senior Lecturer in Computer Science at University of Auckland, and founding member of MathOA Foundation.

Alex O. Holcombe is an Associate Professor of Psychology at The University of Sydney and is a founding member of PsyOA (PsyOA.org).

Publishers prioritize “self-plagiarism” policing over allowing new discoveries

Elsevier and other publishers’ ability to detect “self-plagiarism” is an instance of text mining the world’s scientific literature. Over at two vision researcher mailing lists, there is much irritation at being asked to remove sentences that duplicate sentences that one wrote in previous papers, to describe for example the methodology of a study.

Tom Wallis pointed out that the automated text duplication checks also can be useful for detecting data duplication and fraud. Unfortunately it cannot easily be used for that by others – Elsevier shuts down independent researchers who use their journal subscriptions to investigate fraud (http://onsnetwork.org/chartgerink/2015/11/16/elsevier-stopped-me-doing-my-research/  ; http://www.nature.com/news/text-mining-block-prompts-online-response-1.18819).

Text mining the scientific literature could yield thousands of discoveries, about both fraud and new connections between molecules, genes, and diseases, but it can’t be done when publishers like Elsevier own the content and are trying to monetize it all for themselves (https://blogs.ch.cam.ac.uk/pmr/2017/07/11/text-and-data-mining-overview). “Self-plagiarism” also puts publishers at legal risk as a result of their publishing all our articles under restrictive copyright – it can be a copyright violation for them to publish text identical to that of an earlier paper by the same author that was published by a different publisher. In an email from a publisher to Professor Peter Tse, the issue was framed as protecting the author, but there was also this sentence: “Another issue to be borne in mind is the matter of copyright in extensive text duplication.”

Thus the traditional system of publishers owning the copyright to our work is both preventing new discoveries (which will have to wait until the publishers find a way to use text mining to maintain or increase their profits) and creating ridiculous busywork for ourselves. Yesterday I attended a university press publishing conference where Kevin Stranack of PKP demoed Open Journal Systems version 3, which has already been released and looks significantly easier to use than ScholarOne/Manuscript Central, the system that expensive subscription journals use. The existence of OJS3 allows the creation of journals at very low cost (it already underpins thousands of journals, such as Glossa, which flipped from Elsevier). Unfortunately I seem to be the only researcher at the conference, but I’m tweeting about it and will add some related information to FairOA.org.