Introduction to reproducibility

A quick intro to reproducibility I’ve drafted for teaching, particularly for research students. Feedback welcome.

Good science is cumulative

Scientists seek to create new knowledge, often by conducting an experiment or other research study.

But science is much more than doing studies or analyzing data. Critical to the scientific enterprise is communication of what was done, and what was found.

[Image: Godfrey Kneller’s 1689 portrait of Isaac Newton]

Isaac Newton, who formulated the laws of motion and gravity, wrote that “If I have seen further it is by standing on the shoulders of giants.” Newton knew that science is cumulative – we build on the knowledge and results of previous researchers.

Robert Merton, a sociologist of science, described patterns of behavior and belief, or norms, that are widely shared by scientists. One of these, critical to ensuring that science is cumulative, is the norm of communalism. Communalism refers to the notion that scientific methodologies and results are not the property of individuals, but rather should be shared with the world.

Knowing the details of a previous study is important for:

  1. Understanding the significance of its results

  2. Building on its methodology

  3. Confirming its findings

In some ways, this last purpose is the most fundamental. But across diverse sciences, ensuring that this can be done, as well as actually doing it, has been neglected. This is the issue of reproducibility.

Reproducibility

Another scientific norm important to achieving reproducibility was dubbed organized skepticism by Merton. The critical community provided by other researchers is thought to be key to the success of science. The Royal Society of London for Improving Natural Knowledge, more commonly known as simply the Royal Society, was founded in 1660 and established many of the practices we today associate with science worldwide. The Latin motto of the Royal Society, “Nullius in verba”, is often translated as “Take nobody’s word for it”. 

Anyone can make a mistake, and most or all of us have biases, so scientific claims should be verifiable. The historian of science David Wootton has written that “What marks out modern science is not the conduct of experiments”, but rather “the formation of a critical community capable of assessing discoveries and replicating results.”

Types of reproducibility

Assessing discoveries and replicating results can involve two distinct types of activities. One involves examining the records associated with a study to check for errors. The second involves attempting to redo the study to see whether it yields similar data that support the original claim.

The first type of activity is often referred to today with the phrase computational reproducibility. The word “computational” refers to taking the raw observations or data recorded for a study and re-doing the analysis that most directly supports the claims made.
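To make “computational reproducibility” concrete, here is a minimal sketch of what such a check might look like in code. Everything in it (the file name, column names, and the reported statistic) is invented for illustration:

```python
# Minimal sketch of a computational reproducibility check. The study,
# filenames, and reported value are all hypothetical.
import pandas as pd
from scipy import stats

data = pd.read_csv("raw_trial_data.csv")       # the study's archived raw data
a = data.loc[data["condition"] == "A", "score"]
b = data.loc[data["condition"] == "B", "score"]

t, p = stats.ttest_ind(a, b)                   # redo the key analysis

reported_t = 2.31  # value claimed in the published paper (invented here)
print(f"recomputed t = {t:.2f}; paper reported t = {reported_t}")
print("match" if abs(t - reported_t) < 0.01 else "discrepancy - investigate")
```

If the recomputed statistic matches what the paper reports, the analysis is computationally reproducible; if not, there is an error somewhere to track down.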

The second activity is often referred to as replication. If a study is redone by collecting new data, does this replication study yield similar results to the original study? If very different results are obtained, this may call into question the claim of the original study, or indicate that the original study is not one that can be easily built upon.

Sometimes the word empirical is put in front of reproducibility or replication to make it clear that new data are being collected, rather than referring to computational reproducibility.

The replication crisis

The importance of reproducibility, in principle, has been recognized as critical throughout the history of science. In practice, however, many sciences have failed to adequately incentivize replication. This is one reason (later we will describe others) for the replication crisis.

The replication crisis refers to the discovery of, and subsequent reckoning with, the poor success rates of efforts to computationally reproduce or empirically replicate previous studies.

[Figure: survey results on the replication crisis]

The credibility revolution

The credibility revolution will be used in this unit to refer to the efforts by individual researchers, societies, scientific journals, and research funders to improve reproducibility. This has led to:

  1. New best practices for doing individual studies

  2. Changes in how researchers and their funding applications are evaluated

  3. Greater understanding of how to evaluate the credibility of the claims of individual studies

The word credibility refers to how believable a theory or claim is. This reflects both how plausible the claim was before one encountered any relevant evidence and the strength of the evidence for it. Thus if a claim is highly credible, the probability that it is true is high. The phrase credibility revolution is meant to signify that scientific reforms, related to reproducibility among other things, have boosted the credibility of many scientific theories and claims.
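This prior-plus-evidence framing is essentially Bayesian: credibility after a study combines prior plausibility with the strength of the evidence. A toy numerical sketch, with all values invented for illustration:

```python
# Toy Bayesian illustration of "credibility" (all numbers invented).
prior = 0.10  # plausibility of the claim before seeing any evidence

# How much more likely the observed evidence is if the claim is true
# than if it is false (a likelihood ratio; hypothetical value):
likelihood_ratio = 9.0

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"Credibility after the evidence: {posterior:.2f}")  # -> 0.50
```

Note how an initially implausible claim (10%) ends up only at even odds despite fairly strong evidence; this is one reason surprising findings warrant replication.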


Just a list of our VSS presentations for 2019

Topics this year: Visual letter processing; role of attention shifts versus buffering (mostly @cludowici, @bradpwyble); reproducibility (@sharoz); visual working memory (mostly @will_ngiam)

Symposium contribution, 12pm Friday 17 May: Reading as a visual act: Recognition of visual letter symbols in the mind and brain

Implicit reading direction and limited-capacity letter identification

ebmocloH xelA, The University of Sydney
(abstract now has better wording)
I would like to congratulate you for reading this sentence. Somehow you dealt with a severe restriction on simultaneous identification of multiple objects – according to the influential “EZ reader” model of reading, humans can identify only one word at a time. Reading text apparently involves a highly stereotyped attentional routine with rapid identification of individual stimuli, or very small groups of stimuli, from left to right. My collaborators and I have found evidence that this reading routine is elicited when just two widely-spaced letters are briefly presented and observers are asked to identify both letters. A large left-side performance advantage manifests, one that is absent or reversed when the two letters are rotated to face to the left instead of to the right. Additional findings from RSVP (rapid serial visual presentation) lead us to suggest that both letters are attentionally selected simultaneously, with the bottleneck at which one letter is prioritized sited at a late stage – likely at an identification or working memory consolidation process. Thus, a rather minimal cue of letter orientation elicits a strong reading direction-based prioritization routine. Our ongoing work seeks to exploit this to gain additional insights into the nature of the bottleneck in visual identification and how reading overcomes it.

 

Is there a reproducibility crisis around here? Maybe not, but we still need to change.

Alex O Holcombe1, Charles Ludowici1, Steve Haroz2

Poster 2:45pm Sat 18 May

1School of Psychology, The University of Sydney

2Inria, Saclay, France

Those of us who study large effects may believe ourselves to be unaffected by the reproducibility problems that plague other areas. However, we will argue that initiatives to address the reproducibility crisis, such as preregistration and data sharing, are worth adopting even under optimistic scenarios of high rates of replication success. We searched the text of articles published in the Journal of Vision from January through October of 2018 for URLs (our code is here: https://osf.io/cv6ed/) and examined them for raw data, experiment code, analysis code, and preregistrations. We also reviewed the articles’ supplemental material. Of the 165 articles, approximately 12% provide raw data, 4% provide experiment code, and 5% provide analysis code. Only one article contained a preregistration. When feasible, preregistration is important because p-values are not interpretable unless the number of comparisons performed is known, and selective reporting appears to be common across fields. In the absence of preregistration, then, and in the context of the low rates of successful replication found across multiple fields, many claims in vision science are shrouded by uncertain credence. Sharing de-identified data, experiment code, and data analysis code not only increases credibility and ameliorates the negative impact of errors, it also accelerates science. Open practices allow researchers to build on others’ work more quickly and with more confidence. Given our results and the broader context of concern by funders, evident in the recent NSF statement that “transparency is a necessary condition when designing scientifically valid research” and “pre-registration… can help ensure the integrity and transparency of the proposed research”, there is much to discuss.
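As an aside, the kind of search the abstract describes is easy to sketch. The snippet below is a hypothetical reconstruction, not our actual script (which is at https://osf.io/cv6ed/); the folder name and keyword list are invented:

```python
# Minimal sketch of searching article full texts for URLs and open-practice
# keywords. A hypothetical reconstruction; the real code is at
# https://osf.io/cv6ed/. Folder name and keywords are invented.
import re
from pathlib import Path

URL_PATTERN = re.compile(r"https?://\S+")
KEYWORDS = ["osf.io", "github", "preregist", "raw data", "analysis code"]

def scan_article(path):
    """Return the URLs in one article's text, plus any keyword hits."""
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    urls = URL_PATTERN.findall(text)
    hits = [k for k in KEYWORDS if k in text.lower()]
    return urls, hits

# Example: scan a folder of article full texts extracted to .txt files
for article in sorted(Path("jov_2018_articles").glob("*.txt")):
    urls, hits = scan_article(article)
    if urls or hits:
        print(article.name, len(urls), "URLs; keywords:", hits)
```

Flagged articles still need to be checked by hand, of course, to confirm that a URL actually points to raw data, code, or a preregistration.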

 

Talk Saturday 18 May 2:30pm

A delay in sampling information from temporally autocorrelated visual stimuli
Chloe Callahan-Flintoft1, Alex O Holcombe2, Brad Wyble1
1Pennsylvania State University
2University of Sydney
Understanding when the attentional system samples from continuously changing input is important for understanding how we build an internal representation of our surroundings. Previous work looking at the latency of information extraction has found conflicting results. In paradigms where features such as color change continuously and smoothly, the color selected in response to a cue can be one presented as much as 400 ms after the cue (Sheth, Nijhawan, & Shimojo, 2000). Conversely, when discrete stimuli such as letters are presented sequentially at the same location, researchers find selection latencies under 25 ms (Goodbourn & Holcombe, 2015). The current work proposes an “attentional drag” theory to account for this discrepancy. This theory, which has been implemented as a computational model, proposes that when attention is deployed in response to a cue, smoothly changing features temporally extend attentional engagement at that location whereas a sudden change causes rapid disengagement. The prolonged duration of attentional engagement in the smooth condition yields longer latencies in selecting feature information.
In three experiments participants monitored two changing color disks (changing smoothly or pseudo-randomly). A cue (white circle) flashed around one of the disks. The disks continued to change color for another 800 ms. Participants reported the disk’s perceived color at the time of the cue using a continuous scale. Experiment 1 found that when the color changed smoothly there was a larger selection latency than when the disk’s color changed randomly (112 vs. 2 ms). Experiment 2 found this lag increased with an increase in smoothness (133 vs. 165 ms). Finally, Experiment 3 found that this later selection latency is seen when the color changes smoothly after the cue but not when the smoothness occurs only before the cue, which is consistent with our theory.

 

Poster 2pm 20 May

Examining the effects of memory compression with the contralateral delay activity
William X Ngiam1,2, Edward Awh2, Alex O Holcombe1
1School of Psychology, University of Sydney
2Department of Psychology, University of Chicago
While visual working memory (VWM) is limited in the amount of information that it can maintain, it has been found that observers can overcome the usual limit using associative learning. For example, Brady et al. (2009) found that observers showed improved recall of colors that were consistently paired together during the experiment. One interpretation of this finding is that statistical regularities enable subjects to store a larger number of individuated colors in VWM. Alternatively, it is also possible that performance in the VWM task was improved via the recruitment of LTM representations of well-learned color pairs. In the present work, we examine the impact of statistical regularities on contralateral delay activity (CDA) that past work has shown to index the number of individuated representations in VWM. Participants were given a bilateral color recall task with a set size of either two or four. Participants also completed blocks with a set size of four where they were informed that colors would be presented in pairs and shown which pairs would appear throughout, to encourage chunking of the pairs. We find this explicit encouragement of chunking improved memory recall but that the amplitude of the CDA was similar to the unpaired condition. Xie and Zhang (2017; 2018) previously found evidence that familiarity produces a faster rate of encoding as indexed by the CDA at an early time window, but no difference at a late time window. Using the same analyses on the present data, we instead find no differences in the early CDA, but differences in the late CDA. This result raises interesting questions about the interaction between the retrieval of LTM representations and what the CDA is indexing.

 

Poster Tues 21 May 2:45pm

Selection from concurrent RSVP streams: attention shift or buffer read-out?
Charles J H Ludowici, Alex O. Holcombe
School of Psychology, The University of Sydney, Australia
Selection from a stream of visual information can be elicited via the appearance of a cue. Cues are thought to trigger a time-consuming deployment of attention that results in selection for report of an object from the stream. However, recent work using rapid serial visual presentation (RSVP) of letters finds that letters presented just before the cue are reported at a higher rate than is explainable by guessing. This suggests the presence of a brief memory store that persists rather than being overwritten by the next stimulus. Here, we report experiments investigating the use of this buffer and its capacity. We manipulated the number of RSVP streams from 2 to 18, cued one at a random time, and used model-based analyses to detect the presence of attention shifts or buffered responses. The rate of guessing does not seem to change with the number of streams. There are, however, changes in the timing of selection. With more streams, the stimuli reported are later and less variable in time, decreasing the proportion reported from before the cue. With two streams – the smallest number of streams tested – about a quarter of non-guess responses come from before the cue. This proportion drops to 5% in the 18 streams condition. We conclude that it is unlikely that participants are using the buffer when there are many streams, because of the low proportion of non-guesses from before the cue. Instead, participants must rely on attention shifts.
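For readers curious about the “model-based analyses” mentioned above: analyses of this sort typically fit a mixture of a uniform guessing distribution and a latency distribution over serial-position errors. The sketch below illustrates that general approach on simulated data; it is not our actual analysis code, and all parameter values are invented:

```python
# Rough sketch of a mixture model for RSVP reports: a uniform "guessing"
# component plus a Gaussian "selection" component over serial-position
# errors (position of the reported item relative to the cued item).
# Illustration of the general approach only; all values are invented.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

positions = np.arange(-5, 6)  # possible serial-position errors

def neg_log_likelihood(params, data):
    p_guess, mu, sigma = params
    gauss = stats.norm.pdf(positions, mu, sigma)
    gauss = gauss / gauss.sum()                    # discretize over positions
    uniform = np.ones(len(positions)) / len(positions)
    mix = p_guess * uniform + (1 - p_guess) * gauss
    return -np.sum(np.log(mix[data + 5]))          # shift errors to 0-based indices

# Simulated data: 25% guesses; selection centred half an item after the cue
rng = np.random.default_rng(1)
n_trials = 500
n_guess = int(0.25 * n_trials)
guesses = rng.integers(-5, 6, size=n_guess)
selections = np.clip(np.round(rng.normal(0.5, 1.0, size=n_trials - n_guess)),
                     -5, 5).astype(int)
data = np.concatenate([guesses, selections])

fit = minimize(neg_log_likelihood, x0=[0.3, 0.0, 1.0], args=(data,),
               bounds=[(0.01, 0.99), (-3.0, 3.0), (0.3, 3.0)])
p_guess, mu, sigma = fit.x
print(f"estimated guess rate {p_guess:.2f}, latency {mu:.2f} items, sd {sigma:.2f}")
```

The fitted guessing rate and the mean and spread of the selection component are the quantities discussed in the abstract (rate of guessing, timing of selection, and its variability).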

 

Survey of vision researchers: 2016 results on open access

Salvaged from an early Feb 2016 Google+ posting:
A vision researcher discussion happened on a semi-private email group (CVnet), but you can see some discussion in the visionlist archive (by moving around here: http://visionscience.com/pipermail/visionlist/2016/009312.html), and below you can see the results of the survey.

Dear vision researchers,

A while ago I circulated a survey about open access and publishing, one that was oriented largely towards the issues raised in the initial CVnet emails. The survey was only open for a few weeks, but 380 of you responded.

Here are the raw data: https://docs.google.com/spreadsheets/d/1tfpSVeLflOG4moGvhHlT2SivnW5Rqw-upGrwLZkqEcA/edit?usp=sharing and here is an automatic Google-generated summary: https://docs.google.com/forms/d/1vhKwMkTCpm3DZGq2SGmd8_cNXXBv344Lo8XWtyDQXho/viewanalytics

I don’t want to be seen as biasing interpretation of the survey, but it seems safe to say that the large number of responses, and the data, show that many of us have opinions about these issues and want something done. The first question was “Which financial/organizational aspect of journals should be the community’s top priority?” and of the six options provided, the most popular answer was

“open access”, with 132 responses
“Full academic or professional society control” was 2nd with 78 responses
“Low cost” was 3rd, with 61 responses

To “What should the vision community do NOW?”, 1st was
“Make a change (choosing this will lead to some possible options)” with 353 votes
“Nothing, carry on as normal” was the other option and received 24 votes.

Those 353 pro-change respondents were shown multiple options for change, and could choose more than one. There was a strong vote for several, with the leading ones being
“Encourage the VR Editorial Board to jump ship” with 164 votes and
“Encourage the JoV Editorial Board to jump ship” with 160 votes.
Note there was also significant support for the MDPI Vision journal (137 votes) and
“Redirect our submissions and support to i-Perception” (106 votes).

To “What should the academics on the editorial boards of overpriced journals (be they subscription or open access) do?”,
“Work with the publisher to reform the journal itself” had 214 votes, followed by
“Wait until a majority or supermajority of editors agree to resign, and then resign en masse, with a plan agreed among the editors to join or start something else” with 90 votes

There was one other question, about desired features of journals; please go to the data to check out the options and responses https://docs.google.com/forms/d/1vhKwMkTCpm3DZGq2SGmd8_cNXXBv344Lo8XWtyDQXho/viewanalytics

Given the large number of responses and the overwhelming vote for “make a change” (93%), I hope that the editors of our journals will respond to these survey data and the related CVnet discussion, including the question of how authors without funds can publish in OA journals. Very likely, the editorial boards of journals have been discussing these issues behind the scenes for a few weeks, and it is understandable that reaching consensus on how a journal should respond will take time. As a result, editors’ responses are likely to arrive at different times, resulting in a wandering discussion that will exhaust many of us and might focus criticism or praise overmuch on an individual journal.

So that our discussion is less piecemeal, the CVnet moderator, Hoover Chan, has agreed that if editors send their responses directly to him, he will collate the responses and send them out as a batch on 21 February (3 weeks from now).

Most of the discussion so far has centred on JoV and Vis Res, but there are other vision journals, such as Perception/i-Perception (which it was nice to hear from just now); AP&P; Frontiers Perception Science; MDPI Vision; Multisensory Research; and JEP:HPP. It would be good if we could have responses from all of them.

Perhaps the most salient question raised both by the survey responses and the CVnet discussion is exactly why each journal is as expensive/cheap as it is, particularly its open access option, and whether each journal will provide transparent accounting of costs. Given that the data indicate that “Full academic or professional society control” is a high priority, editors should also comment on the ability of themselves and the rest of us to affect their journal’s policies, features and cost.


Psychonomic Society and Perception/i-Perception on open access

A post from 27 Feb 2016 salvaged from Google+:

A discussion about the high author fees charged by some #openaccess journals brought up many other issues, some of which were included in a survey. Nearly 400 vision researchers responded to the survey, of whom 93% expressed a desire for change. See the detailed response breakdown here: https://plus.google.com/u/0/+AlexHolcombe/posts/71QRT2grZKt .

When I reported these survey results to the community mailing list (CVnet), I invited journal editors and publisher representatives to respond, saying that their responses would be sent out after 3 weeks. Here are their responses:

From: Cathleen Moore (cathleen-moore@uiowa.edu)

I am writing on behalf of the Psychonomic Society in regard to the recent journal survey results that have been distributed throughout our community. We would like to offer the following statement as the outcome of discussions within the Executive Committee and the Publications Committee. We would be grateful if you would include this in your communications to the community regarding the recent survey results and surrounding discussion.

The mission of the Psychonomic Society is “…the communication of scientific research in psychology and allied sciences” (http://www.psychonomic.org/about-us). That is, communication of the science is the very purpose for our existence. As such, the Psychonomic Society is committed to making membership in the society, the annual meeting, and all of our journals affordable for all. Open-access publishing is one aspect of the Society’s commitment, as evident in the establishment of our new open-access journal Cognitive Research: Principles and Implications.

Discussions about open access and other models of publishing are ongoing, and will be part of the formal agenda at future meetings of the Governing Board later this year.

Sincerely,

Cathleen Moore
Chair, Governing Board

In consultation with:
Aaron Benjamin, Chair Elect
Bob Logie, Past Chair
Fernanda Ferreira, Publications Committee Chair
——————————————————-
From: Dennis Levi (dlevi@berkeley.edu)

The topic of open access will be a major discussion issue for the JOV [Journal of Vision] board meeting at VSS in May.
————————————————————————–
From: Timothy Meese (t.s.meese@aston.ac.uk)

Dear Vision Scientists

We at i-Perception and SAGE are pleased to respond to the issues raised in the recent discussion of open access on CVNet. We circulated a general response over CVNet shortly before Alex Holcombe circulated the results of his survey and the invitation to respond to those. We have appended our earlier circular to this email for completeness and for contact details.

** SURVEY RESPONSE **

OPEN ACCESS (OA)
i-Perception (iP) is a fully open access journal with papers published under a CC-BY license. As the survey was about OA, most of our response relates to iP. However, iP’s sister print journal Perception (P) includes some material that is also open access. We list that here for completeness: Editorials, the Perception lecture (from ECVP), some conference abstracts, and some of the back archive. The journal Perception can be accessed here: http://pec.sagepub.com

COSTS
Costs at iP are clearly competitive (375 GBP [~568 USD] for regular articles; see below for further details). We can confirm that these costs will be fixed through 2016. They will be reviewed in 2017 to ensure ongoing viability for all stakeholders.

JOURNAL REFORM AND FULL ACADEMIC CONTROL
Regarding academic control, iP Chief Editors meet with SAGE three times a year, and there is also an annual Editorial/Advisory Board meeting at ECVP, with a representative from SAGE. It is our impression, confirmed during our board meetings at ECVP, that iP is not viewed as overpriced and that reform on this matter is not being sought at this time. However, we add that the Chief Editors will do what they can to keep costs down. We would also like to point out that the Chief Editors are always open to suggestions (by email or in person), which can be taken forward to management board meetings for further discussion. Although subject to certain constraints (e.g. the limitations of generic company software packages such as ScholarOne), we have found SAGE to be very accommodating to our requests and suggestions thus far.

TRANSPARENT ACCOUNTING
The sum that an author pays for publication has two components: 1. Internal production costs (non profit). 2. Profit. For large organisations, isolating item 1 is quite tricky—for example, should this be averaged over all the publisher’s journals or just the relevant journal? As different journals adopt different approaches, comparisons of components 1 and 2 are likely to be problematic. However, the TOTAL cost that an author pays (page charges, OA/CC-BY charges, any other charges and fees) allows for unambiguous comparisons.

OPEN REVIEW AND POST-PUBLICATION PEER REVIEW
At present SAGE do not do this for any of their journals. There is no immediate plan to do so, but SAGE are keeping their eye on the situation for open review. As for post-publication peer review, we do have a section in iP called ‘Journal Club’ which is intended for published discussion of other people’s publications. This, we believe, is the best way to implement relevant post-publication peer review.

REGISTERED REPORT FORMAT
This allows authors to register the design of their study before data are gathered. This can be valuable in justifying the chosen statistical analysis and also for reporting null results. This is something that several SAGE journals do and will be a subject for discussion between SAGE and the chief editors of iP at their next managerial board meeting in June.

OPENNESS BADGES (COS) https://osf.io/tvyxz/wiki/home/
This was first raised at our Editorial/Advisory Board meeting held at ECVP in 2013 and then discussed in detail by that board the following year after circulating a detailed paper on the matter. While the value of these badges was acknowledged for other journals and disciplines, their value for vision/perception research was viewed as questionable and there was little or no enthusiasm at the meeting for adopting the COS badges at that time. However, that decision is open to review, particularly in light of the item above.

OPEN JOURNAL TIME FRAMES
At present, SAGE do not report submission and acceptance dates for articles in iP, but this is something that will change in the near future. We are also looking into whether it is possible to make average review times for the journal available on the website.

COPE MEMBERSHIP
COPE membership for Perception and i-Perception is currently being processed and we expect to be able to acknowledge this on the website very soon.

COPYEDITING
SAGE provide copyediting and typesetting. Authors see the copyedited and typeset proofs for any final corrections before publication.

LATEX
We will be able to accept LaTeX submissions very soon.

ALERTS
To register with iP and/or sign up to receive an email alert for each new issue go here: http://ipe.sagepub.com/cgi/alerts

Signed
Chief editors of Perception and iPerception:
Tim Meese
Peter Thompson
Frans Verstraten
Johan Wagemans
SAGE:
Ellie Craven

________________________________________________
APPENDIX A (The email below was first circulated over CVNet on 1st February 2016)

There is a new OA journal already…

…It is i-Perception.

WHAT IT IS
i-Perception (or iPerception) was founded in 2010, and is the OA, peer-reviewed, online sister journal to the long-running print journal, Perception, founded by Richard Gregory in 1972.

BACKGROUND
For many years both journals were owned by the UK publisher Pion but have recently been taken on by SAGE. As editors, we have enjoyed positive relations with both publishers regarding all aspects of the journal. Although the shift to SAGE has meant the loss of the much beloved submission system, PiMMS (we now use ScholarOne), and the ‘paper icons’ on the contents page of iPerception, we are now enjoying the benefits of efficiency and outreach that come with a larger publisher, and one that we have found to be sensitive to the needs and views of the journals’ editors and authors.

Perception and iPerception (we often abbreviate the two journals to PiP) are journals run by vision/perceptual/sensory scientists for vision/perceptual/sensory scientists. For example, PiP have a long-standing history of supporting major vision conferences (APCV, ECVP, VSS), but particularly ECVP, where they are the chosen outlet for published conference abstracts and sponsors of the keynote ‘Perception’ lecture on the opening evening.

REMIT
The remit of both journals is the same: any aspect of perception (human, animal or machine), with an emphasis on experimental work; but theoretical positions, reviews and comments are also considered for publication (see website below for details of the various paper categories). Although the majority of the papers published are on visual perception, all other aspects of perception are also covered, including multi-modal studies, giving it a broader remit than either VR or JoV. PiP is sometimes thought to focus on phenomenology (owing to the interests of its founding editor, we think), but hardcore psychophysics is also found within its pages, and much of what is published in VR or JoV would not be out of place in PiP.

EDITORIAL BOARD
Although the two journals are independent (e.g. they have their own impact factors; the IF for iP is 1.482), they are overseen by a common international editorial board who can be found by following the third link at the bottom of this page. An editorial board meeting takes place annually at ECVP.
The four chief editors (based in Europe/Australia, see below) and the administrative manager (Gillian Porter, based in Bristol, UK) hold managerial board meetings three times a year with SAGE (based in London, UK) and enjoy a close working relationship with an open door (email) policy.

COPYRIGHT AND OPEN ACCESS
Papers in iP are published under a CC-BY license (https://en.wikipedia.org/wiki/Creative_Commons_license) (this is Gold OA). Papers in PiP are also branded Romeo Green (http://www.sherpa.ac.uk/romeoinfo.html). (This is a different branding system from the more familiar Gold/Green OA terminology, and the two should not be confused.) Romeo Green status enables authors to archive their accepted version in their institutional repository, their departmental website, and their own personal website immediately upon acceptance. This is the most open publishing policy possible, of which SAGE (and we) are justly proud.
If your library does not stock Perception, you might think to request that they do—the bundles with which it is included are likely to have changed with the change in publisher (to SAGE).

COSTS
There is no cost to publishing in Perception.
The cost for publishing in iPerception is a single charge (on acceptance of the paper), depending on paper type as follows:
Regular Articles = 375 GBP (~ 568 USD)
Short Reports = 200 GBP (~ 303 USD)
Translation Articles = 200 GBP (~ 303 USD)
Short and Sweet = 150 GBP (~227 USD)
Journal Club = 150 GBP (~227 USD)
iReviews = no charge.

VAT (value added tax) at 20% is added to the costs above if the paying party is in the European Union, to comply with European Law. Non-UK institutions are exempt from VAT if they can provide a VAT registration number.

THE FUTURE
We have been watching the debate on OA over CVNet with interest; we agree that a low-cost OA option is desirable for our community—preferably one based around a dedicated journal so as to provide a sense of ‘home’ rather than an unfocussed (rebellious, even) out-camp—and we hope you will join us to help bring iP towards the forefront of that endeavour.

JOURNAL LINKS

To see content of iPerception follow the link below…
http://ipe.sagepub.com/

To submit to Perception or iPerception follow the link below to ScholarOne…
https://mc.manuscriptcentral.com/i-perception

To see details about iPerception follow the link below…
https://uk.sagepub.com/en-gb/eur/i-perception/journal202441

Signed (Chief editors of Perception and iPerception)
Tim Meese
Peter Thompson
Frans Verstraten
Johan Wagemans


Is red special? There’s a big literature on this, but

An old post salvaged from the dying Google+:

Is red special? There’s a big literature on this, but I haven’t seen any studies that investigate this with quality methodology.

A number of studies purport to investigate whether the color red is special, like whether “red test materials lower performance”. Two problems: 1) these studies may be impossible to replicate because the authors tend not to report the colors in a device-independent fashion! 2) I fail to see how you can compare a single red (or even two or three) to a single green or blue and then conclude something general about red, e.g. “red things are easier to remember”.

We made this point in a recent critical commentary on a particular offending paper: https://f1000research.com/articles/5-1778/v1

Below I’ve pasted bits of some reviews I’ve had to write explaining these issues in more detail, in the hope that future studies will improve their methodology:

This is a typical psychology manuscript on color in that it makes the usual mistakes, primarily mistakes associated with not specifying the colors in a device-independent fashion. The cornerstone of science is (supposedly) replicability, and for a study to be replicable the critical characteristics of the stimuli must be reported. This manuscript, like many others in cognitive and social psychology, reports colors in such a way that one could only replicate the colors if one had the exact equipment (monitors) that were used in this study. That is, the colors are reported as RGB coordinates. Ordinarily I would recommend rejection for this because it means the manuscript did not meet the PLoS ONE criterion of “3. Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail.” For some sorts of studies, I think it is somewhat appropriate to let this slide, because most studies aren’t actually making claims about particular colors but rather just choosing various colors whose values aren’t critical. However in this case the study is actually about color!
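To make the fix concrete: converting RGB to device-independent coordinates requires knowing, or assuming, the display’s characteristics. A minimal sketch, assuming a calibrated sRGB display with a D65 white point (and that assumption is exactly the point: without it, RGB numbers alone specify nothing):

```python
# Convert an 8-bit RGB triplet to CIE 1931 xyY, assuming a calibrated sRGB
# monitor (D65 white point). Without that assumption, or a spectroradiometer
# measurement, RGB values alone do not specify a color.

def srgb_to_xyY(r8, g8, b8):
    # Undo the sRGB transfer function ("gamma") to get linear light
    def linearize(c8):
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(c) for c in (r8, g8, b8))

    # Standard sRGB -> CIE XYZ matrix (D65)
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    s = X + Y + Z
    return X / s, Y / s, Y   # chromaticity x, y and luminance factor Y

# e.g. a "red" stimulus reported only as RGB (255, 0, 0):
print(srgb_to_xyY(255, 0, 0))  # -> approx (0.640, 0.330, 0.213)
```

Reporting the resulting CIE coordinates (or, better, measuring the stimulus directly) is what allows another lab with different hardware to reproduce the colors.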

A general point about papers that study putative effects like the difference between red and other colors is that it may be folly to compare one particular red to one particular green and one particular white, as this study did, and then claim that you showed something about the difference between red and green! There are hundreds of reds and hundreds of greens. Why would one think that comparing a particular red to a particular green would generalize to all such pairs, or even to all the other close-to-focal or unique reds and greens? To establish a claim like “red decreases performance”, rather than “red with CIE coordinates x,y decreases performance relative to green with CIE coordinates x,y”, seems to me to require testing a lot of reds and a lot of greens.

Another unfortunate aspect of the effect reporting here, one that I frequently see in psychology manuscripts, is the use of only noise-scaled (standardized) effect sizes. These measures, scaled by the random variance in the experiment, were primarily designed for areas where the meaning of an absolute change in test scores is unknown. For a score on a standardized test, scoring a certain absolute amount lower or higher actually has a somewhat-known meaning. If psychology is ever to be cumulative, it should use these absolute numbers rather than reporting only numbers scaled by the haphazard error variance that happened to occur in that particular experiment with that particular population. You may have more noise in your study because of more distractible participants or a more heterogeneous population of participants, and this would shrink the noise-scaled effect size. The main thing of interest should be what you are actually measuring, the actual difference in scores, not the difference in scores divided by variance that includes things like the general heterogeneity of the groups.
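A toy numerical illustration of the problem, with invented numbers: the identical raw difference in scores shrinks as a noise-scaled effect size when the sample happens to be noisier:

```python
# Toy illustration (invented numbers): the same 5-point raw difference
# in test scores yields different standardized effect sizes depending
# on how noisy the sample happens to be.
mean_red, mean_green = 70.0, 75.0
raw_difference = mean_green - mean_red   # 5 points in both scenarios

for label, sd in [("homogeneous sample", 5.0), ("heterogeneous sample", 15.0)]:
    cohens_d = raw_difference / sd       # standardized (noise-scaled) effect size
    print(f"{label}: raw difference = {raw_difference:.0f} points, d = {cohens_d:.2f}")
# -> d = 1.00 for the quiet sample, d = 0.33 for the noisy one
```

The raw difference is the stable, cumulative quantity; the standardized one conflates the effect with whatever heterogeneity that particular sample happened to have.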

Do dogs have color categories?

An old post salvaged from the dying Google+

I used to think diagrams like the above were probably accurate in showing dichromats, like dogs, having a gradient between only two qualitatively distinct colors at most (of course, what those particular colors are qualitatively is unknowable; surely they don’t feel like our blue and yellow). But from Wachtler, Dohrmann, & Hertel (2004) I learned that human dichromats claim to see not a smooth continuum, but multiple qualitatively distinct bands.

Some of the qualitatively different colors we see in the spectrum reflect post-receptoral mechanisms that dogs might also have. If so, you should be able to find a boost in dog learning performance (but not discrimination, which reflects the receptor space) for two colors on either side of a band boundary. I don’t know if this has been done.

 

Comment by Jonathon Cohen of UCSD: A really interesting paper arguing for a particular version of this kind of revisionism about the standard line on what the world looks like to dichromats is Justin Broackes, “What Do the Colour-blind See?”, in J. Cohen & M. Matthen, eds., Color Ontology and Color Science (MIT Press, 2010). Highly recommendable.

Planning for PlanS

Several European funders announced Plan S, which suggests that several European nations will ban grantees from publishing in paywalled journals (including hybrid journals that allow one to make an article open access for a fee); that includes many journals that are owned by societies but published by large subscription-based publishers. This is to begin in 2020.

Many open access journals charge an APC, or article processing charge. Open access increases dissemination and readership, but APCs can shut out authors without funds. This issue is one reason I favor the overlay journal model (here is an example), which can operate at very low cost. For overlay journals, the manuscript uploading and hosting issues are off-loaded to servers such as PsyArxiv. Management of the peer review flow can be handled for free by university-hosted OJS software (I think; I’m not aware of an overlay journal being managed by OJS – anyone know of any?), or with an external professional service such as Scholastica, which charges $10/article (here is Scholastica’s entry in our guide to low-cost publishers). At a cost that low, sufficient funds should be available from various sources, such that authors without funds don’t have to pay anything.

Unfortunately, many societies have become dependent on money that comes from restricting dissemination of their members’ research. One can anticipate a lot of resistance to open access from these societies for that reason. AAAS, which publishes Science, has already resisted. For societies that are less conservative (such as the Association for Psychological Science, for which I am an associate editor), how should they be lobbied? What realistic goal should we be pushing for? I’m still thinking through the path(s) that should be taken.

Plan S has yet to be fully spelled out. It is possible (and hoped by many) that researchers will be able to satisfy the mandate via green OA (uploading their manuscript to a server such as PsyArxiv, or to their institutional repository) rather than via a prohibition on publishing in a certain type of journal. The money that was previously going to fat subscription publishers, sometimes in the form of APCs, would still be cut off, and the alternative publishing infrastructure associated with green OA would be boosted. This would hasten a transition away from dependence on journals for dissemination, and thereby lower the cost that journals can charge, whether as a subscription or as an APC. We can anticipate that peer review facilities, both pre- and post-publication, will become more and more available for “preprint” servers, unleashing lower costs as well as new innovation in what a journal is.

FYI, PsyOA.org is a resource (together with LingOA and MathOA) that we created to help journals flip to open access. And Publishing Reform is an open forum for discussion of this and many other issues.