Color space pictured and animated (Derrington Krauskopf Lennie)

The Derrington, Krauskopf and Lennie (1984) color space is based on the MacLeod-Boynton (1979) chromaticity diagram. Colors are represented in three dimensions using spherical coordinates that specify the elevation from the isoluminant plane, the azimuth (the hue), and the contrast (as a fraction of the maximal modulation along the cardinal axes of the space).

It’s easier for me to think of a color in Cartesian DKL coordinates, with the dimensions:

  • Luminance or L+M, sum of L and M cone response
  • L-M, difference of L and M cone response
  • S-(L+M), S cone responses minus sum of L and M cone response
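The spherical and Cartesian descriptions are related by the usual coordinate conversion. Here's a minimal Python sketch (the function name and the convention that 0° azimuth lies along the L-M axis are my assumptions; real implementations, such as PsychoPy's, may differ in axis scaling and conventions):

```python
import math

def dkl_sph_to_cart(elevation_deg, azimuth_deg, contrast):
    """Convert spherical DKL coordinates to the three Cartesian axes.

    elevation: angle out of the isoluminant plane (90 deg = pure luminance)
    azimuth:   hue angle within the isoluminant plane (0 deg assumed = +(L-M) axis)
    contrast:  radius, as a fraction of the maximal modulation
    """
    el = math.radians(elevation_deg)
    az = math.radians(azimuth_deg)
    lum   = contrast * math.sin(el)                 # L+M (luminance) axis
    lm    = contrast * math.cos(el) * math.cos(az)  # L-M axis
    s_lum = contrast * math.cos(el) * math.sin(az)  # S-(L+M) axis
    return lum, lm, s_lum
```

So, for example, 90° elevation at full contrast is a pure luminance modulation, while 0° elevation stays in the isoluminant plane.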

The three classes of cones respond a bit to almost all colors, but some reds excite L cones the most, some greens M cones the most, and some blues S cones the most.

I’ve created the movie below (thanks to Jon Peirce and PsychoPy) to show successive equiluminant slices of DKL color space, plotted in Cartesian coordinates. These render correctly on my CRT screen, but the colors will be distorted on any other screen. Nevertheless, it helps you get a feel for the gamut (the colors that can be represented) of a typical CRT at each luminance, where -1 is the minimum luminance of the CRT and +1 is its maximum. The letters R, G, B and the accompanying numbers show the coordinates of the phosphors (each gun turned on by itself).

Derrington AM, Krauskopf J, & Lennie P (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. The Journal of Physiology, 357, 241-265. PMID: 6512691

MacLeod DI, & Boynton RM (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69(8), 1183-1186. PMID: 490231

Position available: postdoc, or highly-qualified research assistant

We invite applications for a research fellowship/postdoctoral research fellowship working with Dr. Alex Holcombe in the School of Psychology at the University of Sydney. The research area is visual psychophysics, and the project involves the perception and attentive tracking of moving objects. One line of experiments will investigate the limits on judging the spatial relationship of moving objects. See http://www.psych.usyd.edu.au/staff/alexh/ for more on the laboratory’s research.

University of Sydney

University of Sydney main quadrangle

The University of Sydney is Australia’s first university and has an outstanding global reputation for academic and research excellence. The School of Psychology is Australia’s first established psychology department, with a proud history of excellence in both its research and its educational activities. You would work with a dynamic community of local vision researchers (see http://www.physiol.usyd.edu.au/span/) and attend seminars and colloquia in perception and related fields.

An essential requirement for the Postdoctoral Research Associate position is a PhD in psychology, vision science, or a similar field, together with a demonstrated ability to conduct vision research (at the Research Associate level, a BSc Honours degree or equivalent is required instead). An understanding of psychophysical and psychology experiment design is also essential. Conducting the experiments will require skill in programming visual perception experiments, and experience analysing data from psychophysical and/or psychology experiments using a command-line tool (as opposed to Excel or SPSS) such as R or MATLAB, or program code in Python, C, etc. Preference may be given to individuals with experience with PsychoPy and R.

The position is full-time, fixed-term for 16 months, subject to the completion of a satisfactory probation period for new appointees. There is the possibility of further offers of employment of up to 12 months, subject to funding and need. Membership of a University-approved superannuation scheme is a condition of employment for new appointees.

Remuneration package: up to $92k p.a. (currently ~$90k USD), consisting of a base salary, leave loading, and up to 17% employer’s contribution to superannuation. Some assistance towards relocation costs and visa sponsorship may be available for the successful appointee if required. The level of appointment will be commensurate with experience and qualifications.

All applications must be submitted via The University of Sydney careers website. Visit http://sydney.edu.au/positions/ and search by the reference number, 3846/1110, for more information and to apply.

CLOSING DATE: 13 January 2011 (11:30PM Sydney time), with interviews to be scheduled in early February.
The University is an Equal Opportunity employer committed to equity, diversity and social inclusion. Applications from equity target groups and women are encouraged. The University reserves the right not to proceed with any appointment.

UPDATE: human resources here gave me the wrong reference number, which has now been corrected above

An interactive turnkey tutorial to give students a feel for brain-style computation

A new version of my 100-minute interactive neural network lesson is available. The lesson webpages guide university-level students through learning and directed play with a connectionist simulator. The outcome is that students gain a sense of how neuron-like processing units can mediate adaptive behavior and memory.

Making the lesson was fairly straightforward, thanks to Simbrain, a connectionist simulator which is the easiest to use of any I’ve seen. After a student downloads the Java-based Simbrain software and double-clicks, she is on her way with menu-driven and point-and-click brain hacking.

New to Simbrain and my lessons this year are animated avatars that depict the movements of the virtual organism controlled by the student’s neural network. This feature, added to the software by Simbrain creator Jeff Yoshimi, provided a nice lift to student engagement compared to previous versions. The first lesson is mainly intended to bring students to understand the basic sum-and-activate functioning of a simplified neuron, and how wiring such units together can accomplish various things. It’s couched in terms of a guy-chasing-girls scenario, one that most university students can relate to. The second lesson gives students a rudimentary understanding of pattern learning and retrieval with a Hebbian learning rule.
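To give a flavor of the two ideas the lessons teach, here is a rough Python sketch; this is my own illustrative example, not Simbrain's actual update rules:

```python
def unit_output(inputs, weights, threshold=0.5):
    """Sum-and-activate: take the weighted sum of the inputs,
    then apply a simple threshold activation function."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if net > threshold else 0.0

def hebbian_update(weight, pre, post, rate=0.1):
    """Hebbian learning: strengthen a connection when the pre- and
    post-synaptic units are active together."""
    return weight + rate * pre * post
```

For example, two active inputs through moderate weights (0.4 each) push the unit over a 0.5 threshold and it fires; a single active input does not. Each co-activation then nudges the connecting weight upward.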

Going through the lessons is certainly not as fun as playing a video game, but it is more interesting and interactive than a conventional university tutorial. In the long term, I hope we can make it more and more like a (educational) video game. Some say that’s the future of education, and whether or not that’s true in general, I think neural networks and computational neuroscience are ideally suited for it.

Development of the Simbrain codebase has been gathering steam, although that may not be apparent from the Simbrain website, because portions of it haven’t been updated for a while. Lately, a few people have jumped in to help with development.

Already the code is developed enough to provide all the functionality needed for school- and university-level teaching. Giving a series of classes with it could easily be done at this point. However, to do so you’d have to take time to develop associated lecture material and refine the example networks. If you don’t have time for that, you should consider simply using my lessons to give students a taste.

The lessons have been battle-tested on a few hundred psychology students at the University of Sydney, without any significant problems. The tutors (aka demonstrators or teaching assistants) took the introductory slides provided and used them to help orient the students before they started following the web-based instructions for the lessons. Contact us if you want to take it further.

technical note: d-prime proportion correct in choice experiments (signal detection theory)

If you don’t understand the title of this post, you almost certainly will regret reading further.

We’re doing an experiment in which one target is presented along with m distracters. The participant tries to determine which is the target, and must respond with their best guess as to which it is. Together, the m distracters + 1 target = the “number of alternatives”.

The plots show the predictions of vanilla signal detection theory for the relationship between probability correct, d-prime, and the number of alternatives. Each distracter is assumed to have a discriminability of d-prime from the target.

signal detection theory relationship among percent correct, d-prime, number of alternatives
The two plots are essentially the inverse of each other.

Note that many studies use two-interval forced choice, wherein the basic stimulus containing the distracters is presented twice, once with the signal, and the participant has to choose which interval contained the signal. In contrast, here I’m showing predictions for an experiment wherein the target with all its distracters is presented only once, and the participant reports which location contained the target.

I should probably add a lapse rate to these models, and generate curves using a reasonable lapse rate like .01.
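Here is a minimal Python sketch of the prediction plotted above (the function names are mine). It numerically integrates dnorm(x - dprime) * pnorm(x)^(m-1) over all possible target strengths x, and treats lapses as random guesses among the m alternatives:

```python
import math

def dnorm(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def pnorm(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_correct(dprime, m, lapse=0.0, lo=-8.0, hi=8.0, n=4000):
    """Predicted proportion correct when the target ~ N(dprime, 1) and each of
    the m-1 distracters ~ N(0, 1): the target wins when its strength exceeds
    every distracter's, so integrate dnorm(x - dprime) * pnorm(x)^(m-1)
    (trapezoidal rule over [lo, hi])."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * dnorm(x - dprime) * pnorm(x) ** (m - 1)
    pc = total * h
    # on a lapse trial, the observer guesses at random among the m alternatives
    return (1 - lapse) * pc + lapse / m
```

As sanity checks, d-prime of 0 gives chance performance (1/m), and for m = 2 the function reproduces the classic two-alternative result, pnorm(dprime / sqrt(2)).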

Later I’ll post the R code, using ggplot, that I made to generate these; email me if I don’t, or if you want it now. UPDATE: the code, including a parameter for lapse rate.

Reference: Hacker, M. J., & Ratcliff, R. (1979). A revised table of d’ for M-alternative forced choice. Perception & Psychophysics, 26(2), 168-170.
# To determine the probability of the target winning, A, use the law of total probability:
#   p(A) = Sum (p(A|B) * p(B)) over all B
# Here, B ranges over all possible target TTC estimates, and p(A|B) is the probability
# that the distracters' TTC estimates are all lower than that target TTC estimate, B.

# x: a candidate target TTC estimate
# The probability that a single distracter's TTC estimate is less than x is pnorm(x):
# the area under the standard normal curve below x.
# m: number of objects, m-1 of which are distracters
# p(A|B) * p(B) = pnorm(x)^(m-1) * dnorm(x - dprime)
# Hacker & Ratcliff (1979) and Elliott (1964) derive this, as did I.
# Jakel & Wichmann say that "numerous assumptions necessary for mAFC" where m > 2,
# but it's not clear whether they're talking about bias only, or also about d'.

# Integrating over all possible target estimates gives the predicted proportion correct:
pCorrect <- function(dprime, m) {
  integrate(function(x) dnorm(x - dprime) * pnorm(x)^(m - 1), -Inf, Inf)$value
}
Show me the data! A step backwards by the Journal of Neuroscience

A growing number of researchers agree that better access is needed to the original data behind published research articles. Of course, the more general point is that not just the original data, but all materials needed to scrutinize the claims of a manuscript, should be available.

The policy of the journal Science is:

After publication, all data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science… Large data sets with no appropriate approved repository must be housed as supporting online material at Science, or only when this is not possible, on an archived institutional Web site, provided a copy of the data is held in escrow at Science to ensure availability to readers.

There are many cases where aspects of the data cannot be made available, for example due to patient privacy requirements, and Science of course makes allowances for that.

But in contrast to Science’s enlightened policy, the Journal of Neuroscience looks to have just taken a step backward. They announced that they will no longer allow any supplemental material to be submitted along with the main text of authors’ articles.

Supplemental materials were being used to include a broad array of things that help readers interpret the content of an article. These things are really needed to increase the transparency of science. Still, I do sympathize a bit with the desire of J Neurosci to do away with them, because they are often used in a very annoying fashion. Authors sometimes put critical data analyses and experiments in the supplemental material, and it becomes really difficult to follow an article when one has to switch between the often overly-concise main text and the sometimes poorly organized supplemental material. J Neurosci points out that evaluating these materials was often a significant burden on the reviewers. However, I don’t think that simply eliminating supplemental materials is an appropriate response.

Elimination of supplemental materials should be accompanied by a clear policy on how the information that would have gone into them will be made available to readers, to ensure the integrity of science. The announcement does make a gesture in that direction, saying that authors should provide a weblink to information on their own site, and that perhaps the elimination of supplemental material will “motivate more scientific communities to create repositories for specific types of structured data, which are vastly superior to supplemental material as a mechanism for disseminating data.” However, overall the announcement gives the impression that, if anything, the recommendation that authors provide supporting data and material has been relaxed. Heather Piwowar has more.

J Neurosci should have accompanied this change with a statement that the expectation that authors fully back up their claims with public information is increasing, not decreasing!

In the longer term, I believe the solution to all this may come from more fundamental reform of how science is communicated. Ideally, the workings of a scientific enterprise and the progress being made should be visible even before formal publication. This is called Open science.

The development of better open science tools will obviate some of the concerns of the Journal of Neuroscience. Formal publication of an article will not be as big a step as it is now, where suddenly all of the materials associated with a scientific claim appear out of nowhere. Instead, there will be an electronic paper trail linking back to the data records created as the data were collected and the analyses were done. There are many reasons why currently scientists do not or cannot do such things, but those who can and are making the effort to do so should be applauded and supported.

Explaining temporal resolution with water-works of the visual system

Most people are confused about temporal resolution. That includes my students, and even BBC science programmes. So I created this diagram to communicate the basic concept, with the example of human visual processing, using a water-works metaphor.

Why water-works? I’m trying to explain an unfamiliar concept in terms that everyone can understand intuitively. By using a hydraulic metaphor for the nervous system, I’m following in the footsteps of Descartes, who in the 1600s knew almost nothing about the brain or even nerves but nevertheless had a pretty good notion of what was going on:

And truly one can well compare the nerves of the machine that I am describing to the tubes of the mechanisms of these fountains, its muscles and tendons to diverse other engines and springs which serve to move these mechanisms, its animal spirits to the water which drives them, of which the heart is the source and the brain’s cavities the water main.

Here’s my more modern, yet still hydraulic, take on temporal resolution of the visual brain:
Visual temporal resolution with water-works

At top is the stimulus, which consists of two patches. The top patch alternates between green and red, and the one below alternates between leftward-tilted and rightward-tilted. This image is projected onto the retina. The retina processes the stimulus somewhat before passing it on to the cortex (via the lateral geniculate nucleus of the thalamus), where one population of neurons determines the stimulus color, and another population determines the stimulus orientation. These processes have high temporal resolution, meaning that they determine the color (or orientation) based on a relatively short interval of signals. This is why we can perceive colors correctly even when they are presented at a rapid rate, say 9 colors per second.

The resulting color signals ‘pour’ into the binding process, which has poor temporal resolution. A long interval of signals must accumulate before the process can compute the feature pairing. For the presentation rate depicted, the consequence of the long integration time is that multiple colors and orientations fall within an interval that is essentially simultaneous from the perspective of the binding process. The binding process cannot determine which color and orientation were presented at the same time. At this rate, we can perceive which colors were presented and which orientations were presented, but not the pairing between them (Holcombe & Cavanagh 2001). Click here to see this for yourself.

Large bucket = long interval of signals mixed together before the output is determined = poor temporal resolution = long integration interval = “slow” process.

But the last term in this equation can get us into trouble, because in everyday language the word ‘slow’ conflates temporal resolution with latency. My next post will add to the illustration in an attempt to make the distinction clear.
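In the meantime, the bucket metaphor itself can be sketched as a boxcar (moving-average) integrator. This is a toy simulation; the alternation rate and window durations are illustrative values I chose, not measured properties of the visual system:

```python
def color_signal(t, alternation_hz=9):
    """+1 during 'red' presentations, -1 during 'green', changing 9 times per second."""
    return 1 if int(t * alternation_hz) % 2 == 0 else -1

def bucket_output(t, window_s, dt=0.001):
    """Average the signal over the preceding window: a boxcar integrator.
    The window duration plays the role of the bucket's size."""
    n = round(window_s / dt)
    return sum(color_signal(t - i * dt) for i in range(n)) / n

# A small bucket (short window, ~20 ms) still resolves each color, so its output
# swings fully between +1 and -1. A large bucket (~250 ms) mixes successive colors,
# so its output hovers near 0: the colors are effectively simultaneous to it.
```

With a short window the output tracks the alternation, while a window spanning several presentations averages the opposite-signed signals toward zero, which is the sense in which the binding process cannot tell which color and orientation co-occurred.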

René Descartes (1664). Traité de l’Homme (Treatise of Man). Harvard University Press (1972)

Holcombe AO, & Cavanagh P (2001). Early binding of feature pairs for visual perception. Nature Neuroscience, 4 (2), 127-8 PMID: 11175871

[UPDATED 9 August 2011 to remove reference to non-existent movie link]

Perceptual speed limits unexplained and confused in the BBC’s “Invisible Worlds” (part 1)

The BBC has produced a wonderful series called Richard Hammond’s Invisible Worlds. It’s visually stunning and it’ll wow you with a lot of cool science. The first episode is called Speed Limits. Because I study speed limits on perception, I was very excited. 

To introduce the topic, Richard Hammond explains that vision is too slow to see many interesting things, things which can be revealed by high-speed imaging techniques. Indeed, watching the show I learned some fascinating facts about the motion of the wings of bumblebees and hawkmoths, and about shock waves, lightning, flowing water, rain drops, dolphin locomotion, bubble cavitation by pistol shrimps, Himalayan balsam seed shooting, fungal spores, and water striders.

Given that the whole show is based on the fact that the poor temporal resolution of human vision makes many aspects of the world invisible, I was expecting some halfway-decent explanation of the central concept of temporal resolution. Unfortunately, most of the statements in the show that bear on the topic of human perception are misleading or wrong. What made watching this especially painful for me was that the show’s staff actually consulted me for advice on the topic, to the extent that they sent me draft scripts. I dutifully used Track Changes to mark misleading statements and wrote fairly lengthy explanations of why more was needed, and spoke on the phone for several hours to a producer about doing more to explain human perceptual limits. All to no avail. Or at least, extremely little avail. Maybe it would have been worse without me..?

The confusion that lies behind the statements made in the show is not uncommon, and thus the show provides some teachable moments. It’s spurred me to tackle head-on the common conflation of latency and temporal resolution. Maybe in the future, I’ll be able to prevent my students, and maybe even future BBC producers, from making the same mistakes. In a few following posts, I’ll get deep into this. But for now, I’ll encourage you to watch one of the “Invisible Worlds” shows. Just don’t listen too carefully to the parts referring to human perception. Here’s some teaser text from the website:

I mustn’t put my finger in a tank containing a pistol shrimp
Pistol shrimps are less than an inch long, but with an oversized claw, shaped like a boxing glove, they’re not to be messed with. In real time it looks like they see off opponents such as crabs by simply jabbing at them. But use high-speed cameras and you can tell something far stranger is going on. They win their fights without ever landing a punch. All their damage is done at a distance, as their closing claws force a jet of water to spurt out at close to 70 miles per hour, creating a low pressure ‘bubble’ in its wake. When this collapses, massive light, heat and energy are briefly created. Inside the bubble it momentarily reaches temperatures as hot as the surface of the sun, soaring to more than 4,000C. It’s this invisible force that causes much of the damage.
So the knockout punch comes from the bubble, not the claw.