Color space pictured and animated (Derrington Krauskopf Lennie)

The Derrington, Krauskopf, and Lennie (1984) color space is based on the MacLeod-Boynton (1979) chromaticity diagram. Colors are represented in three dimensions using spherical coordinates that specify the elevation from the isoluminant plane, the azimuth (the hue), and the contrast (as a fraction of the maximal modulations along the cardinal axes of the space).

It’s easier for me to think of a color in Cartesian DKL coordinates, with the dimensions below (a conversion sketch follows the list):

  • Luminance or L+M, the sum of the L and M cone responses
  • L-M, the difference of the L and M cone responses
  • S-(L+M), the S cone response minus the sum of the L and M cone responses
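
For concreteness, here’s a minimal sketch (in R) of how one might convert a color from the spherical DKL specification (azimuth, elevation, contrast) into these Cartesian coordinates. The function name and the conventions, azimuth measured within the isoluminant plane from the L-M axis and elevation measured from that plane, are my assumptions for illustration; conventions differ across labs and software.

# Hypothetical helper: spherical DKL (azimuth and elevation in degrees, contrast 0-1)
# to Cartesian DKL (L-M, S-(L+M), L+M). Conventions vary; this assumes azimuth is
# measured in the isoluminant plane from the L-M axis, elevation from that plane.
dklSphericalToCartesian <- function(azimuthDeg, elevationDeg, contrast) {
  az <- azimuthDeg * pi / 180
  el <- elevationDeg * pi / 180
  c(LminusM  = contrast * cos(el) * cos(az),
    SminusLM = contrast * cos(el) * sin(az),
    LplusM   = contrast * sin(el))
}

# Example: an isoluminant color (elevation 0) on the S-(L+M) axis at half the maximum contrast
dklSphericalToCartesian(azimuthDeg = 90, elevationDeg = 0, contrast = 0.5)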

The three classes of cones respond a bit to almost all colors, but some reds excite L cones the most, some greens M cones the most, and some blues S cones the most.

I’ve created the movie below (thanks Jon Peirce and PsychoPy) to show successive equiluminant slices of DKL color space, plotted in Cartesian coordinates. These render correctly on my CRT screen, but the colors will be distorted on any other screen. Nevertheless, it helps you get a feel for the gamut (the colors that can be represented) of a typical CRT at each luminance, where -1 is the minimum luminance of the CRT and +1 is its maximum. The letters R, G, and B and the accompanying numbers show the coordinates of the phosphors (each gun turned on by itself).

Derrington AM, Krauskopf J, & Lennie P (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. The Journal of Physiology, 357, 241-65. PMID: 6512691

MacLeod DI, & Boynton RM (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69 (8), 1183-6 PMID: 490231

Position available: postdoc, or highly-qualified research assistant

We invite applications for a research fellowship/postdoctoral research fellowship working with Dr. Alex Holcombe in the School of Psychology at the University of Sydney. The research area is visual psychophysics, and the project involves the perception and attentive tracking of moving objects. One line of experiments will investigate the limits on judging the spatial relationship of moving objects. See http://www.psych.usyd.edu.au/staff/alexh/ for more on the laboratory’s research.

[Image: the University of Sydney main quadrangle]

The University of Sydney is Australia’s first university and has an outstanding global reputation for academic and research excellence. The School of Psychology is Australia’s first established psychology department, with a proud history of excellence in its research and educational activities. You would work with a dynamic community of local vision researchers (see http://www.physiol.usyd.edu.au/span/) and attend seminars and colloquia in perception and related fields.

An essential requirement for the Postdoctoral Research Associate position is a PhD in psychology, vision science, or a similar field, and a demonstrated ability to conduct vision research (BSc Honours or equivalent if at the Research Associate level). An understanding of psychophysical and psychology experiment design is also essential. Conducting the experiments will require skill in programming visual perception experiments and experience analysing data from psychophysical and/or psychology experiments using a command-line tool (as opposed to Excel or SPSS), such as R or MATLAB, or code written in Python, C, etc. Preference may be given to individuals with experience with PsychoPy and R.

The position is full-time fixed term for 16 months subject to the completion of a satisfactory probation period for new appointees. There is the possibility of further offers of employment of up to 12 months, subject to funding and need. Membership of a University approved superannuation scheme is a condition of employment for new appointees.

Remuneration package: up to $92k p.a. (currently ~$90k USD), consisting of a base salary, leave loading, and up to 17% employer’s contribution to superannuation. Some assistance towards relocation costs and visa sponsorship may be available for the successful appointee if required. Level of appointment will be commensurate with experience and qualifications.

All applications must be submitted via The University of Sydney careers website. Visit http://sydney.edu.au/positions/ and search by the reference number, 3856/1110 3846/1110, for more information and to apply.

CLOSING DATE: 13 January 2011 (11:30 PM Sydney time), with interviews to be scheduled in early February.
The University is an Equal Opportunity employer committed to equity, diversity and social inclusion. Applications from equity target groups and women are encouraged. The University reserves the right not to proceed with any appointment.

UPDATE: human resources here gave me the wrong reference number, which has now been corrected above

An interactive turnkey tutorial to give students a feel for brain-style computation

A new version of my 100-minute interactive neural network lesson is available. The lesson webpages guide university-level students through learning and directed play with a connectionist simulator. The outcome is that students gain a sense of how neuron-like processing units can mediate adaptive behavior and memory.

Making the lesson was fairly straightforward, thanks to Simbrain, a connectionist simulator which is the easiest to use of any I’ve seen. After a student downloads the Java-based Simbrain software and double-clicks, she is on her way with menu-driven and point-and-click brain hacking.

New to Simbrain and my lessons this year are animated avatars that depict the movements of the virtual organism controlled by the student’s neural network. This feature, added to the software by Simbrain creator Jeff Yoshimi, provided a nice lift to student engagement compared to previous versions. The first lesson is mainly intended to bring students to understand the basic sum-and-activate functioning of a simplified neuron and how wiring such units together can accomplish various things. It’s couched in a ‘guy chasing girls’ scenario that most university students can relate to. The second lesson gives students a rudimentary understanding of pattern learning and retrieval with a Hebbian learning rule.
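
For readers who want the flavor of that sum-and-activate idea without installing anything, here is a minimal sketch in R (not Simbrain code; the weights, threshold, and “both cues present” scenario are just illustrative assumptions):

# A simplified neuron-like unit: sum the weighted inputs, then apply an activation rule.
# Illustrative sketch only; Simbrain units offer many more activation functions than this.
sumAndActivate <- function(inputs, weights, threshold = 0.5) {
  net <- sum(inputs * weights)   # "sum" step: weighted sum of the inputs
  as.numeric(net > threshold)    # "activate" step: fire (1) only if net input exceeds threshold
}

# Wiring two input units to one output unit so that it fires only when both cues are present
sumAndActivate(inputs = c(1, 1), weights = c(0.4, 0.4))  # returns 1 (active)
sumAndActivate(inputs = c(1, 0), weights = c(0.4, 0.4))  # returns 0 (inactive)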

Going through the lessons is certainly not as fun as playing a video game, but it is more interesting and interactive than a conventional university tutorial. In the long term, I hope we can make it more and more like an (educational) video game. Some say that’s the future of education, and whether or not that’s true in general, I think neural networks and computational neuroscience are ideally suited for it.

Development of the Simbrain codebase has been gathering steam, although that may not be apparent from the Simbrain website because portions of it haven’t been updated for a while. Lately, a few people have jumped in to help with development.

Already the code is developed enough to provide all the functionality needed for school- and university-level teaching. Giving a series of classes with it could easily be done at this point. However, to do so you’d have to take time to develop associated lecture material and refine the example networks. If you don’t have time for that, you should consider simply using my lessons to give students a taste.

The lessons have been battle-tested on a few hundred psychology students at University of Sydney, without any significant problems. The tutors (aka demonstrators or teaching assistants) took the introductory slides provided and used them to help orient the students before they started following the web-based instructions for the lessons. Contact us if you want to take it further.

technical note: d-prime proportion correct in choice experiments (signal detection theory)

If you don’t understand the title of this post, you almost certainly will regret reading further.

We’re doing an experiment in which one target is presented along with m distracters. The participant tries to determine which is the target and must respond with their best guess as to which it is. Together, the m distracters + 1 target = the “number of alternatives”.

The plots show the predictions of vanilla signal detection theory for the relationship among probability correct, d-prime, and number of alternatives. Each distracter is assumed to have a discriminability of d-prime from the target.

[Plots: signal detection theory relationship among percent correct, d-prime, and number of alternatives]
The two plots are essentially the inverse of each other.

Note that many studies use two-interval forced choice, wherein the stimulus containing the distracters is presented twice, once with the signal added, and the participant has to choose which interval contained the signal. In contrast, here I’m showing predictions for an experiment wherein the target and all its distracters are presented only once, and the participant reports which location contained the target.

I should probably add a lapse rate to these models, and generate curves using a reasonable lapse rate like .01.

I’ll post the R code, using ggplot, that I made to generate these later; email me if I don’t, or if you want it now. UPDATE: the code is below, including a parameter for lapse rate.

reference: Hacker, M. J., & Ratcliff, R. (1979). A revised table of d’ for M-alternative forced choice. Perception & Psychophysics, 26(2), 168-170.
# To determine the probability of the target winning (call it event A), use the law of total
# probability:
# p(A) = sum of p(A|B) p(B) over all B
# Here, B ranges over the possible values of the target's TTC estimate, and p(A|B) is the
# probability that the distracters' TTC estimates are all lower than that target estimate, B.

# x: the target's TTC estimate (distracters ~ N(0,1), target ~ N(dprime,1))
# The probability that a single distracter's TTC estimate is less than x is pnorm(x): the area
# under the standard normal curve below x.
# m: number of objects, m-1 of which are distracters
# p(A|B) * p(B) = pnorm(x)^(m-1) * dnorm(x - dprime)
# Hacker & Ratcliff (1979) and Elliott (1964) derive this, as did I.
# Jäkel & Wichmann say that "numerous assumptions [are] necessary for mAFC" where m > 2, but it
# is not clear whether they are talking about bias only or also about d'.

# Integrating over all possible target estimates x gives proportion correct; a lapse rate mixes
# in random guessing among the m alternatives.
pCorrect <- function(dprime, m, lapse = 0) {
  pNoLapse <- integrate(function(x) pnorm(x)^(m - 1) * dnorm(x - dprime),
                        lower = -Inf, upper = Inf)$value
  (1 - lapse) * pNoLapse + lapse / m
}
Show me the data! A step backwards by the Journal of Neuroscience

Researchers increasingly agree that readers need more access to the original data behind published research articles. Of course, the more general point is that not just the original data, but all materials needed to scrutinize the claims of a manuscript, should be available.

The policy of the journal Science is:

After publication, all data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science… Large data sets with no appropriate approved repository must be housed as supporting online material at Science, or only when this is not possible, on an archived institutional Web site, provided a copy of the data is held in escrow at Science to ensure availability to readers.

There are many cases where aspects of the data cannot be made available, for example due to patient privacy requirements, and Science of course makes allowances for that.

But in contrast to Science’s enlightened policy, the Journal of Neuroscience appears to have just taken a step backward. They announced that they will no longer allow any supplemental material to be submitted along with the main text of authors’ articles.

Supplemental materials were being used to include a broad array of things that help readers interpret the content of an article. These things are really needed to increase the transparency of science. Still, I do sympathize a bit with the desire of J Neurosci to do away with them, because they were often used in an annoying fashion. Authors sometimes put critical data analyses and experiments in the supplemental material, and as a reader I find it really difficult to follow an article when I have to switch between an often overly-concise main text and sometimes poorly organized supplemental material. J Neurosci points out that evaluating these materials was often a significant burden on the reviewers. However, I don’t think that simply eliminating supplemental materials is an appropriate response.

Elimination of supplemental materials should be accompanied by a clear policy on how the information otherwise therein will be made available to readers, to ensure the integrity of science. The announcement does make a gesture in that direction, saying that authors should provide a weblink to information on their own site, and that perhaps the elimination of supplemental material will “motivate more scientific communities to create repositories for specific types of structured data, which are vastly superior to supplemental material as a mechanism for disseminating data.” However, overall the announcement gives the impression that if anything, the recommendation that authors provide supporting data and material has been relaxed. Heather Piwowar has more.

J Neurosci should have accompanied this change with a statement that the expectation that authors fully back up their claims with public information is increasing, not decreasing!

In the longer term, I believe the solution to all this may come from more fundamental reform of how science is communicated. Ideally, the workings of a scientific enterprise and the progress being made should be visible even before formal publication. This is called Open science.

The development of better open science tools will obviate some of the concerns of the Journal of Neuroscience. Formal publication of an article will not be as big a step as it is now, where suddenly all of the materials associated with a scientific claim appear out of nowhere. Instead, there will be an electronic paper trail linking back to the data records created as the data were collected and the analyses were done. There are many reasons why currently scientists do not or cannot do such things, but those who can and are making the effort to do so should be applauded and supported.

Explaining temporal resolution with water-works of the visual system

Most people are confused about temporal resolution. That includes my students. So I created this diagram to communicate the basic concept, with the example of human visual processing, using a water-works metaphor.

Why water-works? I’m trying to explain an unfamiliar concept in terms that everyone can understand intuitively. By using a hydraulic metaphor for the nervous system, I’m following in the footsteps of Descartes, who in the 1600s knew almost nothing about the brain or even nerves but nevertheless had a pretty good notion of what was going on:

And truly one can well compare the nerves of the machine that I am describing to the tubes of the mechanisms of these fountains, its muscles and tendons to diverse other engines and springs which serve to move these mechanisms, its animal spirits to the water which drives them, of which the heart is the source and the brain’s cavities the water main.

Here’s my more modern, yet still hydraulic, take on temporal resolution of the visual brain.

First, notice that in this display, you can identify the two colors, and identify the two tilts of the contour, but not their pairing.

[Animation: a color patch and a tilted-contour patch alternating at 8 frames per second]

That is, it’s very difficult to determine whether the leftward tilt is paired with red or with green. Why? Let’s consider how that gets processed by the brain.

[Diagram: temporal resolution water-works]

At top is the stimulus, which consists of two patches. The top patch alternates between green and red, and the one below alternates between leftward-tilted and rightward-tilted. This image is projected onto the retina. The retina processes the stimulus somewhat before passing it on to the cortex (via the thalamus’ lateral geniculate nucleus), where one population of neurons determines the stimulus color, and another set of neurons determines the stimulus orientation. These processes have high temporal resolution, meaning that they determine color based on a relatively short interval of signals. This is why we can perceive colors correctly even when they are presented at a rapid rate, say 9 colors per second.

The resulting color signals ‘pour’ into the binding process, which has poor temporal resolution. A long interval of signals must accumulate before the process can compute the feature pairing. For the presentation rate depicted, the consequence of the long integration time is that multiple colors and orientations fall within an interval that is essentially simultaneous from the perspective of the binding process. The binding process cannot determine which color and orientation were presented at the same time. At this rate, we can perceive which colors were presented and which orientations were presented, but not the pairing between them (Holcombe & Cavanagh 2001). Click here to see this for yourself.

Large bucket = long interval of signals mixed together before the output is determined = poor temporal resolution = long integration interval = “slow” process.

But the last term in this equation can get us into trouble, because in everyday language the word ‘slow’ conflates temporal resolution with latency. My next post will add to the illustration in an attempt to make the distinction clear.
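
To make the bucket idea more concrete, here is a toy simulation, a sketch of my own rather than a model from any paper, with arbitrary rates and window sizes: a single attribute alternates between two values at 4 Hz, and we read it out by averaging over either a short or a long window. With a window much longer than the alternation period, the two values get mixed together, which is the situation the sluggish binding process faces when it tries to work out which color went with which tilt.

# Toy illustration of temporal resolution as an integration window (the "bucket" size)
t <- seq(0, 2, by = 0.001)                        # two seconds of time in 1 ms steps
signal <- ifelse(sin(2 * pi * 4 * t) > 0, 1, -1)  # an attribute alternating at 4 Hz (e.g., red vs. green)

readout <- function(x, windowMs) {
  weights <- rep(1 / windowMs, windowMs)          # boxcar filter: average the last windowMs samples
  stats::filter(x, weights, sides = 1)
}

smallBucket <- readout(signal, 20)    # 20 ms window: the output still swings between the two values
largeBucket <- readout(signal, 500)   # 500 ms window: the alternation is averaged away

range(smallBucket, na.rm = TRUE)      # roughly -1 to 1
range(largeBucket, na.rm = TRUE)      # near 0: the two values are mixed together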

René Descartes (1664). Traité de l’Homme (Treatise of Man). Harvard University Press (1972).

Holcombe AO, & Cavanagh P (2001). Early binding of feature pairs for visual perception. Nature Neuroscience, 4 (2), 127-8 PMID: 11175871

Perceptual speed limits unexplained and confused in the BBC’s “Invisible Worlds” (part 1)

The BBC has produced a wonderful series called Richard Hammond’s Invisible Worlds. It’s visually stunning and it’ll wow you with a lot of cool science. The first episode is called Speed Limits. Because I study speed limits on perception, I was very excited. 

To introduce the topic, Richard Hammond explains that vision is too slow to see many interesting things, things which can be revealed by high-speed imaging techniques. Indeed, watching the show I learned some fascinating facts about the motion of the wings of bumblebees and hawkmoths, and about shock waves, lightning, flowing water, rain drops, dolphin locomotion, bubble cavitation by pistol shrimps, Himalayan balsam seed shooting, fungal spores, and water striders.

Given that the whole show is based on the fact that the poor temporal resolution of human vision makes many aspects of the world invisible, I was expecting some halfway-decent explanation of the central concept of temporal resolution. Unfortunately, most of the statements in the show that bear on the topic of human perception are misleading or wrong. What made watching this especially painful for me was that the show’s staff actually consulted me for advice on the topic, to the extent that they sent me draft scripts. I dutifully used Track Changes to mark misleading statements, wrote fairly lengthy explanations of why more was needed, and spoke on the phone for several hours with a producer about doing more to explain human perceptual limits. All to no avail. Or at least, extremely little avail. Maybe it would have been worse without me?

The confusion that lies behind the statements made in the show is not uncommon, and thus the show provides some teachable moments. It’s spurred me to tackle head-on the common conflation of latency and temporal resolution. Maybe in the future, I’ll be able to prevent my students, and maybe even future BBC producers, from making the same mistakes. In a few following posts, I’ll get deep into this. But for now, I’ll encourage you to watch one of the “Invisible Worlds” shows. Just don’t listen too carefully to the parts referring to human perception. Here’s some teaser text from the website:

I mustn’t put my finger in a tank containing a pistol shrimp
Pistol shrimps are less than an inch long, but with an oversized claw, shaped like a boxing glove, they’re not to be messed with. In real time it looks like they see off opponents such as crabs by simply jabbing at them. But use high-speed cameras and you can tell something far stranger is going on. They win their fights without ever landing a punch. All their damage is done at a distance, as their closing claws force a jet of water to spurt out at close to 70 miles per hour, creating a low pressure ‘bubble’ in its wake. When this collapses, massive light, heat and energy are briefly created. Inside the bubble it momentarily reaches temperatures as hot as the surface of the sun, soaring to more than 4,000C. It’s this invisible force that causes much of the damage.
So the knockout punch comes from the bubble, not the claw.

cracks in the edifice of visual time?

Below is a draft of a chapter I’m writing for Subjective Time, an upcoming book from MIT Press edited by Valtteri Arstila and Dan Lloyd.

In a bowling alley, a professional player launches his ball down the lane. As the ball rolls toward the pins, our visual experience of it is smooth and seamless. The ball shifts in position continuously, and this seems to be represented with high fidelity by our brain. There are no subjective gaps, no stutter, and no noticeable blur.

One might assume that, in every instant, the brain simply processes the retinal input through various feature and shape detectors, with the results becoming available to awareness, millisecond by millisecond. This picture of a continuous system, with information continually ascending before being replaced by the information from the next instant, is still the predominant way that psychologists and perception researchers think about the visual brain.

However, the results of experiments have steadily chipped away at this image. Together, these findings indicate that the smooth, seemingly high-temporal-resolution movie we experience during the roll of the bowling ball reflects a massive construction project by the brain. Many problems of ambiguous input are resolved, processing artifacts like blur are suppressed, and missing information is guessed at. Some of the more firmly established of these processes will be described at the end of this chapter. Unfortunately, there is no agreement on the extent to which these complexities should push us to revise our simple framework of millisecond-by-millisecond processing. Here, we will focus on phenomena that are more clearly at odds with the standard view. The bulk of this chapter is devoted to one particular way in which the processes that underlie our visual experience have been proposed to not resemble experience itself.

Visual experience over time feels seamless and undifferentiated. Following the bowling ball, we are aware of no frame rate, no intermittent updates as new information occasionally becomes available. Nevertheless, over the last few decades it has been suggested that behind the scenes, the visual system embodies processes that fluctuate up and down, according to a regular rhythm. On this view, visual information is processed somewhat intermittently, or at least certain critical operations only occur on an occasional schedule.

While this proposal is surprising from the perspective of visual experience, perhaps it should not surprise those familiar with the habits of the brain.


the rise of neuroscience

So I knew neuroscience had exploded over the last few decades, but I didn’t know that its emergence as a more autonomous discipline is “the biggest structural change in scientific citation patterns over the past decade”. In the authors’ words that follow, they are referring to their figure showing neuroscience emerging as a new citation macro-cluster:

“We also highlight the biggest structural change in scientific citation patterns over the past decade: the transformation of neuroscience from interdisciplinary specialty to a mature and stand-alone discipline, comparable to physics or chemistry, economics or law, molecular biology or medicine. In 2001, 102 neuroscience journals, lead by the Journal of Neuroscience, Neuron, and Nature Neuroscience, are assigned with statistical significance to the field of molecular and cell biology (dark orange, 84 of 102 journals are assigned significantly). Further, Brain, Behavior, and Immunity, Journal of Geriatric Psychiatry and Neurology, Psychophysiology, and 33 other journals appear with statistical insignificance in psychology (green, 6 of 36 journals are assigned significantly) and Neurology, Annals of Neurology, Stroke and 77 other journals appear with statistical significance in neurology (blue, 75 of 80 journals are assigned significantly). In 2003, many of these journals remain in molecular and cell biology, but their assignment to this field is no longer significant (light orange, 5 of 102 journals are assigned significantly). The transformation is underway. In 2005, neuroscience first emerges as an independent discipline (red). The journals from molecular biology split off completely from their former field and have merged with neurology and a subset of psychology into the significantly stand-alone field of neuroscience. (In 2006, shown in Fig. S2, the structure reverts to a pattern similar to 2003.) In their citation behavior, neuroscientists have finally cleaved from their traditional disciplines and united to form what is now the fifth largest field in the sciences (after molecular and cell biology, physics, chemistry, and medicine). Although this interdisciplinary integration has been ongoing since the 1950s [17], only in the last decade has this change come to dominate the citation structure of the field and overwhelm the intellectual ties along traditional departmental lines.”

Rosvall, M., & Bergstrom, C. (2010). Mapping Change in Large Networks PLoS ONE, 5 (1) DOI: 10.1371/journal.pone.0008694

optimizing your coffee consumption

We live in an era in which students, shift workers, and scientists increasingly consume drugs that modify brain activity in order to enhance cognition. Ethicists are right to fret about this as the number of addictive substances with some ill effects proliferates (de Jongh et al. 2008). People will use these things regardless of whether some condemn the phenomenon, so it is important that information is out there about how best to use them.

Caffeine is probably the most widely used drug for enhancing cognition and productivity. However, despite its long history, I have not been able to find a good manual or user’s guide! By a manual, I just mean a description of the kind of schedule on which it is best used, given caffeine’s tolerance profile, acute effects, withdrawal symptoms, etc. Here I’ll report a few things I found in the scientific literature, in relation to my own experience.

When I first drank coffee, the effects were perhaps too strong to help me much, because I got some ‘jitters’ and had trouble focussing. But as I gained a bit of tolerance to caffeine’s effects, the jitters faded and the arousal effect became milder but more conducive to productivity. This tendency has in fact been reported in the scientific literature: tolerance develops rapidly and selectively to some of the negative effects, even while the positive effects continue (Evans & Griffiths 1992; Schuh & Griffiths 1997). However, after many months of judicious usage during which an afternoon coffee was effective in heightening and prolonging my workday productivity, I gradually became a daily user. After approximately a year of this, my tolerance to the arousal effects became great enough that I needed a daily coffee simply to feel normal. It still provided a boost, but only to what a year earlier I would have considered baseline. In contrast to this important slow rise in tolerance, the academic literature focuses on the very rapid increase in tolerance during the first several days of caffeine consumption. Usually this is measured as a decrease in caffeine’s effect on blood pressure, which is not very useful for understanding how best to enhance cognition.

My situation, in which caffeine no longer had its productivity-boosting effects, must be a very common problem. To solve it, one might either increase one’s dosage, or try to regain the original effects by going off caffeine for a while. I decided to try the latter.

Arvind says that science indicates one can restore complete sensitivity to caffeine after only 5 days of abstinence (or 10 days of gradual abstinence); however, I haven’t been able to find a study that documents this. One study (Shi et al. 1993) estimates that only 20 hours of abstinence will restore full sensitivity of the blood pressure response to caffeine. But the subjective withdrawal effects don’t peak until nearly 48 hours of abstinence! Apparently, for different caffeine effects, different amounts of time are required to restore sensitivity. So what about the positive subjective and arousal effects the average person is most interested in?

I decided to go nearly cold turkey for 7 days, with only one or two decafs in that interval to blunt my withdrawal-effect blues. Fortunately, I had only mild headaches, but I did have significant lethargy and loss of mental focus. After seven days, I think I have regained most of my caffeine sensitivity. But I’m only on day one of using again, so I’m not certain how close I am to the sensitivity I had 6 months or a year ago. I hope to share Arvind’s experience of increased productivity for a long period before needing to abstain again to restore sensitivity.

Are there any scientific papers on the topic, or, lacking that, further personal reports to confirm that this works? I worry about chronic tolerance effects that might not dissipate even after prolonged abstinence, but haven’t seen a shred of relevant science. To bring us closer to having a real user’s manual for both caffeine and other cognitive enhancers, those already using should report the results of their self-experimentation!

de Jongh, R., Bolt, I., Schermer, M., & Olivier, B. (2008). Botox for the brain: enhancement of cognition, mood and pro-social behavior and blunting of unwanted memories. Neuroscience & Biobehavioral Reviews, 32(4), 760-776. DOI: 10.1016/j.neubiorev.2007.12.001

Shi J, Benowitz NL, Denaro CP, & Sheiner LB (1993). Pharmacokinetic-pharmacodynamic modeling of caffeine: tolerance to pressor effects. Clinical Pharmacology and Therapeutics, 53(1), 6-14. PMID: 8422743

Update: this post provides related info in the same spirit.