It's a Jungle in There: How Competition and Cooperation in the Brain Shape the Mind
David A. Rosenbaum
We come, then, to an explanation that I think makes sense. Not surprisingly, the explanation relies on the idea that RTs are as long as they are because they reflect inhibition as well as excitation. Postulating inhibition provides the basis for saying why RTs are considerably longer than you’d expect if you merely considered the number of neurons operating at their normal speeds.
I haven’t explained why RTs have the particular values they do, only why they’re longer than expected. The reason they’re longer than expected, I believe, is that it’s a jungle in there. The time needed for any given S-R alternative to be expressed depends not just on that S-R alternative gathering activation for itself. It also depends on the S-R alternative inhibiting its competitors, not to mention overcoming whatever inhibition it had to endure as its competitors tried to suppress it. RTs are long, then, because of inner conflict. Were there none, we wouldn’t take as long as we do to make the decisions we do.
So far in this chapter I have said why I think inhibition as well as activation needs to be invoked as a basis for RT results. I haven’t stressed activation, since activation is hardly in doubt: To activate a response, activation (or excitation) is clearly needed. But to the extent that inhibition is in doubt, I want to give more reasons to identify it as a source of RT effects.
It turns out that a number of RT phenomena can be ascribed to inhibition. One of these is negative priming. Priming usually has a positive connotation. In cognitive psychology, that connotation is most famously expressed in the lexical decision task, where, as described earlier, you decide whether a stimulus is a word. It turns out that people are generally quicker to indicate that a stimulus is a word if it’s preceded by a semantically related word than if it’s preceded by a semantically unrelated word. The classic example is “doctor” priming “nurse.” The time to indicate that “nurse” is a word is shorter if “nurse” is preceded by “doctor” than if “nurse” is preceded by some semantically neutral word like “boat.” This semantic priming effect is due to automatic, unconscious facilitation.[25]
Semantic priming is an instance of positive priming. The priming is positive because the prime helps speed processing, reflected in shorter RTs. Negative priming, by contrast, slows processing, causing RTs to increase. Negative priming is manifested when a stimulus that was previously ignored is now supposed to be noticed but isn’t noticed as easily as it would be otherwise. For example, if you’ve repeatedly reached for a red cup among other-colored cups, you’re able to reach for the red cup more and more quickly—an example of positive priming. But suppose you next need to reach for a blue cup on successive trials. You’ll get quicker at that too. Most critically for this discussion, if you then return to reaching for the red cup, it will take you longer than it would have if you hadn’t just spent those trials ignoring the red cup. The red cup gets marked as something to be avoided.[26]
Negative priming effects can also be obtained in word-naming tasks. Suppose you try to name the color of the ink in which words appear. This is the Stroop task, described earlier.[27]
If you see the word RED in red ink, you’ll be faster to say “red” than if you see the word BLUE in red ink. This benefit illustrates positive priming. You’re quick because, as a skilled reader, you can automatically say “red” in response to seeing RED or “blue” in response to seeing BLUE. But if BLUE appears in red ink and you’re still supposed to say “red,” you must suppress your urge to say “blue,” and your RT grows. Whatever active suppression or inhibition you apply to your automatic response carries forward to the next trial. If in the next trial you see the word BLUE in blue ink, your RT is longer than it would have been had you not just suppressed your BLUE-“blue” production.[28] Such a negative priming effect is consistent with the view that you inhibited the “blue” response, making it harder to say that word when it was needed next.
Another RT phenomenon that is likely to reflect inhibition is the slowing of RTs in task switching. When you perform a task, you follow a procedure. For example, if you indicate, over and over again, whether each of a series of numbers exceeds some target value, you follow a different procedure than if you indicate whether each of a series of numbers is odd or even. It turns out that you’re slower to use a procedure if you’ve just used a different one than if you’re repeating the procedure you just used. The slowing associated with task switching has been ascribed to inhibition of the just-used, but now unneeded, procedure.[29]
In the discussion so far, I’ve suggested that to understand RT effects, you need to invoke inhibition as well as excitation. Neuroscientists won’t be surprised by this claim because, in the nervous system, inhibition and excitation are pervasive. Neuroscientists would say there’s no reason why only excitation or only inhibition should influence RTs. Given this fact of physiology, it’s remarkable how strongly some cognitive psychologists have argued that inhibition doesn’t underlie RT effects.[30]
The zeal with which they’ve argued this point has surprised me because I don’t understand what larger theoretical or practical issue is at stake. Nonetheless, I’ve taken their position seriously enough to devote a fair amount of space to the merits of inhibition in the analysis of RTs. Had there been an equally vociferous anti-excitation school, I suppose I might have devoted as much space to the defense of excitation.
My aim in endorsing inhibition (as well as excitation) as a basis for RT effects has been to emphasize the competitive basis for perception and performance, as indexed by RTs. To confirm that my aim is to endorse both excitation and inhibition, let me turn briefly to another phenomenon that can be explained perfectly well without appealing to inhibition. Interestingly, it is a phenomenon that can be explained by appealing to an inner “horse race.”
The phenomenon is redundancy gain. It arises in visual search tasks when people look for one or more targets, typically in a field of one or more distracters. The speed with which participants can find targets is of practical interest because there are many real-world situations where finding visual objects is important. Spotting planes at air-traffic control centers is an example, as is spotting guns at airport security checkpoints. Redundancy gains appear when two targets are present, not just one. When two targets are present, detection times are usually shorter than when only one target is present.
To explain redundancy gain from the perspective of the jungle principle, you might be tempted to say that redundant targets form a kind of wrestling tag team. They gang up on their rivals, joining forces to quash those rivals’ aspirations. That’s one possible model, or one possible metaphor for a model that could be expressed with greater precision. Unfortunately for this model, when it’s compared to another model that does not use inhibition, the inhibition-free model does better.
The relevant research was done by Rolf Ulrich of Tübingen University (in Germany) and Jeff Miller of Otago University (in New Zealand). They showed that an inhibitory account doesn’t accurately predict redundancy gain, whereas an inhibition-free “horse-race” model does.[31]
Their idea was that each target’s “horse” races to the finish as quickly as it can, at a rate that’s independent of the other target’s horse. Redundancy gain, they showed, could be understood simply by recognizing that when two horses are running, the chance that at least one of them finishes within any given short time is higher than when only one horse is running. No “warfare” is needed.
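To make the statistical point concrete, here is a minimal simulation sketch, written in Python, of a race model in the spirit just described. It is not Ulrich and Miller’s actual model: the finishing-time distribution, the parameter values, and the function name finish_times are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

def finish_times(n):
    # Hypothetical finishing-time distribution for one target's "horse":
    # a fixed base time plus gamma-distributed noise (values are made up).
    return 200 + rng.gamma(shape=2.0, scale=60.0, size=n)  # milliseconds

single = finish_times(n_trials)                  # one target: one horse runs
redundant = np.minimum(finish_times(n_trials),   # two targets: two independent
                       finish_times(n_trials))   # horses; the faster one wins

print(f"mean RT, one target:  {single.mean():.0f} ms")
print(f"mean RT, two targets: {redundant.mean():.0f} ms")
```

Neither horse runs faster because the other is present, and neither slows the other down; the two-target mean comes out shorter simply because the minimum of two independent finishing times tends to be smaller than either time alone.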
What’s the larger point? As I said earlier in this section, not all RT effects need to be ascribed to inhibition. By extension, it would be a mistake to say that all phenomena that seem to embody competition or inhibition necessarily do. Sometimes, activation alone suffices, as when all the horses in a race get as activated as they can and run as quickly as they can. The more horses there are, the greater the chance that one of them reaches the finish line within a given period. This view doesn’t deny competition, but it indicates that inhibition needn’t always be invoked.[32]
Is there a problem with embracing an activation-only model for some tasks while embracing an activation-plus-inhibition model for others? I don’t think so. Redundancy gain arises when multiple targets are available for inspection. The other phenomena I’ve discussed are ones in which only one target was visible at a time and the other possible targets either didn’t light up (in the case of lights atop buttons) or lit up only metaphorically, in the participant’s mind. This difference in procedure helps explain the difference in outcomes.
A second reason not to be concerned about the difference in the results is that in all the cases I’ve considered, competition appeared to be the driving force behind the observed empirical effects. Whether in the search for a target in memory or in the search for a target in a display, the factor that emerged was rivalry among relevant elements. Whether the elements compete by racing against each other or by inhibiting each other doesn’t really matter from the perspective of establishing the broader principle of interest, which is that it’s a jungle in there.
Third and finally, it turns out that the most successful recent attempt at theorizing about all RT data relies heavily on competition. This fact is remarkable considering that the most successful theory of attention was competition-based as well. Recall that I ended the last chapter by saying that the most successful, all-encompassing theory of attention was the biased competition theory of attention of Robert Desimone and John Duncan.[33]
As we approach the end of this chapter on RTs, I can again point to a competition-based theory as the one that seems to provide the best overall account of the phenomena to be explained.
The theory to which I refer was advanced by Marius Usher and James McClelland.[34]
I won’t review their theory here because doing so would plunge us into more technical detail than is needed. Suffice it to say that Usher and McClelland’s theory—the leaky competing accumulator model—is similar in spirit to what I’ve argued in this chapter. The main claim is that internal elements vie for selection and accumulate activation depending on how much evidence comes in for them, though the activation can leak, as would be expected for a biological system that has imperfect storage. The sweep of the theory is impressive, as is the precision of its predictions vis-à-vis obtained results.
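For readers who want a feel for how such a model behaves, here is a toy Python sketch of a leaky competing accumulator. It follows the verbal description above (evidence-driven accumulation, leak, and mutual inhibition), but the function name lca_trial, the parameter values, and the exact update rule are simplifying assumptions of mine, not Usher and McClelland’s published formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1,
              threshold=1.0, dt=0.01, max_steps=10_000):
    """Toy leaky competing accumulator: each unit gains activation from its
    input, loses some through leak, gets pushed down by its rivals' activation,
    and the first unit to reach threshold wins. Parameters are illustrative."""
    x = np.zeros(len(inputs))
    for step in range(1, max_steps + 1):
        rivals = x.sum() - x                      # total activation of the other units
        dx = (inputs - leak * x - inhibition * rivals) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)               # activation can't go negative
        if x.max() >= threshold:
            break
    return int(x.argmax()), step                  # winner and decision time (in steps)

# Unit 0 gets stronger evidence than unit 1, so it usually wins,
# and it wins sooner when its evidence advantage is larger.
print(lca_trial(np.array([1.2, 0.8])))
```

The leak term is what makes the accumulator “leaky” (activation decays unless evidence keeps arriving), and the inhibition term is what makes it “competing,” which is the part that matters for the jungle view.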
Usher and McClelland’s paper doesn’t refer to jungles per se, but their claims are consistent with the jungle view. What I should add, given my tremendous respect for these authors (especially McClelland, who is an eminent researcher in cognitive psychology), is that the broad, discursive picture I have offered here is consistent with the model that McClelland and Usher introduced. Such consistency is hardly coincidental.[35]
You can find jungles in the ocean as well as on land. If you swim in one of these aquatic arenas, you’d better watch your back, not to mention your front, sides, top, and bottom. If you’re a horseshoe crab trying to survive in such a wet world, you need to detect looming predators. An attacker from above could land you on its dinner plate.
Why do I refer to horseshoe crabs at the start of this chapter on perception (for perception is what this chapter is about)? The reason is that one of the most important principles concerning the neural basis for perception came from research on horseshoe crabs. The two biologists who conducted that research on these invertebrates won a Nobel Prize for their work.[1]
The feature of horseshoe crabs that attracted the researchers to these creatures was what attracts many people to creatures they find enchanting: their eyes. In the case of horseshoe crabs, the creatures’ eyes aren’t particularly beautiful, at least to us humans. Instead, the eyes are plentiful. Horseshoe crabs have slews of tiny, light-sensitive receptors atop their heads. These ensembles of mini-eyes are like the photoreceptors of mammalian retinas, but they’re much larger and more accessible, making them attractive for study by physiologists.
The two physiologists who studied vision in the horseshoe crab reasoned that the way this creature’s light-sensitive organs process light might shed light on the way human photoreceptors work. In taking this approach, the scientists gave a thumbs-up to a rule-of-thumb among biologists: Nature’s solutions to physical problems get replicated in different species. What works for one species tends to work for others. This principle is congenial with Darwin’s theory of natural selection.
Pursuing this line of thinking, the researchers used electrodes to record from the horseshoe crab’s photoreceptors while the scientists projected different light patterns onto the receptors. What the scientists found was surprising. There was more neural activity for photoreceptors in the light than in the dark, as expected, but there was also an exaggerated response at the edges, which was more surprising. Where light and dark met, the response of the photoreceptors was especially strong.
Hypersensitivity to edges makes sense in hindsight. An edge can signal the boundary of a solid object, such as a hungry shark hovering above. The midsection of a dark or light area carries less information.
Being aware of life-threatening changes in the environment is one reason to speak of the visual system of the horseshoe crab, but it’s not the only one. This creature’s visual system also shows that it’s a jungle in there as far as vision is concerned, and here’s why.
The horseshoe crab’s photoreceptors compete for access to neural units beneath them—to ganglion cells, as they’re called. Each photoreceptor sends excitatory signals to the ganglion cell below. At the same time, each photoreceptor sends inhibitory signals to the ganglion cells to either side. The strength of these signals is proportional to the light energy the photoreceptor receives. The output of each ganglion cell is proportional to the sum of its inputs. Some of the inputs are positive, coming from the “friendly” photoreceptor just above. The other inputs are negative, coming from nearby, “unfriendly” photoreceptors. The exaggerated response of the ganglion cells near the shadow’s edge reflects the differences between the positive and negative inputs: a cell just on the bright side of the edge receives strong excitation from above but only weak inhibition from its dimly lit neighbor, so it responds especially strongly, while a cell just on the dark side receives weak excitation and strong inhibition from its brightly lit neighbor, so it responds especially weakly.
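As a rough numerical illustration of that arithmetic, here is a small Python sketch of one-dimensional lateral inhibition across a light/dark edge. The intensities, the inhibition weight, and the function name ganglion_output are made-up values chosen for illustration, not measurements from the horseshoe crab.

```python
import numpy as np

# A strip of photoreceptors straddling a light/dark edge:
# bright on the left, dim on the right (arbitrary intensity units).
light = np.array([10.0] * 8 + [2.0] * 8)

def ganglion_output(light, inhibition_weight=0.3):
    """Each cell sums a positive input from the photoreceptor directly above it
    and negative inputs from the two flanking photoreceptors, with each signal
    proportional to the light that photoreceptor receives."""
    padded = np.pad(light, 1, mode="edge")        # repeat the end values at the borders
    neighbors = padded[:-2] + padded[2:]          # left neighbor + right neighbor
    return light - inhibition_weight * neighbors

print(ganglion_output(light).round(1))
```

Deep inside the bright region and deep inside the dark region the outputs are flat, but the cell just on the bright side of the edge responds extra strongly (its dim neighbor inhibits it only weakly), and the cell just on the dark side is extra suppressed (its bright neighbor inhibits it strongly), reproducing the exaggerated edge response the recordings revealed.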