It's a Jungle in There: How Competition and Cooperation in the Brain Shape the Mind
David A. Rosenbaum
1. If a light appears on the left, press a button with your left hand.
2. If a light appears on the right, press a button with your right hand.
3. If a tone sounds, press a foot pedal.
4. Carry out each action as quickly and as accurately as possible.
In the actual experiment, the tone (for the foot) is often presented very soon after one of the lights comes on (either for the left hand or right). In conditions of special interest, the tone may come on so soon after the light appears that it actually comes on before the hand response is made. The interesting result is that the second (foot) response is delayed by an especially long time in this circumstance, as if the second (foot) response cannot be selected until the first (hand) response has been.
There is no a priori reason why this should be. A foot and a hand can move together, as is obvious from watching a drummer pound a bass drum with her foot while she taps a cymbal with her hand.
According to Pashler, the reason for the delayed foot response in the light-then-tone experiment is that the foot response can’t be selected until the hand response has been. If one response is selected first, the second selection must wait its turn.15
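The logic of this bottleneck account can be sketched in a few lines of code. This is only a toy model of the idea that response selection handles one response at a time, so the second selection queues behind the first; all the durations below are illustrative numbers, not data from Pashler’s experiments.

```python
def prp(soa, select=150, execute=100):
    """Toy central-bottleneck model of the light-then-tone experiment.

    soa: stimulus onset asynchrony -- ms between the light and the tone.
    select, execute: illustrative durations for response selection and
    response execution, not fitted to real data.
    """
    # Task 1 (hand): selection starts when the light appears (t = 0).
    t1_selection_done = select
    rt1 = t1_selection_done + execute
    # Task 2 (foot): its selection cannot begin until task 1's is done.
    t2_selection_start = max(soa, t1_selection_done)
    rt2 = (t2_selection_start + select + execute) - soa
    return rt1, rt2

# The sooner the tone follows the light, the longer the foot response takes.
for soa in (50, 100, 200, 300):
    print(soa, prp(soa)[1])
```

With these toy numbers, a tone arriving 50 ms after the light yields a slowed foot response, while a tone arriving 300 ms later, after the hand response has already been selected, suffers no delay at all.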
One way you can see attention playing out in action selection is in people’s looking behavior. It’s not uncommon to see heads turning synchronously when a bunch of oglers notices an attractive passerby. Outside of group settings, eye movements of individuals acting alone provide a window to attentional states. By recording where individuals look, researchers have studied what captures people’s attention and, therefore, how attention works.
An effective way to show where people look is to place dots on the places they gazed at. Plunking dots where the observers’ eyes came to rest on inspected stimuli shows where the fovea (the part of the retina used for object perception) settled to get sustained exposure to regions of interest. Between those visual fixations, the eyes jumped, or made saccades, from the French word for “jolts” or “jerks.”
Some of the earliest imposed-dot pictures came from a Russian physiologist named Alfred Yarbus, who used primitive but scientifically path-breaking equipment to learn how people look at pictures. He found that people inspect pictures in nonrandom ways.16 When they were shown a picture of a girl, for instance, they tended to look at the girl’s eyes and mouth. They looked much less at her chin, ears, or hair. Yarbus showed that where observers looked depended on what they were looking for. The same picture attracted different gaze patterns depending on what question observers were trying to answer.
If you look where you attend, does that mean you attend only where you look? Can you attend to other places as well? What internal processes cause your attention to be drawn to a given location? Is attention always drawn to locations? And what, if anything, does the jungle principle have to say about these matters?
Moving your eyes from place to place hardly seems like a series of victories, yet it is. For your eyes to jump to some location, internal processes must pull your eyes from where they are to some other spot. There are many contenders for where you look. The number of places to which you can direct your gaze is effectively infinite. An inner agent pulling for one of those places has much at stake in directing gaze to that site. Like a business that goes belly-up if it’s unnoticed, a place that rarely gets a glance may lose its claim to neural real estate.
Vision scientists have conducted many studies of the factors that predict looking behavior. These findings are interesting not only for students of vision and the brain but also for people with practical concerns. Advertisers want people to look at their wares. Educators want students to look at their texts. Where people look can affect what they buy and what they study.
In vision laboratories, it has been found that bright, unexpected stimuli tend to attract attention. So pronounced is this effect that the tendency to look at such stimuli is considered a reflex—the so-called orienting reflex. The orienting reflex doesn’t just include turning the eyes toward alerting stimuli. The head and entire body can point in that direction. As a result, if you’re Ernesto and your name is mentioned at a cocktail party or in a classroom demonstration, you may turn your torso toward the person who names you.
A reflex is a response that is automatic and tightly coupled to its trigger stimulus. A bright spot of light coming on at an unexpected location draws visual attention to that location, not somewhere else. The ensuing response occurs very quickly, as if the neural gnomes governing the orienting reflex are kings and queens of the jungle. Their royal status, measured by the immediacy of the responses they trigger, is derived from their privileged access to the system responsible for generating saccades, which in turn carry out the precious function of exposing the brain to selected stimuli.17
Even saccade latencies can benefit from attention, as shown in experiments by Michael Posner and his colleagues at the University of Oregon.18 Posner asked university students to perform a simple task. “When you see the second stimulus in each experimental trial,” Posner said, “press a key as quickly as you can.”19
Posner knew the second stimulus would be a dot that would appear on the left or right of a computer screen. He also knew that the first stimulus could be an arrow pointing to the left or right, or a double-sided arrow, the kind that points two ways at once. Any of these three stimuli could appear in the center of the screen. The target dot that followed then appeared on the left or right.
The interesting feature of this experiment was that when Posner showed a left-pointing arrow, the next dot he usually showed was on the left, and when he showed a right-pointing arrow, the next dot he usually showed was on the right. However, on some occasions, after Posner showed an arrow pointing one way or the other, he (or his computer) showed a dot on the other side of the screen. The result was that detection latencies were longest in the invalid precue condition, intermediate in the neutral precue condition, and shortest in the valid precue condition. Participants were reliably fastest to press the “now-I-see-it” key when the arrow correctly informed them where the target dot would appear. They were reliably slowest when the arrow incorrectly informed them where the target dot would appear. And they were reliably in between when the precue was uninformative (when it was a two-sided arrow).
What accounts for this pattern of results? Posner argued that attention could be voluntarily shifted toward the cued location based on the precue. If the validity of the precue was high enough (80%), participants could direct their attention to the precued site, which led to short detection latencies. However, if the target dot appeared on the opposite side of the screen, the detection time was much longer—longer even than in the control condition, when the precue was uninformative. Evidently, participants could take advantage of the precue and, as a result, abbreviate their detection times. This outcome is remarkable considering how short the detection times were to begin with—about a quarter of a second.
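The design can be made concrete with a small simulation. The latencies and costs below are illustrative stand-ins, not Posner’s measurements; only the qualitative ordering of the three conditions, valid fastest, neutral in between, invalid slowest, is the point.

```python
import random

# Illustrative latencies in milliseconds -- toy numbers, not Posner's data.
NEUTRAL_RT = 250      # double-sided arrow: attention stays at center
VALID_BENEFIT = 30    # attention is already waiting at the target site
INVALID_COST = 40     # attention must disengage and cross the screen

def detection_time(precue, target_side):
    """Toy model of detection latency in a Posner cueing trial."""
    if precue == "neutral":
        return NEUTRAL_RT
    if precue == target_side:              # valid precue
        return NEUTRAL_RT - VALID_BENEFIT
    return NEUTRAL_RT + INVALID_COST       # invalid precue

def run_trials(n=10_000, validity=0.8):
    """On arrow trials, the dot follows the arrow `validity` of the time."""
    times = {"valid": [], "invalid": [], "neutral": []}
    for _ in range(n):
        cue = random.choice(["left", "right", "neutral"])
        if cue == "neutral":
            side = random.choice(["left", "right"])
            times["neutral"].append(detection_time(cue, side))
        else:
            side = cue if random.random() < validity else (
                "right" if cue == "left" else "left")
            kind = "valid" if side == cue else "invalid"
            times[kind].append(detection_time(cue, side))
    return {k: sum(v) / len(v) for k, v in times.items()}

print(run_trials())  # valid < neutral < invalid
```

Running this reproduces the ordering Posner found: precues that usually tell the truth buy speed on most trials at the price of slowness on the few trials where they mislead.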
Was this effect just a result of eye movements? Did people in the experiment simply look to the right if the arrow pointed to the right, or look to the left if the arrow pointed to the left? Looking at the appropriate location would have made it easier to detect the stimulus that happened to appear there, but this might not tell you about attention per se—only that you can see something more easily (or report it more quickly) if you’re looking at it, which is not very surprising.
Posner showed that the valid precue advantage wasn’t just due to anticipatory eye movements. When he recorded his subjects’ eye positions, he obtained shorter detection times for correctly precued stimuli even if the eyes pointed straight ahead after the precue arrow appeared. Similarly, he obtained longer detection times for incorrectly precued stimuli even if the eyes remained fixed in the middle of the screen. Thus, attention is dissociable from eye positions. Where you look may reflect where you attend, but where you attend can be divorced from where you direct your eyes. Basketball players know this. They try to fake out their opponents by looking one way before bounding off in the other direction.
There are at least two reasons why eye movements are dissociable from attention. One is that shifts of attention may help trigger eye movements. Unless you believe eyes dart from place to place on a totally random basis, you need to assume that some inner state—call it attention—impels the eyes to dart where they do. The other reason why eye movements are dissociable from attention is that eye movements can be inhibited. You probably know this from personal experience. If you’re looking somewhere you shouldn’t, you can force your eyes to go somewhere else in a hurry.
Are Posner’s data explained, then, by the jungle principle? I think so. Detection times can be seen as times to complete two inner battles. The first battle is the one between detecting and not detecting a new stimulus. Recall that Posner’s participants merely pressed a button when a stimulus appeared. His participants didn’t have to press one button or another depending on which stimulus came on. All they had to do was detect the stimulus and show when they did this.
The second battle is the one between making a response and withholding it. Getting an informative prime in the Posner cuing paradigm presumably excited the neural systems associated with the relevant perceptual and response events. Presumably, too, getting an informative prime inhibited the neural systems associated with their antagonists. The combination of excitation and inhibition allowed for the speed-ups that occurred when stimulus onsets were accurately heralded. Excitation and inhibition probably also caused the slow-downs that occurred when the information given in advance was inaccurate.20
With all this talk of inner inhibition and excitation, you might wonder whether I’m being too militaristic. “Must you speak of such a hostile interior?” you might ask. “I hate the idea that my mind’s a jungle with feuding foes!” you might exclaim.
Let me rephrase. The claim that it’s a jungle in there really boils down to two overarching claims, neither of which needs to be controversial and both of which, when combined, are meant only to integrate what has been appreciated for a long time in cognitive psychology. One claim is that the nervous system relies on excitation and inhibition. The other is that a theory of mental function must not fall into the trap of positing executives who know more than others in the neural arboretum. Regarding the second claim, believing that there’s a Mr. or Ms. Know-It-All raises the question of how s/he knows all s/he does. Positing such an inner agent amounts to kicking the cognitive can down the road.
Granting these things, you may still ask whether both ingredients of the jungle principle—competition and cooperation—are needed to account for Posner’s data, not to mention the other data obtained in attention laboratories. “Maybe excitation alone can tell the story,” you might muse.21
One reason to posit inhibition is that if only excitation existed, the excitation could grow endlessly. Real physical and biological systems have limits. It’s unrealistic to say that excitation builds up with no end in sight.
“Well, all right,” you might reply, “but still, the excitation could subside over time.” To this I’d say, “Yes, it’s possible the excitation could die down passively. That would help solve the too-much-of-a-good-thing problem. But there’s a difficulty with relying entirely on passive decay. Attentional dynamics would slow to a crawl. For speedy changes, the best mechanism is one that manages quick shifts of inhibition and excitation. The ups and downs, suitably dialed, can produce the rapid shifts of attention that all of us display.”
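The point about passive decay being too sluggish can be made concrete with a toy model. The decay and inhibition rates below are arbitrary choices of mine, picked only to show the qualitative contrast, not estimates of any real neural parameters.

```python
def steps_to_quiet(activation, decay=0.05, inhibition=0.0, threshold=0.1):
    """Count the time steps until activation falls below threshold.

    decay: fraction of activation lost passively each step.
    inhibition: amount actively subtracted each step.
    All values are arbitrary, chosen for illustration only.
    """
    steps = 0
    while activation > threshold:
        activation = activation * (1 - decay) - inhibition
        steps += 1
    return steps

# Starting from full activation, passive decay alone is slow to quiet down...
slow = steps_to_quiet(1.0)
# ...while adding active inhibition shuts the signal off far sooner.
fast = steps_to_quiet(1.0, inhibition=0.2)
print(slow, fast)
```

With these toy numbers, passive decay takes an order of magnitude longer to silence the same signal than decay plus active inhibition, which is the heart of the argument for quick, actively managed shifts.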
The third reason to allow for inhibition as well as excitation is that the nervous system relies extensively on both forms of interaction. You don’t have to look hard for inhibition in the nervous system; it’s pervasive, as neurophysiology attests (see Chapter 3). The nervous system relies on inhibition in all it does. There’s no reason why it shouldn’t rely on inhibition in attention or, for that matter, in any other cognitive domain.
Fourth and finally, there is direct evidence for inhibition in attention. Some of the most telling support comes from other work by Michael Posner. Here, he noticed that when participants were shown stimuli at locations where stimuli had recently been presented, the participants took longer than usual to detect them. It was as if the participants were inhibited from returning to those locations. Posner called this phenomenon inhibition of return.22
The term may remind you of the title of a famous novel by Thomas Wolfe, You Can’t Go Home Again.23 Posner, along with Asher Cohen, showed that you can go home again, at least in the modest world of looking back to a recent stimulus location. But if you do, it takes longer to get there than to go somewhere else.
Researchers who have studied inhibition of return have suggested that the phenomenon is explicable in terms of prediction and responsiveness. Being responsive to stimuli at new locations makes more sense than being responsive to stimuli at locations you’ve been to lately. If you’re a hungry canine who’s just wolfed down a field mouse, the odds are low that another mouse will emerge soon from that same silo. The rodent roommates at the wolfing site will get the message that they’d better lie low for a while. From the canine’s perspective, it makes more sense to look elsewhere than to keep monitoring the same spot. Such reasoning may underlie inhibition of return at the attentional level.