It's a Jungle in There: How Competition and Cooperation in the Brain Shape the Mind
David A. Rosenbaum
To convey the alternative hypotheses that Sternberg considered, let me tell you how I teach about them in my Intro-Cognitive class. First, I ask some students to come to the front of the classroom to serve as representatives of the memory items. One student agrees to represent “W,” another student agrees to represent “G,” and so on. All the students stand in a line facing the class, and I ask another student to face this line of “memory elements.” That student's job is to play the role of stimulus generator. On each trial, he or she calls out a letter that may or may not be in the memory set. I then illustrate how, according to different models, the test letter could be classified as belonging to the set or not.
FIGURE 5. Possible outcomes of Sternberg's memory search experiment. Top left: Serial self-terminating search. Top right: Serial exhaustive search. Bottom left: Unlimited-capacity parallel search. Bottom right: Limited-capacity parallel search.
In one scenario, after I hear the generator call out “W,” for example, I run up to a student memory representative at one end of the line and ask him or her, “Are you W?” If the student answers “Yes,” I turn around and yell “Yes.” If the student answers “No,” I run up to the next student and ask, “Are you W?” If that student says “Yes,” I turn around and yell “Yes.” I continue this one-on-one interrogation, stopping if a student responds affirmatively or continuing to the next student if not. At the very end, if the last student in the line does not say “Yes” in response to the question, “Are you W?” I shout out “No.” This method is called serial self-terminating search. It's a memory search method that's serial because each item in the set is checked one at a time, and it's self-terminating because it stops on its own when a match is found. The serial self-terminating search method makes intuitive sense. If you look for something, you might as well stop when you've found it.
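Spelled out as explicit steps, the strategy looks like the following minimal Python sketch. It is only an illustration of the idea, not anything taken from Sternberg's experiment, and the letters are arbitrary.

```python
# A minimal sketch of serial self-terminating search: compare the probe with
# one memory item at a time and stop as soon as a match turns up.

def serial_self_terminating(memory_set, probe):
    comparisons = 0
    for item in memory_set:
        comparisons += 1
        if item == probe:
            return "Yes", comparisons   # a match -- stop right here
    return "No", comparisons            # checked everything, no match

# The probe "G" is found after 2 checks; "Z" requires all 4.
print(serial_self_terminating(["W", "G", "K", "T"], "G"))  # ('Yes', 2)
print(serial_self_terminating(["W", "G", "K", "T"], "Z"))  # ('No', 4)
```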
What prediction can be made from the serial self-terminating model? As shown in Figure 5, the model predicts that RTs for “Yes” responses should increase with the number of items in the memory set. It also predicts that RTs for “No” responses should likewise increase with the number of elements in the memory set, but with a slope about twice as steep as for “Yes” RTs. The reason is that “No” responses are given after all the items in the set have been checked. “Yes” responses, by contrast, are given, on average, after half the items have been checked.
Now consider another possible search model. It relies on serial exhaustive scanning. Here, each item is checked one at a time, and at the end of the series of checks, the “checker” determines whether a match was found. If a match was found, his or her response is “Yes,” but if a match was not found, his or her response is “No.” When I demonstrate this method to my students, I run up to one student after the other in the memory lineup and ask him or her, “Are you W?” (provided, of course, that “W” is the test stimulus). Each student answers “Yes” or “No.” When I finish polling all the students in the line, I ask out loud, “Did I hear a Yes?” Then I answer my own question. If I answer “Yes,” I yell out “Yes.” If I answer “No,” I yell out “No.” My final answer constitutes my response to the original query. When I play-act this model, the students laugh because the strategy seems ridiculous.
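The same kind of sketch, again purely illustrative, shows why the strategy strikes the students as wasteful: the loop keeps polling even after a match has been found.

```python
# A minimal sketch of serial exhaustive search: compare the probe with every
# memory item, then answer only after the whole list has been polled.

def serial_exhaustive(memory_set, probe):
    match_found = False
    comparisons = 0
    for item in memory_set:
        comparisons += 1
        if item == probe:
            match_found = True          # note the match, but keep going anyway
    return ("Yes" if match_found else "No"), comparisons

# Both probes cost all 4 comparisons, match or no match.
print(serial_exhaustive(["W", "G", "K", "T"], "G"))  # ('Yes', 4)
print(serial_exhaustive(["W", "G", "K", "T"], "Z"))  # ('No', 4)
```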
The model I just described is called serial exhaustive search because it involves a serial check (one item after another) and it's exhaustive; every item is checked. This model makes a different prediction about the relation between RT and memory set size than does the serial self-terminating model. As shown in Figure 5, the serial self-terminating model predicts that “No” RTs should increase with memory set size with a slope that is twice as large as the slope for “Yes” RTs. The serial exhaustive model predicts, by contrast, that “No” RTs and “Yes” RTs should increase at the same rate with the number of memory elements. The reason is that all the items in the memory set are always checked, regardless of whether the response ultimately turns out to be “Yes” or “No.”
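To see the two serial predictions side by side, here is a small sketch of the expected number of comparisons each model implies, assuming a “Yes” probe is equally likely to match any position in the set. The per-comparison time is an arbitrary placeholder, not a value from the data.

```python
# Expected comparisons for the two serial models. The timing constant is
# arbitrary and serves only to turn comparison counts into illustrative RTs.

T_COMPARE = 50  # ms per comparison, purely a placeholder

print("set size | self-terminating Yes/No | exhaustive Yes/No")
for n in (1, 2, 4, 6):
    st_yes = (n + 1) / 2 * T_COMPARE   # stop, on average, halfway through
    st_no = n * T_COMPARE              # must check everything before saying "No"
    ex = n * T_COMPARE                 # always check everything
    print(f"{n:^8} | {st_yes:>10.0f} / {st_no:<10.0f} | {ex:.0f} / {ex:.0f}")

# Self-terminating: the "No" line climbs twice as fast as the "Yes" line
# (Figure 5, top left). Exhaustive: the two lines climb together (top right).
```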
Before I show the students the data from Sternberg's experiment, I ask them to consider one more model. According to this other model, there's no little man or woman in the brain who checks each item one after the other. Instead, all the items are checked at the same time, in a so-called parallel search. I illustrate this method by asking the students in the lineup to shout “Yes” if they hear their letter name or to shout “No” if they don't hear their letter name. If a “Yes” is shouted by someone in the lineup, then that's the response. If only responses of “No” are shouted, then the response is “No.” This model (without special assumptions) predicts that RTs should not depend on memory set size, at least if the number of shouters is unimportant. This is a different prediction from the ones made by the two previous models, both of which predicted that RT would increase as the memory set grew.
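For completeness, here is the same kind of sketch for the unlimited-capacity parallel idea. The list comprehension in the code is only a stand-in for comparisons that the model assumes happen simultaneously.

```python
# A minimal sketch of unlimited-capacity parallel search: every memory item is
# compared with the probe "at once" (the comprehension below merely stands in
# for comparisons assumed to run simultaneously), so the predicted RT does not
# depend on how many items are in the set.

def parallel_search(memory_set, probe):
    votes = [item == probe for item in memory_set]
    return "Yes" if any(votes) else "No"

print(parallel_search(["W", "G", "K", "T"], "G"))  # Yes
print(parallel_search(["W", "G", "K", "T"], "Z"))  # No
```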
Before I show the data that Sternberg actually obtained, I ask the students in the classroom which model they think was actually supported in Sternberg’s experiment. About an equal number vote for the parallel model and the serial self-terminating model. Virtually no one votes for the serial exhaustive model.
Finally, I show the data. When I do, the students who have been paying attention gasp. Some of them drop their jaws in disbelief. “Are you sure you didn’t make a mistake here, Professor?” one of the students sometimes asks.
The reason the students are so surprised is that the results conform to the least popular, least intuitive model, the serial exhaustive model. When I ask the students how this could be, either they shrug and wait for my explanation or they offer an interesting rationale. “When you take a multiple-choice test,” they sometimes say, “it makes sense to check all your answers even if you find one that seems right at first.”
This rationale provides one possible reason to take the serial exhaustive model seriously. Another comes from a consideration of the slope of the straight line fitted to the mean RTs plotted as a function of the number of items in the memory set. The slope of this straight line provides an estimate of the speed of memory scanning, the rate at which the elements of the memory set are supposedly interrogated, one after the other. The rate is impressive: around 40 ms per item. If there's a little man or woman running from one memory item to the next, he or she is a quick little devil! So impressive is the scanning rate that Sternberg titled his paper “High-Speed Scanning in Human Memory.”
My students are duly impressed with how quickly the little men or women in their brains run around. “Wow!” some of them exclaim. “That’s really fast!”
“Yes it is,” I reply. “But hang on a moment,” I continue. “Before you get too excited by this estimate of memory search speed, let’s do the following simple task. I’ll ask you a question and then I’d like you to yell out your answer as quickly as you can. Ready?” I wait for their grudging affirmation and then ask them, with as much rah-rah as I can muster to spur their enthusiasm, “Is the following a word?…Blig!”
“No,” the students reply a fraction of a second later. I pause a moment to see whether any of them get the point. I remind them that if each of them searched their memory at a rate of 40 ms per item, they wouldn’t be able to say whether “blig” is a word so quickly. A typical college student knows about 40,000 words, I tell the students. How long, then, would it take to determine whether “blig” is a word if each item in memory were checked at a rate of 40 ms per item? The answer is 40,000 × 40 ms = 1,600,000 ms = 1,600 s, or 26.67 minutes!
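For anyone who wants to verify the arithmetic, a few lines do it, using the same rough 40,000-word vocabulary figure given in the text.

```python
# Checking the arithmetic: ~40,000 known words scanned at 40 ms apiece.
n_words, ms_per_item = 40_000, 40
total_ms = n_words * ms_per_item
print(total_ms, "ms =", total_ms / 1000, "s =", round(total_ms / 60_000, 2), "min")
# 1600000 ms = 1600.0 s = 26.67 min
```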
What's wrong here? How can it take just a fraction of a second to indicate that “blig” is a non-word, though the rate of memory scanning (40 ms per item) predicts that it should take nearly half an hour to say it's not? One possibility is that serial exhaustive scanning applies only to very small memory sets. Logically, it's hard to eliminate this possibility, but it would be preferable to avoid special accounts for special circumstances. An explanation that applies to all circumstances would be simpler. It's better to pursue simpler explanations than complicated ones, a principle known as Occam's razor.¹¹
Perhaps there's some other model that can account for the data from Sternberg's small-memory-set task as well as the data from the lexical decision task: “Is blig a word?” Though I haven't described much more data from the lexical decision task—I'll say more about the task later in this chapter—I can share with you that I think there is such a model. As you can anticipate, I think it's one that accords with the jungle principle.
Suppose memory elements have neural representatives that clamor for activation. When some of those elements are identified as part of a memory set for a Sternberg experiment, they get excited. However, the degree to which they get excited is limited by the number of elements in the activated set, as shown in the bottom right panel of Figure 5.¹² As more elements occupy the set, the less loudly any of them can shout, or equivalently, the less clearly any of them can be heard by the response selector. This model can be expressed mathematically in a way that predicts that RTs will rise less and less as more items are activated. What's nice about the model is that it allows all the choice RTs to line up on one theoretical curve, a curve that contains the choice RTs for small memory sets like those studied by Sternberg, and choice RTs for immense memory sets like those probed in lexical decision experiments. No special account is needed for making choices in one part of the range or the other, so this is a simpler and therefore preferable model.¹³
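The equation itself is not spelled out here, so the following sketch is only one assumed way to get the qualitative shape: let each pre-activated element's effective signal shrink as the set grows, so that RT climbs with set size by ever-smaller increments. The logarithmic form and both constants are placeholders chosen purely for illustration.

```python
# An assumed, purely illustrative formalization of a limited-capacity parallel
# model: RT grows with the number of activated elements, but by smaller and
# smaller increments. The logarithmic form and both constants are placeholders.

import math

BASE_RT = 400.0   # ms of "everything else" (encoding, responding) -- assumed
SCALE = 100.0     # ms scaling constant -- assumed

def predicted_rt(set_size):
    return BASE_RT + SCALE * math.log(set_size + 1)

for n in (1, 2, 4, 6, 40_000):        # tiny Sternberg sets vs. a whole lexicon
    print(n, round(predicted_rt(n)), "ms")

# Each added item raises RT by less than the one before, and even a 40,000-item
# "set" lands around a second and a half -- nowhere near the half hour implied
# by strictly serial scanning.
```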
Cognitive psychologists have a name for the model I've just described. They (we) call it a limited-capacity parallel search model. You already know what a parallel search model is. It's one where all the elements are searched at once. The way I prefer to think of parallel search is that all the elements clamor for activation simultaneously, occupying an environment where it's up to them to get the activation they want. The way I think about parallel search as opposed to serial search is not just in terms of simultaneous versus sequential evaluation, but also in terms of the number of searchers. Just one searcher is involved in serial search, but many searchers are involved in parallel search. The many searchers in parallel search are all the cognitive creatures in the cranium, hoping, as it were, to be called upon. Some of those creatures are more eager than others. The ones who are especially eager are the ones encouraged by being selected for membership in the active memory set. Others not in that set are more remote or more retiring. Because parallel search avoids the single searcher that serial search requires, there is no need to establish a roadmap for where and when he, she, or it should search. With a parallel model, no itinerary is needed.
What is a limited-capacity parallel search model? It's a model in which the capacity of the system to support parallel search is restricted. As more elements clamor for activation, the rate at which each of them can accrue activation following presentation of a test stimulus dwindles, as if some resource is in short supply. No matter what that resource may be—oxygen, glucose, or something else—the activated elements compete for it.¹⁴
How does the competition work? Are the pre-activated elements doled out their needed elixir in inverse proportion to their number, or do the pre-activated elements duke it out, as it were, inhibiting each other to a degree that depends on how many of them there are?
I favor the “duke-it-out” option over the “doler” option because, with the doler option, you have to assume some special energy source distributed to the memory elements. What that neural tonic might be is a mystery. It could be oxygen or glucose, both of which are needed for neurons to survive, but no one has been able to pin it down; studies of attention have failed to identify any single resource for which mental elements compete. Another problem with the doler option is that someone or something must do the doling. Postulating such an agent raises the question of who's in charge. If activated memory elements inhibit each other, no homunculus is needed to determine how the doling should be done. So I prefer the duke-it-out view of memory, where all the pre-activated memory elements compete for recognition. The more of them that are pre-activated, the tougher the fight.
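Here is one way, again only an assumed illustration rather than the model's actual equations, to contrast the two options: under the “doler” scheme a fixed resource is divided among the pre-activated elements, while under the “duke-it-out” scheme each element's drive is dragged down by inhibition from its competitors, with no distributor needed.

```python
# Two assumed, purely illustrative rationing schemes. In both, each element
# gets weaker as the pre-activated crowd grows; they differ in whether a
# central agent does the rationing or the elements do it to one another.

def doler_rate(n_active, total_resource=1.0):
    # A "doler" divides a fixed resource in inverse proportion to the number
    # of pre-activated elements.
    return total_resource / n_active

def duke_it_out_rate(n_active, drive=1.0, inhibition=0.25):
    # Each element's drive is reduced by inhibition from the other
    # n_active - 1 elements -- no distributor required.
    return drive / (1.0 + inhibition * (n_active - 1))

for n in (1, 2, 4, 6):
    print(n, round(doler_rate(n), 2), round(duke_it_out_rate(n), 2))
```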
Deciding whether competition is really needed in a model of cognition can be tricky. In the context you’ve just been considering—the recognition task of Sternberg—competition can be avoided altogether, even granting a limited-capacity parallel search, by saying that the response selector has a harder time hearing a “Yes” amidst a chorus of many responses of “No” than hearing a “Yes” amidst a chorus of fewer responses of “No.” Such a simple fact of discrimination may explain the data without the need for squabbling among cognitive contenders.