The First Word: The Search for the Origins of Language

Homesign may represent an extreme example of the way that gesture and speech interact, but other recent experiments have demonstrated how speech and gesture can depend on each other. It’s been shown that adults will gesture differently depending on the language they are speaking and the way that their language encodes specific concepts, like action. For example, experimenters have compared the idiosyncratic ways that Turkish and English speakers describe a cartoon that depicts a character rolling down a hill. Asli Özyürek, a research associate at the Max Planck Institute for Psycholinguistics, compared the performance of children and adults in this task. She showed that children initially produce the same kinds of gestures regardless of the language they are speaking. It takes a while for gesture to take on the characteristic forms of a specific language. When it does, people change their gestures depending on the syntax of the language they are speaking. At this stage, instead of gesture merely providing occasional, supplementary meaning to speech without being connected to it in any real way, language and gesture appear to interact online, in real time, as expression unfolds.

In another experiment Goldin-Meadow asked children and adults to solve a particular type of math problem.[13]
After they completed the task, the participants were asked to remember a list of words (for the children) and letters (for the adults). Subjects were then asked to explain at a blackboard how they had solved the problem. Goldin-Meadow and her colleagues found that when the experimental subjects gestured during their explanation, they later remembered more from the word list than when they did not gesture. She noted that while people tend to think of gesturing as reflecting an individual’s mental state, it appears that gesture contributes to shaping that state. In the case of her subjects, their gesturing somehow lightened the mental load, allowing them to devote more resources to memory.

Gesture interacts with thought and language in other complicated ways. In another experiment Goldin-Meadow asked a group of children to solve a different kind of problem.[14]
She then videotaped them describing the solution and noted the way they gestured as they answered. In one case, the children were asked if the amount of water in two identical glasses was the same. (It was.) The water from one of the glasses was then poured into a low, wide dish. The children were asked again if the amount of water was the same. They said it wasn’t. They justified their response by describing the height of the water, explaining, “It’s different because this one is taller than that one.” As they spoke, some of the children produced what Goldin-Meadow calls a gesture-speech match; that is, they said the amounts of water in the glass and the dish were unequal, and as they did, they indicated the different heights of the water with their gestures (one hand at one height, the other hand at the other). Other children who got the problem wrong showed an interesting mismatch between their gesture and their speech. Although these children also said that the amount of the water was different because the height was different, gesturally they indicated the width of the dishes. “This information,” said Goldin-Meadow, “when integrated with the information in speech, hints at the correct answer—the water may be higher but it’s also skinnier.”

The mismatch children’s hand movements suggested that they unconsciously knew the correct response. And it turned out that when, after the initial experiment, these children were taught the relationship between the two amounts of water, they came much closer to comprehension than those whose verbal and gestural answers had matched and been wrong.

Gestures also affect listeners. In another experiment children were shown a picture of a character and later asked what he had been wearing. As the researcher posed the question, she made a hat gesture above her head. The children said that the character was wearing a hat even though he wasn’t.

Such complicated dependencies and interactions demonstrate that speech and gesture are part of the same system, say Goldin-Meadow and other specialists. Moreover, this system, made up of the two semi-independent subsystems of speech and gesture, is also closely connected to systems of thought. Perhaps we need an entirely new word for intentional communication that includes both gesture and speech. Whatever that word should be, Goldin-Meadow and others have demonstrated that this communication is fundamentally embodied.[15]

The most important effect of this research is that it makes it impossible to engage with the evolution of modern language without also considering the evolution of human gesture. Precisely how gesture and speech may have interacted since we split from our common ancestors with chimpanzees is still debated. Michael Corballis, who wrote From Hand to Mouth: The Origins of Language, has suggested that quite complicated manual, and possibly facial, gesture may have preceded speech by a significant margin, arising two million years ago when the brains of our ancestors underwent a dramatic burst in size. The transition from this gesture language to autonomous speech would have occurred gradually, driven by speech’s many benefits, such as communication over long distances and the freeing of the hands for other tasks. Other researchers stress how integral gesture is to speech today, arguing that even if the balance of speech and gesture has shifted within human communication, it is unlikely that gesture evolved first without any form of speech. David McNeill, head of the well-known McNeill Laboratory Center for Gesture and Speech Research at the University of Chicago, and colleagues propose that from the very beginning it was the combination of speech and gesture that was selected in evolution. What about the other side of the coin—what about speech? It is not as ancient as gesture, but when did it evolve? And how closely related is speech to the vocal communication of other animals?

8. You have speech

Even though more research has been conducted on primate vocalizations than on primate gesture, it has been considerably less productive. Vocalization in nonhuman animals is much less flexible than gesture. Most vocalizations, like alarm calls, seem to be instinctive and specific to the species that produces them. Many kinds of animals that are raised in isolation or fostered by another species still grow up to produce the calls of their own kind. Researchers at the Neurosciences Institute in San Diego transplanted brain tissue from the Japanese quail to the domestic chicken; the resulting birds, called chimeras, spontaneously produced some quail calls as they matured.[1]
And unlike human talkers, vocalizing animals seem to be pretty indifferent to their listeners. Vervets, for example, typically produce alarm calls whether there are other monkeys around or not. Even though we still have a lot to learn about calls in the wild, it appears that there are relatively few novel calls in ape species. What’s more, apes don’t seem to make individually distinctive calls, even though other monkeys—which are more distantly related to us—do.

One of the biggest differences between ape gesture and vocalizing is that many communicative gestures appear to be voluntary and intentional in a way that sound is not. Still, the involuntary nature of animal vocalizations has been somewhat exaggerated. It is said, for example, that when apes make a sound it is always an emotional response and not really generated by choice (in contrast with gesture, which is demonstrably voluntary). In recent years, this position has had to shift to accommodate some interesting findings about the rudiments of control in the vocal domain. Evidence exists, for example, that chimps can suppress calls in dangerous situations where a loud noise would draw attention to them. Some orangutans make kissing sounds when they bed down for the night. The kissing is not instinctive but volitional, one of those cultural traditions that distinguish groups within a species from one another.

In a recent experiment Katie Slocombe and Klaus Zuberbühler (who earlier demonstrated the ability of zoo chimpanzees to distinguish between types of food with wordlike calls) found that wild chimpanzees seem to adjust their screams based on the role they play in a fight. The researchers looked at two different types of screams in the wild chimpanzees of the Budongo Forest in Uganda. In a conflict situation, the animals typically produce a victim scream, in which the pitch is very consistent, and an aggressor scream, in which the pitch varies, with a fall at the end. Other chimps appear to use this information, said Slocombe. The researcher witnessed one exchange in which a young male was harassing a female chimp that was giving loud victim screams in response. At one point, said Slocombe, the female had clearly had enough and began instead to make aggressor screams back at the young male. She was then joined by another female in retaliating against the male. The second female had been out of sight of the fight, so she must have used the information in the first female’s scream to make her decision. “Normally,” said Slocombe, “chimpanzees will see parts of the fight, and therefore it is impossible to tell if they are attending to the information in the screams or just what they see.”

Slocombe was interested in establishing whether any particular information about a given situation was reliably communicated by the chimpanzee screams. She recorded examples of victim screams and noted the circumstances in which they occurred. An analysis of her recordings showed that it was possible to distinguish from the screams alone between high-risk situations and low-risk ones. In the first case, the screams tended to be long and high-pitched, whereas in low-risk situations the screams were shorter and lower in pitch.

There are other intriguing connections between the way we use our mouths and the way other apes do. Researchers have noted a peculiar feature of gesture that appears to be shared between humans and chimpanzees. Imagine a child learning how to write, his hand determinedly grasping the pencil and his tongue sticking out of the side of his mouth. Or visualize a seamstress biting her lip as she sews a small stitch. Such unconscious mouth movements often accompany fine hand movement in humans. Of course, mouth and hand movements co-occur with speech and gesture, but in this case it seems that the mouth movement follows the hands (not the other way around). Experiments have shown that fine motor manipulation of objects by chimps is often accompanied by sympathetic mouth movements. The finer the hand movements are, the more chimps seem to move their mouths. David Leavens suggests that the basic connection between mouth and hand in primates could date back at least fourteen million years, to the common ancestor of humans and orangutans.

Despite such new insights into the utterances of other apes, a vast gap remains between the apparent vocal abilities of all primates and the speech abilities of human beings. Speech starts simply enough with air in the lungs. The air is forcefully expelled in an exhalation, and it makes sound because of the parts of the body it blows over and through—the vibrating vocal cords, the flapping tongue, and the throat and mouth, which rapidly open and close in an odd, yapping munch. It’s easy to underestimate the athletic precision employed by the many muscles of the face, tongue, and throat in orchestrating speech. When you talk, your face has more moves than LeBron James.

It takes at least ten years for a child to learn to coordinate lips, tongue, mouth, and breath with the exacting fine motor control that adults use when they talk. To get an idea of the continuous and complicated changes your vocal tract goes through in the creation of speech, read the next paragraph silently, letting your mouth move but making no sound—just feel the process.

What’s amazing about speech is that when you’re on the receiving end, listening to the noise that comes out of people’s mouths, you instantaneously hear meaningful language. Yet speech is just sound, a semicontinuous buzz that fluctuates rapidly and regularly. Frequencies rise and fall, harmonics within the frequencies change their relationships to one another, air turbulence increases and dies away. It gets loud, and then it gets quiet.

The rate of the vocal cords’ vibration is called the fundamental frequency, an important component of speech. Perhaps the most significant aspects of the sound we make are the formant frequencies, the set of frequencies created by the entire shape of the vocal tract. When you whisper, your vocal cords don’t vibrate and there is no fundamental frequency, but people can still understand you because of the formant frequencies in the sound.
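
To make the distinction concrete: the fundamental frequency of a voiced stretch of speech can be estimated from a recording by autocorrelation, finding the time lag at which the waveform best repeats itself. Here is a minimal Python sketch under that assumption; it handles a single voiced frame over a typical adult pitch range, and real pitch trackers are considerably more careful:

    import numpy as np

    def estimate_f0(frame, sample_rate, fmin=75.0, fmax=400.0):
        # Estimate the fundamental frequency (Hz) of one voiced frame
        # by finding the autocorrelation peak within the pitch range.
        frame = frame - np.mean(frame)  # remove any DC offset
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
        best_lag = lo + int(np.argmax(corr[lo:hi]))
        return sample_rate / best_lag

Formants, by contrast, would be read off the broad peaks of the frame’s spectral envelope, which reflect the shape of the vocal tract rather than the rate of vocal cord vibration; that is why a whisper, with no fundamental frequency at all, still carries intelligible formants.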

Overall, the variations in loudness, pitch, and length in speech that we think of as the intonation of an utterance help structure the speech signal while also contributing to its meaning. Prosody, the rise and fall of pitch and loudness, can convey emotion, signal contrast, and help distinguish objects in time and space (“I meant this one, not that one”). Prosodic meaning can be holistic, like gesture. It can signal to the listener what a speaker thinks about what he is saying—how sure he is of something, whether it makes him sad or happy. When people make errors in what they are saying, they can use intonation to guide listeners to the right interpretation. Prosody can also mark structural boundaries in speech. At the end of a clause or phrase, speakers will typically lengthen the final stressed syllable, insert a pause, or produce a particular pitch movement.

Even though we hear one discrete word after another when listening to a speaker, there’s no real silence between the words in any given utterance, so comprehension needs to happen quickly. Whatever silence does fall between words does so mostly as a matter of coincidence—as a rule, when sounds like k and p are made (as at the beginning and end of “cup”). These consonants are uttered by completely, if briefly, blocking the air flowing from your lungs. (Make a k sound, but don’t release it, and then try to breathe.) So while a sentence like “Do you want a cup of decaffeinated coffee?” may be written with lots of white space to signify word breaks, the small silences within the sound stream don’t necessarily correspond to the points in between words.

The beginning of speech is found in the babbling of babies. At about five months children start to make their first speech sounds. Researchers say that when babies babble, they produce all the possible sounds of all human languages, randomly generating phonemes from Japanese to English to Swahili. As children learn the language of their parents, they narrow their sound repertoire to fit the model to which they are exposed. They begin to produce not just the sounds of their native language but also its classic intonation patterns. Children lose their polymath talents so effectively that they ultimately become unable to produce some language sounds. (Think about the difficulty Japanese speakers have pronouncing English l and r.)

While very few studies have been conducted on babbling in nonhuman species, SETI (Search for Extraterrestrial Intelligence) Institute researcher Laurance Doyle and biologist Brenda McCowan and colleagues discovered that dolphin infants also pass through a babbling phase. (In 2006 German researchers announced that baby bats babble as well.) In the dolphin investigation Doyle and McCowan used two mathematical tools known as Zipf’s law and entropy. Zipf’s law was first developed by the linguist George Zipf in the 1940s. Zipf got his graduate students to count how often particular letters appeared in different texts, like Ulysses, and plotted the frequency of each letter in descending order on a log scale. He found that the slope he had plotted had a –1 gradient. He went on to discover that most human languages, whether written or spoken, had approximately the same slope of –1. Zipf also established that completely disordered sets of symbols produce a slope of 0, meaning there is no complexity in the signal because all elements occur more or less equally often. Zipf applied the tool to babies’ babbling, and the resulting slope was closer to the horizontal, as it should be if infants run randomly through a large set of sounds in which there is little, if any, structure.
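
Zipf’s procedure is straightforward to replicate. Below is a minimal Python sketch (the corpus file name is a placeholder; any sizable text will do): count each letter’s frequency, rank the counts in descending order, and fit a least-squares line to log frequency against log rank. Text with Zipf-like structure should yield a slope near –1; uniformly random symbols should yield a slope near 0.

    import math
    from collections import Counter

    def zipf_slope(text):
        # Least-squares slope of log(frequency) vs. log(rank) for letter counts.
        counts = sorted(Counter(ch for ch in text.lower() if ch.isalpha()).values(),
                        reverse=True)
        xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
        ys = [math.log(freq) for freq in counts]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    print(zipf_slope(open("ulysses.txt").read()))  # placeholder file name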

When Doyle and McCowan applied Zipf’s law to dolphin communication, they discovered that, like human language, it had a slope of –1. A dolphin’s signal was not a random collection of different sounds but instead had structure and complexity. (Doyle and his colleagues also applied Zipf’s law to the signals produced by squirrel monkeys, whose slope, at –0.6, was not as steep as the one for humans and dolphins, suggesting a less complex form of vocalization.[2]) Moreover, the slope of baby dolphins’ vocalizations looked exactly like that of babbling infants, suggesting that the dolphins were practicing the sounds of their species, much as humans do, before they began to structure them in ordered ways.

The scientists also measured the entropy of dolphin communication. The application of entropy to information was developed by Claude Shannon, who used it to determine the effectiveness of phone signals by calculating how much information was actually passing through a given phone wire. Entropy can be measured regardless of what is being communicated, because instead of gauging meaning, it computes the information content of a signal. The more complex a signal is, the more information it can carry. Entropy can indicate the complexity of a signal like speech or whistling even if the person measuring the signal doesn’t know what it means. In fact, SETI plans to use entropy to evaluate signals from outer space: if we ever receive an intergalactic message, entropy can give us an idea of the intelligence of the beings that transmitted it, even if we can’t decode the message itself.
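
Shannon’s measure is simple to compute once a signal has been carved into discrete symbols. Here is a sketch of first-order entropy in Python, which looks only at how often each symbol occurs and needs no knowledge of what any symbol means:

    import math
    from collections import Counter

    def first_order_entropy(symbols):
        # Shannon entropy in bits per symbol, from symbol frequencies alone.
        counts = Counter(symbols)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

The same function applies whether the symbols are English letters, categorized dolphin whistles, or chunks of a signal from deep space; only the discretization step differs.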

The entropy level indicates the complexity of a signal, or how much information it might hold: it reflects the frequency of elements within the signal and how well what comes next can be predicted from what has come before. Human languages are approximately ninth-order entropy, which means that knowing the preceding sequence of up to nine words gives you a real chance of guessing the next word; context from more than nine words back adds essentially nothing to the prediction. The simplest forms of communication have first-order entropy.[3] Squirrel monkeys have second- or third-order entropy, and dolphins measure higher, around fourth-order. They may be even higher, but to establish that, we would need more data. Doyle plans to record a number of additional species, including various birds and humpback whales.
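
One way to make “nth-order entropy” concrete is to estimate how uncertain the next symbol is given the preceding n symbols, using n-gram counts. This is a standard estimator, offered here as an illustration rather than as the exact method Doyle and McCowan used:

    import math
    from collections import Counter

    def conditional_entropy(symbols, order):
        # Entropy (bits) of the next symbol given the previous `order` symbols,
        # estimated from n-gram counts over the sequence.
        symbols = list(symbols)
        contexts = Counter(tuple(symbols[i:i + order])
                           for i in range(len(symbols) - order))
        ngrams = Counter(tuple(symbols[i:i + order + 1])
                         for i in range(len(symbols) - order))
        total = sum(ngrams.values())
        return -sum((n / total) * math.log2(n / contexts[g[:-1]])
                    for g, n in ngrams.items())

For structured sequences this value keeps dropping as the order increases, up to the point where longer contexts stop helping; estimating it reliably at high orders requires enormous samples, which is why the dolphin figure remains tentative.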


Many of the researchers interviewed for this book would stop in the middle of a conversation to illustrate a point, whether it concerned the music of protolanguage or the way that whales have a kind of syntax, by imitating the precise sound they were discussing. Tecumseh Fitch sat at a restaurant table making singsong da-da da-da da-DA sounds. Katy Payne, the elephant researcher, whined, keened, and grunted like a humpback whale in a small office at Cornell. Michael Arbib, the neuroscientist, stopped to purse his lips and make sucking sounds. In a memorable radio interview, listeners heard the diminutive Jane Goodall hoot like a chimpanzee.

As well as demonstrating the point at hand, the researchers’ performances illustrated on another level one of the fundamental platforms of language—vocal imitation. Imitation is as crucial to the acquisition of speech as it is to learning gesture (another way in which these systems look like flip sides of the same coin). Humans are among the best vocal imitators in the animal world, and this is one area in which we are unique in our genetic neck of the woods. Even though chimpanzees do a great job of passing on gestural traditions and tool use in their various groups, they don’t appear to engage in a lot of imitation of one another’s cries and screeches. Orangutans must have some degree of imitation in the vocal domain; otherwise they couldn’t have developed the “goodnight kiss” tradition. But humans have taken the rudiments of this ability and become virtuosos.
