You Are Not a Gadget: A Manifesto

Rama’s canonical example is encapsulated in an experiment known as bouba/kiki. Rama presents test subjects with two words, both of which are pronounceable but meaningless in most languages: bouba and kiki.

Then he shows the subjects two images: one is a spiky, hystricine shape and the other a rounded cloud form. Match the words and the images! Of course, the spiky shape goes with kiki and the cloud matches bouba. This correlation is cross-cultural and appears to be a general truth for all of humankind.

The bouba/kiki experiment isolates one form of linguistic abstraction. “Boubaness” or “kikiness” arises from two stimuli that are otherwise utterly dissimilar: an image formed on the retina versus a sound activated in the cochlea of the ear. Such abstractions seem to be linked to the mental phenomenon of metaphor. For instance, Rama finds that patients who have lesions in a cross-modal brain region called the inferior parietal lobule have difficulty both with the bouba/kiki task and with interpreting proverbs or stories that have nonliteral meanings.

Rama’s experiments suggest that some metaphors can be understood as mild forms of synesthesia. In its more severe forms, synesthesia is an intriguing neurological anomaly in which a person’s sensory systems are crossed—for example, a color might be perceived as a sound.

What is the connection between the images and the sounds in Rama’s experiment? Well, from a mathematical point of view, kiki and the spiky shape both have “sharp” components that are not so pronounced in bouba; similar sharp components are present in the tongue and hand motions needed to make the kiki sound or draw the kiki picture.
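
To make those “sharp” components concrete, here is a toy sketch, my own illustration rather than anything from Rama’s lab: it scores the spikiness of a closed outline by how abruptly the outline turns at each vertex, a rough visual analogue of the abrupt articulations in the kiki sound. The shapes and numbers are invented for the example.

import math

def spikiness(points):
    """Mean absolute turning angle (radians) along a closed polygon.
    Spiky outlines turn sharply at their vertices; round ones turn gently."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)  # direction of the incoming edge
        a2 = math.atan2(y2 - y1, x2 - x1)  # direction of the outgoing edge
        d = abs(a2 - a1)
        total += min(d, 2 * math.pi - d)   # fold the difference into [0, pi]
    return total / n

# A sixteen-vertex outline with alternating radii (kiki-like spikes)
# versus a regular sixteen-gon standing in for a rounded blob (bouba-like).
star = [((1.0 if k % 2 == 0 else 0.4) * math.cos(2 * math.pi * k / 16),
         (1.0 if k % 2 == 0 else 0.4) * math.sin(2 * math.pi * k / 16))
        for k in range(16)]
blob = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
        for k in range(16)]

print(f"star: {spikiness(star):.2f}")  # large mean turning angle
print(f"blob: {spikiness(blob):.2f}")  # small mean turning angle

An analogous measure for sound would compare the share of high-frequency energy in kiki’s consonants against bouba’s rounded vowels; the claim is only that both measures pick out the same “sharpness,” not that the brain computes either of them.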

Rama suggests that cross-modal abstraction—the ability to make consistent connections across senses—might have initially evolved in lower primates as a better way to grasp branches. Here’s how it could have
happened: the cross-modal area of the brain might have evolved to link an oblique image hitting the retina (caused by viewing a tilted branch) with an “oblique” sequence of muscle twitches (leading the animal to grab the branch at an angle).

The remapping ability then became coopted for other kinds of abstraction that humans excel in, such as the bouba/kiki metaphor. This is a common phenomenon in evolution: a preexisting structure, slightly modified, takes on parallel yet dissimilar functions.

But Rama also wonders about other kinds of metaphors, ones that don’t obviously fall into the bouba/kiki category. In his current favorite example, Shakespeare has Romeo declare Juliet to be “the sun.” There is no obvious bouba/kiki-like dynamic that would link a young, female, doomed romantic heroine with a bright orb in the sky, yet the metaphor is immediately clear to anyone who hears it.

Meaning Might Arise from an Artificially Limited Vocabulary

A few years ago, when Rama and I ran into each other at a conference where we were both speaking, I made a simple suggestion to him about how to extend the bouba/kiki idea to Juliet and the sun.

Suppose you had a vocabulary of only one hundred words. (This experience will be familiar if you’ve ever traveled to a region where you don’t speak the language.) In that case, you’d have to use your small vocabulary creatively to get by. Now extend that condition to an extreme. Suppose you had a vocabulary of only four nouns: kiki, bouba, Juliet, and sun. When the choices are reduced, the importance of what might otherwise seem like trivial synesthetic or other elements of commonality is amplified.

Juliet is not spiky, so bouba or the sun, both being rounded, fit better than kiki. (If Juliet were given to angry outbursts of spiky noises, then kiki would be more of a contender, but that’s not our girl in this case.) There are a variety of other minor overlaps that make Juliet more sunlike than boubaish.

If a tiny vocabulary has to be stretched to cover a lot of territory, then any difference at all between the qualities of words is practically a world
of difference. The brain is so desirous of associations that it will then amplify any tiny potential linkage in order to get a usable one. (There’s infinitely more to the metaphor as it appears in the play, of course. Juliet sets like the sun, but when she dies, she doesn’t come back like it does. Or maybe the archetype of Juliet always returns, like the sun—a good metaphor breeds itself into a growing community of interacting ideas.)
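
A toy sketch can make that amplification concrete; this is my own illustration, with invented feature names and scores, not anything from the play or from Rama. Give each of the four nouns a few quality ratings, and let the best stand-in for a concept be the word whose qualities overlap it most:

FEATURES = ("spiky", "round", "warm", "radiant")  # invented qualities

concepts = {
    "kiki":   (1.0, 0.0, 0.2, 0.3),
    "bouba":  (0.0, 1.0, 0.3, 0.1),
    "sun":    (0.2, 0.9, 1.0, 1.0),
    "Juliet": (0.1, 0.6, 0.9, 0.8),
}

def cosine(u, v):
    """Overlap between two quality profiles, ignoring overall magnitude."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def best_metaphor(target, vocabulary):
    """Pick the available word whose qualities best overlap the target's."""
    return max((w for w in vocabulary if w != target),
               key=lambda w: cosine(concepts[target], concepts[w]))

print(best_metaphor("Juliet", concepts))  # -> 'sun', ahead of 'bouba' and 'kiki'

With all of English available, the sun’s small edge in overlap over bouba would be noise; with only three candidate words, it is decisive, which is exactly the amplification described above.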

Likewise, much of the most expressive slang comes from people with limited formal education who are making creative use of the words they know. This is true of pidgin languages, street slang, and so on. The most evocative words are often the most common ones that are used in the widest variety of ways. For example: the Yiddish nu? or the Spanish pues.

One reason the metaphor of the sun fascinates me is that it bears on a conflict that has been at the heart of information science since its inception: Can meaning be described compactly and precisely, or is it something that can emerge only in approximate form based on statistical associations between large numbers of components?

Mathematical expressions are compact and precise, and most early computer scientists assumed that at least part of language ought to display those qualities too.

I described above how statistical approaches to tasks like automatic language translation seem to be working better than compact, precise ones. I also argued against the likelihood of an initial, small, well-defined vocabulary in the evolution of language, and in favor of an emergent vocabulary that never became precisely defined.
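
For readers who want the statistical pole of that dichotomy made concrete, here is a minimal sketch, my illustration rather than any real system: approximate a word’s meaning by the company it keeps, so that two words count as similar when they share sentence neighbors. The four-sentence corpus is invented; real systems use vastly more data.

from collections import Counter
from itertools import combinations

corpus = [
    "the sun is bright and warm",
    "juliet is bright and warm",
    "the moon is pale and cold",
    "kiki is sharp and spiky",
]

# Count how often each pair of words shares a sentence.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[frozenset((a, b))] += 1

def similarity(w1, w2, vocab):
    """Overlap of co-occurrence profiles: words kept in the same company score high."""
    v1 = [cooc[frozenset((w1, w))] for w in vocab]
    v2 = [cooc[frozenset((w2, w))] for w in vocab]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = sum(a * a for a in v1) ** 0.5
    n2 = sum(b * b for b in v2) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

vocab = sorted({w for s in corpus for w in s.split()})
print(similarity("juliet", "sun", vocab))   # higher: many shared neighbors
print(similarity("juliet", "kiki", vocab))  # lower: few shared neighbors

Nothing in the sketch defines bright or warm compactly; the resemblance between juliet and sun emerges only from overlapping patterns of use, which is the approximate, statistical kind of meaning the question contemplates.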

There is, however, at least one other possibility I didn’t describe earlier: vocabulary could be emergent, but there could also be an outside factor that initially makes it difficult for a vocabulary to grow as large as the process of emergence might otherwise encourage.

The bouba/kiki dynamic, along with other similarity-detecting processes in the brain, can be imagined as the basis of the creation of an endless series of metaphors, which could correspond to a boundless vocabulary. But if this explanation is right, the metaphor of the sun might come about only in a situation in which the vocabulary is at least somewhat limited.

Imagine that you had an endless capacity for vocabulary at the same time that you were inventing language. In that case you could make up
an arbitrary new word for each new thing you had to say. A compressed vocabulary might engender less lazy, more evocative words.

If we had infinite brains, capable of using an infinite number of words, those words would mean nothing, because each one would have too specific a usage. Our early hominid ancestors were spared from that problem, but with the coming of the internet, we are in danger of encountering it now. Or, more precisely, we are in danger of pretending with such intensity that we are encountering it that it might as well be true.

Maybe the modest brain capacity of early hominids was the source of the limitation of vocabulary size. Whatever the cause, an initially limited vocabulary might be necessary for the emergence of an expressive language. Of course, the vocabulary can always grow later on, once the language has established itself. Modern English has a huge vocabulary.

Small Brains Might Have Saved Humanity from an Earlier Outbreak of Meaninglessness

If the computing clouds became effectively infinite, there would be a hypothetical danger that all possible interpolations of all possible words—novels, songs, and facial expressions—would cohabit a Borges-like infinite Wikipedia in the ether. Should that come about, all words would become meaningless, and all meaningful expression would become impossible. But, of course, the cloud will never be infinite.

*
Given my fetish for musical instruments, the NAMM show is one of the most dangerous—i.e., expensive—events for me to attend. I have learned to avoid it in the way a recovering gambler ought to avoid casinos.


*
The software I used for this was developed by a small company called Eyematic, where I served for a while as chief scientist. Eyematic has since folded, but Hartmut Neven and many of the original students started a successor company to salvage the software. That company was swallowed up by Google, but what Google plans to do with the stuff isn’t clear yet. I hope they’ll come up with some creative applications along with the expected searching of images on the net.

*
Current commercial displays are not quite aligned with human perception, so they can’t show all the colors we can see, but it is possible that future displays will show the complete gamut perceivable by humans.

PART FIVE
Future Humors

 

IN THE PREVIOUS SECTIONS, I’ve argued that when you deny the specialness of personhood, you elicit confused, inferior results from people. On the other hand, I’ve also argued that computationalism, a philosophical framework that doesn’t give people a special place, can be extremely useful in scientific speculations. When we want to understand ourselves in naturalistic terms, we must make use of a naturalistic philosophy that accounts for a degree of irreducible complexity, and until someone comes up with another idea, computationalism is the only path we have to do that.

I should also point out that computationalism can be helpful in certain engineering applications. A materialist approach to the human organism is, in fact, essential in some cases in which it isn’t necessarily easy to maintain.

For instance, I’ve worked on surgical simulation tools for many years, and in such instances I try to temporarily adopt a way of thinking about people’s bodies as if they were fundamentally no different from animals or sophisticated robots. It isn’t work I could do as well without the sense of distance and objectivity.

Unfortunately, we don’t have access at this time to a single philosophy that makes sense for all purposes, and we might
never find one. Treating people as nothing other than parts of nature is an uninspired basis for designing technologies that embody human aspirations. The inverse error is just as misguided: it’s a mistake to treat nature as a person. That is the error that yields confusions like intelligent design.

I’ve carved out a rough borderline between those situations in which it is beneficial to think of people as “special” and those in which it isn’t.

But I haven’t done enough.

It is also important to address the romantic appeal of cybernetic totalism. That appeal is undeniable.

Those who enter into the theater of computationalism are given all the mental solace that is usually associated with traditional religions. These include consolations for metaphysical yearnings, in the form of the race to climb to ever more “meta” or higher-level states of digital representation, and even a colorful eschatology, in the form of the Singularity. And, indeed, through the Singularity a hope of an afterlife is available to the most fervent believers.

Is it conceivable that a new digital humanism could offer romantic visions that are able to compete with this extraordinary spectacle? I have found that humanism provides an even more colorful, heroic, and seductive approach to technology.

This is about aesthetics and emotions, not rational argument. All I can do is tell you how it has been true for me, and hope that you might also find it to be true.

CHAPTER 14
Home at Last (My Love Affair with Bachelardian Neoteny)

HERE I PRESENT
my own romantic way to think about technology. It includes cephalopod envy, “postsymbolic communication,” and an idea of progress that is centered on enriching the depth of communication instead of the acquisition of powers. I believe that these ideas are only a few examples of many more awaiting discovery that will prove to be more seductive than cybernetic totalism.

The Evolutionary Strategy

Neoteny is an evolutionary strategy exhibited to varying degrees in different species, in which the characteristics of early development are drawn out and sustained into an individual organism’s chronological age.

For instance, humans exhibit neoteny more than horses. A newborn horse can stand on its own and already possesses many of the other skills of an adult horse. A human baby, by contrast, is more like a fetal horse. It is born without even the most basic abilities of an adult human, such as being able to move about.

Instead, these skills are learned during childhood. We smart mammals get that way by being dumber when we are born than our more instinctual cousins in the animal world. We enter the world essentially as fetuses in air. Neoteny opens a window to the world before our brains can be developed under the sole influence of instinct.

It is sometimes claimed that the level of neoteny in humans is not
fixed, that it has been rising over the course of human history. My purpose here isn’t to join in a debate about the semantics of nature and nurture. But I think it can certainly be said that neoteny is an immensely useful way of understanding the relationship between change in people and technology, and as with many aspects of our identity, we don’t know as much about the genetic component of neoteny as we surely will someday soon.

The phase of life we call “childhood” was greatly expanded in connection with the rise of literacy, because it takes time to learn to read. Illiterate children went to work in the fields as often as they were able, while those who learned to read spent time in an artificial, protected space called the classroom, an extended womb. It has even been claimed that the widespread acceptance of childhood as a familiar phase of human life only occurred in conjunction with the spread of the printing press.

Childhood becomes more innocent, protected, and concentrated with increased affluence. In part this is because there are fewer siblings to compete for the material booty and parental attention. An evolutionary psychologist might also argue that parents are motivated to become more “invested” in a child when there are fewer children to nurture.

With affluence comes extended childhood. It is a common observation that children enter the world of sexuality sooner than they used to, but that is only one side of the coin. Their sexuality also remains childlike for a longer period of time than it used to. The twenties are the new teens, and people in their thirties are often still dating, not having settled on a mate or made a decision about whether to have children or not.
