
Reverse engineering the human brain: Five parallel auditory pathways.96

The Visual System

 

We’ve made enough progress in understanding the coding of visual information that experimental retina implants have been developed and surgically installed in patients.97 However, because of the relative complexity of the visual system, our understanding of the processing of visual information lags behind our knowledge of the auditory regions. We have preliminary models of the transformations performed by two visual areas (called V1 and MT), although not at the individual neuron level. There are thirty-six other visual areas, and we will need to be able to scan these deeper regions at very high resolution or place precise sensors to ascertain their functions.

A pioneer in understanding visual processing is MIT’s Tomaso Poggio, who has distinguished its two tasks as identification and categorization.98 The former is relatively easy to understand, according to Poggio, and we have already designed experimental and commercial systems that are reasonably successful at identifying faces.99 These are used as part of security systems to control entry of personnel and in bank machines. Categorization (the ability to differentiate, for example, between a person and a car or between a dog and a cat) is a more complex matter, although progress has recently been made.100
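To make Poggio’s distinction concrete, here is a minimal sketch contrasting the two tasks: identification matches an input against stored examples of specific individuals, while categorization only decides which broad class the input belongs to. The feature vectors and the nearest-neighbor rule are illustrative assumptions, not a description of any deployed face-recognition system.

```python
import numpy as np

# Toy "face" descriptors: small feature vectors standing in for real measurements.
KNOWN_INDIVIDUALS = {
    "Alice": np.array([0.9, 0.1, 0.3]),
    "Bob":   np.array([0.2, 0.8, 0.5]),
}
CATEGORY_PROTOTYPES = {
    "person": np.array([0.5, 0.5, 0.4]),
    "car":    np.array([0.1, 0.1, 0.9]),
}

def nearest(query, candidates):
    """Return the label whose stored vector is closest to the query."""
    return min(candidates, key=lambda label: np.linalg.norm(query - candidates[label]))

def identify(query):
    # Identification: which specific, previously enrolled individual is this?
    return nearest(query, KNOWN_INDIVIDUALS)

def categorize(query):
    # Categorization: which broad class (person vs. car, dog vs. cat) is this?
    return nearest(query, CATEGORY_PROTOTYPES)

sample = np.array([0.85, 0.15, 0.35])
print("identified as:", identify(sample))     # closest enrolled individual
print("categorized as:", categorize(sample))  # closest class prototype
```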

Early (in evolutionary terms) layers of the visual system are largely a feedforward system, lacking feedback, in which increasingly sophisticated features are detected. Poggio and Maximilian Riesenhuber write that “single neurons in the macaque posterior inferotemporal cortex may be tuned to . . . a dictionary of thousands of complex shapes.” Evidence that recognition relies on this feedforward pass comes from MEG studies showing that the human visual system takes about 150 milliseconds to detect an object. This matches the latency of feature-detection cells in the inferotemporal cortex, so there does not appear to be time for feedback to play a role in these early decisions.
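The feedforward picture described here (simple features detected first, then combined into progressively more complex ones, with no feedback) can be illustrated with a toy hierarchy. The sketch below is a minimal illustration, not Poggio and Riesenhuber’s actual model; the layer sizes, the alternating filter-and-pool structure, and the random weights are assumptions for demonstration only.

```python
import numpy as np

# Toy feedforward hierarchy, loosely in the spirit of hierarchical feature models:
# alternating "simple" (filtering) and "complex" (max-pooling) stages,
# with no feedback connections. All sizes and weights are illustrative.

rng = np.random.default_rng(0)

def simple_layer(x, weights):
    """Detect features: linear filters followed by a threshold nonlinearity."""
    return np.maximum(weights @ x, 0.0)

def complex_layer(x, pool=2):
    """Gain tolerance to position/scale by max-pooling neighboring units."""
    trimmed = x[: len(x) - len(x) % pool]
    return trimmed.reshape(-1, pool).max(axis=1)

# Random filter banks standing in for learned feature detectors.
w1 = rng.standard_normal((64, 256))   # edge-like features
w2 = rng.standard_normal((32, 32))    # combinations of edges
w3 = rng.standard_normal((10, 16))    # "dictionary" of complex shapes

def feedforward(image_vector):
    h = complex_layer(simple_layer(image_vector, w1))
    h = complex_layer(simple_layer(h, w2))
    return simple_layer(h, w3)          # responses of shape-tuned units

responses = feedforward(rng.standard_normal(256))
print("most active shape-tuned unit:", int(responses.argmax()))
```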

Recent experiments have used a hierarchical approach in which features are detected and then analyzed by later layers of the system.101 From studies on macaque monkeys, neurons in the inferotemporal cortex appear to respond to complex features of the objects on which the animals were trained. While most of the neurons respond only to a particular view of the object, some are able to respond regardless of perspective. Other research on the visual system of the macaque monkey includes studies on many specific types of cells, connectivity patterns, and high-level descriptions of information flow.102

Extensive literature supports the use of what I call “hypothesis and test” in more complex pattern-recognition tasks. The cortex makes a guess about what it is seeing and then determines whether the features of what is actually in the field of view match its hypothesis.103 We are often more focused on the hypothesis than on the actual test, which explains why people often see and hear what they expect to perceive rather than what is actually there. “Hypothesis and test” is also a useful strategy in our computer-based pattern-recognition systems.
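As a rough illustration of the hypothesis-and-test idea, the sketch below guesses a candidate object from a few coarse cues and then checks the remaining features against that guess, falling back to the next candidate when the test fails. The feature names, templates, and scoring rule are all hypothetical, chosen only to make the control flow concrete.

```python
# Hypothetical templates: each candidate object is described by expected features.
TEMPLATES = {
    "cat":  {"size": "small", "ears": "pointed", "sound": "meow"},
    "dog":  {"size": "medium", "ears": "floppy", "sound": "bark"},
    "car":  {"size": "large", "ears": None, "sound": "engine"},
}

def hypothesize(cues):
    """Rank candidates by how well a few coarse cues match each template."""
    def score(template):
        return sum(template.get(k) == v for k, v in cues.items())
    return sorted(TEMPLATES, key=lambda name: score(TEMPLATES[name]), reverse=True)

def test(candidate, observed):
    """Verify the hypothesized object's full template against what is actually seen."""
    template = TEMPLATES[candidate]
    return all(observed.get(k) == v for k, v in template.items())

def recognize(cues, observed):
    # Try the strongest hypothesis first; only fall back when the test fails.
    for candidate in hypothesize(cues):
        if test(candidate, observed):
            return candidate
    return "unknown"

print(recognize(cues={"size": "small"},
                observed={"size": "small", "ears": "pointed", "sound": "meow"}))  # -> "cat"
```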

Although we have the illusion of receiving high-resolution images from our eyes, what the optic nerve actually sends to the brain is just outlines and clues about points of interest in our visual field. We then essentially hallucinate the world from cortical memories that interpret a series of extremely low-resolution movies arriving in parallel channels. In a 2001 study published in Nature, Frank S. Werblin, professor of molecular and cell biology at the University of California at Berkeley, and doctoral student Botond Roska, M.D., showed that the optic nerve carries ten to twelve output channels, each of which carries only minimal information about a given scene.104 One group of what are called ganglion cells sends information only about edges (changes in contrast). Another group detects only large areas of uniform color, whereas a third group is sensitive only to the backgrounds behind figures of interest.
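The paragraph above describes the retina decomposing a scene into a handful of sparse, specialized channels rather than transmitting a full image. The sketch below mimics that idea crudely with three toy channels (an edge map, a coarse brightness map, and the residual “background”); the specific filters and channel choices are illustrative assumptions, not Werblin and Roska’s actual decomposition.

```python
import numpy as np

def retina_channels(image):
    """Split a grayscale image into a few sparse, specialized channels.

    A loose caricature of parallel ganglion-cell outputs: each channel keeps
    only one narrow aspect of the scene, and together they carry far less
    than a full-resolution image.
    """
    # Channel 1: edges (local changes in contrast along one axis).
    edges = np.abs(np.diff(image, axis=0, prepend=image[:1]))

    # Channel 2: large areas of roughly uniform brightness (heavy downsampling).
    coarse = image.reshape(image.shape[0] // 8, 8, image.shape[1] // 8, 8).mean(axis=(1, 3))

    # Channel 3: background, i.e. what is left after removing strong edges.
    background = np.where(edges < edges.mean(), image, 0.0)

    return {"edges": edges, "coarse": coarse, "background": background}

scene = np.random.default_rng(1).random((64, 64))
channels = retina_channels(scene)
for name, channel in channels.items():
    print(name, channel.shape)
```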

 

Seven of the dozen separate movies that the eye extracts from a scene and sends to the brain.

“Even though we think we see the world so fully, what we are receiving is really just hints, edges in space and time,” says Werblin. “These 12 pictures of the world constitute all the information we will ever have about what’s out there, and from these 12 pictures, which are so sparse, we reconstruct the richness of the visual world. I’m curious how nature selected these 12 simple movies and how it can be that they are sufficient to provide us with all the information we seem to need.” Such findings promise to be a major advance in developing an artificial system that could replace the eye, retina, and early optic-nerve processing.

In chapter 3, I mentioned the work of robotics pioneer Hans Moravec, who has been reverse engineering the image processing done by the retina and the early visual-processing regions of the brain. For more than thirty years Moravec has been constructing systems to emulate the ability of our visual system to build representations of the world. Only recently has sufficient processing power been available in microprocessors to replicate this human-level feature detection, and Moravec is applying his computer simulations to a new generation of robots that can navigate unplanned, complex environments with human-level vision.105

Carver Mead has been pioneering the use of special neural chips that utilize transistors in their native analog mode, which can provide very efficient emulation of the analog nature of neural processing. Mead has demonstrated a chip that performs the functions of the retina and early transformations in the optic nerve using this approach.106

A special type of visual recognition is detecting motion, one of the focus areas of the Max Planck Institute of Biology in Tübingen, Germany. The basic research model is simple: compare the signal at one receptor with a time-delayed signal at the adjacent receptor.107 This model works for certain speeds but leads to the surprising result that above a certain speed, increases in the velocity of an observed object will decrease the response of this motion detector. Experimental results on animals (based on behavior and analysis of neuronal outputs) and humans (based on reported perceptions) have closely matched the model.
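That delay-and-compare scheme (often described as a Reichardt-style correlation detector) is easy to sketch in code. The sketch below is a minimal one-dimensional illustration under assumed parameters (two receptors one unit apart, a fixed delay): the detector multiplies one receptor’s delayed signal by its neighbor’s current signal, and its output peaks at the speed matching the built-in delay and then falls off for faster motion, as the text describes.

```python
import numpy as np

# Minimal delay-and-correlate motion detector sketch.
# Two receptors a fixed distance apart; the detector multiplies the time-delayed
# signal of the first receptor by the current signal of the second.
# All parameters (spacing, delay, stimulus width) are illustrative.

SPACING = 1.0   # distance between the two receptors
DELAY = 1.0     # built-in delay applied to the first receptor's signal
SIGMA = 0.3     # width of the moving bright spot

def receptor(position, t, velocity):
    """Response of a receptor at `position` to a bright spot moving at `velocity`."""
    return np.exp(-((position - velocity * t) ** 2) / (2 * SIGMA ** 2))

def detector_response(velocity, t=np.linspace(-10, 10, 4001)):
    """Peak of (delayed receptor 1) x (receptor 2) as the spot sweeps past."""
    delayed_first = receptor(0.0, t - DELAY, velocity)
    second = receptor(SPACING, t, velocity)
    return float(np.max(delayed_first * second))

for v in [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"speed {v:4.2f} -> response {detector_response(v):.3f}")
# Response peaks near speed = SPACING / DELAY and then falls off,
# matching the observation that faster motion can yield a weaker signal.
```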

Other Works in Progress: An Artificial Hippocampus and an Artificial Olivocerebellar Region

 

The hippocampus is vital for learning new information and for the long-term storage of memories. Ted Berger and his colleagues at the University of Southern California mapped the signal patterns of this region by stimulating slices of rat hippocampus with electrical signals millions of times to determine which inputs produced which outputs.108 They then developed a real-time mathematical model of the transformations performed by layers of the hippocampus and programmed the model onto a chip.109 Their plan is to test the chip in animals by first disabling the corresponding hippocampal region, noting the resulting memory failure, and then determining whether that mental function can be restored by installing their hippocampal chip in place of the disabled region.
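The approach described (drive the tissue with many inputs, record the outputs, then fit a real-time input-output model) can be caricatured with a simple system-identification sketch. The “true” transformation below is a made-up nonlinear function, and the polynomial fit stands in for Berger’s far more sophisticated model; both are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the unknown transformation performed by the tissue:
# we can only probe it with input pulses and record the noisy responses.
def tissue_response(stimulus):
    return np.tanh(1.5 * stimulus) + 0.05 * rng.standard_normal(stimulus.shape)

# Step 1: stimulate many times and record input/output pairs.
inputs = rng.uniform(-2.0, 2.0, size=10_000)
outputs = tissue_response(inputs)

# Step 2: fit a compact model of the transformation (here, a cubic polynomial).
coeffs = np.polyfit(inputs, outputs, deg=3)
model = np.poly1d(coeffs)

# Step 3: the fitted model can now predict responses to new inputs in real time,
# which is the role a replacement chip would have to play.
test = np.array([-1.0, 0.0, 0.5, 1.5])
print("model prediction:", np.round(model(test), 3))
print("tissue response: ", np.round(tissue_response(test), 3))
```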

Ultimately, this approach could be used to replace the hippocampus in patients affected by strokes, epilepsy, or Alzheimer’s disease. The chip would be located on a patient’s skull, rather than inside the brain, and would communicate with the brain via two arrays of electrodes, placed on either side of the damaged hippocampal section. One would record the electrical activity coming from the rest of the brain, while the other would send the necessary instructions back to the brain.

Another brain region being modeled and simulated is the olivocerebellar region, which is responsible for balance and coordinating the movement of limbs. The goal of the international research group involved in this effort is to apply their artificial olivocerebellar circuit to military robots as well as to robots that could assist the disabled.110 One of their reasons for selecting this particular brain region was that “it’s present in all vertebrates—it’s very much the same from the most simple to the most complex brains,” explains Rodolfo Llinas, one of the researchers and a neuroscientist at New York University Medical School. “The assumption is that it is conserved [in evolution] because it embodies a very intelligent solution. As the system is involved in motor coordination—and we want to have a machine that has sophisticated motor control—then the choice [of the circuit to mimic] was easy.”

One of the unique aspects of their simulator is that it uses analog circuits. Similar to Mead’s pioneering work on analog emulation of brain regions, the researchers found substantially greater performance with far fewer components by using transistors in their native analog mode.

One of the team’s researchers, Ferdinando Mussa-Ivaldi, a neuroscientist at Northwestern University, commented on the applications of an artificial olivocerebellar circuit for the disabled: “Think of a paralyzed patient. It is possible to imagine that many ordinary tasks—such as getting a glass of water, dressing, undressing, transferring to a wheelchair—could be carried out by robotic assistants, thus providing the patient with more independence.”

Understanding Higher-Level Functions: Imitation, Prediction, and Emotion

 

Operations of thought are like cavalry charges in a battle—they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

                   —ALFRED NORTH WHITEHEAD

 

But the big feature of human-level intelligence is not what it does when it works but what it does when it’s stuck.

                   —MARVIN MINSKY

 

If love is the answer, could you please rephrase the question?

                   —LILY TOMLIN

 

Because it sits at the top of the neural hierarchy, the cerebral cortex is the part of the brain least well understood. This region, which consists of six thin layers in the outermost areas of the cerebral hemispheres, contains billions of neurons. According to Thomas M. Bartol Jr. of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, “A single cubic millimeter of cerebral cortex may contain on the order of 5 billion . . . synapses of different shapes and sizes.” The cortex is responsible for perception, planning, decision making, and most of what we regard as conscious thinking.

Our ability to use language, another unique attribute of our species, appears to be located in this region. An intriguing hint about the origin of language, and about a key evolutionary change that enabled the formation of this distinguishing skill, is the observation that only a few primates, including humans and monkeys, are able to use an (actual) mirror to master skills. Theorists Giacomo Rizzolatti and Michael Arbib hypothesized that language emerged from manual gestures (which monkeys, and of course humans, are capable of). Performing manual gestures requires the ability to mentally correlate the performance and observation of one’s own hand movements.111 Their “mirror system hypothesis” holds that the key to the evolution of language is a property called “parity”: the understanding that a gesture (or utterance) has the same meaning for the party making it as for the party receiving it; that is, the understanding that what you see in a mirror is the same (although reversed left to right) as what someone else watching you sees. Other animals are unable to understand the image in a mirror in this fashion, and it is believed that they lack this key ability to deploy parity.

A closely related concept is that the ability to imitate the movements (or, in the case of human babies, vocal sounds) of others is critical to developing language.112 Imitation requires the ability to break down an observed presentation into parts, each of which can then be mastered through recursive and iterative refinement.

Recursion is the key capability identified in a new theory of linguistic competence. In his early theories of human language, Noam Chomsky cited many common attributes that account for the similarities among human languages. In a 2002 paper, Marc Hauser, Noam Chomsky, and Tecumseh Fitch cite the single attribute of “recursion” as accounting for the unique language faculty of the human species.113 Recursion is the ability to put together small parts into a larger chunk, use that chunk as a part in yet another structure, and continue this process iteratively. In this way we are able to build the elaborate structures of sentences and paragraphs from a limited set of words.
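As a toy illustration of recursion in this sense (small parts combined into a chunk that can itself serve as a part of a larger structure), the sketch below builds nested phrases from a limited vocabulary. The miniature grammar and word lists are invented for the example and are not drawn from Hauser, Chomsky, and Fitch.

```python
import random

# A tiny invented grammar: a sentence is a noun phrase plus a verb and another
# noun phrase, and a noun phrase may recursively contain a whole smaller clause.
NOUNS = ["the dog", "the cat", "the child"]
VERBS = ["saw", "believes", "knows"]

def noun_phrase(depth):
    # Base case: a simple noun. Recursive case: a noun modified by a clause
    # that itself contains an entire (smaller) sentence.
    if depth <= 0 or random.random() < 0.5:
        return random.choice(NOUNS)
    return f"{random.choice(NOUNS)} that {sentence(depth - 1)}"

def sentence(depth):
    # The chunk built here can be reused as a part of a larger noun phrase above.
    return f"{noun_phrase(depth)} {random.choice(VERBS)} {noun_phrase(depth)}"

random.seed(3)
for _ in range(3):
    print(sentence(depth=2).capitalize() + ".")
```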
