Iconoclast: A Neuroscientist Reveals How to Think Differently

Author: Gregory Berns, Ph.D.




On the issue of war casualties, Nightingale learned to change her perception of death through her experiences during the Crimean War. Conventional wisdom said that soldiers died from their wounds, and so their treatment should be aimed at their injuries. Nightingale went against this dogma and showed that it was disease that killed soldiers. During the winter of 1854, Nightingale and thirty-eight volunteer nurses arrived to staff the British barracks near Constantinople. Far from being able to turn the situation around, she watched helplessly as the death rate soared. Instead of dying from their wounds, soldiers were dying of highly communicable diseases such as typhoid and cholera. Initially, Nightingale believed these deaths were due to poor nutrition, which was the prevailing explanation. Indeed, the soldiers were malnourished. But because so many men were dying, the military ordered an investigation, during which Nightingale learned to see the deaths in a different light.

In the spring, the makeshift sewers in the barracks were flushed out, and the death rate began to fall. This was a key event for Nightingale, one that changed her perception of what was killing the men. For the investigation, she began to systematically collect information on the causes of death and their relationship to injuries, nutrition, and hygiene. It was her mathematical prowess, however, that led to the culminating shift in perception for which she is famous. In a pioneering letter to Queen Victoria, Nightingale used a novel form of data presentation: a polar diagram, similar to a pie chart. Nightingale graphically demonstrated just how many men were dying of diseases stemming from poor hygiene, and when they were dying. Iconoclastic in form, it may have been the first practical use of such a chart, and it led to a wholesale change in the way patients were cared for. The graph also illustrates how simply taking information and presenting it in a new visual configuration is an effective way to change one’s perception of cause and effect.
Prior to this graph, the military assumed that its soldiers were dying from battle-related complications. This was a natural result of military leaders’ experience in battle. But they had little experience with medical care. Nightingale shattered this dogma by taking her experience, which was fundamentally different from the generals’, and conveying it instantly in visual form. Because of her experience, she learned to see medical care differently and, in turn, was able to teach others to see the way she did.
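For readers curious what such a chart looks like in practice, here is a small Python sketch (using matplotlib) of a polar-area, or “rose,” diagram in the spirit of Nightingale’s. The month labels and death counts below are placeholder values chosen only for illustration; they are not her actual figures.

    # A polar-area ("rose") chart in the spirit of Nightingale's diagram.
    # The numbers are illustrative placeholders, not historical data.
    import numpy as np
    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    disease_deaths = [180, 160, 120, 70, 40, 25]  # hypothetical monthly counts

    angles = np.linspace(0, 2 * np.pi, len(months), endpoint=False)
    ax = plt.subplot(projection="polar")
    ax.bar(angles, disease_deaths, width=2 * np.pi / len(months))
    ax.set_xticks(angles)
    ax.set_xticklabels(months)
    ax.set_title("Deaths from disease by month (illustrative values)")
    plt.show()

In Nightingale’s original, the area of each wedge stood for the count; in this simplified sketch the radius does, which is enough to convey why the form communicates so immediately.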

How the Brain Learns to See

 

Entire books have been written about learning, but the important elements for iconoclasts can be boiled down to this: experience modifies the connections between neurons such that they become more efficient at processing information. Traditionally, psychologists and neuroscientists have divided learning into two broad categories. The first category was discovered by Pavlov in his famous dog experiments, and this is known as classical conditioning, also called associative learning. When a dog sees its owner reaching for the bag of food, it becomes excited and starts wagging its tail. The dog does this not necessarily because it is happy, but because it has learned to associate the bag of food with what will follow. To the owner’s eyes, the dog may appear happy, but this is really just a projection of a human emotion onto his pet. We have no way of knowing the subjective experience of the dog itself. Interestingly, the dog’s behavior causes associative learning to occur in its owner’s brain too. When he reaches for the bag of food, the owner knows exactly how the dog is going to behave, and because humans find canine attention so enjoyable, the dog reinforces feeding behavior in its owner without even knowing it.

When we look in the brain to see what happens to neurons during classical conditioning, we find analogous changes at the neuronal level. In a now classic experiment, Wolfram Schultz, a Swiss neuroscientist, measured the firing rates of monkeys’ dopamine neurons while they underwent classical conditioning. We will go into dopamine more deeply when talking about risk, but for now, it is important to note that dopamine is a neurotransmitter synthesized by a very small group of neurons in the brain stem (less than 1 percent of all the neurons in the brain). From about 1950, when dopamine was discovered, until about 1990, scientists thought that dopamine served as the pleasure chemical of the brain. This was a natural conclusion because dopamine is released in response to all the things that people and animals find pleasurable, including food, water, sex, and drugs. Schultz, however, was interested in how dopamine facilitated associative learning of the type that Pavlov demonstrated with his dogs. Schultz trained rhesus monkeys to observe a light. When the light turned on, they received a small drop of juice on their tongues. Before the monkeys learned the association between the light and the juice, Schultz observed that the dopamine neurons fired in response to the juice itself, a finding consistent with the pleasure hypothesis of dopamine. After a brief period of training, however, the monkeys quickly learned to associate the light with the juice, and, interestingly, Schultz observed that the dopamine neurons stopped firing to the juice and began firing to the light. These findings illustrated that, like every other neural system in the brain, the dopamine system adapted to environmental contingencies and essentially learned the correlation between arbitrary events such as lights flashing and behaviorally salient outcomes such as fruit juice.
5
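To make that shift concrete, the short Python sketch below implements a temporal-difference-style learning rule of the kind often used to model findings like Schultz’s. It is an illustration rather than the book’s own model, and the learning rate and reward size are arbitrary assumptions: a single learned value ties the light to the juice, and the “surprise” signal migrates from the juice to the light over trials.

    # Minimal temporal-difference-style sketch (an illustration, not the book's model).
    alpha = 0.3            # learning rate (arbitrary assumption)
    reward = 1.0           # the drop of juice
    value_of_light = 0.0   # what the light predicts before any training

    for trial in range(1, 11):
        response_to_light = value_of_light           # surprise when the light comes on
        response_to_juice = reward - value_of_light  # surprise when the juice arrives
        value_of_light += alpha * response_to_juice  # learn the light-juice association
        print(f"trial {trial:2d}: light {response_to_light:.2f}, juice {response_to_juice:.2f}")

On the first trial the surprise occurs entirely at the juice; within a handful of trials it has moved almost entirely to the light, which is the pattern Schultz recorded in the dopamine neurons.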

The same learning process occurs in the perceptual system. When the brain is repeatedly presented with the same visual stimuli, the neurons in the visual system continue to respond, but with decreasing vigor. The phenomenon has been called repetition suppression, and it has been observed both at the local processing level in V1 and at the object processing level in both the high and the low roads. In the Georgetown experiment, the researchers took advantage of this adaptation to identify which brain regions distinguish car types before and after training. But repetition suppression is important in its own right because it demonstrates the brain’s efficiency principle in action. As a rough rule, the brain responds in a linearly decreasing fashion to subsequent observations of an object such that after six to eight observations, the normal response is about one-half of its original level. Of course, a variety of factors mediate this effect, including the time between repetitions and what other intervening events occur, but the general observation holds: repetition leads to smaller neural responses.
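As a back-of-the-envelope illustration of that rough rule, the tiny Python loop below assumes a linear decline calibrated so that the response falls to about half of its original level by the seventh presentation; the exact slope is an assumption, not a measured value.

    # Illustrative only: a linearly decreasing response that halves after ~6 repeats.
    initial_response = 1.0
    drop_per_repeat = 0.5 / 6  # chosen so the seventh presentation sits near one-half
    for presentation in range(1, 9):
        response = max(initial_response - drop_per_repeat * (presentation - 1), 0.0)
        print(f"presentation {presentation}: relative response {response:.2f}")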

There are three competing theories for repetition suppression. The first possibility is that neurons become fatigued like muscles and do not respond as strongly. The second possibility is that neurons become primed to stimuli and respond faster with repetition, which might appear as decreased activity, depending on how one makes the measurement. The final, and most likely, explanation for repetition suppression is the sharpening hypothesis. Because the brain does not generally rely on grandmother cells, the vast majority of cognitive and perceptual functions are carried out by networks of neurons. When these networks repeatedly process the same stimulus, neuroscientists have observed that neurons within these networks become more specialized in their activity. So while the entire network might process a stimulus on its initial presentation, by about the sixth presentation the heavy lifting is being performed by only a subset of neurons within this network. Because fewer neurons are being used, the network becomes more energy efficient in carrying out its function, and we observe this as a decrease in neural activity.
6

Looking deeper into the process of repetition suppression, we find changes occurring at the molecular level of synapses themselves. These changes occur on different timescales, ranging from milliseconds to days or even years. At the very shortest timescales, neurons that repeatedly fire will eventually deplete critical ions such as potassium and calcium. On a slightly longer timescale of a few seconds, neurons might run out of neurotransmitters, such as dopamine, leading to a phenomenon called synaptic depression. But what is really interesting is what happens over the long haul. These temporary depletions of ions and neurotransmitters are believed to lead to an adaptation within neurons themselves called long-term potentiation and long-term depression. The end result is that neurons adapt by turning on and off genes that control their function. These genes might lead to sprouting of new synapses and pruning of old ones that are nonfunctional.
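A toy model of that short-term depletion, sketched in Python with made-up parameter values, captures the basic dynamic: each firing spends part of a finite transmitter supply, and the supply only partially recovers during the gap before the next firing.

    # Toy model of short-term synaptic depression (all parameters are assumptions).
    import math

    resource = 1.0             # fraction of neurotransmitter currently available
    release_fraction = 0.4     # fraction of the available supply spent per firing
    recovery_tau = 2.0         # recovery time constant, in seconds
    gap_between_firings = 0.5  # seconds between successive firings

    for firing in range(1, 9):
        released = release_fraction * resource  # strength of this response
        resource -= released                    # the supply is temporarily depleted
        # partial recovery toward a full supply before the next firing
        resource += (1.0 - resource) * (1.0 - math.exp(-gap_between_firings / recovery_tau))
        print(f"firing {firing}: relative synaptic response {released:.2f}")

The successive responses shrink toward a steady state, which is the signature of synaptic depression on this short timescale; the longer-lasting changes described above operate by modifying the synapses themselves.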

From Visual Imagery to Imagination

 

The brain’s reliance on distributed processing goes well beyond simple fail-safe features that prevent you from forgetting your grandmother. Distributed processing means that the brain can also construct images when no information is coming from the eyes. This is a process called mental imagery, and it has a close relationship to imagination.

The link between perception, imagery, and imagination has been debated for decades, but only in the last decade has neuroscience revealed where imagination comes from in the brain. The old view of these functions was that visual perception was a one-way street toward the frontal cortex, where the heavy lifting of imagination and cognition was performed. Recent experiments, however, have shown that the visual cortex and its immediate neighbors in the parietal and temporal lobes play integral roles in mental imagery.

The process of mentally visualizing an image is much like running the perceptual process in reverse.
7
The structures used to visualize something are the same as those that process something when you actually see it. Even more amazing, the strength of activity in the visual cortex correlates with the intensity and vividness of what the person visualizes.
8
The stronger the activity, the more vivid the scene a person imagines.

Unfortunately the efficiency principle works against imagination. As an example, close your eyes and visualize the sun setting over a beach.

How detailed was your image? Did you envision a bland orb sinking below calm waters, or did you call up an image filled with activity—palm trees swaying gently, waves lapping at your feet, perhaps a loved one holding your hand? How different was your imaginary beach from a postcard image? The problem is that the sun setting over a beach is an iconic image itself, and most people imagine exactly that. Whether it is through personal experience or simply seeing the sun set over enough Pacific oceans courtesy of Hollywood, there is a striking lack of imagination in this sort of visualization task. The brain simply takes the path of least resistance and reactivates neurons that have been optimized to process this sort of scene.

If you imagine something less common, perhaps something that you have never actually seen, the possibilities for creative thinking become much greater because the brain can no longer rely on connections that have already been shaped by past experience. For example, instead of imagining the sun setting over a beach, imagine you are standing on the surface of Pluto. What would a sunset look like from there? Notice how hard you have to work to imagine this scene. Do you picture a featureless ball of ice with the sun a speck of light barely brighter than a star glimmering along the horizon? Do you envision frozen lakes of exotic chemicals, or do you picture fjords of ice glimmering in the starlight? The possibilities seem much more open than with its terrestrial counterpart. In large part, this is because nobody has ever seen a sunset on Pluto, and you really have to work new neural pathways to imagine it.

These imagery tasks also reveal a key psychological factor in imagination. To imagine something in detail, you must devote a significant amount of mental energy to the task. More precisely, mental energy refers to the ability to direct and sustain attention for the job at hand. The issue of attention has captivated psychologists and neuroscientists for centuries, in large part because of its ineffable quality and the close association of the human sense of self with attention. We feel that we own attention, that it is a matter of free will and individual choice to direct it when and where we please. Sometimes attention feels like the conductor in the brain, orchestrating the disparate circuits to play their parts at the right times and be quiet when not needed. The modern, neurobiological view tells us something quite a bit different about attention than this Wizard of Oz fantasy, and it also tells us about perception and imagination.

William James, the nineteenth-century psychologist, said this about attention: “Everyone knows what attention is. It is the taking possession by the mind in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought … It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state.”
9

Quite right. Everyone does know what attention is, but that doesn’t mean it has been easy to pin down scientifically. To sort through this field, it is helpful to divide attention into two broad categories based on how long the process operates. Sustained attention, as the name suggests, acts over extended periods of time and is closely related to drive and motivation, a topic to which I will return later. Selective attention is transient and detail oriented. This is the form of attention to which James referred, and because of its transient nature, it has been the preferred form to study scientifically.

Details, which on casual observation go unnoticed, are revealed only under the powers of selective attention. And because of this, attention changes perception. But where in the brain does attention come from? In one of the first brain imaging experiments to address this question, researchers at University College London presented subjects with visual cues that directed their attention to either the left or the right side of a computer screen.
10
When they compared this with the condition in which no cue was presented, the researchers found attention was associated with increased activity in both prefrontal and parietal areas. Subsequent studies narrowed down this result and distinguished between attention that is directed by external cues, as in the computer experiment, and attention that is directed internally by the person herself. Internal attention seems to depend critically on a subregion of the prefrontal cortex called the dorsolateral prefrontal cortex (DLPFC). When it comes to directing attention to locations in space, the right DLPFC plays a greater role than the left.

