31
G. F. Miller, J. M. Tybur, and B. D. Jordan, “Ovulatory cycle effects on tip earnings by lap-dancers: Economic evidence for human estrus?” Evolution and Human Behavior 28 (2007): 375–81.

32
Liberles and Buck, “A second class.” Because humans also carry the genes for this family of receptors, it’s the most promising road to sniff down when looking for a role for pheromones in humans.

33
Pearson, “Mouse data.”

34
C. Wedekind, T. Seebeck, F. Bettens, and A. J. Paepke, “MHC-dependent mate preferences in humans.” Proceedings of the Royal Society of London Series B: Biological Sciences 260, no. 1359 (1995): 245–49.

35
Varendi and Porter, “Breast odour.”

36
Stern and McClintock, “Regulation of ovulation by human pheromones.” While it is widely believed that women living together will synchronize their menstrual cycles, it appears that this is not true. Careful studies of the original reports (and subsequent large-scale studies) show that statistical fluctuations can give the perception of synchrony, but the apparent synchrony is nothing more than a chance occurrence. See Zhengwei and Schank, “Women do not synchronize.”

37
Moles, Kieffer, and D’Amato, “Deficit in attachment behavior.”

38
Lim, et al., “Enhanced partner preference.”

39
H. Walum, L. Westberg, S. Henningsson, J. M. Neiderhiser, D. Reiss, W. Igl, J. M. Ganiban, et al., “Genetic variation in the vasopressin receptor 1a gene (AVPR1A) associates with pair-bonding behavior in humans.” PNAS 105, no. 37 (2008): 14153–56.

40
Winston, Human Instinct.

41
Fisher, Anatomy of Love.

Chapter 5. The Brain Is a Team of Rivals
 

  
1
See Marvin Minsky’s 1986 book Society of Mind.

  
2
Diamond, Guns, Germs, and Steel.

  
3
For a concrete illustration of the advantages and shortcomings of a “society” architecture, consider the concept of subsumption architecture, pioneered by the roboticist Rodney Brooks (Brooks, “A robust layered”). The basic unit of organization in the subsumption architecture is a module. Each module specializes in some independent, low-level task, such as controlling a sensor or actuator, and each operates on its own. Each module has an input and an output signal. When the input of a module exceeds a predetermined threshold, the output of the module is activated. Inputs come from sensors or other modules. Each module also accepts a suppression signal and an inhibition signal: a suppression signal overrides the normal input signal, while an inhibition signal causes output to be completely silenced. These signals allow behaviors to override one another. To produce coherent behavior, the modules are organized into layers, each of which might implement a behavior such as wander or follow a moving object. These layers are hierarchical: higher layers can override the behavior of lower ones by inhibition or suppression. This gives each level its own rank of control. This architecture tightly couples perception and action, producing a highly reactive machine. But the downside is that all patterns of behavior in these systems are prewired. Subsumption agents are fast, but they depend entirely on the world to tell them what to do; they are purely reflexive. In part, subsumption agents have far-from-intelligent behavior because they lack an internal model of the world from which to draw conclusions. Rodney Brooks claims this is an advantage: by lacking representation, the architecture avoids the time necessary to read, write, utilize, and maintain the world models. But somehow, human brains do put in the time, and have clever ways of doing it. I argue that human brains will be simulated only by moving beyond the assembly line of sequestered experts to a conflict-based democracy of mind, where multiple parties pitch in their votes on the same topics.
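To make the control flow concrete, here is a minimal sketch in Python. It is illustrative only: the module names, the thresholds, and the two toy behaviors are invented for this sketch and are not taken from Brooks’s actual robot architectures.

```python
# Minimal sketch of a subsumption-style controller. Illustrative only:
# the module names, thresholds, and the two toy behaviors are invented
# here and are not taken from Brooks's actual robot architectures.

class Module:
    """A behavior module: fires when its input exceeds a threshold,
    unless inhibited; a suppression signal replaces its normal input."""

    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.suppressed_input = None   # if set, overrides the normal input
        self.inhibited = False         # if True, output is silenced entirely

    def step(self, sensor_input):
        if self.inhibited:
            return None
        value = (self.suppressed_input
                 if self.suppressed_input is not None else sensor_input)
        return self.name if value > self.threshold else None


def run_controller(obstacle_distance, target_visibility):
    """Two layers: 'follow' (higher) inhibits 'wander' (lower) whenever it
    fires, so the more specific behavior wins; otherwise the robot wanders."""
    wander = Module("wander", threshold=0.0)   # fires on any positive input
    follow = Module("follow", threshold=0.5)   # fires when the target signal is strong

    follow_output = follow.step(target_visibility)
    if follow_output is not None:
        wander.inhibited = True                # higher layer overrides the lower one
    wander_output = wander.step(obstacle_distance)
    return follow_output or wander_output


print(run_controller(obstacle_distance=1.0, target_visibility=0.1))  # -> wander
print(run_controller(obstacle_distance=1.0, target_visibility=0.9))  # -> follow
```

Note what the sketch lacks: there is no internal model of the world anywhere. Each run is a fixed, prewired mapping from current inputs to an output, which is exactly the reflexive limitation described above.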

  
4
For example, this approach is used commonly in artificial neural networks: Jacobs, Jordan, Nowlan, and Hinton, “Adaptive mixtures.”

  
5
Minsky, Society of Mind.

  
6
Ingle, “Two visual systems,” discussed in a larger framework by Milner and Goodale, The Visual Brain.

  
7
For the importance of conflict in the brain, see Edelman, Computing the Mind. An optimal brain can be composed of conflicting agents; see Livnat and Pippenger, “An optimal brain”; Tversky and Shafir, “Choice under conflict”; Festinger, Conflict, Decision, and Dissonance. See also Cohen, “The vulcanization,” and McClure et al., “Conflict monitoring.”

  
8
Miller, “Personality,” as cited in Livnat and Pippenger, “An optimal brain.”

  
9
For a review of dual-process accounts, see Evans, “Dual-processing accounts.”

10
See Table 1 of ibid.

11
Freud, Beyond the Pleasure Principle (1920). The ideas of his three-part model of the psyche were expanded three years later in his Das Ich und das Es, available in Freud, The Standard Edition.

12
See, for example: Mesulam, Principles of Behavioral and Cognitive Neurology; Elliott, Dolan, and Frith, “Dissociable functions”; and Faw, “Pre-frontal executive committee.” There are many subtleties of the neuroanatomy and debates within the field, but these details are not central to my argument and will therefore be relegated to these references.

13
Some authors have referred to these systems, dryly, as System 1 and System 2 processes (see, for example, Stanovich, Who Is Rational? or Kahneman and Frederick, “Representativeness revisited”). For our purposes, we use what we hope will be the most intuitive (if imperfect) labels: the emotional and rational systems. This choice is common in the field; see, for example, Cohen, “The vulcanization,” and McClure et al., “Conflict monitoring.”

14
In this sense, emotional responses can be viewed as information processing—every bit as complex as a math problem but occupied with the internal world rather than the outside. The output of their processing—brain states and bodily responses—can provide a simple plan of action for the organism to follow: do this, don’t do that.

15
Greene et al., “The neural bases of cognitive conflict.”

16
See Niedenthal, “Embodying emotion,” and Haidt, “The new synthesis.”

17
Frederick, Loewenstein, and O’Donoghue, “Time discounting.”

18
McClure, Laibson, Loewenstein, and Cohen, “Separate neural systems.” Specifically, when choosing longer-term rewards with higher return, the lateral prefrontal and posterior parietal cortices were more active.

19
R. J. Shiller, “Infectious exuberance,” Atlantic Monthly, July/August 2008.

20
Freud, “The future of an illusion,” in The Standard Edition.

21
Illinois Daily Republican, Belvidere, IL, January 2, 1920.

22
Arlie R. Slabaugh, Christmas Tokens and Medals (Chicago: printed by author, 1966), ANA Library Catalogue No. RM85.C5S5.

23
James Surowiecki, “Bitter money and Christmas clubs,” Forbes.com, February 14, 2006.

24
Eagleman, “America on deadline.”

25
Thomas C. Schelling, Choice and Consequence (Cambridge, MA: Harvard University Press, 1984); Ryan Spellecy, “Reviving Ulysses contracts,” Kennedy Institute of Ethics Journal 13, no. 4 (2003): 373–92; Namita Puran, “Ulysses contracts: Bound to treatment or free to choose?” York Scholar 2 (2005): 42–51.

26
There is no guarantee that the ethics boards accurately guess at the mental life of the future patient; then again, Ulysses contracts always suffer from imperfect knowledge of the future.

27
This phrase is borrowed from my colleague Jonathan Downar, who put it as “If you can’t rely on your own dorsolateral prefrontal cortex, borrow someone else’s.” As much as I love the original phrasing, I’ve simplified it for the present purposes.

28
For a detailed summary of decades of split-brain studies, see Tramo et al., “Hemispheric specialization.” For a lay-audience summary, see Michael Gazzaniga, “The split-brain revisited.”

29
Jaynes, The Origin of Consciousness.

30
See, for example, Rauch, Shin, and Phelps, “Neurocircuitry models.” For an investigation of the relationship between fearful memories and the perception of time, see Stetson, Fiesta, and Eagleman, “Does time really … ?”

31
Here’s another aspect to consider about memory and the ceaseless reinvention hypothesis: neuroscientists do not think of memory as one phenomenon but, instead, as a collection of many different subtypes. On the broadest scale, there is short-term and long-term memory. Short-term memory involves remembering a phone number just long enough to dial it. Within the long-term category there is declarative memory (for example, what you ate for breakfast and what year you got married) and nondeclarative memory (how to ride a bicycle); for an overview, see Eagleman and Montague, “Models of learning.” These divisions have been introduced because patients can sometimes damage one subtype without damaging the others—an observation that has led neuroscientists to hope that memory can be categorized into several silos. But it is likely that the final picture of memory won’t divide so neatly into natural categories; instead, as per the theme of this chapter, different memory mechanisms will overlap in their domains. (See, for example, Poldrack and Packard, “Competition,” for a review of separable “cognitive” and “habit” memory systems that rely on the medial temporal lobe and basal ganglia, respectively.) Any circuit that contributes to memory, even a bit, will be strengthened and can make its contribution. If true, this will go some distance toward explaining an enduring mystery to young residents entering the neurology clinic: why do real patient cases only rarely match the textbook descriptions? Textbooks assume neat categorization, while real brains ceaselessly reinvent overlapping strategies. As a result, real brains are robust—and they are also resistant to humancentric labeling.

32
For a review of different models of motion detection, see Clifford and Ibbotson, “Fundamental mechanisms.”

33
There are many examples of this inclusion of multiple solutions in modern neuroscience. Take, for instance, the motion aftereffect, mentioned in Chapter 2. If you stare at a waterfall for a minute or so, then look away at something else—say, the rocks on the side—it will look as though the stationary rocks are moving upward. This illusion results from an adaptation of the system: essentially, the visual brain realizes that it is deriving little new information from all the downward motion, and it starts to adjust its internal parameters in the direction of canceling out the downwardness. As a result, something stationary now begins to look like it’s moving upward. For decades, scientists debated whether the adaptation happens at the level of the retina, at the early stages of the visual system, or at later stages. Years of careful experiments have finally resolved this debate by dissolving it: there is no single answer to the question, because it is ill-posed. There is adaptation at many different levels in the visual system (Mather, Pavan, Campana, and Casco, “The motion aftereffect”). Some areas adapt quickly, some slowly, others at speeds in between. This strategy allows some parts of the brain to sensitively follow changes in the incoming data stream, while others will not change their stubborn ways without lasting evidence. Returning to the issue of memory discussed above, it is also theorized that Mother Nature has found methods to store memories at several different time scales, and it is the interaction of all these time scales that makes older memories more stable than younger ones. The fact that older memories are more stable is known as Ribot’s law. For more on the idea of exploiting different time scales of plasticity, see Fusi, Drew, and Abbott, “Cascade models.”
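To make the idea of adaptation at several time scales concrete, here is a minimal sketch: a generic leaky-integrator toy, not the model of Mather et al. or of Fusi, Drew, and Abbott. The two adaptation rates and the input signal are invented for illustration.

```python
# Toy illustration of motion adaptation at two time scales (leaky integrators).
# The rates, the scaling factor, and the input signal are invented here; this is
# not a reconstruction of any model cited in the text.

def simulate(motion_signal, fast_rate=0.2, slow_rate=0.02):
    """Return perceived motion over time, given the raw motion input.
    Each adapter drifts toward the current input at its own rate and is
    subtracted from the input, so a sustained stimulus gets canceled out."""
    fast, slow = 0.0, 0.0
    perceived = []
    for m in motion_signal:
        fast += fast_rate * (m - fast)   # adapts within a few steps
        slow += slow_rate * (m - slow)   # adapts over many steps
        perceived.append(m - 0.5 * (fast + slow))
    return perceived

# Stare at downward motion (-1.0) for 300 steps, then look at stationary rocks (0.0).
signal = [-1.0] * 300 + [0.0] * 100
out = simulate(signal)

print(f"during the waterfall, perceived motion fades toward zero: {out[299]:.2f}")
print(f"just after looking away, a stationary scene looks upward (positive): {out[301]:.2f}")
print(f"the aftereffect decays as the slow adapter recovers: {out[399]:.2f}")
```

The fast adapter cancels the stimulus within a few steps, while the slow one lingers after the stimulus ends, which is what produces the illusory upward drift and its gradual decay.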

34
In a wider biological context, the team-of-rivals framework accords well with the idea that the brain is a Darwinian system, one in which stimuli from the outside world happen to resonate with certain random patterns of neural circuitry, and not with others. Those circuits that happen to respond to stimuli in the outside world are strengthened, and other random circuits continue to drift around until they find something to resonate with. If they never find anything to “excite” them, they die off. To phrase it from the opposite direction, stimuli in the outside world “pick out” circuits in the brain: they happen to interact with some circuits and not others. The team-of-rivals framework is nicely compatible with neural Darwinism, and emphasizes that Darwinian selection of neural circuitry will tend to strengthen multiple circuits—of very different provenance—all of which happen to resonate with a stimulus or task. These circuits are the multiple factions in the brain’s congress. For views on the brain as a Darwinian system, see Gerald Edelman, Neural Darwinism; Calvin, How Brains Think; Dennett, Consciousness Explained; or Hayek, The Sensory Order.
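As a toy illustration of this selection process (not taken from Edelman’s or anyone else’s actual models; the thresholds, decay rate, and “resonance” measure are all invented), here is a sketch in which random circuits that happen to respond to a recurring stimulus are strengthened while unresponsive ones fade away:

```python
import random

# Toy sketch of Darwinian selection among random circuits (illustrative only;
# the thresholds, decay rate, and "resonance" measure are invented here).

random.seed(0)

N_CIRCUITS, DIM = 50, 5
circuits = [{"weights": [random.uniform(-1, 1) for _ in range(DIM)], "strength": 1.0}
            for _ in range(N_CIRCUITS)]

def resonates(circuit, stimulus, threshold=1.0):
    """A circuit 'resonates' if its response (dot product) to the stimulus is large enough."""
    response = sum(w * s for w, s in zip(circuit["weights"], stimulus))
    return response > threshold

stimulus = [1.0, 0.5, -0.5, 0.0, 1.0]    # one recurring pattern from the outside world

for _ in range(100):                      # repeated exposure to the same stimulus
    for c in circuits:
        if resonates(c, stimulus):
            c["strength"] += 0.1          # responsive circuits are strengthened
        else:
            c["strength"] *= 0.95         # unresponsive circuits slowly fade
    circuits = [c for c in circuits if c["strength"] > 0.05]   # some die off entirely

print(f"{len(circuits)} of {N_CIRCUITS} circuits survive; each survivor happens to respond")
print("to the stimulus even though their weights differ: multiple strengthened factions.")
```

The point of the sketch is that the surviving circuits are of different provenance (different random weights), yet all of them were picked out and strengthened by the same stimulus.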
