Authors: Sebastian Seung
An alternative criterion is similarity of neuropathologies, already being applied to evaluate mouse models of neurodegenerative disorders such as Alzheimer's disease (AD). In humans, AD is accompanied by abnormal buildup of plaques and tangles in the brain. Normal mice do not develop AD, but researchers have genetically engineered several mouse models that do. Their brains generate large numbers of plaques and tangles.
Researchers are still arguing about whether any of these models are good enough for studying AD. But at least they have a target: a clear and consistent neuropathology to emulate.
Along these lines, similarity of connectopathies might be a good criterion for animal models of disorders like autism and schizophrenia. Of course, for this to work we would have to identify connectopathies in animal models, as well as analogous ones in patients afflicted by autism and schizophrenia.
***
You may have noticed that the plan for comparing connectomes sounds very different from the plan for decoding them. The connectionist theory of memory proposes particular hypotheses, the cell assembly and the synaptic chain, that can be tested using connectomics. In contrast, the connectopathy idea is open-ended. Without specific hypotheses, wouldn't searching for connectopathies be a wild-goose chase?
One of the leaders of the Human Genome Project, Eric Lander, has summed up the decade since its completion in this way: “The greatest impact of genomics has been the ability to investigate biological phenomena in a comprehensive, unbiased, hypothesis-free manner.”
It doesn't sound like what we were taught about the scientific method in school, where we learned that science proceeds in three steps:
(1) Formulate a hypothesis. (2) Make a prediction based on the hypothesis. (3) Perform an experiment to test the prediction.
Sometimes that procedure works. But for every success story, there are many more stories of failure caused by choosing the wrong hypothesis to investigate. It can take a lot of time and effort to test a hypothesis, which might turn out to be wrong or, even worse, simply irrelevant; in the latter case, the research ends up a complete waste of time. Unfortunately, there's no well-defined recipe for formulating hypotheses, beyond a stroke of insight or inspiration.
We do have an alternative to “hypothesis-driven,” or deductive, research: the “data-driven,” or inductive, approach. It too has three steps: (1) Collect a vast amount of data. (2) Analyze the data to detect patterns. (3) Use these patterns to formulate hypotheses.
Some scientists gravitate to one approach over the other, because it fits their personal style. But the two approaches are not really in opposition. The data-driven approach should be viewed as a way of generating hypotheses that are more likely to be worth exploring than ones based purely on intuition. It can be followed by hypothesis-driven research.
If we have the right technologies, we'll be in a position to apply this approach to mental disorders. Connectomics will provide more and more accurate and complete information about neural connectivity. With so much data available, we'll no longer have to search for our keys under the lamppost. Once we identify connectopathies, these will suggest good hypotheses about the causes of mental disorders that are worth exploring further.
To resort to another metaphor, searching for the causes of mental disorders is like looking for a needle in a haystack because the brain is so complex. How to succeed? One way is to start from a good hypothesis about the location of the needle. Then you need search only a small part of the haystack. This will work if you are lucky or smart enough to have a good hypothesis. Another way is to build a machine that rapidly sifts through all the material in the haystack. You are guaranteed to find the needle with this technology, even if you're not lucky or smart. This is analogous to the connectomic approach.
***
To understand why minds differ, we have to see better how brains differ. That's why comparing connectomes is so crucial. Uncovering just any kind of difference won't be sufficient, however, since many differences could end up being uninteresting. We'll have to narrow in on the important ones, those that are strongly correlated with mental properties. These are the differences that will finally give connectionism more explanatory power than phrenology. They will accurately predict mental disorders for individuals, as well as faithfully estimate the intellectual abilities of normal people. (For connectomes obtained using microscopy on dead brains, the test would actually involve “postdiction,” guessing the mental disorders or abilities of the deceased from their brains.)
Identifying connectopathies will be an important step toward understanding certain mental disorders. But understanding goes only so far. Ideally we will capitalize on it by developing better treatments for these maladies, or even cures. In the next chapter I'll envision how this will be done.
In 1821 the composer Carl Maria von Weber premiered his opera Der Freischütz. To marry Agathe, the hero, Max, must impress her father by prevailing in a shooting contest. Driven to desperation by fear of losing his love, he sells his soul to the devil for seven magic bullets, which are guaranteed to hit their mark. Max not only wins the hand of Agathe but manages to evade the devil, and the opera ends happily.
In 1940 Warner Bros. released Dr. Ehrlich's Magic Bullet, which dramatized the life of the German physician and scientist Paul Ehrlich. After sharing a 1908 Nobel Prize for his discoveries about the immune system, Ehrlich didn't rest on his laurels. His institute discovered the first antisyphilis drugs, relieving the suffering of millions of people.
By creating the first man-made drugs for any disease, Ehrlich effectively invented the entire pharmaceutical industry. He was guided by his theory of the “magic bullet,” the name of which may have been inspired by Weber's popular opera.
Ehrlich first imagined, and then discovered, chemicals that killed bacteria but spared other cells, like a magic bullet that unerringly flew to its target.
The bullet metaphor illustrates two important principles that apply to all medical treatments, not only drugs. First, there should be a specific target, and second, the ideal intervention should selectively affect only that target; that is, it should avoid “side effects.” These principles aren't upheld by our remedies for brain disorders, which remain distressingly primitive. The surgeon's knife seems hopelessly crude for altering the brain's intricate structure, yet sometimes there is no other way. You've heard that neurosurgeons treat severe cases of epilepsy by removing the part of the brain where the seizures originate. But overzealous surgery can lead to catastrophe, as you saw in the case of H.M. To minimize side effects, it's important to target as small a region as possible.
Epilepsy surgery simply removes neurons from a connectome. Other procedures are intended to break the wires of neurons without killing them. In the first half of the twentieth century, surgeons attempted to treat psychosis by destroying the white matter connecting the frontal lobe to other parts of the brain. The infamous “frontal lobotomy” was eventually discredited and replaced by antipsychotic drugs. Yet psychosurgery is still practiced today as a last-ditch measure when other therapies fail.
Before considering other types of interventions, I'd like to step back to imagine the ideal one. I've said that certain mental disorders might be caused by connectopathies. If that's the case, true cures would require establishing normal patterns of connectivity. You might regard this prospect as hopeless if you're a connectome determinist. But even if you're more optimistic, you can't deny that the complexity of the brain's structure is daunting. Merely seeing connectomes is difficult enough, and repairing them seems even harder. It's unclear how any of our technologies could be up to the challenge.
But the brain is naturally endowed with mechanisms for connectome change (reweighting, reconnection, rewiring, and regeneration) that are exquisitely controlled. Since genes and other molecules guide the four R's, they could serve as targets for drugs. I doubt you're surprised by the idea of the connectome as the target for medications, given that you've been reading this book. But you might wonder whether the idea is consistent with what you know from other sources.
According to well-known theories dating back to the 1960s, certain mental disorders are caused by a surplus or deficiency of neurotransmitter, which explains why they are relieved by drugs that alter neurotransmitter levels. Depression, for example, has been attributed to a dearth of serotonin, which is thought to be corrected by antidepressant medications such as fluoxetine, more commonly known as Prozac. (The drugs are supposed to increase serotonin levels by preventing neurons from sucking the molecule back up after secreting it. Recall that a number of such housekeeping mechanisms exist for keeping neurotransmitters from lingering in the synaptic cleft.)
But there is a problem with this theory. Fluoxetine affects serotonin levels immediately, yet it lifts mood only after several weeks. What could account for this long delay? According to one speculation, the serotonin boost causes other changes in the brain over the longer term. Perhaps it's these changes that relieve depression, but what exactly could they be? Neuroscientists have looked for effects of fluoxetine on the four R's, and found that it increases the creation of new synapses, branches, and neurons in the hippocampus. Moreover, as I mentioned in the discussion of rewiring, fluoxetine restores ocular dominance plasticity in adults, possibly by stimulating cortical rewiring. This doesn't prove that the drug's antidepressant effects are caused by connectome change, but it has certainly opened the minds of neuroscientists to the idea.
In this chapter I will focus on the prospect of finding new drugs that specifically target connectomes for the treatment of mental disorders. Let me emphasize, though, that other types of treatment are also important. Drugs may only increase the potential for change. To actually bring about positive changes, drugs could be supplemented by training regimens that correct behaviors and thinking. This combination could direct the four R's to reshape connectomes for the better. In my opinion, the best way to change the brain is to help it change itself.
***
There's no doubt that drugs have greatly advanced the treatment of mental disorders. Antipsychotics treat the most dramatic symptoms of schizophrenia, the delusions and hallucinations. Antidepressants can enable the suicidal to lead normal lives. But current drugs have limitations. Can we find new ones that are even more effective?
Our most successful drugs are for infectious diseases. An antibiotic like penicillin cures infections, killing bacteria by punching holes in their outer membranes. A vaccine consists of molecules that make the immune system more vigilant against a bacterium or virus. In short, an antibiotic corrects infection, while a vaccine prevents it.
These two strategies also apply to brain disorders. Let's consider prevention first. During a stroke, most neurons remain alive but damaged, and only later do they degenerate and die. Neuroscientists are working to find “neuroprotective” drugs that would minimize damage to neurons right after a stroke and thereby prevent death later on. The same strategy extends to diseases that destroy neurons for no apparent reason. For example, no one knows for sure why dopamine-secreting neurons degenerate and die in Parkinson's disease. Researchers hypothesize that the neurons are under some sort of stress, and would like to develop drugs that reduce it.
Some cases of Parkinson's disease are caused by defects in a gene that encodes a protein called parkin. An obvious therapy would be to replace the faulty gene. Researchers are attempting to do that by packaging a correct version inside a virus and injecting it into the brain, where they hope the virus will infect the dopamine-secreting neurons and protect them from degeneration. This “gene therapy” for Parkinson's has been tried in rats and monkeys so far, but not yet in humans.
Death is just the last step in the degeneration of a neuron, which is generally a long drawn-out process. You might compare it to the slow decline of a person who starts out weak and is then hit by a cascading progression of ailments, each worse than the last. To find clues, researchers look carefully at the various stages of degeneration in neurons, much as physicians observe the progression of symptoms in diseased patients.
Such observations are helpful because they narrow the search for molecular causes, the potential targets for neuroprotective drugs. In addition, they pinpoint the very first steps of degeneration. The timing is critical; intervening at the outset is likely to be more effective at preventing cell death later on. Early intervention is also important for treating cognitive impairments, which often emerge long before significant neuron death. These symptoms may occur because connections are lost well before neurons actually die.
In general, it's important to see degeneration more clearly, and to see it at its earliest stage. The images acquired by the tools of connectomics will help us do that. Serial electron microscopy will reveal exactly how a neuron deteriorates. We will also obtain more precise information about which neuron types are affected and when. All this is bound to be helpful in the search for ways to prevent neurodegeneration.
Can we also find ways to prevent neurodevelopmental disorders? To do this, we must diagnose them as early as possible, before development has veered too far off course. Even while the fetus is still in the womb, genetic tests can be performed to predict whether disorders such as autism and schizophrenia are likely to emerge later on. But accurate predictions may require combining genetic testing with examination of the brain.
I argued earlier that microscopy of dead brains, with its high spatial resolution, will be necessary for determining whether a brain disorder is caused by a connectopathy. That method might yield good science, but by itself it will be useless for medical diagnosis. That being said, once a connectopathy has been fully characterized by microscopy of dead brains, it should become easier to use diffusion MRI to diagnose it in living brains. In general, it's easier to detect something if you know exactly what you are looking for.