Author: Seth Horowitz
But aside from being an interesting and useful model for studying the transition from fetal to newborn hearing in humans, the story of tadpole hearing provides a hidden bonanza. Before a tadpole develops forelimbs, its brain has a rather fish-like connection pattern. It has an auditory nerve that carries signals from both the hearing and balance parts of its inner ear, and it has a lateral line system that sends signals through a separate pair of nerves projecting into a region of the hindbrain called the dorsal medulla. Here the nerves segregate into separate regions, some for processing balance and vibration, some for processing sound, quite a few that combine the two, and some for the lateral line. All of these cross-connect in a regular pattern, which allows for comparison of signals from each side of the tadpole’s body. This helps it do things like figure out where sound is coming from (using a brain nucleus called the superior olive), maintain its position in the water, or even just startle and swim away from danger (which can include its own parents—adult bullfrogs are rather indiscriminate in their choice of prey). Many of these brain regions then feed forward to the auditory midbrain, called the torus semicircularis, where auditory signals are recoded to handle complex sounds, and inputs from multiple sensory systems, including vision, are integrated to be sent forward to what passes for decision-making regions in the tadpole.
But when the tadpoles enter the deaf period, something changes. We still don’t know why, but at the same time that the new auditory pathway blocks the inner ear from receiving underwater sounds, the brain rapidly rewires itself. The medulla disconnects from the midbrain, the superior olive drops out of the circuit, and the lateral line system begins dying entirely. There is a sudden upsurge in growth proteins, and brain regions begin moving around, reconnecting, and undergoing massive chemical shifts. About forty-eight hours later, as the new auditory pathway becomes complete and starts letting sound in again, the brain reconnects itself in a remarkably different configuration, one more suited to hearing in air. Within forty-eight hours, the tadpole basically rewires about a quarter of its brain.
This discovery was made in 1997, and since then some of my colleagues and I have worked on more than a dozen projects aimed at understanding how this transformation works, using everything from gene screening and sequencing to just watching a tadpole’s behavior as it transforms into a frog. We’ve confirmed that frogs are critically important to our understanding of hearing and the brain itself because of their incredible ability to reshape their brains—not just in the course of normal development but even after injury. A frog doesn’t just shift its brain around according to its developmental program; a frog’s brain heals.
Each year millions of people are deafened, paralyzed, blinded, or rendered mute, not because of damage to their limbs or ears or eyes or mouths but rather due to irreversible damage to their brain and spinal cord. While humans can regenerate peripheral nerves, central nervous system damage is usually permanent. Frogs, on the other hand, are able to overcome it. Studies by Harold Zakon and others demonstrated that if a bullfrog suffers damage to its auditory nerve—an injury that would cause permanent deafness in humans—the bullfrog not only heals but re-forms appropriate connections, allowing restoration of function. Loss of sensory hair cells in the inner ear, whether due to injury or to exposure to certain types of antibiotics such as gentamicin, is typically permanent in humans; but not only do frogs regenerate hair cells after injury and drug exposure, there is evidence that they continually create new ones as old ones wear out. Somewhere along the evolutionary chain between amphibians and mammals, we lost the ability to heal our brains, our cranial nerves, and much of our ears. So studying how a tadpole can completely rewire its brain in forty-eight hours, and how a bullfrog can regrow its auditory nerve and restore function, is not just basic research done for the sheer fun of doing science. These studies are likely to provide the clues that may allow us to create gene or pharmacological therapies that restore this ability to humans.
Chapter 4
The High-Frequency Club
When I was three years old, I went deaf. No maternal exposure to rubella, no overly vigorous toddleresque Q-tip exploration, just an unfortunate case of chicken pox that lesioned my eardrums. I don’t have any explicit memories of the incident, and my hearing returned, the only residual problem being that my eardrums were slightly scarred and thickened. But now I hear bats.
Most people avoid bats if they can. Even on those warm summer evenings when you are outside, most of your interaction with bats is probably limited to seeing their shadowed forms flittering about, the smaller ones saving you from mosquito bites and the larger ones saving your garden from junebugs. But I spent a great deal of time one-on-one with them in the lab, and it never failed to amaze me how an animal whose brain is the size of a peanut actually builds most of its world with sound, creating three-dimensional images from subtle shifts in echoes.
To most people, bats’ auditory world is so far outside of the human hearing experience that they seem like silent shadows.
But bats and humans share a lot of genetic heritage simply by virtue of being mammals. And as a mammal, you’re a member of an exclusive evolutionary club: the high-frequency club. If you’ve ever wished you could hear a dog whistle, console yourself with the knowledge that humans, like all other mammals, have a remarkably wide range of hearing. Non-mammalian vertebrate hearing is generally limited to an upper end of about 4–5 kHz (although some specialist birds such as owls and cave swiftlets can hear up to about 12–15 kHz). Of all the vertebrates that depend on sound to hunt, mate, define their territory, or avoid predators, mammals have the broadest hearing range, from the infrasonics of elephants to the 150 kHz natural ultrasound used by dolphins.
This isn’t because we’re more advanced or the new kids on the evolutionary block. Mammals are as old as the dinosaurs, but our ancestors at the time were rather mouse- or shrew-like, and likely listening sharply to avoid becoming someone else’s food. It’s just that our ears are more specialized than those of fish, frogs, reptiles, or birds. And while our auditory system shares basic features with those of all other vertebrates, it has two features that none of the others has, both of which are critical to high-frequency hearing: an outer ear and a cochlea.
Even among auditory scientists, there is a tendency to take the outer ear for granted.
For humans it’s something to hang a pair of glasses on or to make fun of if it sticks out too far. And since an awful lot of our listening these days uses earbuds or earphones that sit inside or cover our external ear to let us hear more clearly in noisy environments such as airplanes or the gym, we tend to dismiss the outer ear as a vestigial organ that doesn’t do much (outside of featuring in a memorable scene in the movie Reservoir Dogs). But our outer ear is actually a fascinating evolutionary development that tells us a great deal not only about the environment in which mammals listen but also about what we listen to. And if you take your earbuds out and move your head around as you listen, you can actually get an idea of what the outer ear does.
The outer ear is basically a flattened cone of relatively stiff flesh ending at the entrance to the ear canal, or external auditory meatus, the place past which the Q-tip shall not go. Most sources that discuss its function describe the pinna as a sound-gathering device for low-frequency sounds, increasing the volume of vibration-bearing space that can funnel sound into the ear canal and boosting the relative gain of sound by up to 20 dB. This alone would make it an impressive passive listening aid; remember that dB are logarithmic, and every 6 dB is a doubling of the sound pressure, so a 20 dB gain gives you a lot more sound to work with. (If you look at antique hearing aids from the nineteenth century—or old cartoons—you usually see what was called an ear trumpet, a device that looks like a long metal horn with the small end fitted into the ear. It was basically an enlarged prosthetic outer ear.) But this gain only applies to sounds below about 4–5 kHz, the range at which all other vertebrates hear quite well.
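The dB arithmetic above is easy to sanity-check. This short Python sketch (the function name is mine, not the book’s) converts a gain in decibels into a sound-pressure ratio using the standard definition dB = 20·log10(p1/p0):

```python
def db_to_pressure_ratio(db_gain: float) -> float:
    """Convert a gain in decibels to a sound-pressure ratio (dB = 20*log10(p1/p0))."""
    return 10 ** (db_gain / 20)

# ~6 dB doubles the sound pressure, as the text says
print(round(db_to_pressure_ratio(6.02), 2))  # 2.0
# a 20 dB pinna gain is a tenfold increase in sound pressure
print(round(db_to_pressure_ratio(20), 1))    # 10.0
```

Note the divisor of 20 rather than 10: sound pressure is an amplitude, not a power, so doubling it takes about 6 dB rather than 3.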
High-frequency sounds, on the other hand, are badly attenuated in air, due in part to their short wavelengths. They tend to degrade into thermal noise over relatively short distances unless they are extremely loud. So to use them, you need something that gathers more sound than either a small hole in the side of the head or even an eardrum stuck out on the edge of your skull, as in frogs. You need the equivalent of an audio telescope lens, something with a larger sound-gathering area. And not only do you need something to gather more sound, but you have to have some way of discriminating subtle changes in the sound based on the direction it is coming from, preferably without having to swivel your head back and forth constantly to figure out where that high-pitched sound is coming from.
Think about what the outer ear actually looks like—look across the room at someone else’s ear or gently feel your own. It’s not just some simple cartoonish cone. It is an extremely individualized shape, replete with ridges and usually a small tab that points slightly forward just over the opening to the ear canal (this structure, the tragus, is usually much larger in smaller mammals). For higher-frequency sounds, particularly those above about 6–8 kHz for humans, the ridges and valleys act as little blockades that slightly reduce the amplitude of certain frequencies, creating one or more spectral notches. The pattern of notches in the spectrum of the sound is specific to the shape and position of your ear. This “pinna notch” helps you localize high-frequency sounds, especially in the vertical plane. A sound coming from above or below your head hits these ridges and the tragus at different angles, with the result that different frequencies are slightly blocked. You can check this out yourself, especially if you go outside on a summer day when there are cicadas or other loud insects around. If you hold your outer ears flat against the side of your head, including the little flap in front of the ear canal (without, of course, blocking the canal itself), you’ll find it’s much harder to figure out whether a sound is coming from above, below, or level with your head.
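One very rough way to model the pinna notch is as a delay-and-add filter: the direct sound mixes with a reflection off a pinna ridge that travels slightly farther, and frequencies for which that extra path puts the reflection half a wavelength out of step are cancelled. This sketch is my simplification, not the author’s model, and the 2 cm path difference is an illustrative assumption:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def notch_frequencies(extra_path_cm: float, n_notches: int = 3) -> list[float]:
    """Notch frequencies (Hz) for a direct sound mixed with a reflection that
    travels extra_path_cm farther. Destructive interference occurs where the
    delay equals an odd number of half-periods: f = (2k + 1) / (2 * delay)."""
    delay = (extra_path_cm / 100) / SPEED_OF_SOUND  # seconds
    return [(2 * k + 1) / (2 * delay) for k in range(n_notches)]

# An extra path of ~2 cm puts the first notch near 8.6 kHz, in the 6-8+ kHz
# range the chapter describes for human pinna notches.
print([round(f) for f in notch_frequencies(2.0)])
```

Tilting your head changes the reflection geometry, which shifts the notch positions—one plausible reading of why a head tilt helps with vertical localization.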
The pinna notch has another interesting function, one that to my knowledge has never been studied, though it’s rather obvious once you spend too many years thinking about things auditory. It just requires watching a mammal listen to a new sound that may be of interest to it—dogs are great ones to try this out on. Find your subject, be it a dog, a small child, a kitten, or a roomful of students, and say some nonsense words at it, but intone them as if you are saying something meaningful, like “Do you want a treat?” or “This will be on the final.” What do they do? They roll their heads slightly to one side or the other (humans tend to roll their heads to the left, I’ve found). I’ve never actually seen this written up in any experiments, but in the course of torturing friends, family, students, and pets in the name of auditory science, I’ve found that it’s pretty consistent—so much so that if you look at comics or cartoons of a mammal trying to figure out something it’s just heard, you’ll often see its head tilted to one side. By tilting the head this way, the listener shifts the position of the outer ear and changes both the timing and spectral properties of the sound, which allows the listener to hear it slightly differently when it is repeated. It’s sort of an auditory equivalent of 3-D movie glasses—rather than each eye seeing a slightly shifted visual scene, by tilting your head you hear the sound from a slightly different auditory position, which both gives you more information about where it’s coming from and lets your brain confirm what you heard.
Given that mammals have this specialization that lets them gather and tweak high-frequency sound, how do we act on it? The sensitivity of the middle and inner ear of other vertebrates largely tops out at about 4–5 kHz, but we have an evolutionary adaptation of our inner ear, the cochlea, a snail-shell-shaped structure full of sensory hair cells connected to the outside world via the auditory bones of the middle ear and the eardrum. That description fits the general plan of every other vertebrate inner ear as well, but the cochlea is significantly more complex. In the cochlea, the tips of the hair cells are embedded in a tectorial or “ceiling-like” membrane, and the bases of the cells sit on a long, thin, trapezoid-shaped membrane called the basilar membrane. The shape is important—the basal end near the oval window, closest to the outside world, is narrow and stiff, and hence vibrates most in response to high-frequency sounds. The far or apical end is wider and looser, and more responsive to low-frequency sounds. This variable flexibility causes the hair cells in different regions to vibrate maximally in response to a particular frequency range.
Because of this arrangement, hair cells don’t have to fire in precise synchrony with the phase of the sound. Instead, each hair cell’s tuning is defined by its placement on the basilar membrane. Sound enters the fluid-filled chamber of the cochlea and creates a traveling wave with maximum deflection at the place on the basilar membrane corresponding to a particular frequency. This lets the sensory hair cells respond by place coding, which relieves the auditory nerve of the burden of trying to fire tens of thousands of times per second.
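The place-coding map described above has a standard empirical description in the hearing literature: the Greenwood function, which relates position along the basilar membrane to characteristic frequency. This sketch uses commonly cited human constants (not given in this chapter), with x = 0 at the loose apical end and x = 1 at the stiff basal end:

```python
def greenwood_cf(x: float) -> float:
    """Characteristic frequency (Hz) at fractional distance x along the human
    basilar membrane, from apex (x=0, wide and loose, low frequencies) to
    base (x=1, narrow and stiff, high frequencies). Constants are the
    commonly cited human fit for the Greenwood function."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Print the frequency-to-place map at a few positions along the membrane.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}  ->  ~{greenwood_cf(x):7.0f} Hz")
```

Running it shows why place coding spans the whole mammalian range: characteristic frequency rises roughly exponentially from about 20 Hz at the apex to about 20 kHz at the base, so each octave occupies a comparable stretch of membrane.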
The traveling wave in the cochlea was discovered not by studying animals but by studying human cadavers, and it won Georg von Békésy the Nobel Prize in Physiology or Medicine in 1961. The problem is, his theory turned out to be at least partially wrong: it couldn’t explain how complex sounds actually break up into their component frequencies as they travel through the cochlea. This illustrates a basic problem in anatomical science: dead preserved tissue doesn’t work the same way as living tissue, especially in the case of something as dynamic as hearing. Dead guys not only tell no tales, but they also don’t hear so well.