What's That Pig Outdoors?
by Henry Kisor
Another revolutionary electronic device that has made a large difference in my life, as well as in the lives of almost all other deaf people, is the closed-caption decoder. That's a small “black box” plugged between the antenna and the TV set which transforms encoded electronic signals from television transmitters into captions on the screen, much like subtitles on foreign films. (A federal law passed in 1990 requires the decoder circuitry to be built into all television sets with screens 13 inches and larger sold in the United States after July 1993.) Without the decoder, unneeded captions don't clutter up the sets of hearing people. Thus the system is called “closed” captioning.
The television networks send videotapes of their shows to captioning specialists such as the National Captioning Institute in Falls Church, Virginia. There, hearing listeners transcribe the dialogue on the tapes and encode it into captions on magnetic discs. The discs are then sent to the networks, where the caption signals are inserted electronically into the television picture, then transmitted along with the normal audio and video portions of the program.
The closed captioner first appeared in 1980 and since then has become the greatest electronic entertainment aid the deaf have ever had. Before it came along, even expert lipreaders had a hard time puzzling out what the “talking heads” were saying on the two-dimensional television screen. Voice-over narration deepened the fog.
I could easily follow only sports events. Movies were difficult unless I'd read the books on which they were based, if there were any. Talky British drawing-room dramas such as those showcased on Masterpiece Theatre might as well have been in Urdu for all I could understand of them. As for dramas and situation comedies, only shows with a lot of action were halfway comprehensible, and I'd have to fill in the gaps with my own imagination, inventing plot and dialogue to match the movements on-screen. The resulting “scenarios” often merely lampooned the genuine versions, as I would learn later, when the captioners came along. Often I'd prefer my fantasies to the real ones. That's no tribute to my imagination but a measure of the clay-footedness of American television programming.
Today almost all prime-time network shows are captioned, and PBS captions the majority of its evening programs. Though local news broadcasts are captioned in many cities, only in the last year have two Chicago stations captioned their news. Each year, however, the number of closed-captioned shows grows.
The captions are not encoded verbatim. An actor might say, “Baby, how's about you and me making a little whoopee?” But the caption instead might read: “Baby, let's make love.” This foreshortening is done for two reasons. First, there might not be room on the screen to display a long-winded bit of dialogue, even in successive fragments. Second, on the average, deaf viewers read more slowly than people speak, so some shrinking of dialogue is necessary for it to be displayed on-screen long enough for the viewers to read it.
Many of the spices and subtleties of language thus are lost. In the early days of closed captioning, for example, the plummy locutions of upper-class British English gave way to Standard Blandspeak in captions prepared for BBC imports on American public television. As time has gone on, however, the captioning companies evidently have determined that deaf viewers who tune in to PBS shows tend to be better educated and more sophisticated than those who watch commercial network television, and thus the captions can follow the actual dialogue more closely.
Then there is live, or “real-time,” closed captioning, performed on the fly by human transcribers with the help of computers, much like court reporters. In this manner all three commercial networks live-caption their news, their coverage of presidential addresses, and even some sports events. Live captioning is still in the process of development, and at times can be primitive, even hilarious. “Pulitzer Prize” might be rendered “pullet surprise,” and one morning an ABC weatherman advised his viewers, according to the captions, to “drivel carefully.” And fresh names and concepts in the news often are garbled unrecognizably until the proper spellings can be fed into the computer's lexicon. There is also necessarily a delay between the spoken word and the appearance of the caption on-screen. The video might switch to the wreckage of a plane crash while the captions are still discussing a political event.
But I'm not complaining. No longer do I have to wait until the next day to read the text of a presidential address in The New York Times. I do, however, turn off the captioner during Monday Night Football. Who needs the inane, clichéd chatter of pro football commentators? Silence can sometimes be a blessing.
Closed captions are also being recorded on films released on videocassette. Many recent movies are captioned at the same time they are released as videos, and more and more classic old films have been re-released with captions. These enable the deaf to participate in an important segment of popular culture that in the past largely had been lost to them. One Saturday not long ago I rented Gone With the Wind. I'd known about Rhett Butler's famous “Frankly, my dear, I don't give a damn,” but it was the first I'd heard of Scarlett's “Fiddle-de-dee!”
A little-known side benefit of captioned videos is their use as a “language lab” for lipreaders. Watching the lips of the speakers at the instant the captions appear on-screen will sharpen the skills of any speechreader. It's not long before one begins to understand all the words the captions leave out. And it's excellent practice in accustoming oneself to non-American accents. After years of watching Masterpiece Theatre as well as scores of captioned British films, I've discovered that on trips to the United Kingdom I have much less trouble lipreading most Britons. Even Cockneys aren't the puzzle they used to be.
One final electronic device should be mentioned: the cochlear implant. A surgeon embeds a small round “receiver-stimulator” and a tiny array of electrodes in the mastoid bone and cochlea. A tiny microphone that looks like a behind-the-ear hearing aid picks up sound and passes it to a pocket-sized computer worn on the body. The computer processes this sound, then transmits it to the electrodes, which stimulate the auditory nerve. This produces sensations that are interpreted by the brain as sounds.
How well? In some cases, enough so that the patient can use the telephone unaided. For most patients, however, the implant is of greatest benefit in giving them enough “hearing” to markedly improve their speech and lipreading, as well as enabling them to “hear” environmental sounds such as doorbells and automobile horns. There is great variation in the benefits of an implant, and much depends on how long the patient has been deaf as well as the age of onset of the deafness.
Not all deaf people are suitable candidates for implants. Among the requirements, as of the summer of 1990, were a profound loss of hearing in both ears; an inability to recognize speech with hearing aids; a cochlea in suitable condition to accept the implanted electrodes; at least one functioning auditory nerve; and a willingness to work hard at the task of learning to “hear” all over again.
Cochlear implants in children are controversial in some parts of the deaf community. Critics there are challenging the Food and Drug Administration's approval of one company's implant for children. They contend that any improvement in speech comes from the education program that follows the implant, rather than the implant alone. They also believe that implanted children eventually will tire of the external apparatus, put it away in a drawer, and join the deaf culture.
The dispute apparently is rooted in the old battle between sign and oralism. Those who believe some deaf children can grow up to be competent speaking and lipreading participants in the hearing world view the cochlear implant as another excellent tool for helping them do so. Others consider such an idea an outright rejection of the sign-language-based culture, which they believe superior.
In the mid-1980s, I investigated the possibility of undergoing an implant. But I was turned down without even an interview, because I had lost my hearing at what the surgeons considered too early an age for the then-existing equipment, which sent signals to a single electrode. By 1990, however, the state of the art had advanced so that the implant consisted of as many as twenty-two electrodes, helping patients distinguish among a wider range of sounds. Middle-aged people like me, even elderly ones, who had lost their hearing as toddlers, were being successfully implanted. In the late fall of 1990 I again began to consider becoming a candidate for a twenty-two-channel implant.
Why? As a deaf person, I've always eagerly grasped whatever opportunity has come along to ease the tasks of living and working in a hearing world. If it turns out that I am physiologically not a good candidate for an implant, I'll just recall the counsel a wise audiology professor at Northwestern University offered me after I was turned down for a single-channel implant. He pointed out that I was already light-years ahead in dealing with my deafness compared with those who at the time were considered the best prospects for the implant: those who had lost their hearing later in life, when adjustment to the loss was at its most difficult. “Why interfere with what already works?” he said sensibly.
Indeed, Dean Garstecki is a veritable model of sensibility. I met him several years ago, when I realized that my speech was again beginning to deteriorate and that I needed some brush-up therapy if I was to maintain it at a reasonable level of intelligibility. I wrote a note to a friend at the Chicago Hearing Society, on whose board of directors I had briefly served a few years before. Could she put me in touch with someone who could help?
To my surprise she suggested my old battleground, the Institute of Language Disorders at Northwestern, now called the Department of Communication Sciences and Disorders. The prospect did not fill me with joy, but I reasoned that I had nothing to lose, and perhaps a different philosophy had displaced the irritating old paternalism, even arrogance, that I had experienced two and three decades before. Besides, private speech therapists cost a good deal of money.
Instantly Garstecki, head of the department's program in audiology and hearing impairment, lifted my concern. He wasn't interested in what went on in my head, he said. That was irrelevant. Clearly I'd made a good adjustment to life as a deaf person. Let's treat what obviously ails me: my speech. And so, for nearly a decade, I have spent a quarter or two of every other academic year as a client at the institute, brushing up on my “s”s and “e”s and learning to put the brakes on so that I don't runmywordsalltogetherlikethis.
In the beginning I had hoped to improve my speech to a silver-tongued point, one that would allow me to orate before large groups of hearing strangers. From time to time I am invited to lecture on literary topics, a task that can be lucrative. Perhaps after hours a day of intensive training over many months, the quality of my speech could be raised to a level close to that of a normal hearing person. Retaining that quality, however, might require just as much labor, a game that might not be worth the candle.
I cannot hear myself speak; my lips, tongue, mouth, and larynx may seem to me to be going through the correct motions to produce intelligible speech, but their synchronization may be off ever so slightly, my tongue and teeth not quite in the right position with the precise tension required to produce a certain sound properly. The difference between the right and the wrong placement of the structures of the mouth is very, very subtle, and I cannot always tell the difference.
The only way I can tell with any consistency if my speech is understandable is to watch the reaction of the strangers to whom I speak. If they knit their brows, or gaze at me vacantly, we're not connecting. It's when I have to repeat myself more and more to be understood that I know my speech is slipping, that it's time to go back to Northwestern for a brush-up.
Our sessions do not seek dramatic breakthroughs. They are simply intended to maintain my intelligibility at a level somewhere above 90 percent; that is, a level at which a stranger talking to me for the first time would understand at least 90 percent of what I said. That might seem a modest goal, but it requires a lot of hard work and conscientious drilling at home. Once a week my therapist and I meet across a table in a small room for an hour. Our largest difficulty is finding some way for me to monitor the quality of my speech. We try, fail, try again, fail again, suddenly find the right placement, lose it, find it again, try to hang on to it, and sometimes succeed. Each session is a mixture of frustration, satisfaction, and sometimes elation.
“Hold that thought,” say my therapists as I leave each session. At the bus stop on the way to work each morning, I peer around to see whether anyone's in earshot, then run through my drills, warming up for the day. To sharpen my “e”s, the vowel with which I have the most trouble (it often comes out too lax, almost like an “uh”), I'll concoct sentences full of “e”s. “Evil babies eat eels from the sea” is one of my favorites. Waiting for the bus one day, I reeled off a dozen shapely and orotund “evil babies” sentences, exaggerating that “e” so that the proper sound would stay with me all day even when I wasn't thinking about it. Then I looked around and saw that another commuter had joined me. She stared warily and kept her distance.