Pitch can also serve more specific purposes. When a young boy tries to speak in a man’s deep voice and fails, as in How Green Was My Valley, the joke is based primarily on pitch. Marlene Dietrich’s vocal delivery often depends on a long upward-gliding intonation that makes a statement sound like a question. In the coronation scene of Ivan the Terrible, Part I, a court singer with a deep bass voice begins a song of praise to Ivan, and each phrase rises dramatically in pitch (7.7–7.9). When Bernard Herrmann obtained the effects of shrill, birdlike shrieking in Hitchcock’s Psycho, even many musicians could not recognize the source: violins played at extraordinarily high pitch.

 

7.7 In Ivan the Terrible, Eisenstein emphasizes changes in vocal pitch by cutting from a medium-long shot …

7.8 … to a medium shot …

7.9 … to a close-up of the singer.

When Julianne Moore was planning her performance as the protagonist of Todd Haynes’s Safe, she took pitch and other vocal qualities into account:

My first key to her was her voice, her vocal patterns. I started with a very typical Southern California speech pattern. It’s almost a sing-song rhythm, you know—it’s referred to as the “Valley quality” that travelled across the country and became a universal American vocal pattern. It was important to me that her voice would have that kind of melody to it. And then I would put question marks at the end of the sentence all the time—that way she never makes a statement; it makes her very unsure and very undefined. I also went above my own chords, because I wanted the sensation of her voice not being connected at all to her body—that’s why her voice is so high. This is someone who’s completely disconnected from any kind of physicality, from any sense of being herself, from really knowing herself. In that sense, I guess the vocal choices are somewhat metaphorical.

 
Timbre

The harmonic components of sound give it a certain color, or tone quality—what musicians call timbre. When we call someone’s voice nasal or a certain musical tone mellow, we’re referring to timbre. Timbre is actually a less fundamental acoustic parameter than amplitude or frequency, but it’s indispensable in describing the texture or “feel” of a sound. In everyday life, the recognition of a familiar sound is largely a matter of various aspects of timbre.
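
The point can be made concrete with a small synthesis sketch (our illustration, not the book’s; it assumes Python with NumPy). The two tones below share the same fundamental frequency and the same peak amplitude, so they match in pitch and loudness, yet different harmonic recipes give them different timbres, one comparatively mellow, one more nasal.

```python
# A minimal sketch of timbre: same pitch and loudness, different harmonic recipes.
import numpy as np

SAMPLE_RATE = 44100                                   # samples per second
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)  # one second of time points
fundamental = 220.0                                   # both tones have this pitch (Hz)

def tone(harmonic_weights):
    """Sum sine waves at integer multiples of the fundamental, then normalize."""
    signal = sum(w * np.sin(2 * np.pi * fundamental * n * t)
                 for n, w in enumerate(harmonic_weights, start=1))
    return signal / np.max(np.abs(signal))            # equal peak loudness

mellow = tone([1.0, 0.3, 0.1])            # energy concentrated in the low harmonics
nasal = tone([1.0, 0.0, 0.8, 0.0, 0.6])   # strong upper harmonics

# Identical pitch and peak amplitude, yet the waveforms (and the sounds) differ:
print(np.allclose(mellow, nasal))  # False
```

Written to a sound file, the two arrays play at the same pitch but are readily distinguishable by ear; only their harmonic content, the timbre, differs.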

Filmmakers manipulate timbre continually. Timbre can help articulate portions of the sound track, as when it differentiates musical instruments from one another. Timbre also comes forward on certain occasions, as in the clichéd use of oleaginous saxophone tones behind seduction scenes. More subtly, in the opening sequence of Rouben Mamoulian’s Love Me Tonight, people starting the day on a street pass a musical rhythm from object to object—a broom, a carpet beater—and the humor of the number springs in part from the very different timbres of the objects. In preparing the sound track for Peter Weir’s Witness, the editors drew on sounds recorded 20 or more years before, so that the less modern timbre of the older recordings would evoke the rustic seclusion of the Amish community.

Loudness, pitch, and timbre interact to define the overall sonic texture of a film. For example, these qualities enable us to recognize different characters’ voices. Both John Wayne and James Stewart speak slowly, but Wayne’s voice tends to be deeper and gruffer than Stewart’s querulous drawl. This difference works to great advantage in The Man Who Shot Liberty Valance, where their characters are sharply contrasted. In The Wizard of Oz, the disparity between the public image of the Wizard and the old charlatan who rigs it up is marked by the booming bass of the effigy and the old man’s higher, softer, more quavering voice.

Loudness, pitch, and timbre also shape our experience of a film as a whole. Citizen Kane, for example, offers a wide range of sound manipulations. Echo chambers alter timbre and volume. A motif is formed by the inability of Kane’s wife, Susan, to sing pitches accurately. Moreover, in Citizen Kane, the plot’s shifts between times and places are covered by continuing a sound thread and varying the basic acoustics. A shot of Kane applauding dissolves to a shot of a crowd applauding (a shift in volume and timbre). Leland beginning a sentence in the street cuts to Kane finishing the sentence in an auditorium, his voice magnified by loudspeakers (a shift in volume, timbre, and pitch).

“The Empire spaceship sounded a certain way as compared to the rebel fleet; that was a deliberate style change. Everybody in the Empire had shrieking, howling, ghostlike, frightening sounds…. You hear it—you jump with fear. Whereas the rebel forces had more junky-sounding planes and spaceships. They weren’t quite as powerful; they tended to pop and sputter more.”

— Ben Burtt, sound editor, Star Wars

 

Recent noise reduction techniques, multitrack reproduction, and digital sound yield wider ranges of frequency and volume, as well as crisper timbres than filmmakers could achieve in the studio years. Today sound editors can individualize voice or noise to a surprising degree. For The Thin Red Line, every character’s distinctive breathing sounds were recorded for use as ambient noise. Randy Thom, sound designer for Cast Away, sought to characterize different sorts of wind—breezes from the open sea, winds in a cave. Sound even announces a shift in wind direction crucial to one of the hero’s plans. “We can use the wind in a very musical way,” Thom notes.

Selection, Alteration, and Combination

Sound in the cinema is of three types: speech, music, and noise (also called sound effects). Occasionally, a sound may cross categories—Is a scream speech or noise? Is electronic music also noise?—and filmmakers have freely exploited these ambiguities. In Psycho, when a woman screams, we expect to hear a human voice and instead hear screaming violins. Nevertheless, in most cases, the distinctions hold. Now that we have an idea of some basic acoustic properties, how are speech, music, and noise selected and combined for specific purposes?

“Too many films seem essentially designed to be heard in the mixing studios. I always fight against recording every single footstep, and would rather lose the sound of people settling into armchairs, etc., and fade out a particular atmosphere sound once the emotional impact has been achieved, even at the cost of realism. You have to know how to play with silence, to treat sound like music.”

— Bertrand Tavernier, director

 
Choosing and Manipulating Sounds

The creation of the sound track resembles the editing of the image track. Just as the filmmaker may pick the best image from several shots, he or she may choose what exact bit of sound will best serve the purpose. Just as footage from disparate sources may be blended into a single visual track, so too sound that was not recorded during filming may be added freely. Just as a shot may be rephotographed, tinted, or jigsawed into a composite image, a bit of sound may be processed to change its acoustic qualities. And just as the filmmaker may link or superimpose images, so may he or she join any two sounds end to end or place one over another. Though we aren’t usually as aware of sonic manipulations, the sound track demands as much choice and control as does the visual track.
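
As a rough illustration of these operations (a hypothetical sketch of ours in Python with NumPy, not a description of any editor’s actual tools): if each recorded sound is treated as an array of samples, joining end to end is concatenation, superimposing is mixing at chosen levels, and processing is any transformation of the samples.

```python
# A hypothetical sketch of the basic sound-editing operations described above,
# treating each recorded sound as a NumPy array of samples.
import numpy as np

rate = 44100                                     # samples per second
t = np.linspace(0, 0.5, rate // 2, endpoint=False)
footsteps = 0.5 * np.sin(2 * np.pi * 110 * t)    # stand-ins for two recorded sounds
dialogue = 0.8 * np.sin(2 * np.pi * 440 * t)

# Joining two sounds end to end, like splicing two shots:
spliced = np.concatenate([footsteps, dialogue])

# Placing one sound over another, like a superimposition of images:
mixed = 0.3 * footsteps + 1.0 * dialogue         # mix at chosen levels

# Processing a sound to change its acoustic qualities (here, halving its volume):
quieter = 0.5 * footsteps
```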

Sometimes the sound track is conceived before the image track. Studio-made animated cartoons typically record music, dialogue, and sound effects before the images are filmed, so that the figures may be synchronized with the sound frame by frame. For many years, Carl Stalling created frantically paced jumbles of familiar tunes, weird noises, and distinctive voices for the adventures of Bugs Bunny and Daffy Duck. Experimental films also frequently build their images around a preexisting sound track. Some filmmakers have even argued that abstract cinema is a sort of “visual music” and have tried to create a synthesis of the two media.

Not all the sounds we hear in a film are generated specifically for that project. Editors tend to build their own collections of sounds that intrigue them, but sometimes they reuse music or effects stored in sound libraries. Most famous is the “Wilhelm scream,” first heard in a 1951 American film when an alligator bites off a cowboy’s arm. The scream was recycled in Star Wars, Raiders of the Lost Ark, Reservoir Dogs, Transformers, and over a hundred other films.

As with other film techniques, sound guides the viewer’s attention. Normally, the sound track is clarified and simplified so that important material stands out. Dialogue, as a transmitter of story information, is usually recorded and reproduced for maximum clarity. Important lines should not have to compete with music or background noise. Sound effects are usually less important. They supply an overall sense of a realistic environment and are seldom noticed; if they were missing, however, the silence would be distracting. Music is usually subordinate to dialogue as well, entering during pauses in conversation or effects.
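
One common way of keeping dialogue on top is “ducking”: lowering the music whenever speech is present. The sketch below (our assumption about how the idea might be automated, again in Python with NumPy, not a description of any particular mixing practice) lowers a continuous underscore wherever a dialogue track carries sound.

```python
# A hypothetical "ducking" sketch: lower the music wherever dialogue is present,
# so that important lines never compete with the score.
import numpy as np

rate = 44100
t = np.linspace(0, 2.0, 2 * rate, endpoint=False)
dialogue = np.where(t < 1.0, 0.8 * np.sin(2 * np.pi * 300 * t), 0.0)  # speech in the first second only
music = 0.5 * np.sin(2 * np.pi * 220 * t)                             # continuous underscore

# Estimate where speech occurs from its short-term loudness ...
window = rate // 100                                                  # 10 ms smoothing window
envelope = np.convolve(np.abs(dialogue), np.ones(window) / window, mode="same")

# ... and duck the music there: full level during pauses, 20% under speech.
gain = np.where(envelope > 0.01, 0.2, 1.0)
mix = dialogue + gain * music
```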

Dialogue doesn’t always rank highest in importance, though. Sound effects are usually central to action sequences, while music can dominate dance scenes, transitional sequences, or emotion-laden moments without dialogue. And some filmmakers have shifted the weight conventionally assigned to each type of sound. Charlie Chaplin’s City Lights and Modern Times eliminate dialogue, letting sound effects and music come to the fore. The films of Jacques Tati and Jean-Marie Straub retain dialogue but still place great emphasis on sound effects. In Robert Bresson’s A Man Escaped, music and noise fill out a sparse dialogue track by evoking offscreen space and creating thematic associations.

“We were going for a documentary feel. We came up with a way for the loop group actors to say lines in a way we called ‘nondescript dialogue.’ They said lines, but they didn’t say the actual words. If you put it behind people speaking, you just think it’s people talking offscreen, but your ear isn’t drawn to it. It would just lie there as a bed, and you can play it relatively loudly and it just fits in with the scenes.”

— Hugh Waddell, ADR supervisor, on The Thin Red Line

 

In creating a sound track, then, the filmmaker must select sounds that will fulfill a particular function. In order to do this, the filmmaker usually will provide a clearer, simpler sound world than that of everyday life. Normally, our perception filters out irrelevant stimuli and retains what is most useful at a particular moment. As you read this, you are attending to words on the page and (to various degrees) ignoring certain stimuli that reach your ears. But if you close your eyes and listen attentively to the sounds around you, you will become aware of many previously unnoticed sounds—traffic, footsteps, distant voices. Any amateur recordist knows that if you set up a microphone and recorder in what seems to be a quiet environment, those normally unnoticed sounds suddenly become obtrusive. The microphone is unselective; like the camera lens, it doesn’t automatically filter out what is distracting. Sound studios, camera blimps to absorb motor noise, directional and shielded microphones, sound engineering and editing, and libraries of stock sounds—all allow the filmmaker to choose exactly what the sound track requires.
