Authors: David Byrne
Tags: #Science, #History, #Non-Fiction, #Music, #Art
Traditional Chinese music and American folk music usually employ five notes selected from among those twelve to create their scales. Arabic music works within these parameters too. Western classical music uses seven of the twelve available notes (the eighth note of the Western scale is the octave).
In 1921, the composer Arnold Schoenberg proposed a system that would
“democratize” musical composition. In this twelve-tone music, no note is
considered to be more important than any other. That does indeed seem like
a fair and democratic approach, yet people often call music using that system dissonant, difficult, and abrasive. Dissonant sounds can be moving—either
used for creepy effect or employed to evoke cosmic or dark forces, as in the works of Messiaen (his Quartet for the End of Time) or Ligeti (his composition Atmosphères is used in the trippy stargate sequence of the movie 2001).
But generally these twelve-tone acts of musical liberation were not all that popular, and neither was free jazz, the improvisational equivalent pioneered by Ornette Coleman and, in his later years, John Coltrane. This “liberation” became, for many composers, a dogma—just a new, fancier kind of prison.
Very few cultures use all twelve available notes. Most adhere to the usual harmonies and scales, but there are some notable exceptions. Javanese gamelan music, produced mainly by orchestras consisting of groups of gong-like instruments, often uses scales of five notes, but those five notes are more or less evenly spread across the octave. The intervals between the notes are different from those of a five-note Chinese or folk-music scale. It is surmised that one reason for this is that gongs produce odd, inharmonic resonances and overtones, and to make those aspects of the notes sound pleasant when played together, the Javanese adjusted their scales to account for the unpleasantly interacting harmonics.
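To make the difference in spacing concrete, here is a quick arithmetic sketch in Python. It idealizes the gamelan scale as five exactly equal steps, which is only an approximation (real slendro tunings vary from one gamelan to another), and states both scales in equal-tempered cents:

```python
# Five roughly equal steps per octave (an idealization of the Javanese
# slendro scale) next to a Chinese/American-folk major pentatonic scale.
# Values are in cents (1200 cents = one octave).
slendro_ish = [round(i * 1200 / 5) for i in range(6)]
pentatonic = [0, 200, 400, 700, 900, 1200]

print(slendro_ish)  # [0, 240, 480, 720, 960, 1200] -- evenly spread
print(pentatonic)   # five notes per octave too, but unevenly spaced
```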
Harmonics are the incidental notes that most instruments produce above and below the principal (or “fundamental”) note being played. These “ghost” notes are quieter than the main tone, and their number and variety are what give each instrument its characteristic sound. The harmonics of a clarinet (whose vibrations result from a reed and a column of air) are different from those of a violin (whose vibrations result from a vibrating string). Hermann von Helmholtz, the nineteenth-century German physicist, proposed that it is qualities inherent in these harmonics and overtones that lead us to line up notes along common intervals in our scales. He noticed that when notes that aren’t “in tune” are played at the same time, you can hear beating, pulsing, or roughness. You can hear this beating if you play the same note on more than one instrument: if the notes are ever so slightly different, if they aren’t exactly the same pitch, you will hear a throbbing whose speed depends on how similar they are. An instrument that is out of tune produces beating tones when the octaves and harmonics don’t line up. Helmholtz maintained that we find this beating, which is a physical phenomenon and not just an aesthetic one, disturbing. The natural harmonics of primary notes create their own sets of beats, and only by placing and choosing notes along the intervals of the usual and familiar scales can we resolve and lessen this ugly effect. Like the ancients, he was claiming that we have an inherent attraction to mathematical proportions.
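Beating is easy to demonstrate numerically. A minimal sketch in Python; the 440/442 Hz pair is my own illustrative choice, not Helmholtz’s:

```python
import numpy as np

sr = 44100                      # sample rate, Hz
t = np.arange(2 * sr) / sr      # two seconds of time
f1, f2 = 440.0, 442.0           # the "same" note, slightly mistuned

mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin(a) + sin(b) = 2 * cos((a-b)/2) * sin((a+b)/2), so the mix behaves
# like a 441 Hz tone whose loudness swells and fades |f1 - f2| = 2 times
# per second: the throbbing Helmholtz described.
beat_rate = abs(f1 - f2)        # beats per second
print(beat_rate)                # 2.0
```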
When a scale is made up of fifths and fourths that resonate perfectly and mathematically (this is referred to as “just intonation”), all is well unless you want to change key, to modulate. If, for example, the key (or new scale) you want to move to in your tune begins on the fourth note of your original key—a typical choice for a contemporary pop tune—you will find that the notes of the new key don’t quite line up in a pleasant-sounding way anymore—not if you are using this heavenly and mathematical intonation. Some will sound fine, but others will sound markedly sour.
Andreas Werckmeister proposed a workaround for this problem in the late 1600s. Church organs can’t be retuned, so they presented a real difficulty when it came to playing in different keys. He suggested tempering, or slightly adjusting, the fifths, and thus all the other notes in a scale, so that one could shift to other keys and it wouldn’t sound bad. It was a compromise—the perfect mathematical harmonies based on physical vibrations were now being abandoned ever so slightly so that another kind of math, the math of
counterpoint and the excitement of jumping around from key to key, could be given precedence. Werckmeister, like Johannes Kepler, Barbaro, and others at the time, believed in the idea of divine harmonic proportion described in Kepler’s Harmonices Mundi, even while—or so it seems to me—he was in some ways abandoning, or adjusting, God’s work.
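The arithmetic behind tempering can be shown in a few lines. Werckmeister’s own temperaments distributed the adjustment unevenly around the circle of fifths; the sketch below uses modern equal temperament, the simplest case, for illustration:

```python
import math

# Stack twelve pure 3/2 fifths: they should land exactly seven octaves
# up, closing the circle of fifths, but they overshoot slightly.
comma = (3 / 2) ** 12 / 2 ** 7
print(comma)                              # ~1.01364, the Pythagorean comma

# Tempering shaves each fifth so the circle closes exactly.
# In twelve-tone equal temperament every fifth is 2**(7/12):
tempered = 2 ** (7 / 12)
print(tempered)                           # ~1.49831 instead of a pure 1.5
print(1200 * math.log2(1.5 / tempered))   # ~1.96 cents flat per fifth
```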
Bach was a follower of Werckmeister’s innovations and used them to great effect, modulating all over the keyboard in many keys. His music is a veritable tech demo of what this new tuning system could do. We’ve gotten used to this tempered tuning despite its cosmic imperfections. When we hear music played in just intonation today, it sounds out of tune to us, though that could be because the players insist on changing keys, which just intonation handles badly.
Purves’s group at Duke discovered that the sonic range that matters and interests us the most is identical to the range of sounds we ourselves produce. Our ears and our brains have evolved to catch subtle nuances mainly within that range, and we hear less, or often nothing at all, outside of it. We can’t hear what bats hear, or the infrasonic sounds that whales use. For
the most part, music also falls into the range of what we can hear. Though
some of the harmonics that give voices and instruments their characteristic
sounds are beyond our hearing range, the effects they produce are not. The
part of our brain that analyzes sounds in those musical frequencies that
overlap with the sounds we ourselves make is larger and more developed—
just as the visual analysis of faces is a specialty of another highly developed part of the brain.
The Purves group also added to this the assumption that periodic sounds—sounds that repeat regularly—are generally indicative of living things, and are therefore more interesting to us. A sound that occurs over and over could be something to be wary of, or it could lead to a friend, or to a source of food or water. We can see how these parameters and regions of interest narrow down toward an area of sounds similar to what we call music. Purves surmised that human speech therefore influenced the evolution of the human auditory system, as well as the part of the brain that processes those audio signals. Our vocalizations, and our ability to perceive their nuances and subtleties, co-evolved. It was further assumed that our musical preferences evolved along the way as well. Having thus stated what might seem obvious, the group began their examination to determine whether there was indeed any biological rationale for musical scales.
The group recorded ten- to twenty-second sentences by six hundred speakers of English and other languages (Mandarin, notably) and broke those into 100,000 sound segments. Then they digitally eliminated from those recordings all the elements of speech that are unique to various cultures. They performed a kind of language and culture extraction—they sucked all of it right out, leaving only the sounds that are common to us all. It turns out that, sonically, much of the material that was irrelevant to their study was the consonants we use as part of our languages—the sounds we make with our lips, tongues, and teeth. This left only the vowel sounds, which are made with our vocal cords, as the pitched vocal sounds that are common among humanity. (Consonants, unlike vowels, get their character from the mouth’s articulation rather than from the pitched vibration of the vocal cords.)
They eliminated all the S sounds, the percussive sounds from the P’s, and the clicks from the K’s. They proposed that, having stripped away enough extraneous information, they would be left with universal tones and notes common to everyone: each utterance would now be some kind of proto-singing, the vocal melody that is embedded in talking. These notes, the ones we sing when we talk, were then plotted on a graph representing how often each note occurred, and sure enough, the peaks—the loudest and most prominent notes—pretty much all fell along the twelve notes of the chromatic scale.
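A toy sketch of that binning step may help. The helper function and the sample pitches below are invented for illustration; the actual study worked from the 100,000 sound segments described above:

```python
import numpy as np

def nearest_semitone(freqs_hz, ref=440.0):
    """Round each frequency to the nearest note on the 12-tone grid,
    measured in semitones from A440."""
    return np.round(12 * np.log2(np.asarray(freqs_hz) / ref)).astype(int)

# Invented stand-in pitches for the extracted vowel sounds.
vowel_pitches = [220.1, 246.8, 261.9, 329.5, 221.0, 330.1]
notes = nearest_semitone(vowel_pitches)

# Fold into one octave and count how often each chromatic degree occurs;
# in the Duke data, the peaks of a plot like this fell on the chromatic scale.
degrees, counts = np.unique(notes % 12, return_counts=True)
print(dict(zip(degrees.tolist(), counts.tolist())))
```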
In speech (and normal singing) these notes or tones are further modified
by our tongues and palates to produce a variety of particular harmonics and
overtones. A pinched sound, an open sound. The folds in the vocal cords pro-
duce characteristic overtones, too; these and the others are what help identify the sounds we make as recognizably human, as well as contributing to how
each individual’s voice sounds. When the Duke group investigated what these overtones and harmonics were, they found that these additional pitches fell in line with what we think of as pleasing “musical” harmonies. “Seventy percent… were bang on musical intervals,” Purves reported. All the major harmonic intervals were represented: octaves, fifths, fourths, major thirds, and a major sixth. “There’s a biological basis for music, and that biological basis is the similarity between music and speech,” said Purves. “That’s the reason we like music. Music is far more complex than [the ratios of] Pythagoras. The reason doesn’t have to do with mathematics, it has to do with biology.”7
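Those intervals are precisely the ratios between the first few members of the harmonic series, which is consistent with vocal overtones landing on them. A quick check (the harmonic-series connection is standard music theory, my gloss rather than the study’s claim):

```python
from fractions import Fraction

# Ratios between low members of the harmonic series reproduce the
# intervals listed above.
names = {Fraction(2, 1): "octave", Fraction(3, 2): "fifth",
         Fraction(4, 3): "fourth", Fraction(5, 4): "major third",
         Fraction(5, 3): "major sixth"}

for n in range(2, 6):
    for m in range(1, n):
        ratio = Fraction(n, m)
        if ratio in names:
            print(f"harmonics {n}:{m} -> {names[ratio]} ({ratio})")
```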
I might temper this a little bit by saying that the harmonics our palates and vocal cords create might come into prominence because, like Pythagoras’s vibrating string, any sound-producing object tends to privilege that hierarchy of pitches. That math applies to our bodies and vocal cords as well as to strings, though Purves would seem to have a point when he says we have tuned our mental radios to the pitches and overtones that we produce in both speech and music.
MUSIC AND EMOTION
Purves took his interpretation of the data his team gathered one step further. In a 2009 study, they attempted to see if happy (excited, as they call it) speech results in vowels whose pitches tend to fall along major scales, while sad (subdued) speech produces notes that tend to fall along minor scales. Bold statement! I would have thought that such major/minor emotional connotations must be culturally determined, given the variety of music around the world. I remember during one tour, when I was playing music that incorporated a lot of Latin rhythms, some (mainly Anglo-Saxon) audiences and critics thought it was all happy music because of the lively rhythms. (There may also have been an insinuation that the music was therefore more lightweight, but we’ll leave that bias aside.) Many of the songs I was singing were in minor keys, and to me they had a slightly melancholy vibe—albeit offset by those lively syncopated rhythms. Did the “happiness” of the rhythms override the melancholy melodies for those particular listeners? Apparently so, as many of the lyrics of salsa and flamenco songs, for example, are tragic.
This wasn’t the first time this major/happy, minor/sad correspondence had been proposed. According to the science writer Philip Ball, when it was pointed out to the musicologist Deryck Cooke that Slavic and much Spanish music use minor keys for happy music, Cooke claimed that those peoples’ lives were so hard that they didn’t really know what happiness was anyway.
In 1999, the music psychologists Balkwill and Thompson conducted an experiment at York University that attempted to test how culturally specific these emotional cues might be. They asked Western listeners to evaluate Navajo
and Hindustani music and say whether it was happy or sad—and the results
were pretty accurate. However, as Ball points out, there were other clues, like tempo and timbre, that could have been giveaways. He also says that prior to the Renaissance in Europe there was no connection between sadness and minor
keys—implying that cultural factors can override what might be somewhat
weak, though real, biological correlations.
It does seem likely that we would have evolved to be able to encode emotional information into our speech in non-verbal ways. We can instantly tell from the tone of someone’s voice whether he or she is angry, happy, sad, or putting up a front. A lot of the information we get comes from emphasized pitches (which
might imply minor or major scales), spoken “melodies,” and the harmonics and timbre of the voice. We get emotional clues from these qualities just as much as from the words spoken. That those vocal sounds might correspond to musical
scales and intervals, and that we might have developed melodies that have roots in those speaking variations, doesn’t seem much of a leap.
YOU FEEL ME?
In a UCLA study, neurologists Istvan Molnar-Szakacs and Katie Overy
watched brain scans to see which neurons fired while people and monkeys
observed other people and monkeys perform specific actions or experience
specific emotions. They determined that a set of neurons in the observer “mirrors” what is happening in the person being observed. If you are watching an
athlete, for example, the neurons that are associated with the same muscles
the athlete is using will fire. Our muscles don’t move, and sadly there’s no virtual workout or health benefit from watching other people exert themselves,