Mark Katz, Capturing Sound: How Technology Has Changed Music (Berkeley, Los Angeles, and London: University of California Press)
Another crucial type of manipulation comes from the use of the stereo field—the sonic “stage” in which sounds occupy and move through space in a recording. Consider “Strawberry Fields Forever” once again. When listening through headphones, the song begins as if an organ or perhaps flute trio is playing softly into your left ear. (Actually, the sound comes from a Mellotron, an early synthesizer that played prerecorded tape loops.) A chord in the electric bass then sounds in your right ear, followed by John Lennon singing “Let me take you down,” seemingly in the middle of your head. Ringo Starr joins the fray, playing the drums as if he were sitting on your left shoulder. A guitar slide, traveling through your head from left to right, rounds out the opening fifteen seconds. Clearly, the Beatles (in collaboration with their producer and engineer) created a musical space unique to the work, one with no possible physical counterpart.
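The kind of left-right placement described above can be imitated with a simple panning law. The sketch below is purely illustrative (it makes no claim about the Beatles’ actual console): it uses a constant-power sin/cos curve, a common choice because it keeps perceived loudness roughly steady as a sound travels across the stereo stage.

```python
import math

def pan(sample, position):
    """Place a mono sample in the stereo field.

    position runs from -1.0 (hard left) through 0.0 (center)
    to +1.0 (hard right). The constant-power sin/cos law keeps
    the combined energy steady as a sound moves across the stage.
    """
    angle = (position + 1.0) * math.pi / 4  # maps -1..+1 to 0..pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

# Hard left: the entire signal lands in the left channel.
print(pan(1.0, -1.0))
# Center: both channels at about 0.707 (-3 dB) rather than 0.5,
# so the sound does not dip in loudness as it crosses the middle.
print(pan(1.0, 0.0))
```

Sweeping `position` from -1.0 to +1.0 over successive samples produces the traveling effect heard in the guitar slide of “Strawberry Fields Forever.”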
Often the stereo field is used simply to enliven a song’s texture or to provide added bounce or swing, but the way musical space is deployed can also enhance the meaning of a song. In “Strawberry Fields Forever,” it is the fantastic disposition of sound that persuades us that “nothing is real.” The guitar and drums moving slowly from left to right in the opening of Jimi Hendrix’s “Crosstown Traffic” (1968) musicalize the song’s title by imitating the sound of passing cars. Late in Led Zeppelin’s “Whole Lotta Love” (1969), Robert Plant’s voice travels from right to left to right with ever greater reverberation (c. 4:19–4:27), as if he is plunging into a cavernous space. Perhaps it is meant to illustrate the perceived emptiness of the woman he has just addressed with the single-entendre, “Way down inside, woman, you need it.” Radiohead’s “Creep” (1993) features the violent tearing sound of a distorted guitar each time Thom Yorke admits, “But I’m a creep” (c. 0:58, 2:01, and 3:28). The first two times it appears, the guitar erupts in the right channel, then moves front and center, filling the stage; the sound seems to depict the anger of the song’s persona at the possibility that he is unworthy of the woman with the “face like an angel.” The last appearance of the distorted guitar, however, is much different; it is distant and barely audible, having been pushed to the left rear corner of the stage. The sound is dulled and softened, suggesting the bitter resignation of someone who now believes the worst about himself. As careful listening and a good pair of headphones will reveal, the use of the stereo field can add depth to a recording, both physically and expressively.106

A more recent development in sound manipulation goes under the general heading of digital signal processing, or DSP. DSP far transcends the limitations and possibilities of magnetic tape.
With rhythm quantization, for example, a performance with an unsteady tempo becomes metronomically precise as all notes are forced to fall on the closest beat. Pitch correction follows a similar principle, pushing pitches up or down to the nearest specified level. Moreover, both can be applied in real time. Thus I could go into a studio, belt out “Copacabana” in my wobbly pitch and uncertain rhythm, and have it come back at me through the monitor— as I am singing—sounding closer to Barry Manilow than nature or good sense should allow.
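At their core, the two processes just described reduce to a pair of rounding operations. The following sketch is a toy illustration of the principle, not the algorithm of any actual product: note onsets snap to the nearest beat of a fixed tempo grid, and a sung frequency snaps to the nearest equal-tempered semitone.

```python
import math

def quantize_onsets(onsets_sec, bpm):
    """Force each note onset to the closest beat of a metronomic grid."""
    beat = 60.0 / bpm  # seconds per beat
    return [round(t / beat) * beat for t in onsets_sec]

def correct_pitch(freq_hz, a4=440.0):
    """Push a frequency to the nearest equal-tempered semitone."""
    semitones = round(12 * math.log2(freq_hz / a4))
    return a4 * 2 ** (semitones / 12)

# A wobbly performance at 120 bpm becomes metronomically precise:
print(quantize_onsets([0.02, 0.49, 1.03, 1.51], bpm=120))  # [0.0, 0.5, 1.0, 1.5]
# A sharp 452 Hz "A" is pulled down to concert pitch:
print(correct_pitch(452.0))  # 440.0
```

Real-time use simply means applying these corrections to short buffers of audio as they arrive, which is why a singer can hear the corrected result in the monitors while still singing.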
Digital processing, though widespread, is a controversial practice. As singer and producer Richard Marx puts it, “You have a guy or girl who literally can’t sing one phrase in tune to save their lives, and I can make them sound like they can. It’s misleading—but it’s not overly uncommon.”107 In an episode from February 2001, the animated television show
The Simpsons skewered the prevalence of pop processing. Bart Simpson and three of his friends are brought together by a successful producer to form the next big “boy band.” They have the right looks, the right moves, the right attitudes—everything except for musical ability. At first they can only croak out the lines to their song. The producer shudders, heads over to an oversized console labeled “Studio Magic,” and turns the “voice enhancer” dial. The boys sing again, only this time we hear buttery voices,
perfect intonation, and exquisite timing coming from the studio monitors.108 This send-up only slightly exaggerates reality. The website for Auto-Tune pitch correction software and hardware made this triumphant claim:
“Auto-Tune corrects ‘intonation’ problems of vocals and other solo recordings—in real time! In goes out-of-tune screeching, out comes bewdiful [sic] singing.”109 But there is another side to the debate, and many feel that the benefits of processing are far from insidious. Producer Matt Serletic has pointed out that the technology allows performers to minimize the stress and strain of recording sessions. “You no longer have to beat an artist into submission by asking them to pound out a vocal 15 times to get that one magic performance—which can result in a recording that’s technically accurate but passionately not convincing. With vocal processing, you can get the passion and then fix something.”110 Moreover, the technology allows singers to produce otherwise impossible sounds. Part of the appeal of Cher’s 1999 hit “Believe” was certainly the slightly stuttered, mechanical sound of the title word, an effect created through digital processing.111 Like splicing and overdubbing, DSP is a tool that can be, and has been, used in a variety of ways, both laudable and censurable.
It is important to realize that sound is manipulated in the studio not (or not typically) by performers, but by a variety of sound engineers and producers, sometimes referred to collectively as recordists. Recordists fall outside (or perhaps in between) the traditional triad of composer, performer, and listener. They might be thought of as sound shapers, artists in their own right who collaborate with performers and composers.
Because their work is done mostly behind the scenes, their influence is not as widely or deeply appreciated as it should be, though a growing body of literature is starting to remedy the situation.112

Recording technology can be used to manipulate sound not only in the studio. In chapter 6 we will see how, beginning in the 1970s, hip-hop musicians transformed the phonograph into a performing instrument capable of generating complex compositions. Although turntablism, as their art came to be called, was new in its particulars, a long tradition of harnessing the technology for similar ends preceded it. As early as the 1920s, avant-garde classical composers treated the phonograph as a means to develop new sounds, and an influential school of thought developed around the possibility of what was sometimes called Grammophonmusik (the subject of chapter 5). In 1939, the American experimental composer John Cage began using the phonograph in his music. The earliest example was Imaginary Landscape No. 1, scored for muted piano, cymbal, and two variable-speed turntables. It requires two musicians to “play” the machines by altering the speed of the discs and by rhythmically raising and lowering the styli. Although Cage was attracted to the possibilities of the phonograph, he had little interest in its intended use. “The only lively thing that will happen with a record,” he once said, “is if somehow you would use it to make something which it isn’t. If you could for instance make another piece of music with a record... that I would find interesting.”113 Forty years after Cage’s initial experiments, artist and composer Christian Marclay continued what might be called avant-garde turntablism. On one occasion, he created an art installation consisting of dozens of records arranged on a gallery floor, and instructed visitors to walk across them.
Later, Marclay gave a concert in which he took the scuffed and scratched discs and, using several turntables, performed a musical collage of pops, clicks, and some heavily obscured tunes. “Instead of rejecting these residual sounds,” Marclay explained in a 1998 interview, “I’ve tried to use them, bringing them to the foreground to make people aware that they’re listening to a recording and not live music. We usually make abstractions of the [recorded] medium. For me it was important... to give it a voice.”114

If recording could foster the work of composer-performers, it could also separate composers from performers. Musique concrète was an early manifestation of this radical change. The genre was the inspiration of Pierre Schaeffer, who in 1948 began composing musical works by mixing and arranging nonmusical sounds collected via microphone.115 In the classical tradition, music is typically first conceived by the composer and then interpreted by performers. But musique concrète dispenses with performers by
starting with sound rather than score; as the name suggests, it begins with the concrete rather than the abstract. Schaeffer’s first such “concrete” piece was Etude aux chemins de fer (1948), a “railway etude” that, in the long history of train-inspired musical works, was the first to be derived solely from actual train sounds, which Schaeffer collected from a Paris station.

In the United States beginning in the 1950s, a similar compositional approach arose known as tape music, which likewise treated recorded sound as raw material. Pioneer tape music composer Vladimir Ussachevsky, for example, kept dozens of individually boxed and labeled loops in his studio as a painter might keep jars of paint, ready for use in any future work.116 John Cage used a library of six hundred different sounds to assemble (through chance means) thousands of minuscule bits of magnetic tape into Williams Mix (1952). Like Schaeffer, both worked directly with sound, leaving performers out of the loop, so to speak.

Extending the possibilities of tape music is the more recent practice of digital sampling, a method in which sound is converted into highly manipulable data. The range of material from which composers draw is vast, including speech and environmental sounds, as well as live and recorded music; as we will discover in chapter 7, the practice raises difficult questions about every aspect of composition, from aesthetics to ethics. In fact, the very possibility of manipulating sound after its creation—from splicing to digital pitch correction—forces us to reformulate our ideas about composition, performance, and the relationship between the two.
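Once sound has been converted into data, the tape composer’s physical operations (splicing, looping, reversing) become trivial transformations of a sequence of sample values. The sketch below treats audio as a plain Python list of samples; it is a toy illustration of the idea, not the interface of any real sampler.

```python
def splice(*segments):
    """Join recorded fragments end to end, as a razor blade once did."""
    joined = []
    for seg in segments:
        joined.extend(seg)
    return joined

def loop(segment, times):
    """Repeat a fragment, the digital descendant of the tape loop."""
    return list(segment) * times

def reverse(segment):
    """Play a fragment backward."""
    return list(reversed(segment))

# A tiny "sampled" fragment, then a collage assembled from it:
drum = [0.9, -0.4, 0.2, -0.1]
collage = splice(loop(drum, 2), reverse(drum))
print(len(collage))  # 12 samples: two loops of four, plus one reversed copy
```

What took Cage months of physical tape-cutting for Williams Mix is, in the digital domain, a handful of list operations; that collapse in effort is part of what makes sampling so aesthetically and ethically fraught.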
Music and musical life have been transformed in the age of recording.
However vast and complex, this transformation can be traced to the ways in which users of the technology respond to the seven interdependent traits that define recording. Yet recording does more than influence the activities of composers, performers, and listeners. It affects the relationships among these actors and in fact challenges the stability, even the validity, of the triad. It is no longer necessary for listeners and performers, or for performers and composers, to work together in order to create music. At the same time, listeners and composers have discovered a more intimate relationship, one that can bypass the mediation of performers, while performers can work in solitude, without the need to stand before listeners.
Performances and works are no longer clearly distinct, for recordings can take on the function and meaning of both. Just as recordings can be heard as spontaneous interpretive acts, their repetition can transform them into compositions, works that can be analyzed, historicized, canonized, politicized, and problematized. Nor are production and reproduction so easily separated when preexisting sounds can be manipulated in real time. With recording, listeners need not simply receive music, for they have an unprecedented control over the sounds they hear. While there have always been composer-performers—artists who interpret their own works—with recording we can conceive of listener-performers and listener-composers.
Recording thus not only affects the practice of music; it shapes the very way in which we think about music: what it is, can, and should be.
A fragment of a drum solo: the thump of the bass, the crack of the snare, the sting of the hi-hat, all combined in a distinctively syncopated pattern.