Nearly all UCLA students are aware of the good-natured rivalry
between North Campus and South Campus. It essentially comes down to
the world of humanities and the arts versus the world of the
sciences and mathematics. Although students recognize the
differences between the two, few realize that these worlds collide
daily at a precise point on campus: Schoenberg Hall,
otherwise known as the home of the music and ethnomusicology
departments.
In fact, Schoenberg Hall even boasts the somewhat dreaded,
quintessential image of South Campus: the lab. But in this
lab, students aren’t creating chemical solutions or observing
cells under a microscope. They’re performing spectral
analysis of musical instruments.
“The instrument I’ve been working on is called the
gamelan, which is an Indonesian metallophone,” said Roger
Kendall, professor of systematic musicology. “We’ve
done spectral analysis of the gongs, spectral analysis of this
instrument, trying to determine both its tuning characteristics,
which are unusual, and also its timbre: its tone
quality.”
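Kendall’s measurements rely on specialized lab equipment, but the core idea of spectral analysis, decomposing a recorded tone into its component frequencies, can be sketched in a few lines of code. The Python fragment below is only an illustration, not the acoustics lab’s actual procedure: it builds a synthetic, gamelan-like tone from made-up inharmonic partials, then uses a Fourier transform to recover the frequencies that shape the tone’s timbre.

    # Illustrative sketch only: the partial ratios below are invented, and a
    # real analysis would start from a recording, not a synthetic tone.
    import numpy as np

    rate = 44100                        # samples per second
    t = np.arange(0, 2.0, 1.0 / rate)   # two seconds of audio

    # Rough stand-in for a struck metallophone: a few inharmonic partials
    # given as (frequency ratio, relative amplitude) over a 220 Hz fundamental.
    partials = [(1.00, 1.0), (2.76, 0.5), (5.40, 0.25)]
    tone = sum(a * np.sin(2 * np.pi * 220.0 * r * t) for r, a in partials)
    tone *= np.exp(-3 * t)              # simple decaying envelope

    # Spectral analysis: take the magnitude spectrum, then keep local maxima
    # that rise above a tenth of the strongest component.
    mag = np.abs(np.fft.rfft(tone * np.hanning(len(tone))))
    freqs = np.fft.rfftfreq(len(tone), 1.0 / rate)
    is_peak = ((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
               & (mag[1:-1] > 0.1 * mag.max()))
    print("Estimated partials (Hz):", np.round(freqs[1:-1][is_peak], 1))

Run on an actual recording rather than a synthetic tone, the same transform shows where an instrument’s strongest partials fall, which is the kind of evidence behind the unusual tuning and tone quality Kendall describes.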
Scientific instrumental analysis doesn’t stop at the
acoustics lab. The UCLA Department of Ethnomusicology offers
master’s and Ph.D. programs entirely devoted to the study of
systematic musicology. The program consists of two streams: one
scientific and empirical, the other philosophical and
critical.
“We study both the science of how you hear music and the
psychology of how you hear music, the perception and cognition of
music,” said second-year systematic musicology student Janet
Hau.
Next quarter, Kendall plans to teach a class devoted entirely to
the relationship between music, science and technology.
Most musicians acknowledge music’s strong basis in
mathematics. The relationship is especially evident in the work of
baroque composer Johann Sebastian Bach, whose intricate, elegant
compositions are often likened to mathematical structures. Music
also holds an analytical appeal for
many performers. A musical score, like a computer program or
clockwork mechanism, begs to be taken apart and understood.
“A music student has to approach music with some
systematic discipline,” said second-year biophysics and
economics student Tobias Falzone, who also plays the bassoon in
both the UCLA wind ensemble and symphonic band. “An engineer
or scientist also has to have an analytical and systematic way of
approaching problems. So in that sense, the two fields may involve
certain neural activities that are very similar.”
Yet Kendall’s approach goes beyond making these obvious
connections and instead demonstrates how understanding the abstract
concepts of sound waves will enable musicians to be better
performers.
“If you’re a race car driver, you can do a much
better job communicating with the people that are working on your
car if you know how the car works,” said Kendall. “If
you, as a performer, understand the science behind the musical
systems you’re using (the harmony and tonality and theory),
then I think it gives you a way to communicate. If you’re
ever a recording artist, you can communicate with the engineers
that are involved. If you’re performing on stage, you
understand the acoustics of an acoustical environment and can
understand why your instrument goes sharp and why it goes flat. Not
everyone you work with is a musician like yourself. As a
professional, it’s always worthwhile to understand the
mechanisms of the system that you’re running.”
Music has evolved not only with scientific study but also with
technological advances. The MP3s we take for granted today emerged
only after years of research into sound waves and how listeners
perceive them, work that culminated in compressed digital audio.
Today, a composer sits down at
the computer or synthesizer to begin a composition more often than
assembling a full orchestra. Compositional computer programs make
writing and producing music accessible in a way that Mozart never
would have dreamed.
“The best thing about typing the score is that you
don’t have to have a full orchestra in front of you; you can
still hear how it sounds,” Hau said.
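A rough sense of what “hearing how it sounds” without an orchestra involves can be given by a similar sketch. The fragment below is not the notation software Hau uses, just an illustration: it converts a few typed note names into equal-tempered frequencies and renders them as simple sine tones in a WAV file that any media player can open.

    # Illustrative sketch: play back a typed melody without performers.
    # The note names and durations here are invented for the example.
    import wave
    import numpy as np

    rate = 44100
    # Semitone offsets from A4 = 440 Hz for the notes used below.
    offsets = {"C4": -9, "D4": -7, "E4": -5, "G4": -2}
    melody = [("C4", 0.5), ("D4", 0.5), ("E4", 0.5), ("G4", 1.0)]  # (note, seconds)

    pieces = []
    for note, dur in melody:
        freq = 440.0 * 2 ** (offsets[note] / 12)     # equal temperament
        t = np.arange(0, dur, 1.0 / rate)
        pieces.append(np.sin(2 * np.pi * freq * t) * np.exp(-2 * t))
    audio = np.concatenate(pieces)

    # Write 16-bit mono audio.
    with wave.open("melody.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes((audio * 32767).astype(np.int16).tobytes())

Notation programs layer staff display and sampled instrument sounds on top of this basic playback idea.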
Apple’s new program GarageBand takes composing to an even
more user-friendly level, boasting, “You don’t have to
play the piano. You don’t have to read music. You don’t
even have to have rhythm. If you know what you like when you hear
it, you can make your own kind of music.”
But in spite of these numerous technological and scientific
advances, music remains an art, one that even science and
technology cannot reproduce without flaw. Although a synthesizer
can imitate hundreds of instruments and styles of music,
computer-generated sounds are no substitute for the real thing.
“It’s still not perfect, and we’re still
working on it. Some instruments just don’t sound good,”
Kendall said. “Part of the sound is not just in the
harmonics; it’s in how the notes connect to each other. This
is not well-understood at this point: how to make that sound
natural so when you’re playing the violin on an electronic
synthesizer, it sounds like a real violinist. Part of it is
connected into musical expressiveness because to sound natural, it
has to sound human-produced. It’s really difficult because we
just do not understand all the intricacies of how human beings
interact with their instruments.”