Electronics are like the stem cells of music.
Experimenting with stem cells presents vast possibilities for curing diseases and rehabilitating disabled persons, and this potential has both scientists and spectators howling for progress. Others, however, raise moral and ethical concerns, valuing the sanctity of the source of stem cells over the scientific possibilities.
Essentially, people who are leery of allowing electronics into music can be motivated by parallel concerns, like the sense that something beautiful and organic is lost when music comes from a computer instead of someone’s hands. A repeated complaint, to that end, is that electronic music just has no “soul.”
I would be the last to suggest that there isn’t something uniquely human and fascinating about the coalescence of musical personalities when groups perform live, but to focus exclusively on the physical labors of music is to neglect some of its most interesting possibilities.
One of these possibilities opened up by the use of computers is electronic sound design: the ability to create the very sounds used in music, either by crafting them from scratch or by altering existing ones.
If you think of a painter’s palette in terms of sound, the acoustic palette would have a color corresponding to each instrument that exists (a color for guitar, a color for piano) from which the composer could choose for a given piece of music. But the opportunity to design sounds allows not only for a world of intermediary colors, but for new ones as well. Whereas the composer of yore was forced to go to the store to buy preordained colors, the modern laptop junkie gets to create a color scheme from whatever bizarre and exotic dyes or substances they wish.
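To make the new-colors idea concrete, here is a minimal sketch of one way to mix a dye no instrument maker sells: additive synthesis with inharmonic partials, which yields a bell-like timbre no acoustic instrument produces. This is my own illustration in Python with numpy, not a recipe from any particular artist, and every frequency ratio and decay value in it is an arbitrary choice.

```python
import numpy as np
import wave

SAMPLE_RATE = 44100
DURATION = 2.0  # seconds
t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Partials at inharmonic (non-integer) ratios of the fundamental,
# unlike a vibrating string's integer harmonics; higher partials
# decay faster, roughly as a struck bell behaves.
fundamental = 220.0
ratios = [1.0, 2.76, 5.40, 8.93]   # illustrative, loosely bell-like
decays = [1.5, 2.5, 4.0, 6.0]      # per-partial exponential decay rates

signal = sum(np.sin(2 * np.pi * fundamental * r * t) * np.exp(-d * t)
             for r, d in zip(ratios, decays))

# Normalize to full scale and write a 16-bit mono WAV file.
signal /= np.max(np.abs(signal))
pcm = (signal * 32767).astype(np.int16)
with wave.open("new_color.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(pcm.tobytes())
```

Swap in different ratios or envelopes and the “instrument” changes entirely; that is the whole point of starting one level before the score.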
Artists with an eye to sound design, then, get to start at an even earlier stage than a traditional composer, and to ask even more fundamental questions. Instead of being locked into an orchestra, a piano, or the archetypal rock lineup, composers can ask themselves not only what they want to write, but what noises they want to use. It’s a chance to redefine which sounds can be considered musical, and a chance to make the same notes into a completely different experience.
Of course, all this is a risk that may not pay off. While the guitar or the clarinet can be a safe basis for understanding sounds, it is possible to fall off into a bizarre world of noise that loses track of the beauty of organic music, and of the way sounds relate to the human ear. It seems safe to assume that we have the instruments we do because they have made sounds that human ears have found beautiful for a very long time.
But getting preoccupied with the beauty of organic sounds can lead musicians to unconsciously limit the ways sound can be crafted into art. There are ways to sonically discuss the modern world and its vast transformations, whatever the phrase “the modern condition” conveys, that probably can’t be expressed by a four-piece rock band. And when acoustic music gets noisy, or free, at the hands of deconstructionists like Ornette Coleman, I can’t help but imagine that their project and its commentary are inevitably framed by the jazz lineup used.
Thanks to recent developments in technology, we may no longer have to harm embryos to obtain stem cells. Hopefully, that debate can be put to rest.
The musical debate, though, isn’t going anywhere. Experimenting with sounds themselves on a deep level, and expanding the concept of what we can expect from music, will happen through computers.
There’s no other way to go.
To give LaRue feedback, skip the traditional utensils and e-mail him at alarue@media.ucla.edu.