Notes from the underground

It’s really nice, though exceedingly rare, when an interesting speaker comes to campus and actually draws a decent student turnout. It’s so hard to tell where a given lecture or presentation will fall on the spectrum that runs from fabulously interesting to horribly dull. Every day, the DA carries ads for a slew of events featuring powerful-looking speakers, with titles like “Hegel and the Moral Dilemma,” “String Theory and the Existence of God,” or “Why Cheeseburgers Don’t Jibe with Existentialism: A Post-Structuralist Critique.” How is a student to choose? Most often, the choice is to get started on your work for the next day.

All this having been said, it was a pleasant surprise to walk into Bernhard 30 and find a packed room, all waiting to see composer Christopher Dobrian and pianist/composer Daniel Koppelman present compositions featuring music improvised by a computer. I don’t think my plug in last week’s column, where I predicted that this presentation marked the beginning of the end for humanity, was the reason for the big turnout. This is just some really cool stuff.

This sort of presentation is always a strange show; although Dobrian and Koppelman are showing off their abilities as composers, they are also attempting to give the audience an understanding of how the technologies involved work. Dobrian, a professor at the University of California, Irvine, has been a pioneer in the field of computer music improvisation, while Koppelman has worked extensively as a performer and composer using these technologies. The audience was therefore given a look at both sides of the coin, an arrangement that proved quite helpful in understanding both the theoretical and practical uses of computers in improvisation.

Koppelman presented one of his own works first, its title a play on that of a Bill Evans CD, “Conversations with Myself.” The piece was in three movements, of which he performed two, written for piano, Disklavier (a player piano of sorts that reads computer signals), and synthesizer. The first movement was a duet for Koppelman live and Koppelman recorded; he played back one of his improvisations on the Disklavier and then improvised over it on the standard piano. The musical content was decent, if somewhat tiresome and clichéd at times, but the concept is fascinating. I couldn’t help but think of those movies where they split the screen and film two shots with a single actor or actress in order to get the effect of twins—I’m thinking here of a film like the original Parent Trap. I don’t mean to associate Koppelman’s compositional technique with a 1960s Disney film, though, as I find this to be a far more interesting concept in music than in film.

The second movement required a synthesizer that wasn’t available at Williams, and so it wasn’t performed for this presentation, a setback that demonstrates one of the problems with writing for specific technologies: quite simply, you need the technology to play the piece. While this is true of any instrument, computer music has not yet reached the same level of standardization, so the right equipment is far less likely to be on hand. As an aside, this provides a strong argument against the movement to build new instruments and use non-standard tunings: standardization is important if you desire multiple performances. I’m not going to go into this issue right now, but it’s an interesting question to tackle. The third movement of Koppelman’s piece was more interesting than the first, using synthesized sounds along with the piano to create a very neat texture, especially since both the piano and the computer were playing very fast, furious passages.

Dobrian presented three of his pieces, each of which used technology that he himself had programmed. That’s what’s so interesting about these computer music guys—they have had to spend a lot of time developing not just their music but also their programming proficiency. Furthermore, there’s a tendency to use the software almost recklessly; since they are the programmers, they want to show off everything their programs can do.

While Dobrian is obviously a brilliant programmer and has a keen musical mind, I didn’t like two out of the three compositions he presented. The first, written for MIDI piano alone, was a study in improvisation within specific parameters. The computer was assigned a probability for each note it might play; from those assignments, Dobrian could control the shape of a given section, but the computer would control the specifics—a form of improvisation. I got into the piece for a while, but then I felt totally lost and had no sense of where I was in the overall structure. Interestingly, this piece was written as a response to totally serialized music, which was often accused of leaving the listener with the same feeling I had listening to Dobrian’s composition. I asked him whether he felt the piece was a success musically, and he answered in the affirmative, but I would have to disagree. Dobrian also wrote a simple program that accompanies the piece, displaying dots on a computer screen that correspond to the different pitches played on the piano. I don’t understand why the visual would be necessary if he were confident about the musical validity of the piece itself, but then, I’m not confident about the musical validity of the piece itself.
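To make that mechanism a little more concrete, here is a minimal sketch of probability-driven note selection, written in Python. It is purely illustrative and assumes nothing about Dobrian’s actual software: the composer supplies a weighted table of pitches for each section, and the program draws the specific notes at random.

```python
import random

# Illustrative sketch only (not Dobrian's program): the composer supplies a
# table of relative weights for each pitch, section by section; the computer
# chooses the actual notes by weighted random draw.
def improvise(weights_by_section, notes_per_section=16):
    """weights_by_section: list of {midi_pitch: relative_weight} dicts,
    one dict per section of the piece."""
    performance = []
    for weights in weights_by_section:
        pitches = list(weights.keys())
        probs = list(weights.values())
        # The composer shapes the section (which pitches are likely);
        # the computer decides the specifics (which pitch actually sounds).
        performance.extend(random.choices(pitches, weights=probs, k=notes_per_section))
    return performance

# Example: a section centered on middle C drifting toward a higher cluster.
sections = [
    {60: 5, 62: 3, 64: 1},   # mostly C, some D, a little E
    {64: 2, 67: 4, 72: 4},   # E, G, and high C take over
]
print(improvise(sections, notes_per_section=8))
```

The composer’s hand is in the weights; the surprise is in the draws, which is roughly the trade-off between shape and specifics that Dobrian described.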

The second composition was for electric guitar, with a computer improvising on the simple material provided by the guitar. Dobrian again gave the audience a visual, this time a video of a robot painting a portrait (which was quite interesting and funny, though superfluous). The video was very distracting, but I don’t know that the piece would have worked even if I had been able to concentrate fully on its content. It seemed like an interesting exercise, full of clever techniques, but not a very successful musical statement.

The last piece was by far the most successful, and involved Koppelman playing the Disklavier. Here, the computer “listened” to his playing and responded in a variety of partially programmed, partially improvised ways. At times it seemed to be playing along with him, when in actuality it was playing an imperceptible few milliseconds behind. The computer would also use a variety of synthesized sounds, creating a fantastic texture between the piano and the uniquely programmed tones. All in all, from a strictly musical standpoint, this was the most successful piece I heard all day.
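For a rough sense of how that kind of “listening” might work, here is a hypothetical Python sketch, not the actual patch: each note the pianist plays is answered a few milliseconds later, transposed into a synthesized voice, so the response sounds simultaneous to the ear.

```python
# Hypothetical sketch of a listen-and-respond scheme (not the actual software):
# every incoming note gets an answer scheduled a few milliseconds later,
# short enough that the echo sounds simultaneous with the pianist.
def respond(incoming_notes, delay_ms=20, transpose=12):
    """incoming_notes: list of (onset_seconds, midi_pitch) events from the pianist.
    Returns the computer's answering events, slightly delayed and transposed."""
    delay = delay_ms / 1000.0
    return [(onset + delay, pitch + transpose) for onset, pitch in incoming_notes]

if __name__ == "__main__":
    played = [(0.00, 60), (0.25, 64), (0.50, 67)]  # a simple C major arpeggio
    for onset, pitch in respond(played):
        print(f"computer answers with pitch {pitch} at t = {onset:.3f} s")
```

A real system would presumably also vary its answers, with different delays, different timbres, and occasional independent material, which is where the “partially improvised” part comes in.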

All of this technology is extraordinarily interesting, but it still seems to me to be just a tool. Human compositions are more interesting because they reflect human thought processes and human emotions. But if these technologies can be used to create a giant “matrix,” if you will, wherein humans will be used as energy providers for a world dominated by machines, then I’m all for it.