In recent years, work on remotely controlling instrumentation has tended to center on scientific laboratory investigation, particularly one-of-a-kind, high-power microscopes and telescopes. Or so you might believe.
In parallel, though perhaps more quietly, similar development has been taking place in the artistic realm, in particular with regard to remote control of musical instruments.
An example of this work was demonstrated in October when UCSD's Anthony Davis and UCI's Kei Akagi collaborated on an improvised piano duet performance. What made this more than a commonplace concert was that they performed 100 miles apart -- with Davis playing in La Jolla and Akagi in Irvine. Using computer-controlled pianos networked together via in-house software, each pianist was able to play not only his own but also the other's piano simultaneously across the Internet. This event was sponsored by the Center for Research in Computing and the Arts (CRCA) at UCSD, the Claire Trevor School of the Arts at UCI, and the New Media Arts "layer" of Calit2.
The pianos were Yamaha Disklaviers, digital pianos that have a variety of capabilities: They can be played in live performances, they can play from diskette, and, in this scenario, they can become two-way digitally connected, distributed instruments. To enable each audience to experience the performance in the other location, the performance sites transmitted their respective sounds in the form of digitized data over the Internet to the other location.
Two pieces were duets. Each pianist had the unusual experience of not only playing his piano but being "acted on" by the other pianist who was depressing keys on the remote piano, making this duet not only interactive musically but also tactilely.
How did it feel to the pianists to try to press keys that had already been depressed a split second before? According to Davis, "It's like a game: You play with it. You try to anticipate what your partner will do and play music that's complementary to that. You can also choose to 'stay out of each other's way' by playing in a register far from the one your partner is playing." Taking the playfulness one step further, each pianist could trick the other by silently depressing a key so that the sound of that key was unavailable to the other.
No rehearsal was done prior to the performance, but that was due to lack of access to the instruments rather than a conscious choice not to rehearse. In fact, the pianists had never played together before.
The performance also featured computer/human improvisation: Computer programs, written by UCSD/CRCA researcher Harry Castle and UCI associate professor of Music Christopher Dobrian, "improvised" with the pianists by taking the pianos' MIDI output, transforming it in various ways, and converting it to keystrokes on the pianos. (MIDI stands for Musical Instrument Digital Interface.) One of the net results was that the audience couldn't determine if the sounds were coming from a piano or a computer.
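The actual software written by Castle and Dobrian is not described in detail here, but the basic idea -- take the piano's MIDI output, transform it, and send it back as keystrokes -- can be sketched in a few lines. The transposition and velocity values below are illustrative assumptions, not the researchers' actual transformations.

```python
# A minimal sketch of a MIDI "improviser": take each incoming note event,
# transform it, and emit a new note event back to the piano.
# The specific transformation (transpose up a fifth, soften the attack)
# is a hypothetical example.

def transform_note(note, velocity, semitones=7, velocity_scale=0.8):
    """Return a transformed (note, velocity) pair.

    Transposes the pitch by `semitones` and softens the velocity,
    clamping both to the valid MIDI range 0-127.
    """
    new_note = max(0, min(127, note + semitones))
    new_velocity = max(1, min(127, int(velocity * velocity_scale)))
    return new_note, new_velocity

def improvise(events):
    """Map a stream of (note, velocity) events to transformed responses."""
    return [transform_note(n, v) for n, v in events]

# Example: the pianist plays middle C (60) loudly and E (64) softly;
# the program answers a fifth higher, slightly softer.
played = [(60, 100), (64, 50)]
print(improvise(played))  # [(67, 80), (71, 40)]
```

Because the transformed notes come back through the same piano mechanism as human keystrokes, a listener has no acoustic cue to distinguish the two -- which is exactly the ambiguity the audience reported.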
"It's like playing a duet with yourself," says Davis. "You play, then respond to the synthetic tone produced by playing some more. It's like learning a new skill that becomes more sophisticated over time. Of course that doesn't preclude the programmers from changing the ground rules in real time!"
"My experience interacting with computers had been very limited," says Akagi. "So the question for me was: Could I respond to the computer as a real-time improviser and musician? The computer was like a fellow musician, and I found the interaction very natural because of Chris' software."
According to Akagi, both composition and improvisation are typically based on simple ideas on which the composer or performer builds. What we take to be musicality, as far as performance is concerned, can't be captured adequately by giving the computer more choices. In fact, the more choices a computer has, the more likely it will choose something unmusical. So the collaborators chose to simplify Dobrian's sophisticated program, which made the computer's responses more "human-like."
"One of the interesting research issues," says Miller Puckette, associate director of CRCA, where Davis performed, "is that if you have two people performing at the same time in different places and your goal is to synchronize those performances, how does the signal delay affect the musicians and the audience?" According to Puckette, only a small fraction of a second elapsed between a piano key being depressed in one location and the note sounding in the other. The delay was dominated by the pianos' own mechanical actions, with the network adding something less than 1/100 of a second. Such delay proved imperceptible to the audience.
The video delay, however, was another matter. The performance was enhanced by the use of MS NetMeeting to provide a video stream of the remote musician to each audience. The audiences could not see each other but could hear audience response from the remote location. Because the video signal had more delay than the audio signal, the audience in La Jolla, for example, heard Akagi play a chord, then saw it played on the video. "The amazing thing," says Puckette, "was how quickly the audience got used to and accepted this sequential experience."
Another interesting issue presented by the performance was the convergence of real and virtual presence: Was the audience here, at the remote location, or both at the same time? Continues Puckette, "While you couldn't see the audience at the other location, you knew you were participating in something larger. I had a sense of presence where I was, but I was periodically reminded that I was also participating in some sense at UCI."
What's the point of this experiment? "The simple answer is that we can reach twice the audience," says Puckette. "The harder answer is that we wanted to improve on the results of a satellite link experiment, which produced noticeable delays that proved disruptive to the quality of the performance. This was the first time we tried sending data over the Internet in real time, and we were pleased to see this experiment actually worked."
That was possible because the MIDI protocol used to send information is extremely compact: It requires only three numbers per note, which describe the note itself and what the performer did to create the sound, such as how hard he pressed the key and whether he used a foot pedal.
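The "three numbers per note" can be made concrete. In the MIDI protocol, a Note On message is exactly three bytes: a status byte carrying the message type and channel, the key number, and the velocity (how hard the key was struck); pedal actions travel as separate, equally small Control Change messages. The sketch below encodes these messages from their published byte layout:

```python
# Encoding MIDI events as raw bytes, to make the compactness concrete.
# A Note On message is three bytes: a status byte (0x90 | channel),
# the key number (0-127), and the velocity (0-127). The sustain pedal
# is a separate three-byte Control Change message (controller 64).

def note_on(channel, key, velocity):
    """Encode a MIDI Note On message (three bytes)."""
    return bytes([0x90 | (channel & 0x0F), key & 0x7F, velocity & 0x7F])

def sustain_pedal(channel, down):
    """Encode a sustain-pedal Control Change message (controller 64)."""
    return bytes([0xB0 | (channel & 0x0F), 64, 127 if down else 0])

# Middle C (key 60) struck firmly on channel 0: three bytes on the wire.
msg = note_on(0, 60, 100)
print(len(msg), list(msg))  # 3 [144, 60, 100]
```

Compare that with streaming digitized audio of the same note, which would require thousands of bytes per second: sending the gesture rather than the sound is what made the real-time Internet link feasible.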
But Davis had a different goal. "For me," he says, "it's important that we not use the Internet to try to faithfully render the normal concert-going experience but rather create a new kind of experience - for both the performers and the audience."
Will this experiment expand to include additional sites? With MIDI's effective compression scheme, there is no risk of overloading the network bandwidth, so the issue becomes a technical exercise in "traffic control," managing data from more than two places. Artistically, though, the situation becomes tougher because of the potential for cacophony. In fact, many performers have consciously avoided experimenting with the Internet because Internet-based delays are unacceptably long from an aesthetic perspective.
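A rough estimate shows why bandwidth is not the constraint. Using the three-byte note messages the article describes, even an implausibly dense performance produces a trickle of data; the notes-per-second figure below is an assumed illustrative value.

```python
# Back-of-the-envelope bandwidth estimate for a networked MIDI duet.
# The playing-density figure is an assumption for illustration.

bytes_per_note = 3      # one MIDI Note On message, as the article notes
notes_per_second = 30   # assumed: a very dense two-hand passage
pianists = 2

bits_per_second = bytes_per_note * notes_per_second * pianists * 8
print(bits_per_second)  # 1440 bits/s -- negligible even on a dial-up link
```

Even doubling this to account for matching note-off messages, the load stays far below any network's capacity, so adding further sites is indeed a routing problem rather than a bandwidth one.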
Was the experiment successful? "Success," says Davis, "depends on whether the music is coherent and expressive. Were we able to negotiate with the technology to create a musical idea? I think we were."
"Another component of success," adds Akagi, "is whether it's successful technologically. At UCI, we had discussed what we would do if a computer crashed - would we make it an obvious part of the performance or try to hide the fact by continuing to play until the system was rebooted? Luckily, we got through the evening without that problem."
What next steps does this experiment suggest? "One problem I see that needs addressing," says Akagi, "is the computer's simple-minded understanding of silence as the absence of sound. But silence must be played - it's a deliberate part of a performance. Musicians know when not to play - it's one of the highest levels of communication."
Akagi also wants to experiment with ways to translate his reactions as an improviser into computer-understandable parameters. Some may not be translatable, but then, he suggests, computers may not have to develop the subtlety of human performers. Instead, humans have been adapting to machines, creating a new type of music.
"One thing many people don't notice is that technology has transformed the way musicians perform," says Akagi. "The advent of drum machines almost two decades ago has made drummers want to sound like machines in certain types of music. Other examples abound in which musicians, adapting to the limitations of technology, have produced new ways of playing. And that fact has become part of our aesthetic vocabulary. As interactivity with computers becomes further integrated into our musical culture, it will become more intuitive and transparent to those who grow up with this technology."
This experiment represents another step forward in overcoming limitations imposed by geographical separation, making it possible for artists to collaborate successfully over very long distances. Even so, many artists consider themselves vagabonds: They need to spend a lot of time getting to know each other before determining whether to collaborate. So, while the Internet is not likely to substitute for this face-to-face interaction, it does appear likely to reduce the time spent investigating possible collaborators and to multiply the number of collaborations themselves.
New Media Arts Layer
Center for Research in Computing and the Arts, UCSD
Claire Trevor School of the Arts, UCI