The Sound of Science: Calit2's Sonic Arts Research and Development Group

By Tiffany Fox, (858) 246-0353, tfox@ucsd.edu

San Diego, Calif., April 28, 2011 — When most people think about audio systems, they think about bumpin’ car stereos, thunderous multiplex surround-sound or the ultimate in home theater gaming environments. But for the researchers in the University of California, San Diego’s Sonic Arts Research and Development group, audio systems are a critical tool for improving architecture, understanding complex scientific data and even saving lives.
Peter Otto
Professor Peter Otto directs the Calit2 Sonic Arts Research and Development group. “Our focus is on auditory realism,” he says. “Knowing how sound will affect a working environment is crucial for intelligibility and communications.”

“For mission-critical environments like hospitals or complex communications facilities, knowing how sound will affect a working environment is crucial for intelligibility and communications,” explains Professor Peter Otto, director of the group and a faculty member in UCSD’s Department of Music. “Our focus is on auditory realism – being able to simulate, emulate, reproduce or predict the auditory signature of a particular acoustic or architectural environment, or to spatially isolate or highlight particular sound sources while suppressing others.”
 
A dark, cave-like space with high walls and rows of audio mixing boards, the Audio Spatialization Laboratory (or Spat Lab, as it’s nicknamed) is where the group does much of its development work in audio spatialization and sonification. Housed at the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2), the Spat Lab is one of the signature laboratories devoted to audio research and music composition at the Calit2-affiliated Center for Research and Computing in the Arts (CRCA).

Otto’s research team includes Audio Research Engineers Suketu Kamdar and Toshiro Yamada; postdoctoral composer, researcher and UCSD alum Nathan Brock; and Computer Music Ph.D. candidates Joachim Gossman and Michelle Daniels.

Also conducting research in the group is Calit2-affiliated neuroscientist Dr. Eve Edelstein. Edelstein, who is also trained in architecture, is analyzing how the noise created during the delivery of healthcare can hamper communications between doctors, nurses and patients, and even lead to life-threatening situations.

Modeling Sound, Mitigating Harm

In a hospital environment, explains Edelstein, “detection of body sounds like heartbeat and lung function can be diminished by competing or masking sounds,” such as constantly running HVAC units or medical equipment that periodically sounds an alarm.

“During a shift change, when the nurses are all exchanging critical information about their patients at once – and in the presence of equipment alarms – the impulse noise can peak at 120 decibels. That’s equivalent to the sound of a jet engine.”

Even without the excessive noise of a shift change, Edelstein’s research reveals that the average noise level in a hospital environment is 85 decibels – enough to cause noise-induced hearing loss with prolonged exposure.
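Occupational health guidelines give a sense of scale here. The short sketch below is not from Edelstein’s study; it simply applies the published NIOSH recommended exposure limit, which sets 85 dBA as the safe level for an 8-hour shift and halves the permissible time for every 3-decibel increase (the “3-dB exchange rate”):

    # A minimal sketch applying the NIOSH recommended exposure limit:
    # 85 dBA for an 8-hour shift, with every 3 dB increase halving the
    # permissible duration (the "3-dB exchange rate").

    def permissible_hours(level_dba):
        """Permissible daily exposure, in hours, at a given A-weighted level."""
        return 8.0 / (2.0 ** ((level_dba - 85.0) / 3.0))

    for level in (85, 94, 100, 120):
        print("%d dBA -> %.4f hours" % (level, permissible_hours(level)))

By that formula, an entire shift at the 85-decibel hospital average sits right at the limit, and a 120-decibel alarm peak exhausts the daily allowance in roughly nine seconds.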

“Think about the effect of this ambient masking noise on doctors,” she adds. “Their tolerance for this noise level goes down as they fatigue, and the noise interferes with concentration and communication.”

Compounding the problem, says Edelstein, is a long list of drugs with look-alike, sound-alike names, like hydroxyzine (which is used to treat allergy-induced itching, nausea and anxiety) and hydralazine (which is used to treat high blood pressure). In a noisy environment, with an electrocardiogram machine beeping in her ear and an audible conversation going on in the adjacent room, a nurse might mishear the medication or dosage, which could put the patient at risk for severe side effects or even death. The Institute for Healthcare Improvement estimates that nearly 15 million instances of hospital-induced medical harm occur in the US each year – a rate of more than 40,000 instances per day.

To study the role of competing noise and architectural design in patient care, Otto, Edelstein and their colleagues are conducting experiments in Calit2’s StarCAVE virtual reality system. The StarCAVE is a five-sided, immersive 3D environment where scientific models and animations are projected on 360-degree screens surrounding the viewer, and onto the floor below.

Watch a video of a simulated hospital environment recorded inside the StarCAVE with Toshiro Yamada and Eve Edelstein. Length: 1:27. [Windows Media and a broadband connection required.]

The CAVE’s unique visualization and sound systems allow the researchers to simulate immersion in a real hospital environment, where recordings of ambient sound and equipment noise interact with actual clinical conversations recorded with the help of Dr. Kevin Patrick, director of the Calit2-based Center for Wireless and Population Health Systems.

Using the Sonic Arts’ Sound Server – a graphically driven, networkable framework for placing sound objects in 3D environments – Joachim Gossman is able to manipulate real-world data and virtually model them in a neutral sound environment. He might take absorption coefficients for typical building materials, for example, and combine them with recorded sounds to determine how building material choices will affect the intelligibility of communications in a busy nursing station.

The technology is so nuanced that it can account for moisture in the air, and for how that moisture would affect sound if a structure were built at sea level.
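The Sound Server’s internal acoustic model isn’t described here, but the classical Sabine relation gives a feel for how such inputs combine: each surface contributes its area times its absorption coefficient, and a humidity-dependent air-attenuation term scales with the room’s volume. The following is a minimal sketch with illustrative values, not the Sound Server’s actual code:

    # Sabine's reverberation formula: RT60 = 0.161 * V / (A + 4mV), where
    # A sums each surface's area times its absorption coefficient, and m
    # is an air-attenuation coefficient that depends on humidity,
    # temperature and frequency (a real system would derive m from a
    # standard such as ISO 9613-1).

    def rt60_sabine(volume_m3, surfaces, air_m=0.0):
        """Sabine reverberation time in seconds for a list of
        (area_m2, absorption_coefficient) surface pairs."""
        absorption = sum(area * alpha for area, alpha in surfaces)
        return 0.161 * volume_m3 / (absorption + 4.0 * air_m * volume_m3)

    # Hypothetical 10 m x 8 m x 3 m nursing station:
    room = [
        (80.0, 0.02),   # marble floor: beautiful, but highly reflective
        (80.0, 0.15),   # acoustic ceiling tile
        (108.0, 0.03),  # painted parallel walls
    ]
    print(rt60_sabine(240.0, room, air_m=0.0005))  # ~2.2 s: a "live," noisy room

Swapping the marble floor (absorption around 0.02) for carpet (around 0.3) shortens the reverberation time sharply – exactly the kind of trade-off the simulator lets researchers hear rather than merely calculate.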

“This acoustic architecture simulator is unique,” Gossman says, “because other systems use fixed positions of sound sources, listeners and walls, etc., while in our case we can modify buildings in real-time as people move through them and the results are immediately audible.”

The team’s research, called SoniCAVE, is funded by a grant from Calit2’s Strategic Research Opportunities program, with matching funds from a grant from San Diego-based HMC Architects. SoniCAVE stems from a project known as CaveCAD, headed by Edelstein and Eduardo Macagno, a Calit2-affiliated professor and founding Dean of UCSD’s Division of Biological Sciences.

Adds Edelstein: “The advantage with the VR environment is that we can predict how building materials and other design choices affect the sound in a room without the time and expense of building several mockups of the room.

“We can tell the VR system what the materials consist of,” she continues, “be they brick or glass or marble. Marble floors, for example, are beautiful, but they may make the ambient noise in a room a lot louder. Parallel walls are inexpensive and easy to design, but cause echoes and other problems that can affect intelligibility and the overall noisiness of an environment.

“We can also add data about the HVAC system in the room, discover how multiple adjacent rooms might interfere with one another and model it all in one place,” she adds. “Architects can do this now on a desktop, of course, but they’re always at a bird’s eye view. Often, they have to rely on predictive modeling software that’s based on large theaters and concert halls, not hospitals. And even those models may not include all construction materials.”

According to Edelstein, the CAVE not only provides architects with a unique perspective, it could completely change the sequence of architectural design.

Hear a clip of 'look-alike, sound-alike drugs' recorded using the Sonic Arts’ Sound Server. Length: 0:07. [Windows Media and a broadband connection required.]

“Currently, concepts are developed in sketches and worked on before a client can see them,” she explains. “With the CAVE, we can avoid this time lag because the clients can step right into a virtual model and see the building for themselves.”

Adds Otto: “The system we’re developing enables you to both see and hear the consequences of design variables. If you can stand in the room and see and hear how the architecture changes sound, then you can start making much finer judgments. Our immediate objective is to test and influence the quality of acoustical design in real hospital environments, but there are a lot of other applications, in health, security, education, and the arts and entertainment, where we might want to model the same type of data. The hope is that this will become a diagnostic test for architects. There are really concrete economic and qualitative advantages to working this way.”

Virtual Haircuts, Safer Cars and a “Puff of Wind”

In keeping with Calit2’s parallel research theme of mixed virtual and physical collaboration, the Sonic Arts Research and Development group is also on the forefront of research into sonification (using sound to help convey time-based or graphical data) and videoconferencing around massive networked displays, such as Calit2’s ultra high-resolution HIPerSpace wall.

The Sonic Arts team’s efforts in this realm include producing super-realistic sound and music imaging for data presentation, visualization, entertainment, art and communications.

“There are tough audio management problems in these environments,” remarks Toshiro Yamada. “The typical surround-sound system, for example, assumes a specific speaker setup to generate an audio ‘sweet spot,’ where the auditory imaging is stable and robust. But not all rooms can accommodate the proper specifications, which further shrinks a sweet spot that’s already small.”

Hear a sound clip of the 'virtual haircut.' Length: 4:29.

In their work to expand the elusive sweet spot, Yamada and visiting researcher Filippo Fazi from the University of Southampton developed a new flexible panning algorithm that allows for better surround-sound imaging and sound field control. “This algorithm solves the problem of arbitrary speaker location, because no matter where you are in the room, the speakers all work together to optimize the sound for you. This means that the user will always be in the sweet spot,” explains Yamada.
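The algorithm itself isn’t spelled out in this article, but it belongs to the same family of techniques as vector-base amplitude panning (VBAP), in which a phantom source is placed by solving for the gains of the nearest speakers. Here is a textbook 2D VBAP sketch with hypothetical speaker angles, not Yamada and Fazi’s method:

    # Generic 2D vector-base amplitude panning (VBAP) -- a standard
    # technique, not Yamada and Fazi's algorithm. Solve p = g1*l1 + g2*l2
    # for gains g1, g2, where l1 and l2 are speaker direction vectors and
    # p is the direction of the virtual source.
    import math

    def vbap_pair_gains(source_deg, spk1_deg, spk2_deg):
        def unit(deg):
            rad = math.radians(deg)
            return math.cos(rad), math.sin(rad)
        (l1x, l1y), (l2x, l2y) = unit(spk1_deg), unit(spk2_deg)
        px, py = unit(source_deg)
        det = l1x * l2y - l2x * l1y          # invert the 2x2 speaker matrix
        g1 = (px * l2y - py * l2x) / det
        g2 = (l1x * py - l1y * px) / det
        norm = math.hypot(g1, g2)            # constant-power normalization
        return g1 / norm, g2 / norm

    # Speakers at 20 and 70 degrees, phantom source at 45 degrees:
    print(vbap_pair_gains(45.0, 20.0, 70.0))   # roughly equal gains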

With initial funding from long-time Calit2 industry partner Qualcomm, the group has also developed a system for delivering highly localized audio through a compact array of speakers. When the system – known as “SoundBender” – is in “beam mode,” different source content can be steered to various angles, generating separate sound fields for listeners in different locations. The audio beams are purposely narrow to minimize leakage to adjacent listening areas.
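How SoundBender forms its beams isn’t specified here, but the classic way to steer sound from a speaker array is delay-and-sum: delaying each element slightly makes the individual wavefronts line up in one chosen direction and partially cancel elsewhere. A minimal sketch, with a hypothetical array geometry:

    # Classic delay-and-sum steering for a linear array -- a textbook
    # technique, not necessarily SoundBender's implementation. Delaying
    # element i by i * d * sin(theta) / c aims the beam at angle theta.
    import math

    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def steering_delays(n_speakers, spacing_m, angle_deg):
        """Per-speaker delays (seconds) that steer the beam to angle_deg."""
        delays = [i * spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
                  for i in range(n_speakers)]
        d_min = min(delays)
        return [d - d_min for d in delays]   # shift so every delay is causal

    # Hypothetical 16-element array, 4 cm spacing, beam steered 30 degrees
    # toward one listener; a second delay set could aim different content
    # at another listener from the same array.
    print(steering_delays(16, 0.04, 30.0))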

“It lets us create a private listening experience in a public space,” remarks Suketu Kamdar. The SoundBender can also be used in “binaural mode,” which provides vivid virtual surround sound and enables spatially enhanced conferencing and audio applications.
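Production binaural systems typically render with measured head-related transfer functions (HRTFs). As a self-contained stand-in, the sketch below fakes the two dominant cues – interaural time and level differences – to pull a mono sound to one side; it is an illustration of the principle, not SoundBender’s rendering chain:

    # Crude binaural cue sketch (ITD/ILD only) -- real binaural rendering
    # would convolve the source with measured HRTFs instead.
    import numpy as np

    def pan_binaural(mono, sample_rate, azimuth_deg, head_radius=0.0875):
        """Return (left, right) channels with simple interaural cues applied."""
        az = np.radians(azimuth_deg)                    # positive = to the right
        itd = head_radius / 343.0 * (abs(az) + np.sin(abs(az)))  # Woodworth ITD
        lag = int(round(itd * sample_rate))             # far-ear delay, samples
        gain = 10.0 ** (-6.0 * abs(np.sin(az)) / 20.0)  # up to ~6 dB far-ear drop
        far = np.concatenate([np.zeros(lag), mono])[:len(mono)] * gain
        near = mono
        return (far, near) if azimuth_deg >= 0 else (near, far)

    sr = 48000
    t = np.arange(sr) / sr
    buzz = 0.2 * np.sin(2 * np.pi * 440 * t)    # stand-in for a clipper recording
    left, right = pan_binaural(buzz, sr, 60.0)  # the buzz sits off to the right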

Adds Kamdar: “This is a really intelligent system, and the applications are endless. You can imagine a scenario where each speaker array is addressing three different people with targeted messages, perhaps in different languages, and users just stand in the audio beam that’s broadcasting the content they want to hear. We even have a car company looking into the technology for use in automobiles, where one audio beam could provide GPS directions for the driver, and another could provide music for passengers in the back seat.”

Kamdar notes that the team is also working with Calit2-affiliated researcher Mohan Trivedi’s Laboratory for Intelligent and Safe Automobiles (LISA) to integrate the array into a driver alert system that Trivedi’s team is developing. The U.S. Armed Forces are also interested in using the technology to replace the headphones that military personnel wear while on duty. For fun, the team likes to use the array to demo what they call a ‘virtual haircut,’ in which a recording of a pair of clippers swirls around one’s head just as it would sound in the barber shop.

The researchers plan to outfit the Calit2 HIPerSpace wall and the institute’s other large, multi-screen displays – such as the inexpensive 3D VR system known as the NexCAVE – with the speaker arrays to provide localized, targeted audio so that multiple people can collaborate simultaneously.

Kamdar and Yamada are also working to harness the technology at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, which has established a special partnership with Calit2 to build a state-of-the-art visualization and virtual reality research and training facility at its campus. The partnership with KAUST also includes plans to equip the massive displays there with microphone arrays that could capture discrete voices, even in a group situation, without the need for a handheld microphone.

Improving audio imaging for motion pictures is another research thrust for the Sonic Arts group. Audio imaging involves recording and reproducing sound in a way that preserves the spatial location of the sound source. A recording of the sound a truck makes as it passes by wouldn’t just sound like a vague whoosh – with the group’s panning algorithms and sound motion control techniques, the sound becomes a full-blown audio illusion as it originates from one side of the room, passes through and then exits out the other side, Doppler effect and all.
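The group’s panning and motion-control code isn’t shown here, but the physics behind the pass-by illusion is simple to state: the perceived pitch is the source frequency scaled by how quickly the source closes on or recedes from the listener. A minimal sketch with a made-up truck:

    # Minimal pass-by Doppler sketch -- the physics behind the illusion,
    # not the group's actual sound-motion-control code.
    import math

    def doppler_pitch(f_source, speed, closest_m, t, c=343.0):
        """Observed frequency at time t; t = 0 is the closest approach."""
        x = speed * t                     # source position along its path
        r = math.hypot(x, closest_m)      # distance from source to listener
        v_radial = speed * x / r          # > 0 receding, < 0 approaching
        return f_source * c / (c + v_radial)

    # A 100 Hz engine note on a truck doing 20 m/s, passing 5 m away:
    for t in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print("t=%+.1fs  %.1f Hz" % (t, doppler_pitch(100.0, 20.0, 5.0, t)))

The pitch glides down through the moment of closest approach, which is why the rendered truck reads as motion rather than a vague whoosh.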

“You might even feel a little puff of wind,” jokes Otto.

The team recently paired with Disney Studios to examine ways to enhance the audio post-production process for motion pictures by utilizing advanced networking strategies for distant collaboration.

“This project involved streaming high-quality media between sites over photonic networks, along with video-conferencing and control information, to allow unprecedented freedom in the location of personnel and computing resources,” said Nathan Brock. “This flexibility increases collaborative opportunities while making cinema post-production workflows more efficient and decreasing costs.”

The project, supported by Disney and NTT, builds on an ongoing five-year collaboration with Lucasfilm/Skywalker Sound to use networking to improve audio post-production, and operates in tandem with a pending grant with a major motion picture studio to enhance surround-sound for cinema. 

For Otto, working in Sonic Arts R&D is another chapter in a varied career that has included cello performance, composition, software and hardware design, audio engineering and facilities design, and teaching. What began with a love of music and sound – and is still motivated by it – has developed into an inexhaustible curiosity about every aspect of sound: not just the art, but the science, too.

“You know, as a student of the arts I wouldn’t have expected to be collaborating with neuroscientists, architects and electrical engineers. But that’s part of the essence of Calit2,” he remarks. “You have coffee with a nanoscientist and ride the elevator with an expert on brain imaging. Pretty soon you’re talking about how new materials and analytical techniques might improve the performance of communications systems or environmental acoustics, and in turn, how new technologies might make a hospital work environment healthier both for patients and the people who work long, stressful shifts in those environments. An hour later you’re in a meeting with a world authority on networked 3D visual imaging, discussing how advancements in computer music might be applied to collaborative visualization environments.

“This place is endlessly fascinating. The hours can be long and it’s hard to find enough time to work in the labs and studios, but this is seriously fun. I can hardly imagine a more stimulating place to work.” 

Media Contacts

Tiffany Fox, (858) 246-0353, tfox@ucsd.edu