This week’s performance of “Completer: An Immersive Experience” at the University of California, San Diego might be the closest the average person comes to experiencing synesthesia—the condition in which some people have the sensation of “hearing” images or “seeing” sounds.
The live performance, which takes place at 6 p.m. Thursday, May 15 in UC San Diego’s Atkinson Hall Auditorium, will make use of a special visual programming language that allows the performers to manipulate on-screen images in real time to visually correspond with the live electronic music they’re performing. In other words, they’ll play the roles of DJ and VJ simultaneously.
It’s part of the Qualcomm Institute’s IDEAS series, or Initiative for Digital Exploration of Arts and Sciences, which aims to encourage all types of artists and technologists to take advantage of the advanced audio-visual facilities, services and personnel at QI, which is the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2).
As both an artist and a member of the QI audio-visual team, “Completer” co-creator Samuel Doshier plans to take full advantage of the auditorium’s performance space, 7.1 Meyer Surround System and 4K projector to make the experience of listening to live “indie electronic” music (which combines both acoustic and electronic sounds) more engaging. Doshier will be performing with UCSD Price Center audio engineer David Lopez de Arenosa and Spencer Mussetter, a mechanical engineering graduate. Doshier and Lopez de Arenosa are graduates of the UC San Diego Interdisciplinary Computing and the Arts/Music (ICAM) program (the first iterations of “Completer” were created for their ICAM senior projects).
“We have a bone to pick with performers of live electronic music,” says Lopez de Arenosa. “The whole point of going to a concert is that it has live elements—that’s where the magic happens. But a lot of electronic musicians don’t really ‘perform’ live. They hide behind a laptop and press play, or they turn some mysterious knobs and press random buttons. Nobody can really tell what they are actually doing, which means there’s a huge disconnect between the performer and the audience.”
Adds Doshier: “If I’m just playing something back you don’t get the sense that you’re listening to something someone is expressing. It’s becoming a one-way street, where everything is pre-recorded and visuals are preset.”
“We’re trying to bring back that two-way connection,” continues Lopez de Arenosa, “bring the ‘live’ element back in and have everything triggered and produced in the moment by using things like looping, live sampling and triggering visual samples. That way the music can react to the audience, and the audience can interact with the music.”
Using TouchDesigner, a visual programming environment, running on the computer originally built for the QI Recombinant Media Lab CineChamber, the performers will pre-program a set of instructions for how the audio and visuals will interact. Then, with a single set of playback controls driving both audio and visuals, they’ll leave the rest up to the in-the-moment playfulness of performing.
Doshier explains: “Let’s say David is singing and we capture that in audio waveform. We can use that data to generate the image of a circle and then change the way the circle is shaped depending on how the vocals change. Or let’s say I hit the drum pad with sticks and that sends out MIDI (Musical Instrument Digital Interface) signals. I can program the signals in real time so that each different part of the drum pad makes some video element pop up when I hit it.”
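The two mappings Doshier describes can be sketched in a few lines of Python. This is purely illustrative—the names (`waveform_to_radius`, `PAD_TO_CLIP`) and the clip files are hypothetical, not the performers’ actual TouchDesigner network—but it shows the idea of driving a visual parameter from audio data and triggering video elements from MIDI notes:

```python
import math

# Hypothetical sketch: map a vocal's loudness to a circle's size,
# and map drum-pad MIDI notes to video clips. Names and files are
# illustrative, not the actual "Completer" setup.

def waveform_to_radius(samples, base_radius=50.0, scale=100.0):
    """Map the RMS amplitude of an audio buffer to a circle radius,
    so louder singing draws a bigger circle."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return base_radius + scale * rms

# Each drum-pad note number pops up a different video element.
PAD_TO_CLIP = {
    36: "kick_burst.mov",    # pad 1
    38: "snare_flash.mov",   # pad 2
    42: "hat_sparkle.mov",   # pad 3
}

def on_midi_note(note, velocity):
    """Return the video clip to show for a pad hit, or None."""
    if velocity == 0:  # note-off: nothing to trigger
        return None
    return PAD_TO_CLIP.get(note)
```

In a real performance system these functions would run once per audio buffer and per incoming MIDI event, so the visuals track the music frame by frame.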
“In other words,” adds Lopez de Arenosa, “the systems that we’ve designed are an instrument we play.”
The performers also plan to play with ‘spatializing’ the sounds of their voices and instruments to make the sound as immersive as possible.
“For the listener, the sound won’t be ‘coming at them’—it will be a 3D sound experience,” says Doshier. “It will sound to them like they’re sitting inside the sound.”
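One common building block of that kind of spatialization is constant-power panning, which moves a sound between speakers without its perceived loudness dipping in the middle. The sketch below shows the two-speaker case; a 7.1 rig like the auditorium’s Meyer system distributes gain among eight channels, but the principle is the same. This is a generic technique, not a description of the performers’ actual setup:

```python
import math

def constant_power_pan(pan):
    """Constant-power pan between two speakers.

    pan in [-1, 1]: -1 = hard left, 0 = center, 1 = hard right.
    Returns (left_gain, right_gain) with left**2 + right**2 == 1,
    so total acoustic power stays constant as the sound moves.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # map pan to [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

Sweeping `pan` from -1 to 1 over time makes a voice or instrument appear to travel across the room—one small piece of the “sitting inside the sound” effect.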