Over the last month, I spent most of my time making a series of audiovisual pieces entitled Computer Music Studies. The sound, and the title, are by Mikel R. Nieto, who provided me with twenty tracks. This work was made for Nokodek Festival.
The initial idea was to make a series of audiovisual pieces based on digital feedback: configurations in which the audio output returns to the audio input, generating an internal digital feedback loop that is usually nonlinear and difficult to control. The sound was generated autonomously using different configurations of the same patch. (I don't know which audio software Mikel used; if you're interested in the soundtracks, you should ask him.)
The video track is not feedback but data bending, which I guess could be understood as a kind of digital feedback, because it uses a stream of data (in this case audio data) to generate another kind of file (in this case an image file). It's not feedback in the strict sense, because I'm not routing the output back to the input, but in a certain sense I'm routing the 'output' data of one format as the 'input' of another. In any case, it's not generative: it was made frame by frame.
What I did was split the audio tracks into fragments of around 41.67 milliseconds each (the duration of one video frame at 24 fps) using Audacity. I exported all those fragments as .raw files, then opened them in Photoshop as raw pixel data and saved them as .jpg. So what you hear and what you see are exactly the same data.
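For the curious, the manual Audacity/Photoshop workflow can be sketched in code. This is only an illustration, not the method actually used: the sample rate (44.1 kHz), sample format (16-bit mono PCM), frame rate (24 fps), and image width are all my assumptions, and the function names are invented for the example.

```python
# Sketch of the data-bending process: split a raw PCM audio byte stream
# into chunks of one video frame each, then reinterpret each chunk's
# bytes as grayscale pixel values, the way Photoshop reads a .raw file.
# Assumed parameters (not from the original workflow):
SAMPLE_RATE = 44100       # samples per second, 16-bit mono PCM
BYTES_PER_SAMPLE = 2
FPS = 24                  # one frame ~= 41.67 ms of audio

samples_per_frame = SAMPLE_RATE // FPS                   # 1837 samples
bytes_per_frame = samples_per_frame * BYTES_PER_SAMPLE   # 3674 bytes

def split_into_frames(raw: bytes) -> list[bytes]:
    """Split a raw audio byte stream into one chunk per video frame
    (the last chunk may be shorter)."""
    return [raw[i:i + bytes_per_frame]
            for i in range(0, len(raw), bytes_per_frame)]

def frame_to_pixels(chunk: bytes, width: int = 64) -> list[list[int]]:
    """Reinterpret a chunk's bytes as rows of 8-bit grayscale pixels;
    each byte value (0-255) becomes one pixel's brightness."""
    return [list(chunk[i:i + width])
            for i in range(0, len(chunk), width)]
```

Each resulting pixel grid would then be encoded as one JPEG frame; since JPEG compression is lossy, the image you see is a compressed rendering of the very bytes you hear.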
This video is just one of the twenty pieces; the complete work is around 40 minutes long. It can't be played live, because the data-bending part is almost 'handmade' (the conversion phase is not automatic, and it's painstakingly slow), so even though it was made for a music festival, it's more like a series of films than 'live cinema'. Machine music for machines.