ltc016 (Live)

ltc016 is a live performance based on a cassette released by Lovethechaos. The sound was generated from photos of vintage blank cassettes, and the video was generated from the sound. Both are the same files: what you see is the same as what you hear.

The complete live performance lasts around 55 minutes.

The 186 tracks of the original cassette are 186 photographs of cassettes opened as sound. This is not data visualisation or a random connection between image and sound, but a series of JPG files exported as AIFF: a direct digital translation based on the old idea that colour (image) and sound (music) share similarities that allow the translation of one medium into the other. Although it is not possible to save a JPG directly as AIFF, it is possible to save it as RAW, a headerless format compatible with both image and sound editing software. This means that any digital image can be automatically transformed into sound, and vice versa, without resorting to sonification, data visualisation or any other kind of random or arbitrary transformation.
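
As an illustration of the principle (not the exact tool chain used for ltc016, which relied on image and sound editors), the translation can be sketched in a few lines of Python. The file names, sample rate and bit depth below are assumptions, and WAV stands in for AIFF because the standard library supports it directly:

    import wave
    from PIL import Image  # Pillow

    img = Image.open("cassette.jpg").convert("L")  # hypothetical input; 8-bit greyscale, one byte per pixel
    raw = img.tobytes()                            # headerless pixel data, i.e. RAW

    with wave.open("cassette.wav", "wb") as out:
        out.setnchannels(1)       # mono
        out.setsampwidth(1)       # 8-bit samples, matching one byte per pixel
        out.setframerate(44100)   # arbitrary playback rate
        out.writeframes(raw)      # pixels reinterpreted directly as audio samples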

The video was generated using the same method in reverse, exporting each millisecond of AIFF sound as RAW and then as JPG. It is not a generative work: all sounds and images were “made by hand”, and the live performance is also controlled by hand, following the alphabetical order of the brands of the original cassette images.
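
Again as an illustration only, the reverse direction can be sketched as follows; the frame geometry, the file names and the assumption of 8-bit mono audio are all hypothetical:

    import wave
    from PIL import Image

    with wave.open("cassette.wav", "rb") as snd:  # assumes 8-bit mono, as written above
        raw = snd.readframes(snd.getnframes())    # the same bytes, now read as audio

    width, height = 320, 240                      # hypothetical frame geometry
    frame = raw[: width * height]                 # one frame's worth of bytes; assumes enough bytes exist
    Image.frombytes("L", (width, height), frame).save("frame0001.jpg")  # RAW to JPG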

The theory that colour/light and sound are intrinsically related goes back to Pythagoras and Aristotle, who linked colours to specific musical intervals. Pythagoras linked each celestial sphere to a tone, a harmonic interval and a colour, among other things, while Aristotle stated that:

“…these colours [viz. all those based on numerical ratios] as analogous to the sounds that enter into music, and suppose that those involving simple numerical ratios, like the concords in music, may be those generally regarded as most agreeable; as, for example, purple, crimson, and some few such colours, their fewness being due to the same causes which render the concords few. The other compound colours may be those which are not based on numbers. Or it may be that, while all colours whatever [except black and white] are based on numbers, some are regular in this respect, others irregular; and that the latter [though now supposed to be all based on numbers], whenever they are not pure, owe this character to a corresponding impurity in [the arrangement of] their numerical ratios.” On Sense and the Sensible, Aristotle.

These theories were expanded and rectified by many subsequent thinkers, scientists and artists, from Arcimboldo, who tried to match painting with music, to Newton, who divided the light spectrum into seven colours, one for each note of the musical scale.

Most of these theories rest on the fact that both light and sound are types of waves, so it seems possible to establish direct analogies based on frequency data. Nevertheless, the two ranges do not actually meet: audible sound spans roughly 20 Hz to 20 kHz, while visible light spans roughly 4×10^14 to 7.5×10^14 Hz, so any frequency-based analogy has to transpose one range onto the other.
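
A worked example (a common workaround in sound-to-colour mappings, not the method used in ltc016): transposing a pitch up one octave doubles its frequency, so a tone can be raised octave by octave until it lands in the visible band. The Python sketch below shows that A4 (440 Hz) arrives there after 40 octaves, at a wavelength of roughly 620 nm, an orange-red:

    C = 299_792_458           # speed of light, m/s

    f = 440.0                 # A4 concert pitch, Hz
    octaves = 0
    while f < 4e14:           # visible light begins around 4e14 Hz
        f *= 2                # one octave up doubles the frequency
        octaves += 1

    print(octaves, f, C / f * 1e9)  # 40 octaves, ~4.84e14 Hz, ~620 nm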

The first practical audiovisual implementations of these theories emerged in the 18th century with devices such as the Ocular Harpsichord (1725) by Louis Bertrand Castel, which later led to instruments such as the Clavilux (1919) by Thomas Wilfred and the Sarabet (1922) by Mary Hallock-Greenewalt. In any case, the relationships between light and sound established in these instruments were arbitrary, not direct translations. The closest thing we can find to a direct translation from image to sound is the optical sound experiments of experimental filmmakers such as Norman McLaren, Lis Rhodes and Guy Sherwin.

Before digital and magnetic sound, motion pictures had an optical soundtrack; that is, the soundtrack was printed directly on the margin of the photographic film. Some artists took advantage of this fact to generate the soundtrack from the film images themselves, instead of using a standard soundtrack, which led to experiments such as Synchromy (1971) by Norman McLaren, Dresden Dynamo (1971) by Lis Rhodes and Railings (1977) by Guy Sherwin, three films in which what you see is exactly what you hear.

Such experiments were not made only by visual artists: some engineers and musicians built photoelectric instruments, such as the ANS Synthesiser (1937-1957) and the Oramics machine (1957), which synthesised sounds from drawings. A similar instrument from the digital era is the UPIC by Iannis Xenakis, a compositional tool based on a tablet on which the user draws waveforms and sound envelopes.

The technique used to generate this work is based on the same idea as optical sound and visual synthesisers, but it belongs to a recent digital genre called data bending. Data bending (or databending) is a set of methods for transforming one type of file into another (for example, a sound file into an image file). Although data bending is related to glitch aesthetics, its errors are always provoked; they do not happen by chance. The result, however, is always unexpected: since the transcoding is automatic or semi-automatic, it is impossible to predict its consequences.
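
As a generic illustration of data bending (the file names and geometry below are arbitrary assumptions, and the input is expected to be at least a few kilobytes), the bytes of any file can be deliberately reinterpreted as an image; the transcoding is fully deterministic, yet what the picture will look like cannot be anticipated:

    from pathlib import Path
    from PIL import Image

    data = Path("anything.aiff").read_bytes()    # any file at all will do
    width = 512                                  # arbitrary; changing it "bends" the picture differently
    height = len(data) // width                  # as many full rows as the bytes allow
    Image.frombytes("L", (width, height), data[: width * height]).save("bent.png")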

Live
Mutek (Barcelona, Spain), 8 March 2014.
Caudorella (Barcelona, Spain), 21 March 2014.