Data bending: How to turn sound into images

How to turn sound into images? That’s a question a lot of people ask me when they see my image collection Transsubstantiatio. The answer is really simple: data bending. This is an easy tutorial on turning sound/audio into images/photos.

Data bending is any kind of process that involves turning one type of media into another. In this case, sound/audio into images/photos. Data bending techniques generate glitches, so data bending is part of glitch art. But what I’m going to explain is also an audio visualisation technique, so it’s between glitch and data visualisation.

How to turn sound into images 1: Audacity

Audacity 440Hz sinewave.

First, you need audio software that can save RAW files. Many different programs will work; for this tutorial I’ll use Audacity, which is cross-platform and open source, so you can download it for free. Open Audacity and then open or create an audio file: music, noise, a pure tone, whatever… It doesn’t matter.

I’ve generated a 440Hz sine wave (Generate > Tone > Waveform: Sine), but this works with any audio file, so choose whatever you want.

Audacity Export RAW

You have to export this sound as RAW: File > Export Audio > Format: Other uncompressed files. Options… Header: RAW (header-less). For Encoding, I choose U-Law or A-Law because I find the images generated with those encodings more aesthetically appealing, but maybe you’ll prefer another option. Take your time and run some tests, because the results are always unpredictable.
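If you prefer to work without Audacity, the same kind of header-less file can be produced with a few lines of Python. This is only a sketch of the idea: it writes plain unsigned 8-bit samples instead of U-Law/A-Law (Python’s standard library no longer ships a u-law encoder), and the filename `tone440.raw` is my own choice.

```python
import math

SAMPLE_RATE = 44100  # samples per second
FREQ = 440.0         # the tutorial's sine tone, in Hz

def sine_raw(freq=FREQ, rate=SAMPLE_RATE, seconds=1.0):
    """Build a header-less buffer of unsigned 8-bit samples,
    comparable to Audacity's 'RAW (header-less)' export
    (with plain U8 encoding instead of U-Law/A-Law)."""
    out = bytearray()
    for i in range(int(rate * seconds)):
        value = math.sin(2 * math.pi * freq * i / rate)  # in [-1.0, 1.0]
        out.append(int((value + 1.0) / 2.0 * 255))       # map to [0, 255]
    return bytes(out)

# Write a raw file that an image editor can later open as pixels
with open("tone440.raw", "wb") as f:
    f.write(sine_raw())
```

Because the file has no header, whatever program opens it next is free to reinterpret the bytes however it likes, which is the whole trick.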

Now you have a .raw file. For the next part I use Photoshop (any version). I have tried other image editors and some of them work, but not all: some don’t recognise RAW files saved with Audacity.

How to turn sound into images 2: Photoshop

Photoshop RAW

Open your RAW file in Photoshop and you’ll see this window. Choose the dimensions; the only limitation is the size of the RAW file. If you choose very large numbers, you’ll see the message “Specified image is larger than file”. If that happens, reduce Width and Height.
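That “larger than file” warning is just byte arithmetic: the image needs width × height × channels bytes (at 8 bits per channel), and the file has to contain at least that many. A small sketch of the check (the function name is my own, not Photoshop’s):

```python
import os

def max_square_side(path, channels=3):
    """Largest square image (in pixels per side) that a raw file
    can fill at 8 bits per channel: width * height * channels
    must not exceed the file size in bytes."""
    usable_pixels = os.path.getsize(path) // channels
    return int(usable_pixels ** 0.5)
```

So a 3 MB raw file gives you at most a 1000 × 1000 colour image, and roughly 1732 × 1732 in black & white.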

Channels Count sets the colour information: 1 for black & white images, 3 for colour images. You can check Interleaved or not; I usually uncheck it, purely for aesthetic reasons. Click OK and that’s it!

440Hz Tone open as image

This is my 440Hz sine wave. I chose Width: 1000, Height: 1000, Channels Count: 3, and unchecked Interleaved. If you use other dimensions, even with exactly the same audio file, you’ll get a different pattern.
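The reason the same bytes give a different pattern at every width, and why the Interleaved checkbox matters, is simply where each byte lands. A minimal sketch of the two layouts Photoshop offers (the helper `pixel_at` is hypothetical, just for illustration):

```python
def pixel_at(raw, x, y, width, height, channels=3, interleaved=True):
    """Read pixel (x, y) from a raw byte buffer under the two layouts:
    interleaved stores the bytes as RGBRGB...; non-interleaved stores
    all the red bytes first, then all the green, then all the blue."""
    idx = y * width + x
    if interleaved:
        base = idx * channels
        return tuple(raw[base + c] for c in range(channels))
    plane = width * height          # bytes per colour plane
    return tuple(raw[c * plane + idx] for c in range(channels))
```

Change the width, the channel count or the layout and every audio sample is routed to a different pixel and colour channel, so the picture reshuffles completely even though the sound hasn’t changed.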

If you use music instead of pure tones, the result will be completely different: sometimes just noise, sometimes much more interesting. Since you can’t predict the outcome, you’ll get a lot of uninteresting images. But if you keep trying, you’ll also get wonderful visuals from time to time!

The Best Button Badge Designs of 2016

Button Badge Design by Blanca Rego

One of my data bending/glitch/sound illustrations/designs has been featured in an article published on the Digital Arts magazine website: “The Best Button Badge Designs of 2016 now available as prints”.

I found this by chance, and it’s a bit funny because I spent many years translating articles for the Spanish edition of that magazine. Anyway, my design is not available as a print, but if you like it you can buy the button badge.

LTC016

This is a promo video for the next Lovethechaos release, a cassette I’ve been working on over the last year. The video was generated from the sound; both are the same files, so what you see is exactly what you hear.

More about the idea and the process from the promo sheet:

The 186 tracks of this cassette are 186 photographs of cassettes opened as sound. It is not data visualisation or a random connection between image and sound, but a series of JPG files exported as AIFF. That is, a direct digital translation based on the old idea that colour (image) and sound (music) share similarities that allow the translation of one medium into the other. Even though it is not possible to save a JPG as AIFF, it is possible to save it as RAW, a digital format compatible with both image and sound editing software. This means that any digital image can be automatically transformed into sound, and vice versa, without carrying out processes of sonification, data visualisation or any other kind of random or arbitrary transformation.

The theory that colour/light and sound are intrinsically related goes back to Pythagoras and Aristotle, who linked each colour to a specific musical interval. Pythagoras linked each celestial sphere to a tone, a harmonic interval and a colour, among other things, while Aristotle stated that:

“…these colours [viz. all those based on numerical ratios] as analogous to the sounds that enter into music, and suppose that those involving simple numerical ratios, like the concords in music, may be those generally regarded as most agreeable; as, for example, purple, crimson, and some few such colours, their fewness being due to the same causes which render the concords few. The other compound colours may be those which are not based on numbers. Or it may be that, while all colours whatever [except black and white] are based on numbers, some are regular in this respect, others irregular; and that the latter [though now supposed to be all based on numbers], whenever they are not pure, owe this character to a corresponding impurity in [the arrangement of] their numerical ratios.” On Sense and the Sensible, Aristotle.

These theories were expanded and rectified by many subsequent thinkers, scientists and artists, from Newton, who divided the light spectrum into seven colours, one for each note of the musical scale, to Arcimboldo, who tried to match painting with music.

Most of these theories are based on the fact that both light and sound are types of waves, so it is possible to establish direct analogies based on frequency data. Nevertheless, the range of frequencies in which light and sound coexist is quite limited.

The first practical audiovisual implementations of these theories emerged in the 18th century with devices such as the Ocular Harpsichord (1725) by Louis Bertrand Castel, which later evolved into instruments such as the Clavilux (1919) by Thomas Wilfred and the Sarabet (1922) by Mary Hallock-Greenewalt. In any case, the relationships between light and sound established in these instruments were arbitrary, not direct translations. The closest thing we can find to a direct translation from image to sound is the optical sound experiments of experimental filmmakers such as Norman McLaren, Lis Rhodes and Guy Sherwin.

Before digital sound and magnetic tape, motion pictures had an optical soundtrack: the soundtrack was printed directly on the margin of the photographic film. Some artists took advantage of this fact to generate the soundtrack from the film images instead of using a standard soundtrack, which led to experiments such as Synchromy (1971) by Norman McLaren, Dresden Dynamo (1971) by Lis Rhodes and Railings (1977) by Guy Sherwin, three films in which what you see is exactly what you hear.

These experiments were not made only by visual artists: some engineers and musicians built photoelectric instruments, such as the ANS Synthesiser (1937-1957) and Oramics (1957), that synthesised sounds from drawings. Another similar instrument, from the digital era, is the UPIC by Iannis Xenakis, a compositional tool based on a tablet on which the user draws waveforms and sound envelopes.

The technique used to generate this cassette is based on the same idea as optical sound and visual synthesisers, but it belongs to a recent digital genre called data bending. Data bending (or databending) is a set of methods that transform one type of file into another (for example, a sound file into an image file). Even though data bending is related to glitch aesthetics, data bending errors are always provoked; they do not happen by chance. However, the result is always unexpected: since the transcoding is automatic or semi-automatic, it is impossible to predict its consequences.

Video teasers for syn3rgy, ket3m’s new ep

Some time ago, Ket3m (Shay Nassi aka Mise En Scene & Tom Kemeny aka Darmock) asked me if I would like to do a promo video for the release of their new ep, entitled syn3rgy. Ket3m’s work involves experiments in minimalism, abstraction and the deconstruction of acoustic mathematical patterns. The ep is yet to be released, but you can listen to one of the tracks on The Wire Tapper 32, included with The Wire issue 354 (August).

Ket3m sent me two one-minute excerpts from a couple of tracks and, as I’m obsessed with the relationship between sound and image, I thought it would be interesting to translate the sounds directly into images. I mean a real transposition, not an arbitrary interpretation. To achieve that, I chopped the tracks into very short fragments and saved all those tiny sound files as JPGs to use them as frames. The result is these two videos in which what you see is exactly the same files as what you hear.
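The chopping step is the only mechanical part of the process, and it can be sketched in a few lines. This is not the exact script I used, just the idea: split the raw audio into equal-sized chunks, one chunk per video frame (the frame size in bytes would depend on the frame rate and image dimensions you want).

```python
def chop(raw, frame_bytes):
    """Split a raw audio buffer into equal-sized chunks, one chunk
    per video frame; a trailing partial chunk is dropped."""
    return [raw[i:i + frame_bytes]
            for i in range(0, len(raw) - frame_bytes + 1, frame_bytes)]
```

Each chunk is then reinterpreted as image data, exactly as in the Photoshop tutorial above, so every frame of the video really is the same bytes as the slice of audio playing under it.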

Readings

• Viral Noise and the (Dis)Order of the Digital Culture [read]
Jussi Parikka

• Historia de la relación música/imagen desde Aristóteles a los videojockeys (I) [leer I y II]
Ana María Sedeño Valdellós

• Microbionic: Radical Electronic Music and Sound Art in the 21st Century
Thomas Bey William Bailey

• State Of The Union: The Synesthetic Experience In Experimental Music And Sound Art [read]
Thomas Bey William Bailey

• Microspores [read]
Jeff Noon

• Errormancy: Glitch as Divination – A New Essay by Kim Cascone [read]
Kim Cascone

Transsubstantiatio

I’ve always been obsessed with the relationships between sound and image, and data bending is one of the easiest methods to transform an audio track into a still image. I like this approach because it’s an automatic transformation, not a prepared visualisation. You don’t decide the outcome; it’s just the result of a mechanical process that you can’t control. Sometimes the resulting images are not very interesting, but sometimes they are really mesmerising.

I’ve been doing these kinds of experiments for years, and lately I thought it might be interesting to show them in some form, so I created Transsubstantiatio.