SoundScan was created for a short, one-week assignment. The subject of the assignment was audio-to-video or video-to-audio transmutation: translating video into audio, or the other way around. There are many ways to do this, each producing different results, which is why I started out with some experimentation.

I decided to create something that would allow the user to look at something visual in a different way, or, rather, to listen to something visual. SoundScan is a device built to do just that. A camera inside the arm records anything the user places under it. The camera then sends this footage to a Processing sketch, which reads the colour and lightness information of the picture and translates it into audio.
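The Processing sketch itself isn't reproduced here, but the core idea of translating lightness into sound can be sketched in plain Java (the language Processing is built on). This is a minimal, hypothetical example assuming a linear lightness-to-pitch mapping; the actual sketch's mapping and names may differ.

```java
public class SoundScanMapping {
    // Approximate perceived lightness of an RGB pixel (0.0 = black, 1.0 = white),
    // using the common Rec. 601 luma weights.
    static double lightness(int r, int g, int b) {
        return (0.299 * r + 0.587 * g + 0.114 * b) / 255.0;
    }

    // Map a lightness value to a frequency in Hz, interpolating linearly
    // between a low and a high pitch: darker pixels sound lower, lighter
    // pixels sound higher.
    static double lightnessToFrequency(double lightness, double lowHz, double highHz) {
        return lowHz + lightness * (highHz - lowHz);
    }

    public static void main(String[] args) {
        // A black pixel maps to the bottom of the range, a white pixel to the top.
        System.out.println(lightnessToFrequency(lightness(0, 0, 0), 110.0, 880.0));
        System.out.println(lightnessToFrequency(lightness(255, 255, 255), 110.0, 880.0));
    }
}
```

Scanning the image column by column and feeding each frequency to an oscillator would then turn the picture into a sweep of sound, with colour information free to drive other parameters such as timbre or panning.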

I use SoundScan to look at my own sketches in a completely different way. I often feel like I am getting stuck because all I am doing is looking at sketches I have previously made; with SoundScan I can listen to them instead. Sound affects humans very differently than visuals do, and therefore SoundScan can give the user new insights.