I was contacted by the artist Tania Williard to create a program that reads data from a wind sensor installed on the shore of Lake Ontario in Mississauga and maps that data to words and sentences from a database. The words were all part of the artist's research work.
The program was written in Python and reads real-time wind data sent to a server by the sensor. It maps ranges of wind speed to entries in the word database and generates a poem four times a day. A total of 40 poems were generated during the exhibition and saved to a text file for the artist's records. Each new poem was also displayed at the exhibition site in a browser, via an HTML file generated by the script.
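The speed-to-word mapping can be sketched roughly like this. The bands and words below are invented placeholders; the real database came from the artist's research and is not reproduced here:

```python
import random

# Hypothetical word database keyed by wind-speed band (m/s).
# The actual words and thresholds belonged to the artist's research.
WORD_BANDS = {
    (0.0, 2.0): ["still", "breath", "hush"],
    (2.0, 5.0): ["ripple", "drift", "murmur"],
    (5.0, 99.0): ["gust", "surge", "howl"],
}

def words_for_speed(speed):
    """Return the word list whose band contains the given wind speed."""
    for (lo, hi), words in WORD_BANDS.items():
        if lo <= speed < hi:
            return words
    return []

def make_poem(speeds, line_length=3):
    """Build one poem line per wind-speed reading by sampling its band."""
    lines = []
    for s in speeds:
        words = words_for_speed(s)
        lines.append(" ".join(random.sample(words, min(line_length, len(words)))))
    return "\n".join(lines)

# Three readings produce a three-line poem.
poem = make_poem([1.2, 3.4, 7.8])
```

In the installation a scheduler would call something like `make_poem` four times a day, append the result to the archive file, and regenerate the HTML page shown in the browser.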
Digital Bell Tower
Digital Bell Tower is an audio installation that reads Twitter data to sense the city of Toronto. Its physical object is a papier-mâché-ish bell with a compartment that holds an Arduino. A Python script searches Twitter for the “:)” and “:(” emoticons to check whether the city is currently happy or sad. The script then sends an OSC message to Max/MSP.
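A minimal sketch of that pipeline, assuming the tweet texts have already been fetched (the Twitter query itself is omitted), with a hand-rolled OSC encoder in place of whatever library the original script used:

```python
import struct

def city_mood(tweet_texts):
    """Classify the city as 'happy' or 'sad' by counting emoticons."""
    happy = sum(t.count(":)") for t in tweet_texts)
    sad = sum(t.count(":(") for t in tweet_texts)
    return "happy" if happy >= sad else "sad"

def osc_message(address, value):
    """Pack a minimal OSC message with one int32 argument.

    OSC strings are null-terminated and padded to 4-byte boundaries;
    the type-tag string ',i' announces a single big-endian int32.
    """
    def pad(b):
        return b + b"\0" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",i") + struct.pack(">i", value)

# The packed bytes would be sent to Max/MSP over UDP, e.g.:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 7400))
mood = city_mood(["nice day :)", "traffic :(", "patio weather :)"])
msg = osc_message("/mood", 1 if mood == "happy" else 0)
```

The `/mood` address and port 7400 are placeholders; Max/MSP's `udpreceive` object would unpack the message on the other end.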
Max/MSP is used to generate the sounds, and its code can be seen in the image above. The bell's Arduino sends accelerometer data to the program, which parses it and determines the XY position of the bell. That position drives a vector synthesis, morphing between four bell samples. The happy/sad Twitter information determines how the program plays back the samples: happy plays them normally, while sad plays them in a granular way that gives a darker sound.
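One common way to turn an XY position into four mixing weights is bilinear interpolation; the Max patch's exact morphing math is not shown in the original, so this is only an illustrative sketch:

```python
def vector_weights(x, y):
    """Bilinear mixing weights for four corner samples, given an XY
    position normalized to [0, 1]. Each weight scales one bell sample;
    the weights always sum to 1 so the overall level stays constant.
    """
    return [
        (1 - x) * (1 - y),  # sample A (bottom-left)
        x * (1 - y),        # sample B (bottom-right)
        (1 - x) * y,        # sample C (top-left)
        x * y,              # sample D (top-right)
    ]

# A bell tilted toward the top-right favors sample D.
w = vector_weights(0.25, 0.75)
```

At any corner, exactly one sample plays at full level; positions in between crossfade smoothly among all four.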
A second Arduino uses a distance sensor to trigger a light every time something moves above it.
However important appears
Your worldly experience,
It is but a drop of water in a deep ravine.
This project uses granular synthesis both as the basis for an audiovisual installation and as a metaphor for Buddhist concepts, combining Max/MSP, ChucK and Processing to generate granular audio and visuals. The installation also has sand in front of the screen to reinforce the grain concept.
ChucK takes different water samples, chops them into grains ranging from 40 to 400 ms, and plays a random number of them one after the other. The result is a sound that closely resembles the sound of water in nature.
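The chopping step can be sketched in Python rather than ChucK (the original ChucK code is not shown; the sample rate and silent buffer below are assumptions):

```python
import random

SR = 44100  # sample rate assumed for this sketch

def chop_grains(samples, n_grains, min_ms=40, max_ms=400):
    """Cut n_grains random slices of 40-400 ms from a sample buffer
    and return them concatenated, one after the other."""
    out = []
    for _ in range(n_grains):
        length = int(random.uniform(min_ms, max_ms) / 1000 * SR)
        start = random.randrange(max(1, len(samples) - length))
        out.extend(samples[start:start + length])
    return out

# A silent 2-second "water sample" stands in for a real recording.
stream = chop_grains([0.0] * (2 * SR), n_grains=5)
```

Because the grains come from random positions and have random lengths, each playback reshuffles the source material, much like water never repeating itself.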
Processing is used in a similar way, but chops a video file of Lake Ontario into ‘grains’. Each square grain is a section of a frame of the video. Thus it is not the video itself that is broken into individual grains, but the notion of time.
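The video version of the same idea, sketched with plain nested lists standing in for the Processing video buffer (the grain size and frame dimensions here are arbitrary):

```python
import random

def video_grain(frames, size):
    """Pick one square 'grain': a size x size patch of pixels taken
    from a random frame of the video.

    `frames` is a list of frames, each a 2-D list of pixel values;
    this toy representation stands in for Processing's video buffer.
    """
    frame = random.choice(frames)
    h, w = len(frame), len(frame[0])
    y = random.randrange(h - size + 1)
    x = random.randrange(w - size + 1)
    return [row[x:x + size] for row in frame[y:y + size]]

# Three 8x8 dummy frames of a 'video'; each grain is a 4x4 patch.
frames = [[[f] * 8 for _ in range(8)] for f in range(3)]
g = video_grain(frames, 4)
```

Since each grain is pulled from a randomly chosen frame, adjacent grains on screen can come from different moments of the recording, which is what fragments time rather than the image alone.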
The installation is controlled by a MIDI controller. The MIDI signal is received by Max/MSP, which then forwards it to both ChucK and Processing via OSC. MIDI controls the volume of the water drops, the seagull sound and the ambient synth. It also triggers the playback of the ChucK grain synthesizer and the various parameters of the Processing program.
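The routing layer can be sketched as a simple dispatch table. The CC numbers and OSC addresses below are invented for illustration; the actual mappings used in the installation are not documented in the text:

```python
# Hypothetical routing table from MIDI CC numbers to OSC addresses.
CC_ROUTES = {
    1: "/volume/waterdrops",
    2: "/volume/seagull",
    3: "/volume/ambient",
    4: "/chuck/trigger",
    5: "/processing/grainsize",
}

def route_cc(cc_number, value):
    """Translate a MIDI CC message into an (OSC address, 0-1 float) pair,
    mirroring how Max/MSP forwards controller data to ChucK and Processing.
    Returns None for controllers that are not mapped."""
    address = CC_ROUTES.get(cc_number)
    if address is None:
        return None
    return (address, value / 127.0)

msg = route_cc(2, 64)
```

Max/MSP would then send each pair out with `udpsend`, and ChucK and Processing would each listen for the addresses they care about.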