My visual interests stem from a desire to create immersive experiences and environments, and to build mappings between aural and visual phenomena. I've done a variety of audiovisual work over the past couple of years, ranging from experiments in generative music and visuals, to live showcase performances, to developing audio-reactive visuals for others, and more. Most of this work is done in Max/MSP/Jitter, but I have used other tools (Processing, the Adobe Creative Suite, and others) as appropriate. This page documents a few examples.

A rolling FFT waterfall plot, displaying my piece Songs to Joannes VII (poetry by Mina Loy).
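The idea behind a rolling waterfall is simple: take short windowed FFT frames of the incoming audio, and scroll a fixed-depth history of magnitude spectra so the newest frame pushes the oldest one out. The sketch below (in Python/NumPy rather than Jitter, purely for illustration; the function and class names are my own) shows the core of that process, minus the actual drawing.

```python
import numpy as np

def stft_frames(signal, frame_size=1024, hop=512):
    """Slice a signal into overlapping Hann-windowed frames
    and return one magnitude spectrum per frame."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    spectra = np.empty((n_frames, frame_size // 2 + 1))
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_size] * window
        spectra[i] = np.abs(np.fft.rfft(frame))
    return spectra

class Waterfall:
    """Fixed-depth scrolling history of spectra: the 'waterfall'."""
    def __init__(self, depth, bins):
        self.history = np.zeros((depth, bins))
    def push(self, spectrum):
        # Scroll everything up one row, write the new frame at the bottom.
        self.history = np.roll(self.history, -1, axis=0)
        self.history[-1] = spectrum

# Feed it one second of a 440 Hz test tone.
sr = 44100
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
spectra = stft_frames(sig)
wf = Waterfall(depth=64, bins=spectra.shape[1])
for s in spectra:
    wf.push(s)
peak_bin = wf.history[-1].argmax()  # strongest bin should sit near 440 Hz
```

Rendering `wf.history` as rows of a texture (or a mesh, in the Jitter case) each frame gives the scrolling-plot effect.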

Tweets about music generating music and visuals.

A generative jazz-fusion (audio) and jazz-meme (visual) composition. The music-generation algorithms drive much of the visual behavior. This was a class project.

A short excerpt from a generative 3D composition. Here the music is driven by the processes that generate the visuals, and the resulting audio is reflected back into the scene. This was also a class project.

Video-reactive audio: the pitch and timbre of the bass synth depend upon the scene as a whole.
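One common way to do this kind of mapping is to reduce each video frame to a few gross statistics and feed those into synth parameters. The sketch below is a hypothetical example of that pattern, not the mapping used in the piece: mean brightness picks the pitch, and spatial contrast opens a (notional) filter cutoff to brighten the timbre.

```python
import numpy as np

def scene_to_synth_params(frame, base_midi=36, midi_range=24):
    """Map gross image statistics to bass-synth parameters.

    `frame` is an (H, W) grayscale array with values in [0, 1].
    This mapping is illustrative only: brightness -> pitch,
    contrast -> filter cutoff.
    """
    brightness = float(frame.mean())   # 0 (dark) .. 1 (bright)
    contrast = float(frame.std())      # rough measure of scene "busyness"
    midi_note = base_midi + brightness * midi_range
    freq_hz = 440.0 * 2 ** ((midi_note - 69) / 12)      # MIDI -> Hz
    cutoff_hz = 200.0 + 8000.0 * min(contrast * 4, 1.0)
    return freq_hz, cutoff_hz

# A dark scene should give a lower bass note than a bright one.
dark = np.zeros((120, 160))
bright = np.full((120, 160), 0.9)
f_dark, _ = scene_to_synth_params(dark)
f_bright, _ = scene_to_synth_params(bright)
```

In practice these parameters would be smoothed over time before reaching the synth, so the audio doesn't jitter with every frame.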

A simple editor and display for Csound in Max (via the csound~ object), intended to demonstrate synthesis techniques that evolve timbre over time, e.g. scanned synthesis.
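Scanned synthesis is a good example of a timbre that evolves over time: a slowly vibrating mass-spring "string" is updated at a low (haptic) rate, and its current shape is scanned as a wavetable at audio rate, so the waveform drifts as the string settles. The following is a minimal sketch of the idea in Python (my own simplified model, not the Csound implementation from the patch).

```python
import numpy as np

def step_string(pos, vel, stiffness=0.1, damping=0.999):
    """Advance a circular mass-spring string one low-rate step.
    Each mass is pulled toward its neighbors (discrete Laplacian)."""
    force = stiffness * (np.roll(pos, 1) + np.roll(pos, -1) - 2 * pos)
    vel = (vel + force) * damping
    return pos + vel, vel

def scanned_synthesis(n_samples, sr=44100, freq=110.0,
                      n_masses=64, update_every=64):
    """Scan the slowly evolving string shape as a wavetable."""
    pos = np.sin(2 * np.pi * np.arange(n_masses) / n_masses)  # initial pluck
    vel = np.zeros(n_masses)
    out = np.empty(n_samples)
    phase = 0.0
    inc = freq * n_masses / sr  # wavetable phase increment per sample
    for i in range(n_samples):
        if i % update_every == 0:      # evolve the string at a slow rate
            pos, vel = step_string(pos, vel)
        # Interpolated wavetable lookup over the current string shape.
        out[i] = np.interp(phase, np.arange(n_masses), pos, period=n_masses)
        phase = (phase + inc) % n_masses
    return out

sig = scanned_synthesis(4096)
```

The pitch comes from the scan rate (`freq`), while the timbre comes from the string's physics, which is what makes the technique interesting to visualize.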

Live Example

I was part of the live visuals team for the Voltage Connect conference concert at the Berklee Performance Center (along with Blake Adelman, Alejandro Gonzalez-Melo, and Harrison Sayed). This was one of the performances (featuring Jordan Rudess of Dream Theater), and this video shows some of our work.