Various other things I'm fortunate to have been a part of.
by Nona Hendryx & Dr. Richard Boulanger
The Sound of Dreaming is a musical-theatrical work written by and starring Nona Hendryx and Dr. Richard Boulanger. In it, a researcher seeks to steal the voice of a celebrated singer and infuse his machines with it in order to become invincible, with a variety of ideas explored along the way. It has been presented twice: first at Moogfest in Durham, NC on May 19, 2017, and three months later at Mass MoCA's Hunter Center in North Adams, MA on August 19, 2017.
In the inaugural performance at Moogfest, I acted as DJ, performer, cast member, and technologist. Along with three others, I played a member of the lab and, by extension, the ensemble. As the primary DJ, I ran and remixed the backing tracks, including some of Nona's music from decades ago, some more recent work of hers with Paul Haslinger, and a new track for the show produced by Blake Adelman entitled We Are an Ocean. I also performed on GeoShred, which is about as close to a guitar as one can get without a guitar; given that we were already traveling with a ludicrous quantity of gear, the last thing I wanted to do was add more! I also developed, fixed, and updated various pieces of software for the show. The performance was captured on four 360º VR cameras by a team from Duke University and subsequently presented in part during Moogfest. Two audience videos from Moogfest:
For the second iteration, along with an expanded and further developed plot, we added a whole lot more music and a whole lot more tech (we were running 22 laptops on stage!), as well as a new character, the scientist's daughter. I acted as primary producer for Upload Your Soul, a fiery industrial rock tune in which the evil scientist expresses his desires, and co-produced Nona's big new finale, Today Is the Day, with Blake Adelman, each with words by their respective singers. Blake and I also co-produced a lullaby, written by Dr. Boulanger, that acts as part of the plot. In addition, I ran recording sessions; worked on, updated, and fixed some more software; and this time I brought a guitar! I performed on guitar on two tracks, drawing on my years as a teenage shredder, and continued to perform on GeoShred for others. This time we also had several on-stage cameras feeding into a system Blake and I built to mimic a multi-cam crew, as well as Blake's own system for music-controlled lighting, with lights under each of the nine stations (or 'islands', as we called them), each hosting a different area of technology ranging from modular synths to circuit-bent toys to "DJ Island" to Leap Motions, P5 gloves, Wiimotes, Chris Konopka's analog video synthesis station, and more. One of the biggest differences in this second performance was that it was presented as part of artist Nick Cave's Until, and as such it featured Nona adorned in some of his incredible work.
Quite an experience! The team, altogether, was:
In addition, the show included tech developed by multiple generations of Berklee students.
by Blake Adelman
As part of Kidzapalooza 2017, a kids' festival within Lollapalooza Chicago, Blake Adelman designed and presented a live music video experience for kids with the Berklee Popular Music Institute. It involved an interactive music, lighting, and video software system and an array of musical controllers including a Roli Seaboard, a Pop'n controller, two MidiFighter 64s, and two Roli Lightpad Blocks.
My role in this project was primarily to build the interactive music system to spec, so that children could explore four key styles of popular music (pop, rock, hip-hop, and EDM) without having to worry about playing in time or in tune. The system imposed stylistically appropriate musical constraints so that kids could focus on having fun while making music. It enabled various modes of engagement, including an educational mode in which a performer could build up a song piece by piece. Additionally, I was on the ground in Chicago for the first two days of the presentation, helping to get things going.
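To give a sense of what "can't play out of time or out of tune" means in practice, here is a minimal sketch of the core idea: raw pad presses are mapped onto pitches from a style-appropriate scale, and event timings are snapped to a rhythmic grid. This is a hypothetical illustration, not the system's actual code; the scale choice, grid size, and function names are all assumptions for the example.

```python
# Hypothetical sketch of constraint-based note mapping: whatever pad a
# child hits, the result lands on an in-key pitch and an on-grid beat.

MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within one octave

def pad_to_pitch(pad_index, root=60, scale=MINOR_PENTATONIC):
    """Map a pad index (0, 1, 2, ...) to a MIDI note in the given scale."""
    octave, degree = divmod(pad_index, len(scale))
    return root + 12 * octave + scale[degree]

def quantize(time_s, grid_s=0.125):
    """Snap an event time (seconds) to the nearest grid slot
    (0.125 s = one 16th note at 120 BPM)."""
    return round(time_s / grid_s) * grid_s

# Six consecutive pads all yield notes from C minor pentatonic:
notes = [pad_to_pitch(i) for i in range(6)]  # [60, 63, 65, 67, 70, 72]
```

Swapping in a different scale list (major pentatonic for pop, a blues scale for rock, etc.) is one simple way such a system could switch stylistic "modes" without changing the player's interface.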
The system as a whole was designed to allow both freestyle 'jamming' and a song mode in which the performer interacted with the system as it varied over a song structure, while a large interactive lighting system responded to them and a video system captured auto-managed multi-angle footage. It was certainly fun to work on, and it ended up being a lot of fun for the kids! Below you can see one video of us setting up the booth in Chicago (the music is Robot Cry by Blake Adelman) and one of me playing with the very first prototype of the interactive, generative music system.
A few other fairly recent performances I've been a part of.