csPerformer was my undergraduate thesis project in the Electronic Production and Design department at Berklee. Essentially, it is a prototype for a system intended to facilitate composition through performance, or composition in a performative manner. It does this by letting you sing, play, or otherwise make pitches and sounds into the system to give yourself materials to compose with in real time. In addition to acting as a compositional interface for performers, it is intended to bring physicality to compositional actions, or to allow compositional decisions to be expressed physically.
A quick demo of csPerformer.
It is written for the iOS platform, primarily in Swift 3, with Csound as the sound-synthesis and audio-DSP engine, so as to be portable and thus adaptable to a variety of compositional and performative-improvisational contexts. It consists of three primary modules and two auxiliary ones:
The accompanist, the system's main module, provides a space to input pitched musical information and to operate on that information using tools more commonly found in systems for making electronic music. Of these, the most prominent is a sequencer: you can sing, play, or otherwise produce up to ten pitches into the system in order to place them in the sequencer. It gives you control over the duration and level of each sequencer node (i.e. each pitch), and allows other interactions, like distinguishing between free and tempo-driven time, a control for the density/sparsity of note events in free time, and the ability to play the sequence in forward order, in random order, or harmonically.
Transforming a sequence by singing input pitches 'gesturally' in a pattern rather than note-by-note.
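To make the sequencer model concrete, here is a minimal sketch in Python (the app itself is written in Swift with Csound). The class and parameter names are hypothetical, and the "harmonic" mode is assumed to sound the stored pitches together as a chord, which is my reading of the description rather than a confirmed detail:

```python
import random

class SequencerNode:
    """One captured pitch with per-node playback parameters."""
    def __init__(self, midi_pitch, duration=0.5, level=1.0):
        self.midi_pitch = midi_pitch
        self.duration = duration   # seconds in free time, or beats when tempo-driven
        self.level = level         # 0.0-1.0 amplitude scaler

class Sequencer:
    MAX_NODES = 10  # the system captures up to ten pitches

    def __init__(self):
        self.nodes = []

    def capture(self, midi_pitch):
        """Add a detected pitch as a new node, up to the ten-node limit."""
        if len(self.nodes) < self.MAX_NODES:
            self.nodes.append(SequencerNode(midi_pitch))

    def events(self, mode="forward"):
        """Yield (pitch, duration, level) events in the chosen playback mode."""
        if mode == "forward":
            order = self.nodes
        elif mode == "random":
            order = random.sample(self.nodes, len(self.nodes))
        elif mode == "harmonic":
            # assumed: 'harmonic' playback sounds all nodes together as a chord
            return [tuple((n.midi_pitch, n.duration, n.level) for n in self.nodes)]
        else:
            raise ValueError(f"unknown mode: {mode}")
        return [(n.midi_pitch, n.duration, n.level) for n in order]
```

In the real system, each emitted event would trigger a Csound instrument rather than return a tuple; the sketch only shows the data model and the three playback orderings.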
The shuffler reinterprets phrased musical input. It detects musical phrases in normal playing and automatically stores up to thirty discrete phrases. It offers control over the texture of the output (the maximum number of simultaneously sounding phrases), the density of the output in time, and the threshold for phrase detection, to account for differing acoustic environments.
The Shuffler module operating on a recording of one of my older pieces.
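One plausible reading of the phrase-detection mechanism is amplitude gating: a phrase begins when the input level crosses the threshold and ends after a sustained dip below it. The Python sketch below, with invented names and frame-based levels, is an assumption about how such detection could work, not the Shuffler's actual algorithm:

```python
def detect_phrases(levels, threshold=0.05, min_gap=3, max_phrases=30):
    """Segment a stream of amplitude frames into phrases.

    A phrase starts when the level rises above `threshold` and ends after
    `min_gap` consecutive frames below it; up to `max_phrases` phrases are
    stored, mirroring the Shuffler's thirty-phrase limit. Returns a list
    of (start_frame, end_frame) pairs, end exclusive.
    """
    phrases = []
    start = None   # frame index where the current phrase began, if any
    quiet = 0      # consecutive below-threshold frames seen so far
    for i, level in enumerate(levels):
        if level >= threshold:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:
                phrases.append((start, i - quiet + 1))
                start = None
                quiet = 0
                if len(phrases) >= max_phrases:
                    break
    # close a phrase still sounding at the end of the stream
    if start is not None and len(phrases) < max_phrases:
        phrases.append((start, len(levels)))
    return phrases
```

Raising `threshold` makes the detector ignore quiet room noise, which matches the module's stated purpose of adapting to differing acoustic environments.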
The metamorphosizer seeks to expand a performer's timbral palette in real time. It provides the ability to spectrally morph between two audio signals, the primary signal being from the performer. The secondary signal can be either synthetic, in which case the performer chooses from a variety of sounds, or external. The synthetic source detects the input pitch and plays a synthesizer or sampler that is pitch-matched to it, so a singer could, for instance, interpolate between their voice and the timbre of a flute without altering many parameters of the performance itself. The external source allows any signal to be fed into the application for interpolation between the two sound sources.
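Spectral morphing of this kind is usually built on a phase vocoder (in Csound, the streaming `pvs` opcode family). Purely as a toy illustration of the underlying idea, the Python sketch below interpolates the DFT magnitudes of two short frames while keeping the performer's phases; all names are hypothetical and the naive DFT is for clarity, not efficiency:

```python
import cmath
import math

def morph_spectra(frame_a, frame_b, mix):
    """Crude spectral morph: interpolate DFT magnitudes of two frames.

    frame_a and frame_b are equal-length sample lists; mix=0.0 yields
    frame_a's spectrum, mix=1.0 frame_b's. Phases are taken from frame_a
    so the performer's signal remains the temporal reference.
    """
    n = len(frame_a)

    def dft(x):
        # naive O(n^2) discrete Fourier transform, for illustration only
        return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]

    spec_a, spec_b = dft(frame_a), dft(frame_b)
    morphed = []
    for a, b in zip(spec_a, spec_b):
        mag = (1 - mix) * abs(a) + mix * abs(b)   # blend magnitudes
        morphed.append(cmath.rect(mag, cmath.phase(a)))  # keep a's phase
    # inverse DFT back to the time domain
    return [sum(morphed[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

At `mix=0.0` the routine returns the performer's frame unchanged; sweeping `mix` toward 1.0 pulls the spectrum toward the secondary source, which is the interpolation the module exposes to the performer.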
The next two are auxiliary modules I have additional plans for.
The first of the two supplementary modules is titled Wordsmith. It is intended to facilitate real-time compositional interaction for those who work with words within their musical milieu. It consists of two parts, each adapted from existing iOS systems. The first is a thesaurus, adapted to allow verbal input so that searching for the right word(s) becomes a real-time action. The second is a primitive poetry generator, intended to offer topical inspiration. The poetry generator was intended eventually to evolve to account for musical-stylistic parameters, using lyric datasets that I have already prepared.
The second is a simple global FX module. It provides four audio effects: saturation, delay, reverb, and an equalizer, and allows them to be rearranged by touch (holding and dragging). The part of this that is not yet realized is an adaptive FX controller, able to automate parameters of these effects using information derived from the performance in real time.
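The rearrangeable chain can be modeled as an ordered list of effect slots, where a drag gesture maps to a list move. The Python sketch below uses invented names and identity "effects" (the actual DSP runs in Csound); it only illustrates the reordering and the serial signal routing:

```python
class Effect:
    """Placeholder for one global effect; the real DSP lives in Csound."""
    def __init__(self, name):
        self.name = name

    def process(self, sample):
        return sample  # identity here; saturation/delay/etc. in practice

class FXChain:
    """Ordered chain of the four global effects, reorderable by drag."""
    def __init__(self):
        self.slots = [Effect(n) for n in
                      ("saturation", "delay", "reverb", "equalizer")]

    def move(self, src, dst):
        """Mirror a hold-and-drag: move the effect at index src to dst."""
        self.slots.insert(dst, self.slots.pop(src))

    def order(self):
        return [e.name for e in self.slots]

    def process(self, sample):
        # route the signal through each effect in the current order
        for e in self.slots:
            sample = e.process(sample)
        return sample
```

The not-yet-realized adaptive controller would sit alongside this chain, writing parameter values into each slot from features extracted from the live performance.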
Here are two brief études composed with the system by singing, whistling, playing, speaking, and otherwise making sounds into it. One features microtonality and the use of all five modules, while the other (the 'pop' étude) primarily features the Accompanist and Shuffler modules (with the FX module). In both cases, the pieces consist of two or three layered improvisations (a kind of virtual csPerformer 'ensemble').
Although my primary musical activity is as a composer, I was at one time a regular performer on guitar in a range of styles. Having recently returned to performing on guitar and/or laptop, I have found that there is an immediacy to the interface and the act of musical performance that is quite different from the experience of composing. As a composer writing acoustic music, I hear primarily internally and tend not to use an instrument, or any direct sensual contact with music, to facilitate writing; as such, this distinction was especially apparent to me.
csPerformer wasn't intended to be a comprehensive, all-encompassing answer to the questions that such a distinction might raise, but rather a prototypical model for an interface that seeks to address these questions and also provide a different experience for music-making to a wide range of people, whether they are professional musicians, children with no experience, or anyone in between.
I had also, at the time, been exploring some literature relevant to this experience, parts of which prompted questions that led me to aspects of what became this project. For example, although I had already begun working at that point, reading about Nilsson's [1] distinction between design time and play time, and participating as an improvising performer in a presentation of his and Palle Dahlstedt's 'bucket' system, further called my attention to how compositional and performative-improvisational activity each relate to the passage of real time (or "absolute time") as compared to the passage of musical time. A composer might spend an hour on a three-second measure if so inclined; in other words, the quantity of time required to compositionally execute a given quantity of musical time is non-specific. Conversely, a performer is generally given a fixed window in which to performatively execute a given unit of musical time. An improvising performer, furthermore, must also participate in elements of the compositional process within this kind of performative time.
Two models of interest from (ethno)musicologists, especially as they contextualize the social concerns surrounding this distinction of experience, are Christopher Small's [2] musicking, the idea that music is itself an action, and Thomas Turino's [3] fields of music-making. These raise questions such as: How flexible can the paradigm of the composer be in a spectrum of music-as-activity that is oriented towards the social and physical action of performance, or towards providing materials for it? How can new interfaces and technology mediate between presentational music-making and studio audio art? Of course, the idea of music as a physical activity (to take a cue from embodied cognition) does not imply that composers or compositional activities are exempt from a sense of physicality, but it is still worth asking how this physicality can be actively realized in practice, and how new technologies might interpolate between compositional and performative modes of musical interaction.
I chose the iPad as my primary target because, while it can facilitate a sensual music-production experience, as it does every day across a wide variety of apps, it is also ubiquitous enough that the format of presentation is unlikely to be intimidating in itself. In other words, I wanted to extend the natural interface of touch-based interaction so as to take advantage of the immediacy of another common interface. Furthermore, the iPad occupies a curious space in the realm of computing devices: a hybrid between the two dominant, different-but-converging paradigms of smartphone and computer, its social role is a little more ambiguous than either of its cousins', and its purpose itself seems exceptionally variable. Finally, the iPad specifically (as opposed to tablets generally) offers excellent audio performance, which is a great help in building a system that is seamless and immediate to use.
My hope was, and is, that this will act as a foundation for research I intend to continue: developing systems that address the compositional interests of performers, the performative interests of composers, and the musical interests of anyone.
Here's a playlist of informal videos of the system in action. It features me getting it to glitch in time (a 'feature' I decided to leave in after learning how great it could sound), my sister, a singer, trying it out (if skeptically), and more.
1. Nilsson, Per Anders. A Field of Possibilities: Designing and Playing Digital Musical Instruments. Diss. Gothenburg: University of Gothenburg, 2011.
2. Small, Christopher. Musicking: The Meanings of Performing and Listening. Wesleyan University Press, 2010.
3. Turino, Thomas. Music as Social Life: The Politics of Participation. University of Chicago Press, 2008.