An interactive audio gallery in augmented reality.

Soundscaper is an approach to placing and distributing music and sound in physical space, using augmented reality, so that they can be interacted with directly.

It presents a model, inspired by interactions more familiar from the visual arts, for musical and multimodal composition, and for user interactions with music, sound, and space that extend contemporary practice in these media. It is spatialization not in the sense that the sources of sound are distributed in space, but in the sense that the experience of them is.


These systems are built for iOS using Apple's new ARKit framework, with SceneKit for 3D drawing and AudioKit as the primary audio engine. ARKit offers tremendous opportunities to developers, and iOS devices are ubiquitous and fundamentally social; together they provide an accessible platform for user-dependent spatial interaction.
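
To make the stack concrete, a minimal setup might look something like the following sketch. It assumes AudioKit 4's global engine API and an ARSCNView wired up in a storyboard; `tracker` and `notes` are placeholder names reused in the later snippets, not the app's actual identifiers.

```swift
import UIKit
import ARKit
import SceneKit
import AudioKit

class SoundscaperViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!            // AR view provided by the storyboard

    let mic = AKMicrophone()                       // microphone input
    lazy var tracker = AKFrequencyTracker(mic)     // live pitch estimate of the input
    var notes: [(frequency: Double, node: SCNNode)] = []   // sound-nodes placed so far

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // World tracking keeps placed nodes anchored to physical positions.
        sceneView.session.run(ARWorldTrackingConfiguration())

        // Pull the mic through the tracker, but keep it silent to avoid feedback.
        AudioKit.output = AKBooster(tracker, gain: 0)
        try? AudioKit.start()
    }
}
```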

There are two versions of the app: one that maps pitches onto an arpeggiator distributed in space, and one that allows sound-loops to be formed and placed directly.

In the first version, when the screen is tapped, the pitch the user is producing is detected and placed at the device's current position in space as a sphere. As the user adds more pitches, they build a looping sequence that can be explored by walking around it, looking at specific nodes, and so on; the amplitude of each node as it plays back depends on the user's distance from it. The format seeks to model musical sequences and ideas in space so that they can be explored the way a painting or sculpture can be: by walking around it.
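
Continuing the setup sketch above, the tap interaction could be expressed roughly as follows (a UITapGestureRecognizer is assumed to be attached to the view, and the arpeggiator's playback path is elided):

```swift
// Tap handler: capture the pitch currently reported by the tracker and pin it
// at the device's present position in the world as a small sphere.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    guard let frame = sceneView.session.currentFrame else { return }

    // The last column of the camera transform is the device position in world space.
    let t = frame.camera.transform.columns.3
    let position = SCNVector3(t.x, t.y, t.z)

    let sphere = SCNNode(geometry: SCNSphere(radius: 0.03))
    sphere.position = position
    sceneView.scene.rootNode.addChildNode(sphere)

    // Each placed node extends the looping sequence the arpeggiator steps through.
    notes.append((frequency: tracker.frequency, node: sphere))
}
```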

In the second version, the user can produce a sound by singing, speaking, whistling, playing an instrument, or any other means, and it is added, along with a particle system, at the point in space where the sound finished being produced. Again, the user's position at any given moment determines the relative amplitudes of all the sound-nodes, as well as the "birth rate" of each particle system, that is, the rate at which particles are emitted at each sound's location (so the user can see what they are hearing). This allows a user-defined soundscape to be curated, and ultimately provides a model for sound composition in space in which spatial interaction can drive any number of parameters of the sound.
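
A sketch of that per-frame update follows, assuming each placed sound is stored with its SceneKit node, an AudioKit player, and the particle system attached to it; the type and property names here are illustrative rather than the app's actual ones.

```swift
import SceneKit
import AudioKit
import simd

// Assumed container for one placed sound: its loop player, the node marking
// its position, and the particle system attached to that node.
struct SoundNode {
    let player: AKPlayer                 // looping playback of the recorded sound
    let node: SCNNode                    // position of the sound in the AR scene
    let particles: SCNParticleSystem     // visual indicator of playback level
}

// Called every frame (e.g. from the SceneKit renderer delegate) with the
// current device position: nearer sounds play louder and emit more particles.
func updateSoundNodes(_ sounds: [SoundNode], listener: simd_float3) {
    for sound in sounds {
        let distance = simd_distance(sound.node.simdPosition, listener)

        // Linear fade over roughly five metres, clamped to [0, 1].
        let proximity = max(0, 1 - distance / 5)

        sound.player.volume = Double(proximity)
        sound.particles.birthRate = CGFloat(proximity) * 200
    }
}
```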


Much differs between the interactions germane to music and those germane to the visual arts. As a onetime avid student of street photography, and before that of painting and drawing, I have always been struck by how different being in a gallery is from being in a concert space. Even in galleries occupied in part by sound, each work is contextualized immediately by other work. There is an extraordinary diversity of experience, given the diversity of work and of viewers present. There is also some direct realization of the relationship the work has to its space, its physical context.

It is also not uncommon for personal living spaces to be adorned with visual art. In this sense, curation becomes the task of those who occupy the space, and in presenting pieces of fine art to guests (and indeed, to themselves), they are realizing a space that necessarily extends beyond utility.

This paradigm interests me, as one of many musicians who find a great deal of inspiration in galleries; wondering how to adorn a space with sound-fixtures and unobtrusively curate a sound-experience of it, I turned to augmented reality.

Of course, another considerable advantage of augmented reality is that it can be combined with more conventional experiences, such as live concert performance. For example, many artists have taken to using hardware and software loopers to build their accompaniment live; this system can extend that approach with an additional spatial dimension, which can in turn be used to affect any of the existing parameters of that process.

Future Plans

I'm working on an API that facilitates the direct and meaningful linking of spatial parameters from AR-world interactions to audio parameters. I also hope to build musical demos of the system, to encourage composers and sound artists to think about the spatial implications of their work, and to let those ideas be put into effect directly.
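
Purely as a hypothetical illustration of the kind of linkage such an API might expose (every name below is invented for the example), a small mapping type could bind a spatial measurement to an arbitrary audio parameter:

```swift
import SceneKit
import simd

// Hypothetical sketch: one mapping binds a spatial measurement (here, the
// distance from the listener to a node) to any writable audio parameter,
// through a user-supplied transfer function.
struct SpatialMapping {
    let node: SCNNode
    let transfer: (Float) -> Double       // distance in metres -> parameter value
    let apply: (Double) -> Void           // writes the value into the audio engine

    func update(listener: simd_float3) {
        let distance = simd_distance(node.simdPosition, listener)
        apply(transfer(distance))
    }
}

// Example use: drive a low-pass filter's cutoff instead of amplitude, so a
// sound "opens up" as the listener approaches it.
// let mapping = SpatialMapping(
//     node: sphere,
//     transfer: { d in Double(20_000 / (1 + d)) },
//     apply: { value in lowPassFilter.cutoffFrequency = value }
// )
```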