Play Space


A musically activated space that responds to your movement. 


PlaySpace was featured in the 2015 Tribeca Film Festival’s Interactive Playground!

We have created a space that plays music as people move within and through it. It is simple enough to attract and engage novice users, and yet complex enough to reward sustained interaction. It can even be played like an instrument. Multiple people can play together at once, and the space responds to each individually. As players explore the sounds in the space, interesting movement arises. In a space that is usually used for other things, music emerges from the incidental movements of everyday use.

A white tape box on the floor denotes the playing area, and the one word “Play” provides all the instructions. Passersby either see the box and instructions and become curious, or they unknowingly walk through the space and hear the music that results.

The experience is all the more magical because the technology is virtually invisible. There is no screen, and no physical controller. Just a space and a pair of speakers. By allowing the participants to be unaware of the technology, we encourage them to concentrate on the sonic and somatic experience.

[Video: PlaySpace from Matthew Kaney on Vimeo]

Play Space was created by David Gochfeld and Matthew Kaney.  We are both musicians interested in new ways of generating music. We wanted to explore the possibility of generating music from movement — either of unknowing people passing through the space, or of choreographed movement generating its own score. We also wanted to see if interesting choreography would emerge from the movements of users playing the space together.

How It Works

An overhead Kinect camera detects participants in a space and converts their movements into music. We use SimpleOpenNI in Processing and a custom-coded blob-tracking algorithm to identify people, assign them MIDI channels, and convert their movement into MIDI signals. The MIDI is then fed to virtual instruments in Logic. Since each person corresponds to a separate MIDI channel, they can each have distinct instruments, the parameters of which can be controlled independently.
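The sketch below is not our production code, but it shows the shape of the pipeline in Processing: SimpleOpenNI supplies the depth frame, a placeholder findPeople() stands in for our blob tracker, and themidibus sends each person's notes out on their own channel (the virtual MIDI port name "IAC Bus 1" is just an example of a bus routed into Logic).

```
import SimpleOpenNI.*;   // Kinect depth access
import themidibus.*;     // MIDI out to a virtual instrument host (Logic)

SimpleOpenNI kinect;
MidiBus midi;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  // "IAC Bus 1" is an assumed virtual MIDI port name routed into Logic
  midi = new MidiBus(this, -1, "IAC Bus 1");
}

void draw() {
  kinect.update();
  int[] depth = kinect.depthMap();   // one depth value (mm) per pixel

  // findPeople() stands in for our custom blob tracker
  ArrayList<PVector> people = findPeople(depth, kinect.depthWidth(), kinect.depthHeight());

  for (int i = 0; i < people.size(); i++) {
    int channel = i + 1;                          // one MIDI channel per person
    int pitch = pitchForPosition(people.get(i));  // floor-grid mapping (see Musical Mapping)
    // in the real piece notes are triggered only on movement;
    // here we only show the per-person channel routing
    midi.sendNoteOn(channel, pitch, 100);
  }
}

// placeholder: the actual project uses custom blob tracking on the depth image
ArrayList<PVector> findPeople(int[] depth, int w, int h) {
  return new ArrayList<PVector>();
}

// placeholder: see the pitch-grid sketch under Musical Mapping
int pitchForPosition(PVector p) {
  return 60;  // middle C
}
```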

We placed the Kinect overhead so it can respond to multiple people moving through the space without them occluding one another. We wrote our own blob-tracking code to afford many different ways of controlling the music. It is fast enough to maintain a reasonably high frame rate, so the music appears to respond immediately to movement.
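Our tracker is tuned to our particular space and not reproduced here, but the basic idea can be illustrated with a single pass over the thresholded depth image: pixels sufficiently above the floor are grouped into connected blobs, and each blob's centroid stands in for a person. The floor depth, margin, and minimum blob size below are placeholder values.

```
// Illustrative blob finder for an overhead depth frame. Pixels closer to the
// camera than the floor (minus a margin) count as "person" pixels; a flood
// fill groups them into blobs and returns each blob's centroid.
ArrayList<PVector> findBlobs(int[] depth, int w, int h) {
  int floorMM   = 2600;  // assumed depth of the empty floor, in mm
  int marginMM  = 300;   // anything this far above the floor is a person
  int minPixels = 200;   // assumed minimum blob size, to reject noise
  boolean[] visited = new boolean[w * h];
  ArrayList<PVector> centroids = new ArrayList<PVector>();

  for (int start = 0; start < w * h; start++) {
    if (visited[start] || depth[start] == 0 || depth[start] > floorMM - marginMM) continue;

    // depth-first flood fill over 4-connected neighbors
    ArrayList<Integer> stack = new ArrayList<Integer>();
    stack.add(start);
    visited[start] = true;
    float sumX = 0, sumY = 0;
    int count = 0;

    while (!stack.isEmpty()) {
      int p = stack.remove(stack.size() - 1);
      int x = p % w, y = p / w;
      sumX += x;  sumY += y;  count++;

      int[] neighbors = { p - 1, p + 1, p - w, p + w };
      for (int n : neighbors) {
        if (n < 0 || n >= w * h || visited[n]) continue;
        if (abs(n % w - x) > 1) continue;   // don't wrap across row edges
        if (depth[n] == 0 || depth[n] > floorMM - marginMM) continue;
        visited[n] = true;
        stack.add(n);
      }
    }
    if (count > minPixels) {
      centroids.add(new PVector(sumX / count, sumY / count));
    }
  }
  return centroids;
}
```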

Musical Mapping

The key to the project lies in how movement maps to music. We wanted to create affordances that are easy to discover and understand, while allowing enough variety and degree of control over the musical parameters to be interesting. It should be easy for unfamiliar users and non-musicians to discover the mappings and make music that sounds musical; at the same time, we wanted to encourage sustained engagement by more musically knowledgeable participants, which requires a more sophisticated degree of control to be learned. And on top of that, we wanted multiple people to be able to create music together.

In user testing we found that the first kind of movement users tried in the space was to simply walk across it, along one axis or the other. This happens to be the same kind of movement we observed in unintentional users (passersby), so it needed to correspond to a clear and strong musical effect. We decided to map the floor into a grid of pitches, so that movement along either axis plays a sequence of notes. We found that placing interesting intervals adjacent to each other is more likely to produce interesting music. At present, adjacent notes along the y axis are a major third apart, and adjacent notes along the x axis are a major second apart. We’ve also tried a fifth in one direction, a third in both directions, and a major pentatonic scale instead of the diatonic scale, but users seemed to have the best experience with the current mapping.
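In code, the mapping amounts to quantizing a player's floor position into a grid cell and offsetting a base note by the cell's row and column; the grid size and base pitch below are illustrative rather than the exact values we use.

```
int GRID_COLS  = 6;    // assumed number of cells across the taped box
int GRID_ROWS  = 6;
int BASE_PITCH = 48;   // assumed base note (C3)

// p: blob centroid in depth-image coordinates; w, h: image size
int pitchForPosition(PVector p, int w, int h) {
  int col = constrain(int(p.x / w * GRID_COLS), 0, GRID_COLS - 1);
  int row = constrain(int(p.y / h * GRID_ROWS), 0, GRID_ROWS - 1);
  return BASE_PITCH + col * 2 + row * 4;   // x: major seconds, y: major thirds
}
```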

To start a note playing, the person just needs to move: a shift of their center of mass by a certain amount in any direction (including up/down) will trigger a note. An increase in the size of their bounding box by 10% will also trigger a note — so you can stand in one place and trigger notes by extending your arms. As long as the bounding box remains greater than a certain size, the note will sustain (if you have an instrument that affords sustain). If your center point does not move, and your bounding box is below a certain size, the note you are playing will end. To be quiet, you need only stand still with arms at your sides.
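A sketch of that trigger logic for a single tracked person, with placeholder thresholds (the real values were tuned by hand) and the center of mass treated as a 2D point for simplicity:

```
// Per-person note state, updated once per frame (all thresholds are placeholders).
boolean noteOn = false;

void updateNoteState(PVector center, PVector prevCenter,
                     float boxArea, float prevBoxArea) {
  float MOVE_THRESHOLD = 30;     // center-of-mass shift that counts as movement
  float SUSTAIN_AREA   = 5000;   // bounding-box size that keeps a note sustaining

  boolean moved = PVector.dist(center, prevCenter) > MOVE_THRESHOLD;
  boolean grew  = boxArea > prevBoxArea * 1.10;   // box grew by 10%: arms extended

  if (moved || grew) {
    triggerNote();               // note-on for this person's channel
    noteOn = true;
  } else if (noteOn && boxArea < SUSTAIN_AREA) {
    endNote();                   // standing still, arms down: note-off
    noteOn = false;
  }
  // otherwise: no movement but the box is still large, so the note sustains
}

void triggerNote() { /* sendNoteOn for this person's channel (see pipeline sketch) */ }
void endNote()     { /* sendNoteOff for this person's channel */ }
```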

Once users understand that movement affects music, many of them will try flapping their arms to see what happens, so we wanted that to have an effect on the music too. Instead of trying to identify specific limbs, we just use the dimensions of the bounding box. As the size increases along the x axis, the MIDI velocity of the note changes — this is effectively another way to control your volume, as higher velocity generally means a louder attack. As the size increases on the y axis, the pitch changes: extending your arms (or, more precisely, your bounding box) along the y axis raises your pitch by semitones from the base pitch you are standing on. For novice users, this gives the ability to add a flourish to their playing; for sophisticated users, it allows them to make more complex music by extending their arms in space — a virtual trombone.
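In code, the bounding box feeds two simple mappings; the pixel ranges below are illustrative stand-ins for the tuned values:

```
// Box width (image x extent) drives MIDI velocity; box extent along the image
// y axis raises the pitch in semitones above the grid note. Ranges are assumed.
int velocityForBox(float boxWidth) {
  // assumed: ~80 px wide with arms down, ~300 px with arms outstretched
  return int(constrain(map(boxWidth, 80, 300, 60, 127), 1, 127));
}

int pitchOffsetForBox(float boxYExtent) {
  float atRest = 80;                                        // assumed resting extent, px
  int semitones = int(max(0, (boxYExtent - atRest) / 40));  // ~40 px per semitone
  return min(semitones, 12);                                // cap the "trombone" at an octave
}
```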