Start working on your project:
• Refine your prompt: who is your project for? Are you a part of this group of people? Is someone close to you (a classmate, friend or family member)? Can you find someone to play test your project and give you feedback?
• Make an aural mood board: a collection of reference audio clips. This should be in a format that you can easily share with the class in under two minutes (include short clips rather than full pieces or recordings). You can annotate them to indicate why you included each clip.
• Create a musical user path: what does it sound like to experience your project, to one user, during one particular interaction session? If you like, you can add a second path to illustrate a range of possible experiences. Whatever process you use to create each path, the result should be a rendered audio file. Remember this is a first draft: the idea is that you start considering duration, musical elements, arcs, sections, and so on. There is no expectation of production polish at this point.
Following up on the idea
This weekend I watched a performance that relates closely to what I am proposing here: “Fractal Fantasy” by Zora Jones and Sinjin Hawke at Life and Death X Sónar, in which they used a Kinect to create visuals, and often to create sound as well.
Their system also seems to work in tracks, with different interactions for each song. I really enjoyed the performance’s vibe, and I would appreciate performing in an ambience like that one. Still, thinking through the kind of sound I am most interested in, I am not so sure that would be the case.
In response to the first question in this assignment, my project is a performance tool, so the user is most likely to be myself. I would certainly be interested in playtesting with professional dancers, but my interest lies in the hybridism of this piece, so the relation between human and machine is more important than any specific skills. As for the audience, as I mentioned previously, the ambience of Sónar seems like a good fit, but that will depend on outcomes I am not interested in trying to foresee. For sure, this piece belongs in artistic venues, audiovisual events, and maybe even dance festivals.
I feel like most of the examples of similar mechanisms I found online explore either the connection between sound and movement (example #1), or movement and visuals (example #2 – shown below), or even just sound and control, without any specific interest in the movement itself (example #3). Also, more often than not, it seems like the movement, the sound, and/or the visuals are separate pieces that are juxtaposed in space (example #4).
With all that, here is my rephrased prompt:
My intent is to create a dance-based audiovisual instrument, technically focused on applying the math of music and visual programming to a logical set of rules that can be understood and embodied by the performer. The resulting performance should be balanced in such a way that sound, light and movement become one single material, as opposed to juxtaposed layers of meaning and sensation. The system should be track-based, with different functionalities in each track. For example, in track 1 the distance between the performer’s two hands sets the intervals in a piano chord, while in track 2 it changes the size of shapes in the projection.
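As a quick illustration of the track-1 idea, here is a minimal Python sketch of how a hand distance could be quantized into a chord interval. The normalization range, the root note, and the function name are all my own assumptions for this sketch, not part of any existing system:

```python
# Hypothetical sketch: map the distance between the performer's hands
# (normalized to 0..1) to an interval above a root note, track-1 style.
# Note numbers follow the MIDI convention (60 = middle C).

ROOT = 60  # middle C, an arbitrary choice for this sketch

def hands_to_chord(distance, max_interval=12):
    """Quantize a normalized hand distance into a two-note chord.

    distance: 0.0 (hands together) .. 1.0 (arms fully spread).
    Returns (root, root + interval) as MIDI note numbers.
    """
    distance = max(0.0, min(1.0, distance))    # clamp to the valid range
    interval = round(distance * max_interval)  # 0 semitones up to one octave
    return (ROOT, ROOT + interval)

print(hands_to_chord(0.0))  # -> (60, 60), hands together: unison
print(hands_to_chord(0.5))  # -> (60, 66), half spread: a tritone
print(hands_to_chord(1.0))  # -> (60, 72), full spread: an octave
```

In a real track the interval would probably be snapped to a scale rather than to raw semitones, but the clamp-and-quantize shape of the mapping would stay the same.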
Aural mood board:
As I mentioned before, I believe there might be different approaches as to where the piece would be presented, and I can’t really foresee what kind of sound this will result in. So, this aural mood board is more about the atmosphere than the sound itself. Basically, I believe the sound should be similar to this in atmosphere, but “dancier”, whatever that might mean.
I made a Deezer playlist for this, from which I’ll select shorter excerpts to present in class if need be. Anyway, these albums are a good reference for what I want it to sound like, and the selected songs only show the different possibilities I would want to get into, while still being similar in style:
A possible user path:
Since I cannot count on a camera image to track the performer’s movements while the projections fall on them, the simplest road to go down seems to be using a Kinect. I am still not sure which programming language would be best for this project, but I know the Kinect would give me XYZ locations for the body joints, so for this assignment I used the XY locations given by PoseNet to prototype the interaction that, hopefully, will become much more sophisticated at some point.
The simplest relations to be made in this interaction are: the X value of a point, the Y value of a point, the distance between two points on the body, and the distance between a point on the body and a reference point on the screen. For this prototype, I worked with a small set of inputs, or triggers, that output a score for me to translate into the sound file. These are:
• X position of the nose — synth 1 pitch input;
• Y position of the nose — synth 2 pitch input;
• If Y value of one of the shoulders goes higher than the other — beat 1 trigger;
• Distance of wrists — piano interval input;
• X difference between left wrist and elbow — synth 1 volume input;
• X difference between right wrist and elbow — synth 2 volume input;
• If Y value of one side of the hip goes higher than the other — beat 2 trigger.
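To make the mapping above concrete, here is a rough Python sketch of how one frame of XY keypoints could be turned into those inputs and triggers. The keypoint names, pixel values, and normalization are my own assumptions for illustration, not PoseNet’s exact output format:

```python
import math

# Hypothetical frame of PoseNet-style keypoints: name -> (x, y) in pixels.
# In the real prototype these would come from the pose model each frame.
frame = {
    "nose": (320, 180),
    "leftShoulder": (250, 260), "rightShoulder": (390, 240),
    "leftElbow": (200, 340),    "rightElbow": (440, 350),
    "leftWrist": (150, 300),    "rightWrist": (490, 310),
    "leftHip": (270, 430),      "rightHip": (370, 450),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def frame_to_score(kp, width=640, height=480):
    """Translate one frame of keypoints into the inputs/triggers above.

    Screen Y grows downward, so "goes higher than the other" physically
    means a *smaller* y value; the triggers compare with < accordingly.
    """
    return {
        "synth1_pitch": kp["nose"][0] / width,                    # X of nose
        "synth2_pitch": kp["nose"][1] / height,                   # Y of nose
        "beat1": kp["leftShoulder"][1] < kp["rightShoulder"][1],  # shoulder tilt
        "piano_interval": dist(kp["leftWrist"], kp["rightWrist"]),
        "synth1_volume": abs(kp["leftWrist"][0] - kp["leftElbow"][0]),
        "synth2_volume": abs(kp["rightWrist"][0] - kp["rightElbow"][0]),
        "beat2": kp["leftHip"][1] < kp["rightHip"][1],            # hip tilt
    }

score = frame_to_score(frame)
print(score)
```

Running this per frame would yield the stream of values that becomes the score; the remaining design work is deciding how each value is scaled and quantized before it reaches a synth.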
An *amazing* dance performance that I gave in my room resulted in this score, which I used as a base to compose the piece below. I didn’t know how to do it in a practical way with code, so I actually had to look up which MIDI note each pitch referred to, and then place the notes by hand on the corresponding beats in Ableton. That is why I only got a few seconds into it, but I feel like it gives the feel of the weird math produced by this code. Also, the volume mapping didn’t work very well, because most notes were inaudible at the output volume, so I chose not to use it, which made the result sound even less dynamic than it is. Anyway, I don’t like the way it sounds, and I will certainly focus on fewer instruments, much more dynamic, with some effects on them.
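For future iterations, the manual pitch-to-MIDI lookup could be automated with a few lines. This helper is my own sketch (not a standard library function), using the common convention where middle C, written C4, is MIDI note 60:

```python
# Convert a note name such as "C4", "F#3" or "Bb3" into a MIDI note number,
# under the C4 = 60 convention (so C-1 is note 0).

NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_midi(name):
    letter = name[0].upper()
    rest = name[1:]
    accidental = 0
    if rest.startswith("#"):
        accidental, rest = 1, rest[1:]
    elif rest.startswith("b"):
        accidental, rest = -1, rest[1:]
    octave = int(rest)
    return (octave + 1) * 12 + NOTE_OFFSETS[letter] + accidental

print(note_to_midi("C4"))   # -> 60
print(note_to_midi("A4"))   # -> 69
print(note_to_midi("F#3"))  # -> 54
```

Feeding the score through something like this, and then writing a MIDI file programmatically, would remove the Ableton hand-entry step that limited the first draft to a few seconds.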
I did a screen recording, and something went wrong with the file in a way that I really appreciate, so I made a video with it to illustrate the interaction.