We’ve had a real breakthrough today. For me, one of the most important aspects of me and my shadow is that players should actually be able to create, to shape, to sculpt the environment with their bodies and gestures. Without this, it’s ‘just’ 3D telepresence (still a pretty exciting and new development, but not – I suspect – really unique to this project). I may have downplayed this aspect when talking about it in the early days because I really had very little idea how it might function, or indeed what it might look and sound like. But it’s really come together while we’ve been here, and for me that makes this residency totally worthwhile already.
For me, the idea really came together while thinking about how the sound might function (see below). But I also have to give Phill a lot of credit for some great brainstorming on this, as well as super-quick coding. His key idea was to combine the two things that the Kinect can do – it can give you a reasonably realistic 3D ‘mesh’ of what it’s seeing (i.e. the shape of a person’s body), and it can give you a ‘skeleton’ – basically a set of points for the key joints of the body, from which a ‘stick-man’ model can be derived. Nobody much seems to be combining the two (though please feel free to correct me on that), but the combination seems to have a great deal of potential.
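To make the mesh/skeleton combination concrete, here’s a toy sketch of one way the two data streams could be related: tagging each vertex of the depth mesh with its nearest skeleton joint. The joint names, the nearest-joint rule, and the flat tuple representation are all my illustrative assumptions, not the project’s actual code (which, for all I know, combines them quite differently):

```python
import math

def label_mesh_by_joint(mesh_points, joints):
    """Tag each depth-mesh vertex with the nearest skeleton joint.

    mesh_points: list of (x, y, z) tuples from the Kinect point cloud.
    joints: dict mapping joint name -> (x, y, z) skeleton position.
    Returns a list of (point, joint_name) pairs.
    """
    labelled = []
    for pt in mesh_points:
        # Hypothetical combination rule: assign each mesh vertex to
        # whichever skeleton joint is closest in 3D space.
        nearest = min(joints, key=lambda name: math.dist(pt, joints[name]))
        labelled.append((pt, nearest))
    return labelled

# Example: two mesh points near a two-joint "skeleton"
joints = {"head": (0.0, 1.7, 0.0), "foot": (0.0, 0.0, 0.0)}
points = [(0.1, 1.6, 0.0), (0.0, 0.2, 0.1)]
labelled = label_mesh_by_joint(points, joints)
```

Once each patch of the body mesh knows which joint it ‘belongs’ to, behaviours defined on the stick-man (trails, sounds, sculpting gestures) can be painted back onto the realistic body shape.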
What you can see in the video here (apologies for quality, we’re just filming off a laptop screen) is a trail of glowing points – particles – left by each of the key skeletal points of the body (as mentioned, this is the exact same principle as I’m using with the sound). These leave perfect 3D trails of movement (think ‘Nude Descending a Staircase’, sort of) which will be an ideal starting point for sculptural forms. The particles are smaller and less intense the faster you move – this might seem counter-intuitive, but it really works. If you walk quickly through the space you’ll hardly make anything happen, but if you stay still or move slowly the particles will slowly coalesce around you.
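The speed-to-intensity mapping described above could be sketched roughly like this: each frame, every tracked joint emits a particle whose intensity falls off with the joint’s speed, so a still body accumulates bright trails while a fast walk barely registers. The falloff curve, the parameter values, and the dictionary-based joint format are all my guesses for illustration, not the installation’s real implementation:

```python
import math

def emit_particles(prev_joints, curr_joints, dt, base_intensity=1.0, k=0.5):
    """Emit one particle per skeleton joint, fainter the faster it moves.

    prev_joints / curr_joints: dicts of joint name -> (x, y, z) position
    on the previous and current frames. dt: frame time in seconds.
    The 1 / (1 + k * speed) falloff is an assumed curve: speed 0 gives
    full intensity, and intensity shrinks smoothly as speed grows.
    """
    particles = []
    for name, pos in curr_joints.items():
        prev = prev_joints.get(name, pos)
        speed = math.dist(pos, prev) / dt  # distance per second
        intensity = base_intensity / (1.0 + k * speed)
        particles.append({"joint": name, "pos": pos, "intensity": intensity})
    return particles

# A still hand vs. a hand that moved 1 unit in one ~30fps frame
prev = {"hand": (0.0, 0.0, 0.0)}
still = emit_particles(prev, {"hand": (0.0, 0.0, 0.0)}, dt=0.033)
fast = emit_particles(prev, {"hand": (1.0, 0.0, 0.0)}, dt=0.033)
```

Accumulating these particles over many frames, without ever clearing them, is what would produce the persistent 3D movement trails seen in the video.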