Phill has been walking around with a furrowed brow today. He’s been spending a lot of time scribbling diagrams and staring at boxes, and chairs, and boxes on top of chairs. I’m a bit worried about him.

What’s been vexing him is the particularly gnarly problem of combining data from several Kinects. A few people have used multiple Kinects, but I haven’t seen much work where the Kinects are actually pointed at the same thing (or box, or chair, or person). The problem is that many points will actually be seen by both (eventually all three) cameras, so combining the two sets of data requires a kind of 3D jigsaw-puzzle thinking that hurts. For the full technical detail on this, see Phill’s own blog.
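For a taste of the non-jigsaw part – this is my own illustrative sketch, not Phill’s actual code, and every number in it is invented – the basic first move is putting each camera’s points into one shared coordinate frame with a rigid transform. The hard part then follows: deciding which of the merged points are duplicates seen by more than one camera.

```python
import math

def to_world(points, yaw_deg, t):
    # Rotate each point about the vertical (y) axis, then translate,
    # mapping it from the camera's frame into a shared world frame.
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return [(c * x + s * z + t[0], y + t[1], -s * x + c * z + t[2])
            for (x, y, z) in points]

# Two cameras see the same physical point from different (made-up) poses.
# Once both clouds are in the world frame, merging is just concatenation.
cloud_a = to_world([(0.0, 1.0, 2.0)], yaw_deg=0.0, t=(0.0, 0.0, 0.0))
cloud_b = to_world([(2.0, 1.0, 0.0)], yaw_deg=-90.0, t=(0.0, 0.0, 0.0))
merged = cloud_a + cloud_b
```

Both transformed points land on (0, 1, 2) – the same spot in space seen from two angles – which is exactly why the overlap problem exists in the first place.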

On a more satisfying note – I think we’ve cracked the problem we had with the navigation in London. This proved so tricky and sensitive that even highly trained dancers had trouble with it. I’m happy to say that we’ve made a lot of progress on this over the last couple of days – it’s now much smoother and ‘steadier’ somehow, and can be easily adjusted to calibrate the sensitivity. We won’t know for sure until next week when we try it out with a few more people, but I’m hopeful it’s sorted now.

So.. having said we’d focus on the aesthetics this time, we do seem to have spent a bit of time here in Mons on technical issues. They’re kind of big though, and do impact on the aesthetics to a certain extent. It seemed crazy not to deal with them while we had a bit of time. In the meantime I have been making great strides with the sound – more v. soon.


This post is for the more technically-minded.. Here’s the core of the (very rudimentary) Max patch I’ve built for the noisebursts and resonances. Basically, an ‘event’ (i.e. an instance of rapid movement) triggers the adsr~ envelope to let through a little burst of noise~ (I wanted a bit more flexibility than a simple hard impulse). This is fed through the comb~ filter to give it a tuned resonance, and then the biquad~ (lowpass) filter to knock off some of the high frequencies. The result is then delayed and fed back on itself (which you can’t really see in this bit of the patch – sorry), producing the very long resonances and slow decay, with the sound getting duller with each iteration till it dies away. The sound is spatialised using the ambipan~ object – only in the x and z axes, as it’s a quadraphonic setup we’re using. The y axis currently controls the pitch of the resonance – these pitches are arranged in a nice harmonic or sub-harmonic series controlled by r fundamental and r ratio.
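For the curious, here’s roughly what that signal chain looks like outside Max – a loose Python sketch of the idea, not the patch itself, with all parameter values invented for illustration. A short noise burst excites a tuned delay line (standing in for comb~); the feedback path contains a one-pole lowpass (a crude stand-in for the biquad~), so every trip round the loop comes back quieter and duller until the sound dies away:

```python
import random

def resonant_burst(fs=44100, f0=220.0, burst_ms=10, dur_s=2.0,
                   feedback=0.995, damping=0.4, seed=1):
    # Noise burst -> tuned comb-style delay line with a lowpassed
    # feedback path: a long, slowly dulling resonance at pitch f0.
    random.seed(seed)
    n = int(fs * dur_s)
    burst_n = int(fs * burst_ms / 1000)
    delay = max(1, round(fs / f0))         # delay length sets the pitch
    buf = [0.0] * delay                    # circular delay buffer
    lp = 0.0                               # one-pole lowpass state
    out = []
    for i in range(n):
        x = random.uniform(-1.0, 1.0) if i < burst_n else 0.0
        y = x + feedback * buf[i % delay]  # input + delayed feedback
        lp += damping * (y - lp)           # dull the signal each pass
        buf[i % delay] = lp
        out.append(y)
    return out

# A harmonic series built from a fundamental and a ratio, in the spirit
# of r fundamental and r ratio (the patch's exact mapping may differ).
fundamental, ratio = 55.0, 2.0
pitches = [fundamental * ratio * k for k in (1, 2, 3, 4)]
```

With the feedback gain just below 1 and the lowpass in the loop, the tail rings on for a long time but each repeat is slightly quieter and darker – the same behaviour as the patch’s long decays.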

This is only a little bit of the patch, and it’s wrapped up – with a few other bits and bobs – inside a poly~. This allows many of these noiseburst/resonances to be active at once – my processor seems to be able to handle about 150.
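The voice management that poly~ provides amounts to something like this – again a toy sketch of my own, hugely simplified next to what the real object does. A trigger claims a free voice from a fixed pool, and voices are freed again once their sound has played out:

```python
class VoicePool:
    # Toy stand-in for poly~'s voice allocation: a fixed number of
    # voices, each tracked by the samples it has left to play.
    def __init__(self, size=150):
        self.size = size
        self.active = []          # each entry: [voice_id, samples_left]
        self.next_id = 0

    def trigger(self, dur_samples):
        # Claim a free voice for a new noiseburst, or drop the event.
        if len(self.active) >= self.size:
            return None
        self.next_id += 1
        self.active.append([self.next_id, dur_samples])
        return self.next_id

    def tick(self, block=64):
        # Advance every voice by one audio block, freeing finished ones.
        for v in self.active:
            v[1] -= block
        self.active = [v for v in self.active if v[1] > 0]
```

The 150-voice ceiling here is just the figure my processor manages; a from-scratch version could raise it, or steal the oldest voice instead of dropping new events.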

I’m aware that this will need to be coded from scratch for the final project (which will hopefully allow many more sounds to be active at once), so I’m keeping it simple. Noise generators, delays and filters should be reasonably straightforward to code, and the ambipan~ object comes with source code – thanks guys!