
Monthly Archives: September 2011

That’s Turkish for ‘me and my shadow’ – I like it.

The last few days of the Istanbul residency were a bit intense – too busy for blogging.. I’m back in England now, and the dust has settled a bit. This is a quick summary of what happened at the end of our two weeks.

Over Thursday and Friday the MADE partners arrived, the group gradually snowballing in size. We/they had various meetings about MADE, me and my shadow, and the white paper which is the ultimate outcome of the MADE project as a whole. We also had some great social occasions – particularly memorable was our trip to the Asian side (wow, Istanbul is big) for a wonderful dinner overlooking the Bosphorus, with the ships and the calls to prayer echoing across the bay – magical. Thanks Beliz!

In terms of me and my shadow, we spent the last few days (when we weren’t in meetings, on ferries or eating delicious meals) tweaking the visuals and in particular the sound.  I might post some more details in a while.  The residency culminated in a process performance at Galata Perform, right in the middle of the central Taksim area, with dancers Dilek, Steven and Banu from the Monday/Tuesday workshops.  I think it went pretty well – the tiny (and hot!) venue was packed to bursting with interested and perceptive (judging by the questions afterwards) people, and the dancers did us proud.

You can see some excerpts – actually, pretty much the whole thing – in the videos above. I didn’t get a chance to video the event, so thanks to Yigit for that (as well as all your other help over the course of the residency).

Many thanks to Aylin and all the boDig team for a wonderful residency – we had a great time in Istanbul, and I think we made a very good start on the project.

Today we had a workshop with some of the performing arts students at Bilgi University. Once again, a very long day, so I won’t write much, but I think it went well. We didn’t change much in the system, but here are some short videos to illustrate some particular points. The bottom one shows the system without the particle trails – we found this a very good starting point for understanding what’s going on, closely followed by a look at the skeleton tracking (second video down) to see the actual points that are producing the trails. The top video is there really to show the sound, which I tweaked quite a lot today. Still not quite happy with it, but it’s better. Maybe tomorrow we’ll actually get there!

Workshop days are a bit full-on, so I don’t seem to have much time to write the blog – I seem to be a bit drained by the evening! Fortunately we’ve got some nice videos which I think somewhat speak for themselves, but here are a few memories, thoughts and observations:

We’ve now reached the end of two workshops with professional choreographer / dancers. These proved extremely useful in highlighting issues (strengths and weaknesses) with me and my shadow as it stands so far, and pointed up many ideas for future development.  Thanks, boDig and MADE, for making these workshops and this residency possible.  It’s been great to develop the project with so much input, thought and enthusiasm from others.

Yesterday we spent quite a long time setting up the space (quite tricky – the Kinect can be a fiddly little blighter). We then spent most of the session going through some quite rigorous exercises with Ghislaine Boddington. These are exercises, or games (because they were fun too!), that she’s developed primarily in telepresence projects, some of them with me. They really help in getting used to working with and through the camera, and with the relationships with space, screen and others that the situation throws up. In this instance they highlighted both the similarities and the differences between what we’re doing and video-based telepresence. The crucial one is that the ‘real’ camera (the Kinect) and the virtual one are totally independent, so the viewpoint shown by the video can be anywhere, entirely at odds with the physical placement of the Kinect. This is both extremely exciting and rather challenging.

You’ll see from the videos below that we kept the virtual camera pretty much static for the first workshop.  Today, having established a strong orientation with real and virtual spaces, we were able to free things up a little bit.  Here are some of the things we explored:

1) We turned off the ‘tracers’ (the particles that trail behind the skeleton points) and just focused on the actual representation of the body in the system. We found that it’s extremely different depending on distance – close to the camera, the body looks quite solid and really quite detailed – you can actually recognise someone, and facial details, clothes etc. are quite delineated. As you get further away you become much more abstract, and the inaccuracies of the Kinect become much more pronounced. Yes, it’s obvious, but we found some very interesting results juxtaposing near and far, and playing with the rather distorted depth of field of the Kinect camera.

2) We tried out various combinations of tracers. Overnight, Phill programmed in the capability to turn them on and off. We found that less is definitely more, and the most interesting point to track was the central point of the spine – a really interesting discovery. I now have an embryonic idea that we should represent the various tracers differently – some more prominently than others. I like the idea that there may be a ‘trunk’ tracer (the spine, say), with little filaments branching off it to represent the others (see the sketch after this list).

3) It’s really interesting the way people and things appear and disappear. The Kinect is surprisingly fussy about this. As you edge into frame you won’t appear until there’s enough of you visible for the Kinect to recognise you as a human form, at which point you’ll suddenly pop into existence. The reverse can happen too, and relationships between people and objects can do strange things – kind of turning each other off and on. It can be an interesting phenomenon, and it also does strange and rather satisfying things to the sound (yes, we have sound now, although it needs a lot of refining) as lots of points of sound appear or disappear at once.

4) The virtual camera is both the most challenging and the most interesting thing. It’s fascinating to look at the body and physical movement from unexpected viewpoints, but it can also be very confusing. Over the course of today, I felt that Phill developed quite a skill as a ‘virtual cameraman’, choreographing the movement of the virtual camera expertly and artistically with the dancers. This makes things look much more interesting – the videos above (today’s) look far more dynamic and three-dimensional than yesterday’s, when the camera was largely static.
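For the technically-curious, here’s roughly how the per-tracer control and the trunk/filament idea from point 2 might look in code. This is purely an illustrative sketch – the names and the ‘prominence’ weighting are my own invention, not Phill’s actual implementation:

```
// Hypothetical sketch: one tracer per tracked joint, each with an
// on/off switch (as Phill added overnight) plus an invented
// 'prominence' weight, so the spine can act as a trunk and the
// limbs as finer filaments branching off it.
#include <map>
#include <string>

struct TracerConfig {
    bool  enabled;
    float prominence;  // 1.0 = trunk, smaller = filament
};

std::map<std::string, TracerConfig> tracers = {
    {"spine_mid",  {true,  1.0f}},  // the central spine point: the 'trunk'
    {"left_hand",  {true,  0.3f}},  // filaments
    {"right_hand", {true,  0.3f}},
    {"head",       {false, 0.5f}},  // switched off: less is definitely more
};
```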

Many thanks to the dancers – Beliz, Banu, Dilek and Steven.  Also to Ghislaine, Phillipe, Phill and Yigit.  You’ve all made these couple of days fun, rewarding and extremely worthwhile.

We’ve had a little bit of time off over the weekend. Istanbul has been busy, and fun. There are not one but two (at least) major arts events going on – the Istanbul Biennale and ISEA (International Symposium on Electronic Art). We went along to the opening of the Biennale on Friday night, which was fun (thanks Aylin and Yigit), and we’ve also been to a few of the ISEA exhibitions as well as having a couple of nice dinners with a few ISEA types. Wish we could do more really, but we need to remember why we’re here and keep working..

There was one piece in the exhibition at the Cumhuriyet Art Gallery that seemed quite relevant to what we’re doing – Proposition 2.0 by Mark Cypher (left). I think I said something at some point about me and my shadow being ‘literally’ a sandbox environment – well, this really is literally a sandbox. There’s a suitcase full of sand to play with, which is then picked up by the Kinect (above) and projected onto the flat surface on the right and the little sand planet on the left.

We’ll keep tinkering over the weekend, but I’m pretty pleased with where we’ve ended up in the first week here in Istanbul – actually it’s only really been four days.  You can see the latest on the visual front above – basically it’s the skeleton particle trails shown below combined with the 3D mesh display, somewhat improved from the earlier versions shown a few days ago.

And below you can listen to an early version of the soundscape (play it with the video if you like, though of course it won’t be synced)  – this is basically just the ‘singing sand-dune drones’ emitted by the traces/sculptures.  The soundbursts themselves aren’t sounding very nice yet – something for the weekend..

This is very much the first first first first version, and I’ve no idea if the final thing will look and sound anything like this at all, but it’s a start I think, and definitely something we can start to work on with the dancers on Monday..

 

We’ve had a real breakthrough today.  One of the most important aspects of me and my shadow for me is that players should actually be able to create, to shape, to sculpt the environment with their bodies and gestures.  Without this, it’s ‘just’ 3D telepresence (still a pretty exciting and new development, but not – I suspect – really unique to this project).  I might have downplayed this aspect when talking about it in the early days because I really had very little idea how it might function, or indeed what it might look and sound like.  But it’s really come together while we’re here, and for me that makes this residency totally worthwhile already.

For me, the idea really came together while thinking about how the sound might function (see below). But I really also have to give Phill a lot of credit for some great brainstorming on this, as well as super-quick coding. His key idea was to combine the two things that the Kinect can do – it can give you a reasonably realistic 3D ‘mesh’ of what it’s seeing (ie the shape of a person’s body), and it can give you a ‘skeleton’ – basically a set of points for the key joints of the body, from which a ‘stick-man’ model can be derived. Nobody much seems to be combining the two (though please feel free to correct me on that), but the combination seems to have a great deal of potential.
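To make that combination concrete, here’s a minimal sketch of holding the two representations side by side – the names are illustrative only, not the project’s actual code:

```
#include <vector>

struct Vec3 { float x, y, z; };

// Illustrative only: the two things the Kinect gives you each frame,
// kept together so the visuals can draw the mesh while the skeleton
// joints drive the trails (and the sound).
struct BodyFrame {
    std::vector<Vec3> meshPoints;      // depth-derived 3D mesh of the body
    std::vector<Vec3> skeletonJoints;  // key joints for the 'stick-man'
};
```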

What you can see in the video here (apologies for the quality – we’re just filming off a laptop screen) is a trail of glowing points – particles – left by each of the key skeletal points of the body (as mentioned, this is the exact same principle as I’m using with the sound). These leave perfect 3D trails of movement (think ‘Nude Descending a Staircase’, sort of) which will be an ideal starting point for sculptural forms. The particles are smaller/less intense the faster you move – this might seem counter-intuitive, but it really works. If you walk quickly through the space you’ll hardly make anything happen, but if you stay still or move slowly the particles will gradually coalesce around you.
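For anyone wondering how that inverse speed/intensity relationship might work, here’s a hedged sketch – the structure and the tuning constant are assumptions for illustration, not our actual code:

```
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3  pos;
    float intensity;  // brighter/larger when the joint was moving slowly
};

float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Each frame, every tracked joint drops a particle whose intensity
// falls off with the joint's speed: stillness builds dense glowing
// forms, fast movement barely registers.
void emitTrail(const std::vector<Vec3>& joints,
               const std::vector<Vec3>& prevJoints,
               float dt, std::vector<Particle>& trail) {
    const float k = 0.5f;  // invented tuning constant
    for (size_t i = 0; i < joints.size(); ++i) {
        float speed = dist(joints[i], prevJoints[i]) / dt;
        float intensity = 1.0f / (1.0f + k * speed);  // faster = fainter
        trail.push_back({joints[i], intensity});
    }
}
```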

 

This post is for the more technically-minded.. Here’s the core of the (very rudimentary) Max patch I’ve built for the noisebursts and resonances. Basically, an ‘event’ (ie an instance of rapid movement) will trigger the adsr~ envelope to let through a little burst of noise~ (I wanted a bit more flexibility than a simple hard impulse). This is then fed through the comb~ filter to give it a tuned resonance, and the biquad~ (lowpass) filter to knock off some of the high frequencies. The result is then delayed and fed back on itself (which you can’t really see in this bit of the patch – sorry), producing very long resonances and a slow decay, with the sound getting duller with each iteration until it dies away. The sound is spatialised using the ambipan~ object – only in the x and z axes, as it’s a quadraphonic setup we’re using. The y axis is currently used to control the pitch of the resonance – these pitches are arranged in a nice harmonic or sub-harmonic series controlled by r fundamental and r ratio.

This is only a little bit of the patch, and it’s wrapped up – with a few other bits and bobs – inside a poly~. This allows many of these noiseburst/resonances to be active at once – my processor seems to be able to handle about 150.

I’m aware that this will need to be coded from scratch for the final project (which will hopefully allow many more sounds to be active at once), so I’m keeping it simple. Noise generators, delays and filters should be reasonably straightforward to code, and the ambipan~ object comes with source code – thanks guys!
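As a starting point for that from-scratch version, here’s a rough C++ sketch of a single noiseburst/resonance voice: a short burst of noise into a tuned comb whose feedback loop contains a lowpass, so each recirculation comes back quieter and duller, much as the patch does. All names and constants are illustrative, and the ambisonic panning and the harmonic-series pitch mapping are left out:

```
#include <cstdlib>
#include <vector>

class ResonanceVoice {
public:
    // pitchHz is assumed to be well below the sample rate
    ResonanceVoice(float sampleRate, float pitchHz,
                   float feedback = 0.995f, float damping = 0.3f)
        : delayLine(static_cast<size_t>(sampleRate / pitchHz), 0.0f),
          writePos(0), fb(feedback), damp(damping), lpState(0.0f),
          burstSamplesLeft(static_cast<int>(sampleRate * 0.01f)) {}  // ~10 ms burst

    float process() {
        // excitation: a brief white-noise burst (standing in for
        // noise~ gated by adsr~ in the patch above)
        float input = 0.0f;
        if (burstSamplesLeft > 0) {
            input = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
            --burstSamplesLeft;
        }
        // tuned comb: read the delayed sample, lowpass it, feed it back
        float delayed = delayLine[writePos];
        lpState += damp * (delayed - lpState);  // one-pole lowpass in the loop
        float out = input + fb * lpState;       // feedback < 1, so it decays
        delayLine[writePos] = out;
        writePos = (writePos + 1) % delayLine.size();
        return out;
    }

private:
    std::vector<float> delayLine;  // delay length sets the resonant pitch
    size_t writePos;
    float fb, damp, lpState;
    int burstSamplesLeft;
};
```

A fixed pool of a hundred-odd of these voices, allocated per event, would stand in for the poly~.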

I’ve started work on the audio side of the project – I want to get at least some kind of interactive audio going for when we start to work with people in the space next week.  I find the audio side of this project a little harder to conceptualise than the visual, which is a little embarrassing given my background in music and sound. However, I’ve struck out in a direction today which I think is promising.

I think there need to be (at least) two types of sound at play in the space. First, I imagine dynamic, short-lived sounds that are produced directly by the movements of the player, giving the instant gratification of immediate sonic feedback. Secondly, though, I’d also like the ‘shadows’ – traces and sculptural forms left behind by players in the space – to vibrate, to resonate. I’m thinking of the way that sand dunes ‘sing’ in the desert wind (not that I’ve ever actually heard that!). I want the whole landscape to have a low-level glow to it, only sonically.

This is what I’ve been building today, as a starting point: any sufficiently rapid movement in the space will trigger a little impulse – a little starburst of noise – localised at the point where the movement happened.  So as you move, you’ll leave a trail of sonic pinpricks behind you. Each of these tiny sounds will trigger a tuned resonance, and the resonance will feed back on itself, over and over.  What begins as a noiseburst turns into a singing resonance, which then – very slowly – decays into a low ambient hum and eventually disappears altogether.
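In code terms, the trigger side of that might look something like this sketch – the speed threshold, the fundamental and the way height quantises to a harmonic are all invented for illustration:

```
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct SoundEvent {
    Vec3  position;  // where the burst is localised in the space
    float pitchHz;   // the tuned resonance it will trigger
};

// Any joint moving faster than a threshold spawns a sound event at
// that point, pitched on an (assumed) harmonic series over 110 Hz
// according to its height.
std::vector<SoundEvent> detectEvents(const std::vector<Vec3>& joints,
                                     const std::vector<Vec3>& prev,
                                     float dt) {
    const float speedThreshold = 1.5f;  // metres per second, invented
    std::vector<SoundEvent> events;
    for (size_t i = 0; i < joints.size(); ++i) {
        float dx = joints[i].x - prev[i].x;
        float dy = joints[i].y - prev[i].y;
        float dz = joints[i].z - prev[i].z;
        float speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
        if (speed > speedThreshold) {
            int harmonic = std::max(1, 1 + (int)(joints[i].y * 4.0f));
            events.push_back({joints[i], 110.0f * harmonic});
        }
    }
    return events;
}
```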

I think this is an idea we might explore visually also, as we form an idea as to what the shadows should look (as well as sound) like.  It would be interesting if they had a ‘half life’, and decayed – or were slowly eroded – over time, like those sand dunes I mentioned.
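That erosion could be as simple as per-frame exponential decay – a tiny sketch, with an entirely made-up half-life:

```
#include <cmath>

// Halve a shadow's intensity every halfLife seconds, whatever the frame rate.
void erode(float& intensity, float dtSeconds, float halfLife = 60.0f) {
    intensity *= std::pow(0.5f, dtSeconds / halfLife);
}
```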

The picture’s not directly relevant to this post of course, but I want to put some faces to names – that’s Phill, Tolger and Aylin at lunch. Check out Phill’s impressive ‘programmer’s tan’ – I think he threw off the white balance on my camera. Did I mention that the weather’s really nice here at the moment?