Monthly Archives: September 2011

That’s Turkish for ‘me and my shadow’ – I like it.

The last few days of the Istanbul residency were a bit intensive – too busy for blogging.. I’m back in England now, and the dust has settled a bit.  This is a quick summary of what happened at the end of our two weeks.

Over Thursday and Friday the MADE partners arrived, the group gradually snowballing in size. We/they had various meetings about MADE, me and my shadow, and the white paper which is the ultimate outcome of the MADE project as a whole.  We also had some great social occasions – particularly memorable was our trip to the Asian side (wow, Istanbul is big) for a wonderful dinner overlooking the Bosphorus, with the ships and the calls to prayer echoing across the bay – magical.  Thanks Beliz!

In terms of me and my shadow, we spent the last few days (when we weren’t in meetings, on ferries or eating delicious meals) tweaking the visuals and in particular the sound.  I might post some more details in a while.  The residency culminated in a process performance at Galata Perform, right in the middle of the central Taksim area, with dancers Dilek, Steven and Banu from the Monday/Tuesday workshops.  I think it went pretty well – the tiny (and hot!) venue was packed to bursting with interested and perceptive (judging by the questions afterwards) people, and the dancers did us proud.

You can see some excerpts – actually, pretty much the whole thing – in the videos above. I didn’t get a chance to video the event, so thanks to Yigit for that (as well as all your other help over the course of the residency).

Many thanks to Aylin and all the boDig team for a wonderful residency – we had a great time in Istanbul, and I think we made a very good start on the project.

Today we had a workshop with some of the performing arts students at Bilgi University. Once again, a very long day so I won’t write much, but I think it went well. We didn’t change the system much, but here are some short videos to illustrate some particular points. The bottom one shows the system without the particle trails – we found this a very good starting point for understanding what’s going on, closely followed by a look at the skeleton tracking (second video down) to see the actual points that are producing the trails. The top video is there really to show the sound, which I tweaked quite a lot today. Not quite happy with it yet, but it’s better. Maybe tomorrow we’ll actually get there!
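
For the technically curious: the basic idea behind the trails is very simple – remember where each tracked skeleton point has recently been. Here’s a toy sketch in Python (entirely my own illustration, not the actual code, and the names are made up) of that one idea:

```python
from collections import deque

class JointTrail:
    """Keep the last `length` positions of one tracked skeleton joint --
    the raw material for a particle trail behind that joint."""
    def __init__(self, length=30):
        # oldest positions drop off the back automatically
        self.points = deque(maxlen=length)

    def update(self, position):
        """Call once per frame with the joint's (x, y, z) position."""
        self.points.append(position)

    def trail(self):
        """Positions from newest to oldest, e.g. for fading particles."""
        return list(reversed(self.points))

# Hypothetical usage: feed in a few frames of a moving hand joint
hand = JointTrail(length=3)
for frame in [(0.0, 1.0, 2.0), (0.1, 1.0, 2.0), (0.2, 1.1, 2.0), (0.3, 1.1, 2.1)]:
    hand.update(frame)
print(hand.trail())  # newest first; only the last 3 frames are kept
```

Everything else – the particles, the fading, the sound – hangs off a buffer like that, one per tracked point.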

Workshop days are a bit full-on, so I don’t seem to have so much time to write the blog. I seem to be a bit drained by the evening! Fortunately we’ve got some nice videos which I think somewhat speak for themselves, but here are a few memories, thoughts, observations:

We’ve now reached the end of two workshops with professional choreographer/dancers. These proved extremely useful in highlighting issues (strengths and weaknesses) with me and my shadow as it stands so far, and pointed up many ideas for future development.  Thanks, boDig and MADE, for making these workshops and this residency possible.  It’s been great to develop the project with so much input, thought and enthusiasm from others.

Yesterday we spent quite a long time setting up the space (quite tricky – the Kinect can be a fiddly little blighter).  We then spent most of the session going through some quite rigorous exercises with Ghislaine Boddington.  These are exercises, or games (because they were fun too!), that she’s developed primarily in telepresence projects, some of them with me.  They really help in getting used to working with and through the camera, and with the relationships to space, screen and other people that the situation throws up.  In this instance they highlighted both the similarities and the differences between what we’re doing and video-based telepresence.  The crucial difference is that the ‘real’ camera (the Kinect) and the virtual one are totally independent, so the viewpoint shown by the video can be anywhere, entirely at odds with the physical placement of the Kinect.  This is both extremely exciting and rather challenging.
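
To make that independence concrete, here’s a toy Python sketch (my own illustration only – not how the system is actually written): once the Kinect has given you points in 3D, you can re-express them relative to a virtual camera placed anywhere, at any angle, regardless of where the sensor physically sits.

```python
import math

def to_virtual_view(point, cam_pos, cam_yaw):
    """Re-express a sensor-space 3D point relative to a virtual camera that
    can sit anywhere, at any angle -- independent of the real Kinect."""
    # translate into the virtual camera's frame
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    # rotate about the vertical axis by the camera's yaw
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * x + s * z, y, -s * x + c * z)

# The sensor saw this point straight ahead at 2 m...
p = (0.0, 0.0, 2.0)
# ...but the virtual camera views the scene side-on, rotated 90 degrees.
print(to_virtual_view(p, cam_pos=(0.0, 0.0, 0.0), cam_yaw=math.pi / 2))
```

So the video image you see is a *rendering* of captured 3D data, not a camera feed – which is exactly why it can be so disorientating at first.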

You’ll see from the videos below that we kept the virtual camera pretty much static for the first workshop.  Today, having established a strong orientation with real and virtual spaces, we were able to free things up a little bit.  Here are some of the things we explored:

1) We turned off the ‘tracers’ (the particles that trail behind the skeleton points) and just focused on the actual representation of the body in the system.  We found that it’s extremely different depending on distance – close to the camera the body looks quite solid, and really quite detailed – you can actually recognise someone, and facial details, clothes etc. are quite delineated.  As you get further away the body becomes much more abstract, and the inaccuracies of the Kinect become much more pronounced.  Yes, it’s obvious, but we found some very interesting results juxtaposing near and far, and playing with the rather distorted depth of field of the Kinect camera.

2) We tried out various combinations of tracers.  Overnight, Phill programmed in the capability to turn them on and off.  We found that less is definitely more, and the most interesting point to track was the central point of the spine.  This was really interesting to discover.  I now have an embryonic idea that we should represent the various tracers differently – some more prominently than others.  I like the idea that there may be a ‘trunk’ tracer (the spine, say), with little filaments branching off it to represent the others.

3) It’s really interesting the way people and things appear and disappear. The Kinect is surprisingly fussy about this.  As you edge into frame you won’t appear until there’s enough of you visible for the Kinect to recognise it as a human form, at which point you’ll suddenly pop into existence.  The reverse can happen too, and relationships between people and objects can do strange things – kind of turning each other off and on.  It can be an interesting phenomenon, and it also does strange and rather satisfying things to the sound (yes, we have sound now, although it needs a lot of refining) as lots of points of sound appear or disappear at once.

4) The virtual camera is both the most challenging and the most interesting thing.  It’s fascinating to look at the body and physical movement from unexpected viewpoints, but it can also be very confusing.  Over the course of today, I felt that Phill developed quite a skill as a ‘virtual cameraman’, choreographing the movement of the virtual camera expertly and artistically with the dancers.  This makes things look much more interesting – the videos above (today’s) look far more dynamic and three-dimensional than yesterday’s, when the camera was largely static.
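
One ingredient of good virtual camerawork is simply smoothing: never snap the camera to its target, but drift a fraction of the way there each frame. Here’s a toy Python sketch of that idea (exponential smoothing – purely my own illustration, with made-up names, not the project’s actual code):

```python
def smooth_camera(cam, target, alpha=0.5):
    """Move the virtual camera a fraction `alpha` of the remaining distance
    toward its target each frame, so hand-driven camera moves feel fluid."""
    return tuple(c + alpha * (t - c) for c, t in zip(cam, target))

cam = (0.0, 0.0, 0.0)
target = (10.0, 0.0, 0.0)  # e.g. a viewpoint just behind a dancer
for _ in range(3):
    cam = smooth_camera(cam, target, alpha=0.5)
print(cam)  # halves the remaining distance each frame
```

With alpha small the camera glides cinematically; with alpha near 1 it becomes twitchy and hand-held – a surprisingly expressive parameter in itself.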

Many thanks to the dancers – Beliz, Banu, Dilek and Steven.  Also to Ghislaine, Phillipe, Phill and Yigit.  You’ve all made these couple of days fun, rewarding and extremely worthwhile.

We’ve had a little bit of time off over the weekend.  Istanbul has been busy, and fun.  There are not one but two (at least) major arts events going on – the Istanbul Biennale and ISEA (International Symposium on Electronic Art).  We went along to the opening of the Biennale on Friday night, which was fun (thanks Aylin and Yigit), and we’ve also gone to a few of the ISEA exhibitions as well as having a couple of nice dinners with a few ISEA types.  Wish we could do more really, but need to remember why we’re here and keep working..

There was one piece in the exhibition at the Cumhuriyet Art Gallery that seemed quite relevant to what we’re doing – Proposition 2.0 by Mark Cypher (left). I think I said something at some point about me and my shadow being ‘literally’ a sandbox environment – well this really is literally a sandbox. There’s a suitcase full of sand to play with, which is then picked up by the Kinect (above) and projected onto the flat surface on the right and the little sand planet on the left.

We’ll keep tinkering over the weekend, but I’m pretty pleased with where we’ve ended up in the first week here in Istanbul – actually it’s only really been four days.  You can see the latest on the visual front above – basically it’s the skeleton particle trails shown below combined with the 3D mesh display, somewhat improved from the earlier versions shown a few days ago.

And below you can listen to an early version of the soundscape (play it with the video if you like, though of course it won’t be synced)  – this is basically just the ‘singing sand-dune drones’ emitted by the traces/sculptures.  The soundbursts themselves aren’t sounding very nice yet – something for the weekend..
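
I won’t go into how the drones are actually generated, but the flavour of the mapping is easy to sketch: each trace gets a pitch from where it sits in the space. Here’s a toy Python illustration (my own made-up mapping and numbers, not the real synthesis):

```python
def drone_pitch(y, y_min=0.0, y_max=2.5, f_low=55.0, f_high=440.0):
    """Map a trace's height in the space (metres) to a drone frequency (Hz),
    low notes near the floor, high ones overhead -- one way a trace might sing."""
    t = min(max((y - y_min) / (y_max - y_min), 0.0), 1.0)
    # exponential (not linear) interpolation, so equal height steps
    # give equal musical intervals
    return f_low * (f_high / f_low) ** t

for y in (0.0, 1.25, 2.5):
    print(round(drone_pitch(y), 1))
```

The exponential mapping matters: pitch perception is logarithmic, so a linear frequency map would cram all the musical interest into the bottom of the space.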

This is very much the first first first first version, and I’ve no idea if the final thing will look and sound anything like this at all, but it’s a start I think, and definitely something we can start to work with alongside the dancers on Monday..