I’ve evolved the sound a bit more over the last few days. It’s the same basic idea and material, but with a little more variety in terms of the layers I’m using and the way they’re controlled. I feel like this is a bit of a balancing act here – it needs to be quite approachable I think, given the sheer variety of people who are likely to engage with the final piece. But it also needs to be distinctive and characterful – I don’t want to play to the lowest common denominator. It also needs to be interactive, but I want it to have a convincing musical flow to it. It needs to be a result of the users’ movement, but also encourage people to move (in some way be ‘good to dance to’). Finally it needs to be sparse enough that cause and effect are clear (I’m hoping the sound might function as an integral part of the navigation and general usability), but complex and varied enough to be interesting. Tricky..

I don’t think I should develop this much more before we reconnect the sound and image (and interactivity). Hopefully we’ll do this over the next couple of days and start to form an overall audiovisual aesthetic and a new model for interaction. We should hopefully be building the first portal during this time, so it’s an exciting few days we have ahead!

I’ve completely re-thought the sound over the last couple of days. I didn’t develop it at all in London as we focused on the core logistics of getting four portals working, so as the piece started to take shape the sound didn’t really keep up.

One of the key things we developed in London was the way in which users can be creative in the space, and we ended up with two paradigms – ‘shadows’ (we called them ‘sculptures’ in London, but shadows makes more sense given the title of the piece) and ‘traces’ (‘trails’ in London, but I think traces sounds better). The former is long term, and full-body – regular imprints users leave in the space. The latter is short-term, and involves the particles left behind by the key points of the body (which we first developed in Istanbul).

Only the latter has been sonified so far, making for a bit of a disconnect between sound and image. I’ve also realised that the shadows have more audio – and audiovisual – potential. If the shadows get left at regular intervals (but different intervals for each user), and these have corresponding audio events, then some nice polyrhythms can be produced. These would be more sonically interesting than the sonification of the particle traces, which are so numerous that they tend to produce overly-dense sounds that just blend into sonic soup. They could also provide cues to the users as to when the shadows will be created, and – being more clearly identifiable – could even help with the navigation.

This sound is a first attempt at what this might sound like. Of course, it’s slightly meaningless without the visual element (so far!), but it’s an attempt to ‘compose the soundtrack’. The high ringing sounds represent the shadows, on regular cycles corresponding to a 4:5:6:7 ratio. The tinkly/rustly sounds (those are technical terms) represent the traces. The low pulse and short high rhythmic sounds are global, and are a byproduct of the 4:5:6:7 polyrhythm. They’d be spatially attached to the light at the centre of the space I think, or possibly ubiquitous throughout the space.
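For the curious, here’s a little sketch of how the 4:5:6:7 shadow cycles line up, and where they coincide to produce the global pulse. The base period and time span are illustrative numbers of my own – the real timings live in the audio patch:

```python
from fractions import Fraction

def shadow_events(ratios=(4, 5, 6, 7), span=60, base=Fraction(3)):
    """Event times (in seconds) for each user's shadow cycle.
    base is an illustrative common unit: the user on ratio r
    leaves a shadow every base * r seconds."""
    events = {}
    for r in ratios:
        period = base * r  # e.g. 12, 15, 18, 21 seconds
        events[r] = [float(period * k) for k in range(int(span / period) + 1)]
    return events

ev = shadow_events()
# moments where all four cycles coincide mark the global pulse
common = set(ev[4]).intersection(ev[5], ev[6], ev[7])
```

With these made-up numbers the four cycles only all coincide at the very start of the 60-second span – the full 4:5:6:7 pattern takes a long time to come back round, which is part of what makes it interesting to listen to.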

Phill has been walking around with furrowed brow today. He’s been spending a lot of time scribbling diagrams; staring at boxes, and chairs, and boxes on top of chairs. I’m a bit worried about him.

What’s been vexing him is the particularly gnarly problem of combining data from several Kinects. A few people have used multiple Kinects, but I haven’t seen much work where the Kinects are actually pointed at the same thing (or box, or chair, or person). The problem is that many points will actually be seen by both (eventually all three) cameras, so combining the two sets of data requires a kind of 3D jigsaw-puzzle thinking that hurts. For the full technical detail on this see Phill’s own blog.
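To give a rough flavour of the jigsaw puzzle, here’s a hypothetical sketch: each camera’s points get mapped into a shared world frame by a calibration transform, and points that land in the same small voxel – i.e. seen by more than one Kinect – are kept only once. The transforms and voxel size here are entirely made up for illustration; the real calibration is Phill’s department:

```python
import math

def make_transform(yaw_deg, tx, ty, tz):
    """Rigid transform (rotation about the vertical axis plus a
    translation) mapping one Kinect's camera frame into a shared
    world frame. In reality this would come from calibration."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    def apply(p):
        x, y, z = p
        return (c * x + s * z + tx, y + ty, -s * x + c * z + tz)
    return apply

def merge_clouds(clouds_with_transforms, voxel=0.05):
    """Transform each cloud into world space, then keep only one
    point per voxel, so regions seen by two cameras aren't doubled."""
    seen, merged = set(), []
    for points, transform in clouds_with_transforms:
        for p in points:
            w = transform(p)
            key = tuple(int(round(v / voxel)) for v in w)
            if key not in seen:
                seen.add(key)
                merged.append(w)
    return merged
```

So a point seen by two cameras from different angles collapses to a single world-space point after the transforms – which is exactly the overlap problem Phill is wrestling with.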

On a more satisfying note – I think we’ve cracked the problem we had with the navigation in London. This proved so tricky and sensitive that even highly trained dancers had trouble with it. I’m happy to say that we’ve made a lot of progress on this over the last couple of days – it’s now much smoother and ‘steadier’ somehow, and can be easily adjusted to calibrate the sensitivity. We won’t know for sure until next week when we try it out with a few more people, but I’m hopeful it’s sorted now.

So.. having said we’d focus on the aesthetics this time, we do seem to have spent a bit of time here in Mons on technical issues. They’re kind of big though, and do impact on the aesthetics to a certain extent. It seemed crazy not to deal with them while we had a bit of time. In the meantime I have been making great strides with the sound – more v. soon.

This marks the beginning of the third MADE me and my shadow residency in Mons, Belgium – hosted by Transcultures.  Phill and I arrived last night, and this morning Lucie from Transcultures took us to where we’ll be working for the next couple of weeks.  We’re in the architecture department at the University of Mons, working in the Salle du Bélian exhibition space, an old chapel.  We’ve had an amazing variety of places to work on the project – quite inspiring!

For this residency, we’ll be focusing primarily on the aesthetics of me and my shadow, both sonic and visual.  We’ll also be building the first actual portal installation, which I’m very excited about.  I’ll keep you posted…

This is a final round-up on the last few days at the National Theatre Studio – these were VERY intense for all of us, and left very little time for blogging. I did however manage to get quite a lot of documentation, which I’ve been sifting through and editing over the weekend and have put up here. Here are a few little notes and explanations:

Videos (above, be sure to watch these ones on full screen!) – the first one was made on Thursday, and is probably my best attempt to capture the project so far. We used two cameras – each capturing two portals, so hopefully you can really see the interaction between all four (give it a bit of time..). The angles aren’t quite right as you can see, but this was as near as I could get it. The sound is a direct feed from the computer, so is the best representation of where we’re at with that. It’s correctly panned so that the sound for each portal comes from more-or-less the right place. The interaction between sound and movement seems to be clearest towards the end of this clip.

The second video uses the two cameras but with just two portals (and sound just from the camera mics). It’s a bit more of a rush-job as it was made just before the process showing on Friday – sorry for chopping your heads off, Nick and Sasha! I wanted to include it though because it shows a number of refinements made overnight between Thursday and Friday. The main change is that we finally have a horizon – might seem like a small thing, but it was very important for me, and makes orientation in the space much easier.

The third video (a bit out of focus – sorry!) shows an overview ‘fly through’ of the space, so you can see the scale and shape of it (we plan to have something like this on display on the outsides of the portals, and also hopefully online). You can see how the horizon works – it’s basically a circular space that fades to black towards the edges. This is a very rough approximation of my idea for next stage of the project, where there will be lighting, with a very bright light in the centre of the space. You’ll know where you are because the further you get from the centre the darker it will get.
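As a toy illustration of that idea (the radius and the linear falloff are my own assumptions, not the actual lighting design), the brightness at any point of the circular space could be something like:

```python
import math

def brightness(x, z, radius=10.0):
    """Illustrative falloff: full brightness at the central light,
    fading linearly to black at the edge of the circular space.
    radius is an assumed size, not the real dimensions."""
    d = math.hypot(x, z)  # horizontal distance from the centre
    return max(0.0, 1.0 - d / radius)
```

So you always know roughly how far out you are just by how dark your surroundings have become.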

The last video shows an inverted version of the image – again this gives an approximate idea – this time of what the final world might look like in the darkness of the edges.

Photos (below) are all of the last few days, including the process showing on Friday and dinner afterwards (you can see just how many people we’d ‘snowballed’ to by the end of the residency!). Most of these are from our photoshoot on Thursday. I hasten to add that these are not the actual professional photographs, from Jean-Paul Berthoin (can’t wait to see those!), but just a few that I quickly snapped as we went along.

We were very happy with the way the process showing went on Friday – it all went off without a hitch, I felt it had a real buzz about it, and we got some really great and useful feedback. Many thanks to everyone at body>data>space and the National Theatre Studio and beyond who helped to make that happen (sorry not to list names here, but you’re all in the ‘People’ section).

Next step Mons at the end of March, where we’ll be focusing on the aesthetics of the piece, making it look (and especially sound) beautiful. I expect there’ll be developments in the meantime, and be sure I’ll post them here..


I didn’t get a chance to record anything or blog anything yesterday, but we’ve been making great progress. This video is a pretty good summary of where we’ve got to. This is the closest I can get to catching three portals together – it’s a nice wide-angle lens, but still the angles of the dividers between them mean that there’s no perfect angle to capture everything.

Hopefully you can see the interaction between the portals though, and get an impression of the sound as it develops – this is just from the camera mic, so it’s a bit rough.

Many thanks to Sasha, Nick and Amina (left to right in the video) – you’ve been very patient as we get everything working, helping us to really test and push the system, and making some great material!

Bit of a milestone this.. we’ve got four portals working together for the first time! Actually, I’ll be honest – three portals. No idea why the fourth didn’t work today, but we’ll get there. Still really exciting, anyway. Here you can see Sasha in ‘Paris’ – her ‘shadows’ are red, Nick’s (in ‘London’, next door) are purple, and Amina’s (in ‘Istanbul’ next to that) are pale blue. We ironed out some serious kinks to do with the scale of the space and the navigation today; there’s still one major issue in that you can only see the shadows from the other portals, and not a live stream of those users, but I feel this is a huge step forward..

One of the biggest challenges facing us in this project has been how the user might navigate around the virtual space. Today, with a bit of input from Laura Kriefman and Matthew Bickerton (thanks guys..), we cracked it! Here’s Nick demonstrating it. Basically, we’ve turned the whole space into a virtual joystick. There’s a central spot – if your spine is in line with that there’s no movement, while moving any part of the spine (i.e. leaning or stepping) away from this point will move you in that direction – the further away from the point, the faster you will move; and (this is the clever part) twisting the shoulders will rotate your orientation. It’s perhaps a little sensitive at this stage (we can adjust that), but I think it really works!
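A rough sketch of the joystick logic as described above – all the parameter values (dead zone, gains) are placeholders; the real ones are tuned by hand:

```python
import math

def navigate(spine_x, spine_z, shoulder_twist_deg,
             dead_zone=0.1, speed_gain=0.5, turn_gain=0.3):
    """Virtual-joystick sketch (parameter values are illustrative).
    spine_x / spine_z: horizontal offset of the spine from the
    central spot, in metres. Inside the dead zone nothing moves;
    beyond it, speed grows with distance from the spot. Twisting
    the shoulders rotates your orientation."""
    dist = math.hypot(spine_x, spine_z)
    if dist < dead_zone:
        vx = vz = 0.0
    else:
        scale = speed_gain * (dist - dead_zone) / dist
        vx, vz = spine_x * scale, spine_z * scale
    turn = turn_gain * shoulder_twist_deg
    return vx, vz, turn
```

The dead zone is what makes it feel ‘steady’ – small sways around the central spot don’t send you drifting off – and adjusting the two gains is how the sensitivity gets calibrated.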

We’ve had quite a bit of input over the last few days, with various people passing through to see us at the studios and others in contact virtually. That’s Nick di Vita, Laura Kriefman, Matthew Bickerton and me, Nick Rothwell in the middle… OK, the bottom is something I found lying around in the studios today – great to see the level of set-building the National Theatre are working at these days 😉