Archive

visual

We’re still working on the ‘new look’, and have made some progress – here’s a sample, incorporating at least some of the changes proposed yesterday. Strictly speaking we should probably call this the ‘London look’ – as we’re not really focusing on the aesthetics until the Mons residency in March. There’s a long way to go yet!

Although we’re supposed to be working purely on the technical/networking side of the project, we don’t seem to be able to resist tinkering with the aesthetics too.  We’re working on the ‘shadows’ – how the users leave traces in the space.  In Istanbul we had a live representation of the user, and particle trails left by the main points of the skeleton tracking. We’re now experimenting with something in between – ‘sculptures’, which are versions of the mesh left behind as the user moves (kind of like shedding a skin).

This is the first version, actually from late yesterday.  I don’t like it much, yet.  We’ve been discussing it today, and these are our notes as to how we want to evolve from here:

General:

‘sculptures’ and ‘trails’ need to seem like one and the same thing rather than two different entities.

Sculptures:

These dominate too much, especially those closest to the camera, which completely obliterate the trails. Those further away look much better, which leads me to believe that in the final (telepresence) scenario the ‘others’ would look OK, but your own sculptures would block out everything else.

Also, the sculptures give no impression of movement – because they are captured at regular intervals, they read like a moving body photographed with a strobe light – ie with all semblance of movement removed.

Suggested solutions would be to make the sculptures more transparent, and to capture them in a different way – certainly less regularly. The sampling could be tied to how much movement is going on at any particular time, or – best suggestion for now – ‘bursts’ of movement could be sampled, which would give a better record of the movement and make for more abstract shapes.
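
To pin the ‘burst’ idea down a bit, here’s a rough sketch of how the sampling might work – not Phill’s actual code, just the general shape of it, with all the names and numbers made up: let a movement ‘energy’ build up while the body is moving, and only shed a sculpture when it crosses a threshold.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mesh { std::vector<Vec3> vertices; };

// Average speed of the tracked joints between two frames (dt in seconds).
float meanJointSpeed(const std::vector<Vec3>& now,
                     const std::vector<Vec3>& prev, float dt) {
    if (now.empty() || prev.empty() || dt <= 0.0f) return 0.0f;
    float total = 0.0f;
    size_t n = std::min(now.size(), prev.size());
    for (size_t i = 0; i < n; ++i) {
        float dx = now[i].x - prev[i].x;
        float dy = now[i].y - prev[i].y;
        float dz = now[i].z - prev[i].z;
        total += std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
    }
    return total / n;
}

struct SculptureCapture {
    float energy = 0.0f;      // smoothed movement level
    float threshold = 0.5f;   // how much movement counts as a 'burst' (made-up value)
    float cooldown = 0.0f;    // stops one burst producing a pile of sculptures
    std::vector<Mesh> sculptures;

    void update(const Mesh& body, float jointSpeed, float dt) {
        // Leaky integrator: rises while the body is moving, decays when it's still.
        energy = energy * 0.95f + jointSpeed * dt;
        cooldown -= dt;
        if (energy > threshold && cooldown <= 0.0f) {
            sculptures.push_back(body);   // shed a 'skin' of the current mesh
            energy = 0.0f;
            cooldown = 1.0f;              // at most one sculpture per second
        }
    }
};
```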

Trails:

These need a bit more ‘volume’. Replacing the particle image (currently just a dot) with an open circle will improve this, but we need to produce circles frequently enough that they never look like a series of circles (paper chain) but always like a transparent tube – kind of like an electron microscope image of a hair. It would also be good if the diameter of this could vary – perhaps in accordance with the amount of movement again, or even randomly, but within constraints – ie with a ‘wobble’ rather than completely random.
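
For what it’s worth, the ‘wobble’ could be something as simple as this – a radius that takes a small random step each time a circle is emitted, but is clamped within limits. Again this is just a hypothetical sketch, not our actual code:

```cpp
#include <algorithm>
#include <random>

struct TrailCircle {
    float x, y, z;   // position along the trail
    float radius;    // varies with a constrained 'wobble'
};

class TrailEmitter {
public:
    TrailEmitter() : gen(std::random_device{}()), step(-0.02f, 0.02f) {}

    // Called once per skeleton-point update; emitting circles this often is
    // what makes them read as a continuous tube rather than a paper chain.
    TrailCircle emit(float x, float y, float z) {
        radius += step(gen);                        // small random nudge
        radius = std::clamp(radius, 0.05f, 0.25f);  // but kept within limits
        return TrailCircle{x, y, z, radius};
    }

private:
    std::mt19937 gen;
    std::uniform_real_distribution<float> step;
    float radius = 0.15f;
};
```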

We’ll keep tinkering over the weekend, but I’m pretty pleased with where we’ve ended up in the first week here in Istanbul – actually it’s only really been four days.  You can see the latest on the visual front above – basically it’s the skeleton particle trails shown below combined with the 3D mesh display, somewhat improved from the earlier versions shown a few days ago.

And below you can listen to an early version of the soundscape (play it with the video if you like, though of course it won’t be synced)  – this is basically just the ‘singing sand-dune drones’ emitted by the traces/sculptures.  The soundbursts themselves aren’t sounding very nice yet – something for the weekend..

This is very much the first first first first version, and I’ve no idea if the final thing will look and sound anything like this at all,  but it’s a start I think, and definitely something we can start to work on with the dancers on Monday..

 

We’ve had a real breakthrough today.  One of the most important aspects of me and my shadow, for me, is that players should actually be able to create, to shape, to sculpt the environment with their bodies and gestures.  Without this, it’s ‘just’ 3D telepresence (still a pretty exciting and new development, but not – I suspect – really unique to this project).  I might have downplayed this aspect when talking about it in the early days because I really had very little idea how it might function, or indeed what it might look and sound like.  But it’s really come together while we’re here, and for me that makes this residency totally worthwhile already.

For me, the idea really came together while thinking about how the sound might function (see below).  But I really also have to give Phill a lot of credit for some great brainstorming on this, as well as super-quick coding. His key idea was to combine the two things that the Kinect can do – it can give you a reasonably realistic 3D ‘mesh’ of what it’s seeing (ie the shape of a person’s body), and it can give you a ‘skeleton’ – basically a set of points for the key joints of the body, from which a ‘stick-man’ model can be derived.  Nobody much seems to be combining the two (though please feel free to correct me on that), but the combination seems to have a great deal of potential.

What you can see in the video here (apologies for quality, we’re just filming off a laptop screen) is a trail of glowing points – particles – left by each of the key skeletal points of the body (as mentioned, this is the exact same principle as I’m using with the sound).  These leave perfect 3D trails of movement (think ‘nude descending a staircase’, sort of) which will be an ideal starting point for sculptural forms.  The particles are smaller/less intense the faster you move – this might seem counter-intuitive, but it really works.  If you walk quickly through the space you’ll hardly make anything happen,  but if you stay still or move slowly the particles will slowly coalesce around you.
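
For anyone curious, the logic is roughly this (a simplified sketch with made-up names, not the code Phill actually wrote): work out each joint’s speed from frame to frame, and let the particle size and brightness fall off as the speed goes up.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Particle {
    Vec3 pos;
    float size;
    float brightness;
};

class JointTrail {
public:
    // Called once per frame for one skeleton joint; appends a new particle.
    void update(const Vec3& joint, float dt, std::vector<Particle>& out) {
        if (dt <= 0.0f) return;
        if (!initialised) { last = joint; initialised = true; }
        float dx = joint.x - last.x;
        float dy = joint.y - last.y;
        float dz = joint.z - last.z;
        float speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
        last = joint;

        // Fast movement -> small, dim particles; stay still and they coalesce.
        float slowness = 1.0f / (1.0f + speed);
        out.push_back({joint, 0.02f + 0.1f * slowness, slowness});
    }

private:
    Vec3 last{0.0f, 0.0f, 0.0f};
    bool initialised = false;
};
```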


Today we’ve been working to make the 3D data from the Kinect a bit more ‘visible’. The Kinect depth-tracking is only really designed for one thing: to separate players from the background to allow for accurate skeleton tracking. So, to get a more convincingly 3D-looking ‘point cloud’ out of the thing requires a bit more work.  This is a challenge a fair few people have tackled already, but Phill’s been building our own code to do it today.
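
For the technically minded, the core step is the standard pinhole-camera unprojection – turning each depth pixel into a 3D point. Something along these lines (a generic sketch rather than our actual code; the focal-length numbers are typical published values for the original Kinect, not anything we’ve measured ourselves):

```cpp
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

// depthMM: one 16-bit depth value (in millimetres) per pixel, row by row.
std::vector<Point3> depthToPointCloud(const std::vector<uint16_t>& depthMM,
                                      int width, int height) {
    // Approximate intrinsics for the original Kinect – assumed values here.
    const float fx = 594.2f, fy = 591.0f;
    const float cx = width * 0.5f, cy = height * 0.5f;

    std::vector<Point3> cloud;
    cloud.reserve(depthMM.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            uint16_t d = depthMM[v * width + u];
            if (d == 0) continue;          // 0 means 'no reading' at this pixel
            float z = d * 0.001f;          // millimetres -> metres
            cloud.push_back({(u - cx) * z / fx,    // back-project through the
                             (v - cy) * z / fy,    // pinhole camera model
                             z});
        }
    }
    return cloud;
}
```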

 

 

Above, you can see the usual basic image sourced from the Kinect – it’s pretty much flat, although the colour-coding indicates depth: here, the top half of the body (and the ball) is closer to the camera.

The next phase has been to transform this outline into a polygonal mesh (made up of triangles).  It’s 3D, but it doesn’t look it – this is partly because the (virtual) camera angle is pretty much ‘head on’, and there’s no calculation for perspective, so you don’t see the depth.
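
The meshing step is conceptually simple – roughly this, where each 2x2 block of neighbouring depth pixels becomes a pair of triangles (just a sketch of the general approach, not our actual code):

```cpp
#include <cstdint>
#include <vector>

struct Triangle { int a, b, c; };   // indices into a vertex array laid out one
                                    // vertex per depth pixel (row by row)

std::vector<Triangle> triangulateDepthGrid(const std::vector<uint16_t>& depthMM,
                                           int width, int height) {
    std::vector<Triangle> tris;
    for (int v = 0; v + 1 < height; ++v) {
        for (int u = 0; u + 1 < width; ++u) {
            int i00 = v * width + u, i10 = i00 + 1;
            int i01 = i00 + width,   i11 = i01 + 1;
            // Skip blocks with missing depth, otherwise the silhouette edges smear.
            if (depthMM[i00] == 0 || depthMM[i10] == 0 ||
                depthMM[i01] == 0 || depthMM[i11] == 0) continue;
            tris.push_back({i00, i10, i01});   // each 2x2 block of pixels
            tris.push_back({i10, i11, i01});   // becomes two triangles
        }
    }
    return tris;
}
```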

In the final image, additional geometry data has effectively been derived by calculating the normals of the polygon mesh – and the focal length and angle of the virtual camera have also been changed to make the depth more apparent.
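
If you’re wondering what ‘calculating the normals’ amounts to, it’s roughly this – cross products of the triangle edges, accumulated per vertex and normalised. Again a sketch rather than our actual code:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { int a, b, c; };

static Vec3 sub(const Vec3& p, const Vec3& q)   { return {p.x - q.x, p.y - q.y, p.z - q.z}; }
static Vec3 cross(const Vec3& p, const Vec3& q) {
    return {p.y * q.z - p.z * q.y,
            p.z * q.x - p.x * q.z,
            p.x * q.y - p.y * q.x};
}

// One normal per vertex: accumulate the face normals of every triangle that
// touches the vertex, then normalise. These normals are what let the renderer
// shade the mesh so the depth actually reads as depth.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<Tri>& tris) {
    std::vector<Vec3> normals(verts.size(), {0.0f, 0.0f, 0.0f});
    for (const Tri& t : tris) {
        Vec3 n = cross(sub(verts[t.b], verts[t.a]), sub(verts[t.c], verts[t.a]));
        for (int i : {t.a, t.b, t.c}) {
            normals[i].x += n.x;
            normals[i].y += n.y;
            normals[i].z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```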