Today we’ve been working to make the 3D data from the Kinect a bit more ‘visible’. The Kinect depth-tracking is only really designed for one thing: to separate players from the background to allow for accurate skeleton tracking. So, to get a more convincingly 3D-looking ‘point cloud’ out of the thing requires a bit more work. This is a challenge a fair few people have tackled already, but Phill’s been building our own code to do it today.
Above, you can see the usual basic image sourced from the Kinect – it’s pretty much flat, although the colour-coding indicates depth: here, the top half of the body (and the ball) is closer to the camera.
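Mapping each depth reading to a colour is the simplest way to make the raw data visible. A minimal sketch of the idea in Python/NumPy – the near/far range and the red-to-blue gradient here are assumptions for illustration, not the actual scheme used in our code:

```python
import numpy as np

def depth_to_colour(depth_mm, near=500.0, far=4000.0):
    """Map a depth image (in millimetres) to a red -> blue gradient.

    Nearer pixels come out red, farther ones blue. The near/far
    clipping values are placeholders, roughly the Kinect's usable range.
    """
    t = np.clip((depth_mm - near) / (far - near), 0.0, 1.0)
    rgb = np.empty(depth_mm.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = ((1.0 - t) * 255).astype(np.uint8)  # red channel = near
    rgb[..., 1] = 0
    rgb[..., 2] = (t * 255).astype(np.uint8)          # blue channel = far
    return rgb

# A toy 2x2 depth frame: the top row is closer to the camera.
frame = np.array([[600.0, 700.0], [3500.0, 3800.0]])
colours = depth_to_colour(frame)
```

The same mapping works per-pixel on a full 640×480 Kinect frame.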
The next phase has been to transform this outline into a polygonal mesh (made up of triangles). It’s 3D, but it doesn’t look it – partly because the (virtual) camera angle is pretty much ‘head on’, and partly because there’s no perspective calculation, so the depth simply doesn’t show.
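A common way to build such a mesh is to back-project each depth pixel into 3D and split every 2×2 block of pixels into two triangles. A sketch of that approach – the focal lengths here are assumed values (roughly the Kinect’s, in pixels), not taken from our actual code:

```python
import numpy as np

def depth_to_mesh(depth, fx=580.0, fy=580.0):
    """Back-project a depth grid into 3D vertices and triangulate it.

    fx/fy are assumed focal lengths in pixels. Each 2x2 cell of
    pixels becomes two triangles, giving a regular triangle mesh.
    """
    h, w = depth.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # perspective back-projection
    y = (v - cy) * z / fy
    verts = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(tris)

# A flat 3x3 patch of depth, 1 metre from the camera.
verts, tris = depth_to_mesh(np.full((3, 3), 1000.0))
```

In practice you’d also skip cells where the Kinect reports no depth, so holes in the data don’t get bridged by stray triangles.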
In the final image, additional shading information has been derived by calculating the normals of the polygon mesh; the focal length and angle of the virtual camera have also been changed to make the depth more apparent.
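The standard way to get a per-triangle normal is the cross product of two of the triangle’s edges. The post doesn’t say exactly how Phill computed them, so take this as an illustrative sketch of the usual technique:

```python
import numpy as np

def face_normals(verts, tris):
    """Unit normal for each triangle, via the cross product of two edges.

    verts: (N, 3) array of vertex positions.
    tris:  (M, 3) array of vertex indices, one row per triangle.
    """
    a = verts[tris[:, 0]]
    b = verts[tris[:, 1]]
    c = verts[tris[:, 2]]
    n = np.cross(b - a, c - a)                         # edge1 x edge2
    return n / np.linalg.norm(n, axis=1, keepdims=True)  # normalise

# One triangle lying flat in the xy-plane: its normal points along +z.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
normals = face_normals(verts, tris)
```

With normals in hand, even simple Lambertian shading (dotting each normal against a light direction) is enough to make the surface read as genuinely 3D.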