I recently gave a talk at an SEGD workshop on interactive technologies and wayfinding. It was a lot of fun, and I’ve posted the slides and a kinda rough transcription on my site.
There’s a Disqus thread at the bottom of the page, and I’d love any feedback or critiques you’d be willing to give, especially as I’m hoping to rewrite and improve it for a later presentation. Thanks!
99 Red Balloons, a little project I’m working on in an attempt to better understand the toxiclibs library and some of the “virtual reality” ideas around interaction design I want to explore with the Kinect. It’s a very, very early step in that direction. Also, it’s for a net art party my friend is throwing. Anyway, enjoy.
Not certain this video tag will work (it’s Facebook-hosted).
A quick hello world for something I’m working on re: Kinect. “Interaction points,” which’ll allow for quickly setting up virtual worlds that can be manipulated by virtualized people. They can be easily attached to other objects, locations, physics systems, etc. First step in a bigggeerrrr project, which’ll be pared down into a workshop in Jan/Feb.
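The idea can be sketched in a few lines. This is only a minimal, hypothetical illustration of what an “interaction point” might look like — a fixed spot in the virtual scene that activates when a tracked point (say, a Kinect hand position) comes within range. The class and method names are mine, not from any library:

```java
// Hypothetical sketch of an "interaction point": a spot in world space that
// fires when a tracked point (e.g. a Kinect-derived hand position) is nearby.
class InteractionPoint {
    final float x, y, z;   // location in world space
    final float radius;    // activation distance
    boolean active = false;

    InteractionPoint(float x, float y, float z, float radius) {
        this.x = x; this.y = y; this.z = z;
        this.radius = radius;
    }

    // Call once per frame with the tracked point; returns true while inside.
    // Uses squared distance to avoid a needless square root per frame.
    boolean update(float px, float py, float pz) {
        float dx = px - x, dy = py - y, dz = pz - z;
        active = dx * dx + dy * dy + dz * dz <= radius * radius;
        return active;
    }
}
```

From here, attaching a point to another object or a physics body is just a matter of updating its coordinates each frame from that object instead of keeping them fixed.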
There have been some fantastic projects with the Kinect lately, but this is probably one of the most interesting implementations I’ve seen. The Kinect’s 3D positioning is used in tandem with an EMF sensor to track and identify the electromagnetic forces around an object in the Kinect’s view, and this data is then rendered and overlaid with the image.
The ready availability of the Kinect’s 3D sensor gives an incredibly accessible method for “digitizing” the real world, to put a somewhat Captain Power spin on it. Augmented reality applications try desperately to do this via the “lens” metaphor on mobile devices, but suffer from a lack of accuracy and a lack of relevance.
In this case, the digital is simply permitting an understanding of a physical phenomenon that we’ve not the tools to immediately view, shifting the focus from augmentation to assistance. Anyway, really impressed — thanks to Peter Horvath for posting.
As for my own Kinect “studies”, I have it working fairly well. My current challenge is rendering the point cloud accurately as a mesh and interpreting movements and intersections in some meaningful way. I’m making use of toxiclibs to handle the really complex spatial stuff, and am trying to use convex hulls to make the point-cloud-to-mesh transformation. Needless to say, a lot of this weekend has been spent staring at Javadocs.
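For anyone unfamiliar, the convex hull idea is just “shrink-wrapping” a point set in its outermost boundary. Here’s a toy 2D version (gift wrapping) to show the principle — a real mesh from Kinect data needs a 3D hull or proper surface reconstruction, which a library will do far better than this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Toy 2D convex hull via gift wrapping (Jarvis march), to illustrate the
// idea of wrapping a point set in a boundary. Points are double[]{x, y}.
class Hull {
    // Cross product of (b - a) x (c - a); > 0 means c is left of line a->b.
    static double cross(double[] a, double[] b, double[] c) {
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    }

    static List<double[]> convexHull(List<double[]> pts) {
        // Start from the leftmost point, which is always on the hull.
        double[] start = pts.get(0);
        for (double[] p : pts) if (p[0] < start[0]) start = p;

        List<double[]> hull = new ArrayList<>();
        double[] current = start;
        do {
            hull.add(current);
            // Pick the candidate such that all other points lie to one side.
            double[] next = pts.get(0);
            for (double[] cand : pts) {
                if (next == current || cross(current, next, cand) > 0) next = cand;
            }
            current = next;
        } while (current != start);
        return hull;  // hull vertices in order; interior points dropped
    }
}
```

The appeal for point clouds is that interior points vanish, leaving only a watertight boundary — though for anything non-convex (like a person) a plain hull smooths away all the concavities, which is part of why this step is proving tricky.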
What will become endless nights of coding with my new Kinect have begun.
I’ve been wanting one since first hearing about a cheap, accessible 3D camera, so it was about time. Having been a long-time user of the PS3 Eye, it’s a natural next step. Installing the OSX drivers was a fairly painless task, though it was the first time I’d encountered CMake.
The ever-brilliant Daniel Shiffman has begun working on a set of Processing libraries around the OpenKinect drivers, which so far capture depth and image data. After some initial struggling, I was able to get a good framerate at 640x480 for a point cloud and associate a colour mapping to the points. I literally cannot wait to start at it with the OpenCV libraries, though whether I can stick with Processing in doing so is questionable.
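The core of getting a point cloud out of the depth image is a small bit of math that the early OpenKinect examples all share: convert the raw 11-bit depth reading to metres, then back-project each pixel through the depth camera’s intrinsics. A sketch, using the commonly circulated community calibration constants (these are approximations, not per-device measurements):

```java
// Depth-to-world conversion as used in early OpenKinect/Processing examples.
// The raw 11-bit depth value becomes metres via a community-derived
// approximation, then each pixel is back-projected through approximate
// intrinsics for the 640x480 depth camera. Constants are the commonly
// circulated calibration values, not measured for any particular device.
class DepthToWorld {
    static final double FX = 594.21, FY = 591.04; // focal lengths (pixels)
    static final double CX = 339.5,  CY = 242.7;  // principal point (pixels)

    // Raw 11-bit Kinect depth -> metres. 2047 marks "no reading".
    static double rawDepthToMeters(int raw) {
        if (raw >= 2047) return 0;
        return 1.0 / (raw * -0.0030711016 + 3.3309495161);
    }

    // Depth-image pixel (x, y) plus raw depth -> {x, y, z} in metres.
    static double[] depthToWorld(int x, int y, int raw) {
        double z = rawDepthToMeters(raw);
        return new double[] {
            (x - CX) * z / FX,
            (y - CY) * z / FY,
            z
        };
    }
}
```

Run over all 640x480 pixels per frame, that’s the whole point cloud; the colour mapping is then a matter of looking up the RGB image at (roughly) the corresponding pixel.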
All told, it was fun to hack around with the stuff last night. I’m looking forward to using it as a means of exploring gesture-based interactions and, specifically, some of the classic notions of “virtual reality,” as you can see in that image of me holding the world in my hand, created only a few hours after getting the Kinect itself.
Whether we’ll be seeing a Snow Crash-like “Street” is another matter, and I was really struck by how disorienting the act of “grasping” that sphere was. As these previously locked-away technologies become more accessible, we’re bound to see some absolutely incredible stuff emerge from their simply being available. But if my struggling to grasp that orb is any indication, we’ve got a very, very long way to go.
The Kinect as a means of altering one’s visual environment is definitely a novel use for the device, and it’s incredible to see it explored by a talented artist. As computer vision speeds its panoptic advance into public space, our environment will become filled with visual aids to computer vision techniques, allowing the categorization and sorting of the real world into addressable, identifiable objects. What began with the bar code or the scan card will become a sparkling world of lights and sounds just beyond our range of perception, but that we can still detect.
I’m reminded of the ever-present sparkle of the nanotechnological mites described in Neal Stephenson’s The Diamond Age. Will the lack of these things seem foreign to us in a few years’ time, like a natural environment with no hum of electricity?
“With these images I was exploring the unique photographic possibilities presented by using a Microsoft Kinect as a light source. The Kinect - an inexpensive videogame peripheral - projects a pattern of infrared dots known as “structured light”. Invisible to the eye, this pattern can be captured using an infrared camera.”