Friday, February 18th
2:37pm

2 notes



tags: nyc. monome. vj.

Galapagoose playing at the NYC basement party I was VJing.  Incredible, incredible sets by everyone, it was unreal.  First time I’d ever been to New York too.  I picked up a book on Nam June Paik, cool stuff~

Thursday, January 27th
7:18pm

14 notes



tags: cv. opencv. computer vision.

Some early computer vision work.  Soooo good.  You have to watch this.

Yeah… Moore’s law. For a lot of things, the underlying techniques really haven’t changed much.

Sunday, January 23rd
2:59am



tags: bruce sterling. vimeo.

If you watch nothing else, watch this.

I was happy to hear his mentions and descriptions of Manovich’s and Reas’s work, but it’s incredible how he brings them together, especially the mention of Reality Surfing.  Jeez.

Tuesday, December 28th
11:04pm

2 notes



tags: kinect. processing. internet. fun. luft.

99 Red Balloons, a little project I’m working on in an attempt to better understand the toxiclibs library and some of the “virtual reality” ideas around interaction design I want to explore with the Kinect. Very, very early step in this regard. Also, it’s for a net art party my friend is throwing. Anyway, enjoy.
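
For the curious, the floating-balloon motion is exactly the kind of thing toxiclibs’ Verlet physics makes nearly trivial. Here is a minimal sketch of the general idea, not the project code, and hedged since class names like GravityBehavior shift a little between toxiclibs releases:

import toxi.geom.*;
import toxi.physics2d.*;
import toxi.physics2d.behaviors.*;

VerletPhysics2D physics;

void setup() {
  size(640, 480);
  physics = new VerletPhysics2D();
  // Negative y "gravity" acts as buoyancy, so the balloons drift upward.
  physics.addBehavior(new GravityBehavior(new Vec2D(0, -0.02)));
  for (int i = 0; i < 99; i++) {
    physics.addParticle(new VerletParticle2D(random(width), height + random(100)));
  }
}

void draw() {
  background(255);
  physics.update();
  fill(255, 0, 0);
  noStroke();
  for (VerletParticle2D p : physics.particles) {
    ellipse(p.x, p.y, 20, 24);
    // Recycle balloons that have floated off the top of the window.
    if (p.y < -20) p.set(random(width), height + 20);
  }
}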

Monday, December 20th
10:01am



tags: kinect.

Not certain this video tag will work (it’s a Facebook embed).

A quick hello world for something I’m working on re: Kinect.  “Interaction points,” which’ll allow for quickly setting up virtual worlds that can be manipulated by virtualized people.  They can be easily attached to other objects, locations, physics systems, etc.  First step in a bigggeerrrr project, which’ll be pared down into a workshop in Jan/Feb.
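
To make “interaction points” a little less hand-wavy, here is roughly the shape of the thing as a Processing sketch. Everything in it (the names, the distance test, the callbacks) is a simplified placeholder rather than the actual code:

class InteractionPoint {
  PVector position;  // location in the Kinect's world space
  float radius;      // trigger radius around the point
  boolean active;    // is a tracked point currently inside?

  InteractionPoint(PVector position, float radius) {
    this.position = position;
    this.radius = radius;
  }

  // Call once per frame with whatever is being tracked: hands, a body
  // centroid, another object's position, a particle in a physics system.
  void update(ArrayList<PVector> trackedPoints) {
    boolean hit = false;
    for (PVector p : trackedPoints) {
      if (PVector.dist(p, position) < radius) {
        hit = true;
        break;
      }
    }
    if (hit && !active) onEnter();  // edge-triggered: fires once on entry
    if (!hit && active) onExit();
    active = hit;
  }

  void onEnter() { /* attach behaviour here: impulse, sound, state change */ }
  void onExit()  { }
}

Since it only cares about positions, the same point can be pinned to a fixed spot, to another object, or to a physics particle, which is what should make it easy to attach to everything listed above.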

Friday, December 17th
4:10pm

(via quietbabylon)

46 notes

Quiet Babylon: Interrobang-ON‽

This is a great post.  Give it a read.

quietbabylon:

Friends!

Computers are PRETTY NEAT

Everyone uses a computer nowadays. I bet you are using one right now! Computers are great. They can solve hard math problems, calculate artillery firing tables, and show a picture of a cat. Amazing!

But there is a CATCH!

When he was inventing the computer,…

Sunday, December 5th
5:38pm



tags: kinect. emf. mit. media lab.

There have been some fantastic projects with the Kinect lately, but this is probably one of the most interesting implementations I’ve seen.  The Kinect’s 3D positioning is used in tandem with an EMF sensor to track and identify the electromagnetic fields around an object in the Kinect’s view, and this data is then rendered and overlaid on the image.
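
The compositing itself is conceptually simple: project the sensed 3D position into the camera image and tint it by field strength. A toy version of just that step, with rough ballpark Kinect intrinsics and a faked wandering “sensor”, nothing like the actual project code:

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  // Fake a wandering sensor position (metres) and field reading, purely
  // for illustration; the real project gets these from Kinect + EMF probe.
  PVector world = new PVector(sin(frameCount * 0.02) * 0.5,
                              cos(frameCount * 0.017) * 0.3, 2);
  drawEmfSample(world, noise(frameCount * 0.01));
}

// Pinhole projection of a 3D point into image coordinates. The focal
// length and image centre are ballpark values, not calibration data.
PVector projectToImage(PVector world) {
  float f = 525;
  float cx = 320, cy = 240;
  return new PVector(world.x / world.z * f + cx,
                     world.y / world.z * f + cy);
}

void drawEmfSample(PVector world, float strength) {
  PVector img = projectToImage(world);
  // Map a 0..1 field strength to a blue-to-red tint.
  fill(lerpColor(color(0, 0, 255), color(255, 0, 0), constrain(strength, 0, 1)));
  noStroke();
  ellipse(img.x, img.y, 12, 12);
}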

The ready availability of the Kinect’s 3D sensor gives an incredibly accessible method for “digitizing” the real world, to put a somewhat Captain Power spin on it.  Augmented reality applications try desperately to do this via the “lens” metaphor on mobile devices, but suffer from a lack of accuracy and relevance.

In this case, the digital is simply permitting an understanding of a physical phenomenon that we haven’t the tools to immediately view, shifting the focus from augmentation to assistance.  Anyway, really impressed; thanks to Peter Horvath for posting.

As for my own Kinect “studies”, I have it working fairly well.  My current challenge is rendering the point cloud accurately as a mesh and interpreting movements and intersections in some meaningful way.  I’m making use of toxiclibs to handle the really complex spatial stuff, and am trying to use convex hulls to build the mesh.  Needless to say, a lot of this weekend has been spent staring at Javadocs.
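
In the meantime, a far simpler route than a proper convex hull is to stitch the depth map into triangles as a regular grid, which at least I can sketch.  worldPoint() below is a stand-in for the depth-to-world conversion, not a real library call, and the whole thing is the naive approach rather than what I’m actually building:

import toxi.geom.*;
import toxi.geom.mesh.*;

// Naive mesh-from-depth sketch: stitch each 2x2 block of neighbouring
// depth samples into two triangles.
TriangleMesh meshFromDepth(int w, int h, int step) {
  TriangleMesh mesh = new TriangleMesh();
  for (int y = 0; y < h - step; y += step) {
    for (int x = 0; x < w - step; x += step) {
      Vec3D a = worldPoint(x, y);
      Vec3D b = worldPoint(x + step, y);
      Vec3D c = worldPoint(x, y + step);
      Vec3D d = worldPoint(x + step, y + step);
      mesh.addFace(a, b, c);  // upper-left triangle of the quad
      mesh.addFace(b, d, c);  // lower-right triangle of the quad
    }
  }
  return mesh;
}

// Placeholder: the real version back-projects pixel (x, y) using its raw
// depth reading, as in the point cloud code in the post below.
Vec3D worldPoint(int x, int y) {
  return new Vec3D(x, y, 0);
}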

Friday, December 3rd
1:10pm



tags: kinect. processing. opencv.

What will become endless nights of coding away on my new Kinect has begun.

I’ve been wanting one since first hearing about a cheap, accessible 3D camera, so it was about time.  Having been a long-time user of the PS3 Eye, it’s a natural first step.  Installing the OS X drivers was a fairly painless task, though it was the first time I’d encountered CMake.

The ever-brilliant Daniel Shiffman has begun working on a set of Processing libraries around the OpenKinect drivers, which so far capture depth and image data.  After some initial struggling, I was able to get a good framerate at 640x480 for a point cloud and associate a colour mapping with the points.  I literally cannot wait to start at it with the OpenCV libraries, though whether I can stick with Processing in doing so is questionable.
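
For anyone wanting to follow along, the heart of the point cloud is just a loop over the raw depth array.  A stripped-down version in the spirit of Shiffman’s example follows; the API names and calibration constants are from memory and may not match the library exactly:

import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(800, 600, P3D);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
}

void draw() {
  background(0);
  translate(width / 2, height / 2, -50);
  int[] depth = kinect.getRawDepth();  // 640x480 grid of 11-bit readings
  int skip = 4;                        // sample every 4th pixel for speed
  stroke(255);
  for (int y = 0; y < 480; y += skip) {
    for (int x = 0; x < 640; x += skip) {
      int raw = depth[x + y * 640];
      if (raw < 2047) {  // 2047 means "no depth reading here"
        // Raw value to metres (the widely circulated OpenKinect formula),
        // then a rough pinhole back-projection to world coordinates.
        float z = 1.0 / (raw * -0.0030711016 + 3.3309495161);
        float wx = (x - 320) * z / 525.0;
        float wy = (y - 240) * z / 525.0;
        point(wx * 200, wy * 200, -z * 200);
      }
    }
  }
}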

All told, it was fun to hack around with the stuff last night.  I’m looking forward to using it as a means of exploring gesture-based interactions and, specifically, some of the classic notions of “virtual reality,” as you can see in that image of me holding the world in my hand, created only a few hours after getting the Kinect itself.

Whether we’ll be seeing a Snow Crash-like “Street” is another matter, and I was really struck by how disorienting the act of “grasping” that sphere was.  As these previously locked-away technologies become more accessible, we’re bound to see some absolutely incredible stuff emerge from them simply being available.  But if my struggling to grasp that orb is any indication, we’ve got a very, very long way to go.

Tuesday, November 30th
9:49am

2 notes



tags: gender. diaspora. analytics. statistics. webforms.
Diaspora and Gender

Gender is frequently treated as a binary or “N/A” option in online forms built to collect demographic information, analytics being the lifeblood of many online applications.

Freed from that need, the open social network Diaspora turns gender into a plain text field, letting us enter whatever we want.  Which raises the question: how accurate can these binary or opt-out analytics be at identifying trends and behavior when they don’t account for something as fundamental and varied as gender?  What else are we missing?

Friday, November 26th
12:11pm

(via arielwaldman)

14 notes



tags: kinect. ar.

The Kinect as a means of altering one’s visual environment is definitely a novel use for the device, and it’s incredible to see it explored by a talented artist.  As computer vision speeds its panoptic advance into public space, our environment will become filled with visual aids to computer vision techniques, allowing the categorization and sorting of the real world into addressable, identifiable objects.  What began with the bar code or the scan card will become a sparkling world of lights and sounds just beyond our range of perception, but one we can still detect.

I’m reminded of the ever-present sparkle of nanotechnological mites described in the world of The Diamond Age by Neal Stephenson.  Will the lack of these things seem foreign to us in a few years’ time, like a natural environment with no hum of electricity?

arielwaldman:

“With these images I was exploring the unique photographic possibilities presented by using a Microsoft Kinect as a light source. The Kinect - an inexpensive videogame peripheral - projects a pattern of infrared dots known as “structured light”. Invisible to the eye, this pattern can be captured using an infrared camera.”

Thursday, November 25th
The Rapid Prototyping of Interaction Design

Yesterday I gave a short talk on the Rapid Prototyping of Interaction Design for a professional development event at the University of Toronto’s Knowledge Media Design Institute.  These guys are doing some incredibly interesting work in Toronto right now, so it was a surprise and an honour to have been invited to speak.  I gave a presentation chatting briefly about how I got into Interaction Design after graduating as a political science specialist last year, and about five principles I’ve come to apply in my use of Rapid Prototyping as a design practice.

You can access the contents of my lecture at http://readywater.ca/kmdi and the slides themselves at http://readywater.ca/kmdi/slides.pdf

Sunday, November 21st
Knowledge Media Design Professional Development Event

I’m giving a short lil’ talk this Wednesday about rapid prototyping as an Interaction Designer, and what I’ve learned in the past year and a half since graduating with a Political Science specialist degree from the University of Toronto.

It’s exciting to be able to share what I’ve been learning with others entering or about to enter similar fields, and to learn from those more established (I’m by farrrrr the most junior person there).  Should be a fun event, and many thanks to Margaret for inviting me to speak!

Now… to finish slides.

Friday, November 19th
Bending the Lines of Sight

MIT is working on developing the theoretical and practical foundations for machine vision beyond the line-of-sight context.  In short: cameras that can see around goddamn corners.

Thursday, November 18th
1:23pm



tags: streetview. geolocation.
The 9 Eyes of Google Streetview

I learned of this project last year on ArtFagCity, but somehow didn’t know about the blog.  Happy browsing.  Google Street View is one of the most interesting tools for understanding space and place today, at least in the online world.  An understanding and appreciation of what it reveals about our society is important, I think.

Tuesday, November 16th
2:47pm



tags: glitch. architecture.
Physical Glitch Architecture

These images (link thanks to Wes Hodgson) are of pre-fabricated concrete dwellings which, by some error in the process, ended up completely different from what was intended.

As we move closer and closer to a world of rapid fabrication, I can’t help but wonder how our appreciation for flaws and error will continue to evolve.
