OpenGL Invades the Real World

Augmented reality systems are beginning to look pretty good these days. The videos below show some recent results from an ISMAR paper by Georg Klein. The graphics are inserted directly into the live video stream, so you can play with them as you wave the camera around. To do this, the system needs to know where the camera is, so that it can render the graphics at the right size and position. Figuring out the camera motion by tracking features in the video turns out to be surprisingly hard, and people have been working on it for years. As you can see below, the current crop of solutions is pretty solid, and runs at frame rate too. More details are on Georg's website.

[Embedded YouTube videos: AR tracking demo]
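
As an aside on the rendering half of the problem: once the tracker gives you a camera pose for each frame, dropping virtual geometry into the video is just standard pinhole-camera maths. The snippet below is a minimal sketch of that idea using OpenCV's solvePnP and projectPoints; the map points, the "tracked" features and the camera intrinsics are all made-up placeholders, and the real tracking system described in the paper obviously does far more sophisticated tracking and mapping than this.

```python
import numpy as np
import cv2

# Placeholder pinhole intrinsics (focal length and principal point) -- made up.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume no lens distortion

# Hypothetical 3D map points in the world frame (e.g. corners in a desk scene).
map_points = np.array([[0.0, 0.0, 0.0],
                       [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0],
                       [0.0, 0.2, 0.0],
                       [0.0, 0.0, 0.1],
                       [0.2, 0.2, 0.1]])

# Pretend the feature tracker found these points in the current frame by
# projecting them through a made-up "true" camera pose.
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.0, 0.0, 1.0])
image_points, _ = cv2.projectPoints(map_points, true_rvec, true_tvec, K, dist)

# Recover the camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(map_points, image_points, K, dist)

# With the pose known, any virtual geometry can be drawn in place. Here we
# just project one virtual point offset 20 cm along the map's z-axis.
virtual_point = np.array([[0.1, 0.1, 0.2]])
pixels, _ = cv2.projectPoints(virtual_point, rvec, tvec, K, dist)
print("draw virtual object at pixel", pixels.ravel())
```

In the real system the 2D-3D matches come from the tracker's own map rather than being handed to you, and the pose estimate is refined continuously, but that final projection step is roughly what the graphics insertion boils down to.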

Back in 2005, Andy Davison’s original augmented reality system got me excited enough that I decided to do a PhD. The robustness of these systems has improved a lot since then, to the point where they’re only a short step away from making good AR games possible. In fact, there are a few other cool computer-vision-based game demos floating around the lab at the moment. It’s easy to see this starting a new gaming niche. Basic vision-based games have been around for a while, but the new systems are a real shift in gear.

There are still some problems to be ironed out – current systems don’t deal with occlusion at all, for example, and you can see some other issues in the video involving moving objects and repetitive texture. Still, it looks like they’re beginning to work well enough to start migrating out of the lab. The first applications will definitely be of the camera-and-screen variety; head-mounted display systems are still some way off, mainly because decent displays just don’t exist yet.

(For people who wonder what this has to do with robotics – the methods used for tracking the environment here are basically identical to those used for robot navigation over larger scales.)

Citation: "Parallel Tracking and Mapping for Small AR Workspaces", Georg Klein and David Murray, ISMAR 2007.

Big Dog on Ice

Boston Dynamics just released a new video of Big Dog, their very impressive walking robot. This time it tackles snow, ice and jumping, as well as its old party trick of recovering after being kicked. Apparently it can carry 150 kg too. This is an extremely impressive demo – it seems light-years ahead of any other walking robot I’ve seen.

[Embedded YouTube video: Big Dog demo]

I must admit to having almost no idea how the robot works. Apparently it uses joint sensors, foot pressure sensors, a gyroscope and stereo vision. Judging from the speed of the reactions, I doubt vision plays much of a role. It looks like the control is purely reactive – the robot internally generates a simple gait (ignoring the environment), and then responds to disturbances to keep itself stable. While they’ve obviously got a pretty awesome controller, even passive mechanical systems can be surprisingly stable with good design – have a look at this self-stabilizing bicycle.
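
To make the "purely reactive" guess concrete, here is a rough sketch of what such a controller could look like: an open-loop gait generator produces nominal leg-length targets, and a fast feedback term nudges those targets to counter whatever body tilt the gyro reports. To be clear, this is my own toy illustration; the gains, names and sign conventions are all invented and have nothing to do with Boston Dynamics' actual controller.

```python
import math

# Invented parameters -- purely illustrative, not the real robot's numbers.
NOMINAL_LEG_LEN = 0.7            # metres
GAIT_FREQ_HZ = 1.5               # stride frequency of the internal gait
STEP_AMPLITUDE = 0.05            # how much each leg shortens during its swing
KP_ROLL, KP_PITCH = 0.3, 0.3     # tilt feedback gains
DT = 0.002                       # 500 Hz control loop

LEGS = ("front_left", "front_right", "hind_left", "hind_right")
# Sign of each leg's length change for a given roll / pitch error.
ROLL_SIGN = {"front_left": -1, "front_right": +1, "hind_left": -1, "hind_right": +1}
PITCH_SIGN = {"front_left": -1, "front_right": -1, "hind_left": +1, "hind_right": +1}

def nominal_gait(t):
    """Open-loop trot: diagonal leg pairs shorten in antiphase,
    completely ignoring the terrain."""
    phase = 2.0 * math.pi * GAIT_FREQ_HZ * t
    up, down = max(0.0, math.sin(phase)), max(0.0, -math.sin(phase))
    return {
        "front_left":  NOMINAL_LEG_LEN - STEP_AMPLITUDE * up,
        "hind_right":  NOMINAL_LEG_LEN - STEP_AMPLITUDE * up,
        "front_right": NOMINAL_LEG_LEN - STEP_AMPLITUDE * down,
        "hind_left":   NOMINAL_LEG_LEN - STEP_AMPLITUDE * down,
    }

def control_step(t, roll, pitch):
    """One control tick: nominal gait plus a fast correction that lengthens
    the legs on the low side of the body (roll/pitch would come from the gyro)."""
    targets = nominal_gait(t)
    for leg in LEGS:
        targets[leg] += KP_ROLL * ROLL_SIGN[leg] * roll \
                      + KP_PITCH * PITCH_SIGN[leg] * pitch
    return targets

# Toy usage: simulate a shove that gradually tips the body to one side.
for i in range(3):
    t = i * DT
    print(t, control_step(t, roll=0.05 * i, pitch=0.0))
```

Even something this crude will lean into a push, which is presumably why the kick-recovery trick looks so fast: no planning is involved, just a tight sensor-to-actuator loop.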

The one part of the video where it looks like the control isn’t purely reactive is the sped-up sequence towards the end where it climbs over building rubble. There it does seem to be choosing its foot placement. I would guess they’re just beginning to integrate some vision information. Unsurprisingly, walking with planning is currently much slower than “walking by moving your legs”.

Either way, I guess DARPA will be suitably impressed.

Update: More details on how the robot works here.