More from ISRR

ISRR finished today. It’s been a good conference: light on detailed technical content, but high on interaction, and a great way to get an overview of parts of robotics I rarely get to see.

One of the highlights of the last two days was a demo from Japanese robotics legend Shigeo Hirose, who put on a show with his ACM-R5 swimming snake robot in the hotel’s pool. Like many Japanese robots, it’s remote-controlled rather than autonomous, but it’s a marvellous piece of mechanical design. Also on show were a hybrid roller-walker robot and some videos of a massive seven-ton climbing robot for highway construction.


Another very interesting talk, with some neat visual results, was given by Shree Nayar on understanding illumination in photographs. If you take a picture of a scene, the light that reaches the camera can be thought of as having two components – direct and global. The “direct light” leaves the light source and arrives at the camera via a single reflection off the object. The “global light” takes more complicated paths, for example via multiple reflections, subsurface scatter, or volumetric scatter. What Nayar showed was that by controlling the illumination, it’s possible to separate the direct and global components of the lighting. Actually, this turns out to be almost embarrassingly simple to do – and it produces some very interesting results. Some are shown below, and there are many more here. It’s striking how much the direct-only photographs look like renderings from simple computer graphics systems like OpenGL. Most of the reason early computer graphics looked unrealistic was the difficulty of modelling the global illumination component. The full paper is here.

Scene, direct component, and global component
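For the curious, the separation itself is just simple per-pixel arithmetic. The sketch below captures the basic idea as I understand it from the paper: capture a stack of images under shifted high-frequency illumination patterns (checkerboards, say) in which a fraction alpha of the scene is lit at any time, then take per-pixel maxima and minima across the stack. The function name, array shapes and default alpha are my own illustration, not code from the paper.

```python
import numpy as np

def separate_direct_global(images, alpha=0.5):
    """Estimate direct and global components from a stack of images.

    images: array of shape (N, H, W), captured under N shifted
            high-frequency illumination patterns in which a fraction
            `alpha` of the scene is lit in each frame.
    Returns (direct, global_) images of shape (H, W).
    """
    stack = np.asarray(images, dtype=np.float64)
    l_max = stack.max(axis=0)   # pixel's patch lit: direct + alpha * global
    l_min = stack.min(axis=0)   # pixel's patch unlit: roughly alpha * global
    direct = l_max - l_min
    global_ = l_min / alpha
    return direct, global_
```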

Lots of other great technical talks too, but obviously I’m biased towards posting about the ones with pretty pictures!

Citation: “Visual Chatter in the Real World”, S. Nayar et al., ISRR 2007.

ISRR Highlights – Day 1

I’m currently in Hiroshima, Japan at ISRR. It’s been a good conference so far, with lots of high quality talks. I’m also enjoying the wonderful Japanese food (though fish for breakfast is a little strange).

One of the most interesting talks from Day 1 was about designing a skin-like touch sensor. The design is ingeniously simple, consisting of a layer of urethane foam with some embedded LEDs and photodiodes. The light from the LED scatters into the foam and is detected by the photodiode. When the foam is deformed by pressure, the amount of light reaching the photodiode changes. By arranging an array of these sensing sites under a large sheet of foam, you get a skin-like large-area pressure sensor. The design is simple, cheap, and appears to be quite effective.

Principle of the Sensor
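Just to make the principle concrete, here’s a toy sketch of how readings from an array of these sensing sites might be turned into a pressure map. The linear per-site model, the baseline values and the gain are all my own assumptions for illustration – the paper doesn’t spell out the sensor’s response curve.

```python
import numpy as np

def estimate_pressure(readings, baseline, gain):
    """Convert raw photodiode readings (one value per sensing site) into
    rough pressure estimates using an assumed linear per-site model."""
    delta = np.asarray(readings, dtype=float) - baseline  # change due to foam deformation
    return gain * delta

# Hypothetical 4x4 patch of sensing sites
baseline = np.full((4, 4), 512.0)  # ADC counts with no load (assumed)
gain = np.full((4, 4), 0.05)       # kPa per ADC count (assumed)
frame = baseline.copy()
frame[1:3, 1:3] += [[18.0, 23.0], [28.0, 48.0]]  # a press in the middle
pressure_map = estimate_pressure(frame, baseline, gain)
```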

Having a decent touch sensor like this is important. People rely on their sense of touch much more than they realize – one of the presenters demonstrated this by showing videos of people trying to perform simple mechanical tasks with their sensory nerves anaesthetised (they weren’t doing well). Walking robots weren’t getting very far until people realized the importance of having pressure sensors in the soles of the feet.

The authors were able to show some impressive new abilities with a humanoid robot using their sensor. Unfortunately I can’t find their videos online, but the figure below shows a few frames of the robot picking up a 30 kg load. Using its touch sensor, the robot can steady itself against the table, which helps with stability.

Touching the washing

I get the impression that the sensor is limited by the thickness of the foam – too thick to use on fingers, for example. It’s also a long way from matching the abilities of human skin, which has much higher resolution as well as sensitivity to other stimuli such as heat. Still, it’s a neat technology!

Update: Here’s another image of the robot using its touch sensor to help with a roll-and-rise manoeuvre. There’s a video over at BotJunkie.

Citation: “Whole body haptics for augmented humanoid task capabilities”, Yasuo Kuniyoshi, Yoshiyuki Ohmura, and Akihiko Nagakubo, International Symposium on Robotics Research 2007.

Off to ISRR

For the next week I’ll be in Japan attending the International Symposium on Robotics Research. Should be lots of fun, and a good time to find out about all the new developments outside of my little corner of the robot research universe. Come say hello if you’re at the conference.

Urban Challenge Winners Announced

1st Place – Tartan Racing (Carnegie Mellon)

2nd Place – Stanford Racing Team

3rd Place – Victor Tango (Virginia Tech)

That’s all the info on the web at the moment. More details should be available soon. Check Wired or TGDaily.

Update:
The details are out (video, photos, more photos). The biggest surprise was that the final ordering came down to time alone; no team was penalized for violating the rules of the road (I wonder if this can be correct – the webcast showed Victor Tango mounting the kerb at one point). On adjusted time, Tartan was about 20 minutes ahead of Stanford, with Victor Tango another 20 minutes behind. MIT placed fourth.

DARPA director Tony Tether seemed to state quite strongly that this will be the final Grand Challenge. I’d wait and see on that front – I seem to remember similar statements after the 2005 Challenge. It’s possible the event will continue, but not under DARPA. What exactly a future challenge would involve is not obvious. There’s still a lot of work to be done to build reliable autonomous vehicles, but many of the basics have now been covered. The Register reports that Red Whittaker is proposing an endurance event to test performance over a longer period with variable weather conditions. I think a more interesting challenge would be to raise the bar on sensing. Right now the teams are heavily reliant on pre-constructed digital maps and GPS. In principle, there’s no reason they couldn’t run without GPS using only a normal road map, but taking the crutches away would force the teams to deal with some tough issues. It would be a significant step up, but no bigger a jump than the one from the 2005 Challenge to yesterday’s race.

Whatever DARPA decides to do, I hope they don’t make the mistake of walking away from this prematurely. The Grand Challenges have built up a big community of researchers around autonomous vehicles. They’re also priceless PR for science and engineering in general. I think the teams are resourceful enough to find funding for themselves, but without the crucial ingredient of a public challenge to work toward, things may lose momentum. The next time a politician frets about the low uptake of science courses, I hope someone suggests funding another Grand Challenge.

Six Robots Cross the Line

The Urban Challenge is over – six of the eleven finalists completed the course. Stanford, Tartan Racing and Victor Tango all finished within a few minutes of each other, just before DARPA’s 6-hour time limit. Ben Franklin, MIT and Cornell also finished, but it looks like they were outside the time limit. The DARPA judges have now got to collate all the data about how well the robots obeyed the rules of the road, and will announce a winner tomorrow morning. It’s going to be very close. From watching the webcast, it looks like either Stanford or Tartan Racing will take the top spot, but making the call between them will be very hard. Both put in almost flawless performances.

Junior Finishes

Six hours of urban driving, without any human intervention, is quite a remarkable feat. In fact, watching the best cars, I quickly forgot that they weren’t being driven by humans. That’s really quite amazing. I can’t think of any higher praise for the achievement of the competitors.

There were some thrills and spills along the way – TerraMax rammed a building, TeamUCF wandered into a garden, and MIT and Cornell had a minor collision. Once the weaker bots were eliminated though, everything went remarkably smoothly. The last four or five hours passed almost without event. MIT clearly had some trouble, randomly stopping and going very slowly on the off-road sections (looks like their sensor thresholds were set too low), but they’re a first time entry, so getting to the finish line at all is a major achievement.

DARPA put on an extremely professional event. In an interview after the finish, DARPA director Tony Tether said he didn’t expect to run another Challenge. It will be interesting to see where autonomous driving goes from here. The top teams have clearly made huge progress, but the technology is still a long way from the point where you’d let it drive the kids to school. Many things about the challenge were simplified – there were no pedestrians to avoid, and the vehicles all had precise pre-constructed digital maps of the race area, specifying things like where the stop lines were. Putting this technology to use in the real world is still some distance away, but much closer than anyone would have imagined five years ago.

Spot the Sensor

Watching all the videos and photography coming out of the Urban Challenge, the thing I pay most attention to is the racks of sensors sitting on top of the robots. The sensor choices the teams have made are quite different from those in the 2005 Grand Challenge. There has been a big move to more sophisticated laser systems, and some notable absences in the use of cameras. Here’s a short sensor-spotter’s guide:

Lasers

For almost all the competitors, the primary sensor is the time-of-flight lidar, often called simply a laser scanner. These are very popular in robotics, because they provide accurate distances to obstacles with higher robustness and less complexity than alternatives such as stereo vision. Some models to look out for:

SICK

SICK Lidar

Used by 26 of the 36 semi-finalists, these blue boxes are ubiquitous in robotics research labs around the world because they’re the cheapest decent lidar available and are more than accurate enough for most applications. Typically they are operated with a maximum range of 25m, with distances accurate to a few centimetres. They’re 2D scanners, so they only see a slice through the world. This is normally fine for dealing with obstacles like walls or trees which extend vertically from the ground, but can land you in trouble with overhanging obstacles that aren’t at the same height as the laser. In the previous Challenge, these were the primary laser sensors for many teams. This time around they seem to be mostly relegated to providing some extra sensing in blind spots.
SICK scanners have a list price of around $6,000, but there is a low-price deal for Grand Challenge entries. Indeed, SICK has had so much business and publicity from the Grand Challenge that this year they decided to enter a team of their own.
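To give a feel for what “a slice through the world” means in practice, here’s a minimal sketch that converts a single 2D scan into Cartesian points in the sensor frame. The field of view and maximum range are placeholder values rather than the device’s actual configuration.

```python
import numpy as np

def scan_to_points(ranges, fov_deg=180.0, max_range=25.0):
    """Convert one 2D laser scan (a list of ranges swept across the field
    of view) into x, y points in the sensor frame."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, len(ranges)))
    valid = ranges < max_range                  # drop "no return" readings
    x = ranges[valid] * np.cos(angles[valid])   # forward
    y = ranges[valid] * np.sin(angles[valid])   # left
    return np.stack([x, y], axis=1)
```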

Velodyne

Velodyne Lidar

New kid on the block for the Urban Challenge, the Velodyne scanner is conspicuously popular this year. It’s used by 12 of the 36 semi-finalists, including most of the top teams. With a list price of $75,000, the Velodyne is quite a bit pricier than the common SICK. However, instead of containing just a single laser, the Velodyne has a fan of 64 lasers, giving a genuine 3D picture of the surroundings.

There’s an interesting story behind the Velodyne sensor. Up until two years ago, Velodyne was a company that made subwoofers. Its founders decided to enter the 2005 Grand Challenge as a hobby project. Back then, the SICK scanner was about the best available, but it didn’t provide enough data, so many teams were loading up their vehicles with racks of SICKs. Team DAD instead produced a custom laser scanner that was a big improvement on what was available at the time. Their website illustrates the change quite nicely. For the Urban Challenge, they decided to concentrate on selling their new scanner to other teams instead of entering themselves. I’m sure this is exactly the kind of ecosystem of technology companies DARPA dreams about creating with these challenges.

I understand that the Velodyne data is a bit noisier than a typical SICK’s because of cross-talk between the lasers, but it’s obviously more than good enough to do the job. These sensors produce an absolute flood of data – more than a million points a second – and dealing with that is driving a lot of teams’ computing requirements.
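One generic way of taming a point stream like that – not necessarily what any of the teams actually do, just a common trick – is to thin it with a voxel grid, keeping a single point per cell:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Thin an (N, 3) point cloud by keeping one point per cubic cell of
    side `voxel_size` metres."""
    points = np.asarray(points, dtype=float)
    cells = np.floor(points / voxel_size).astype(np.int64)   # cell index per point
    _, first = np.unique(cells, axis=0, return_index=True)   # first point in each cell
    return points[np.sort(first)]
```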

Teams who couldn’t afford the hefty price tag of this sensor have improvised Velodyne-like scanners by putting SICKs on turntables or pan-tilt units, but the SICK wasn’t designed for applications like this, so the data is quite sparse and it’s tricky to synchronize the laser data with the pan-tilt position.
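The geometry of a spinning SICK is simple enough – each 2D scan is just rotated by the turntable angle at which it was taken – so the synchronisation, rather than the maths, is the hard part. A toy version, assuming a vertically mounted scanner and ignoring the offset between the rotation axis and the scanner’s optical centre:

```python
import numpy as np

def turntable_scan_to_3d(ranges, turntable_angle_rad, fov_deg=180.0):
    """Lift one 2D scan, taken in a vertical plane, into 3D by rotating
    it about the turntable's (vertical) axis."""
    ranges = np.asarray(ranges, dtype=float)
    beam = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, len(ranges)))
    x = ranges * np.cos(beam)          # forward, in the scan plane
    z = ranges * np.sin(beam)          # up, in the scan plane
    y = np.zeros_like(x)
    pts = np.stack([x, y, z], axis=1)
    c, s = np.cos(turntable_angle_rad), np.sin(turntable_angle_rad)
    rot_z = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pts @ rot_z.T
```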

Riegl

Riegl Lidar

Some of the better-funded competitors are using these high-end lidar systems from Riegl. These are 2D scanners similar to the SICK, but with longer range and more sophisticated processing to deal with confusing multiple returns. However, they will set you back a hefty $28,000.

Ibeo
Ibeo is a subsidiary of SICK that makes sensors for the automotive market. They produce several models of laser scanner, such as the flying-saucer-like attachments seen here on the front of team CarOLO. I’m not too familiar with these sensors, but I believe they are rotating laser fans – something like a scaled-down Velodyne.

Vision

Vision is less prevalent this year than I was expecting. As far as I can gather, none of the teams have gone in for a computer-vision-based approach to recognising other cars. I suppose with a good laser sensor it’s mostly unnecessary, and you gain immunity to the illumination problems that can foil vision techniques. Many teams have cameras for detecting lane markings, but that appears to be the extent of it.
Some teams, such as Stanford’s Junior, are all-laser systems with no cameras at all. Given that vision was the core of the secret sauce that helped Stanford win the 2005 Grand Challenge, and their early press photos prominently showed a Ladybug2 camera, I was pretty surprised by this. The reason is revealed in this interview with Mike Montemerlo where he shows plenty of results using the Ladybug2, but explains that they had to abandon the sensor after their lead vision programmer left for a job with Google (who have some interest in Ladybugs). The final version of Junior uses laser reflectance information to find the road markings, and judging by the results so far, seems to be getting on just fine without vision.
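The reflectance trick works because painted markings return the laser much more strongly than bare asphalt. A crude sketch of the basic idea – the thresholds are invented, and Junior’s real detector is certainly far more sophisticated:

```python
import numpy as np

def candidate_lane_points(points, reflectance, min_reflectance=0.6, max_height=0.2):
    """Pick out laser returns that might be painted markings: bright,
    and close to the road surface.

    points:      (N, 3) x, y, z in metres, z measured from the road plane
    reflectance: (N,) normalised reflectance values
    """
    points = np.asarray(points, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    mask = (reflectance > min_reflectance) & (np.abs(points[:, 2]) < max_height)
    return points[mask]
```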

Cameras come in all shapes and sizes, but a few to look out for:

Point Grey Bumblebee

Bumblebee Stereo Camera

Point Grey are a popular supplier of stereo vision systems, and you can see these cameras attached to a number of vehicles. Princeton’s Team Prowler has a system based entirely around these stereo cameras – a choice they made for budget reasons.
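Stereo rigs like these recover depth from the disparity between the two images via the standard pinhole relation depth = focal length × baseline / disparity. A tiny illustration – the numbers are made up, not Bumblebee specifications:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its disparity, using the pinhole stereo model."""
    return focal_px * baseline_m / disparity_px

# A 10-pixel disparity with a 500-pixel focal length and a 12 cm baseline
print(depth_from_disparity(10.0, 500.0, 0.12))  # -> 6.0 metres
```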

Ladybug2

Point Grey Ladybug2

This is a spherical vision system composed of 6 tightly packed cameras, also produced by Point Grey. After Stanford abandoned their vision system, I don’t think any entries are using this camera – but there’s one sitting on my desk as I type this, so I’m including the picture anyway.

Radar

Though not very visible, several cars are sporting radar units – the MIT vehicle has 16! Radar is good for long range, out to hundreds of metres, but it’s noisy and has poor resolution. However, when you’re travelling fast and just want to know if there’s a major obstacle up ahead, it does the job. It’s already used in several commercial automotive safety systems.

GPS

GPS is obviously a core sensor for every entrant. Most vehicles have several redundant GPS units on their roofs, popular suppliers being companies like Trimble who sell rugged, high-accuracy units developed for applications in precision agriculture.

IMU

Though not visible on the outside, many of the entrants have inertial measurement units tucked inside. These little packages of gyroscopes and accelerometers help the vehicles keep track of position during GPS outages. High-end IMUs can be amazingly precise, but have a price tag to match.
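In essence the IMU lets the vehicle dead-reckon: integrate the gyros to track orientation and the accelerometers to track velocity and position. A toy 2D version of the idea – real systems do full 3D strapdown integration and fuse it with GPS in a Kalman filter:

```python
import math

def dead_reckon_step(state, yaw_rate, forward_accel, dt):
    """Propagate an (x, y, heading, speed) state one time step using a
    yaw-rate gyro and a forward-axis accelerometer reading."""
    x, y, heading, speed = state
    heading += yaw_rate * dt
    speed += forward_accel * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return (x, y, heading, speed)
```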

This post has become something of a beast. If you still can’t get enough of sensors, there are some interesting videos here and here where Virginia Tech and Ben Franklin discuss their sensor suites.

Urban Challenge Final

The final of the Urban Challenge is just about to begin. There’s a live video stream at http://www.grandchallenge.org. The commentators aren’t experts, but the footage is excellent. TGDaily, who have provided some of the best coverage during the week, are also live blogging the final and hopefully will have some more sensible commentary.

Once the robots roll out the gate, they’re going to be totally on their own for up to six hours. I hear some of the teams were making code changes right through last night, so what happens today is anybody’s guess. For lots of people out in Victorville, it’s going to be a very tense few hours.

Urban Challenge Finalists Announced

Eleven teams have been selected for the final of the Urban Challenge. The teams are:

Tartan Racing
Stanford Racing Team
MIT
Team Oshkosh Truck
Team Cornell
Victor Tango
CarOLO
Ben Franklin Racing Team
Team UCF
Team AnnieWay
Intelligent Vehicle Systems

DARPA originally planned to have 20 teams in the final, but decided that none of the other competitors met the minimum safety standards. In the words of DARPA director Tony Tether – “It would be terrible for one bot to take out another”.

The final will be webcast live at www.grandchallenge.org, starting at 7:30 a.m. PT (10:30 a.m. ET, 14:30 GMT). In the meantime, favourites Tartan Racing have some nice videos on their race blog.