FAB-MAP in the News

Today’s edition of the New Scientist news feed includes an article about my PhD research. How nice! They called the article ‘Chaos filter stops robots getting lost’. This is kind of a bizarre title – ‘chaos filter’ seems to be a term of their own invention :). Still, they got things mostly right. I guess that’s journalism!

Strange terminology aside, it’s great to see the research getting out there. It’s also nice to see the feedback from Robert Sim, who built a rather impressive fully autonomous vision-only robotic system a few years ago – still quite a rare accomplishment.

For anyone interested in the details of the system, have a look at my publications page. New Scientist’s description broadly resembles how our system works, but many of the specifics are a little wide of the mark. In particular, we’re not doing hierarchical clustering of visual words as the article describes – instead we learn a Bayesian network that captures the visual word co-occurrence statistics. This achieves a similar effect, in that we implicitly learn about objects in the world, but with none of the hard decisions and awkward parameter tuning involved in clustering.
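
For the curious, in the published FAB-MAP work the Bayesian network takes the form of a Chow-Liu tree: the best tree-structured approximation (in the KL-divergence sense) to the full joint distribution over visual words, learned directly from pairwise co-occurrence counts, with no clustering parameters to tune. Here is a minimal sketch of that learning step in Python. It is illustrative only – the function name and the binary image-by-word matrix X are assumptions for the example, not our actual code.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X, eps=1e-9):
    """Illustrative sketch (not the FAB-MAP implementation): learn a
    tree-structured Bayesian network (Chow-Liu tree) over binary
    word-occurrence data.

    X : (n_images, n_words) 0/1 array; X[i, j] = 1 if visual word j
        was observed in image i.
    Returns a parent-pointer array for the tree (root has parent -1).
    """
    n, d = X.shape
    p1 = X.mean(axis=0)            # P(word_j = 1)
    p11 = (X.T @ X) / n            # P(word_i = 1, word_j = 1)

    # Mutual information for every pair of binary word variables,
    # summing over the four cells of each 2x2 joint distribution.
    MI = np.zeros((d, d))
    for a in (0, 1):
        for b in (0, 1):
            pa = p1 if a else 1.0 - p1
            pb = p1 if b else 1.0 - p1
            if a and b:
                pab = p11
            elif a:
                pab = p1[:, None] - p11
            elif b:
                pab = p1[None, :] - p11
            else:
                pab = 1.0 - p1[:, None] - p1[None, :] + p11
            MI += pab * np.log((pab + eps) / (pa[:, None] * pb[None, :] + eps))
    np.fill_diagonal(MI, 0.0)      # ignore self-information

    # The Chow-Liu tree is the maximum-weight spanning tree under MI;
    # negate so we can reuse scipy's minimum spanning tree routine.
    mst = minimum_spanning_tree(-MI).toarray()
    adj = (mst != 0) | (mst.T != 0)

    # Root the undirected tree at word 0 and record parent pointers.
    parent = np.full(d, -1)
    stack, seen = [0], {0}
    while stack:
        u = stack.pop()
        for v in np.flatnonzero(adj[u]):
            if v not in seen:
                parent[v] = u
                seen.add(v)
                stack.append(v)
    return parent
```

Completing the model then just means attaching the conditional probabilities P(word | parent word), estimated from the same co-occurrence counts – at no point is any word hard-assigned to a cluster.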

The Really Big Picture

I was at a lunch talk today by Nick Bostrom, of Oxford’s Future of Humanity Institute. The institute has an unusual mandate to consider the really big picture: human extinction risks, truly disruptive technologies such as cognitive enhancement, life extension and brain emulation, and other issues too large for most people to take seriously. It was a pleasure to hear someone thinking clearly and precisely, in the manner of a good philosopher, about topics that are usually the preserve of crackpots. Prof Bostrom’s website is a treasure trove of papers. An atypical but perhaps robot-relevant example is the Whole Brain Emulation Roadmap.