Invited Speakers

Dietrich Paulus, Universität Koblenz-Landau (http://www.uni-koblenz.de/~paulus)

Three-dimensional maps and terrain classification

 

Abstract

One of the essential tasks for an autonomous robot is to determine its position and to explore its local environment. Today these two tasks are solved simultaneously by so-called SLAM algorithms (simultaneous localization and mapping). While these algorithms were initially developed for two-dimensional laser scans, they have since been extended to 3D, as three-dimensional laser scanners are now available. State-of-the-art SLAM systems use a graph to align scans and are typically divided into a so-called frontend and backend. The frontend aligns laser scans locally and constructs a graph of the measurements; the backend optimizes this graph in order to find a maximally consistent configuration of the nodes.
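
To make the frontend/backend split concrete, here is a minimal pose-graph backend sketch in Python (NumPy/SciPy). It is a generic textbook-style illustration rather than the system described in the talk; the 2D pose parameterization, the toy square trajectory, and all variable names are assumptions made for this example.

```python
# Minimal 2D pose-graph backend sketch (illustrative only, not the talk's system):
# poses are optimized so that the relative transforms predicted between connected
# nodes match the frontend's scan-alignment measurements as closely as possible.
import numpy as np
from scipy.optimize import least_squares

# Each pose is (x, y, theta); the first pose is pinned to the origin.
# Each edge is (i, j, dx, dy, dtheta): the measured transform of pose j
# expressed in the frame of pose i (e.g. obtained from scan alignment).
edges = [
    (0, 1, 1.0, 0.0, np.pi / 2),
    (1, 2, 1.0, 0.0, np.pi / 2),
    (2, 3, 1.0, 0.0, np.pi / 2),
    (3, 0, 1.0, 0.0, np.pi / 2),   # loop closure back to the start
]
n_poses = 4

def residuals(flat):
    poses = flat.reshape(n_poses, 3)
    res = [poses[0]]                      # prior: fix the first pose at the origin
    for i, j, dx, dy, dth in edges:
        xi, yi, thi = poses[i]
        xj, yj, thj = poses[j]
        c, s = np.cos(thi), np.sin(thi)
        # predicted transform of pose j in the frame of pose i
        px = c * (xj - xi) + s * (yj - yi)
        py = -s * (xj - xi) + c * (yj - yi)
        pth = np.arctan2(np.sin(thj - thi), np.cos(thj - thi))
        res.append([px - dx, py - dy,
                    np.arctan2(np.sin(pth - dth), np.cos(pth - dth))])
    return np.concatenate(res)

x0 = np.zeros(n_poses * 3)                # poor initial guess: all poses at the origin
result = least_squares(residuals, x0)
print(result.x.reshape(n_poses, 3))       # optimized, globally consistent poses
```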

The well-known ICP (iterative closest point) algorithm is often used to align pairs of laser scans. This algorithm generally requires an approximate initial guess in order to determine the correct relative transformation. If odometry cannot be used or is not accurate, this guess has to be derived from the sensor data. We describe a new featureless method as well as a feature-based approach and compare their results. Both methods allow for place recognition as well as loop closure when building maps. Vision, laser, odometry, and GPS are fused in a unified framework.
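
For reference, a minimal point-to-point ICP in Python/NumPy is sketched below. This is the textbook variant (nearest-neighbour correspondences followed by an SVD-based rigid alignment), not the featureless or feature-based initialization methods of the talk; it mainly illustrates why a reasonable initial guess matters, since the correspondences are only meaningful when the scans are already roughly aligned.

```python
# Minimal point-to-point ICP sketch (textbook version, not the talk's method).
# Aligns a source scan to a target scan; without a reasonable initial guess the
# nearest-neighbour correspondences, and hence the result, can be wrong.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, init=np.eye(4), iterations=30):
    """source, target: (N, 3) point clouds; init: 4x4 initial guess."""
    T = init.copy()
    src = (init[:3, :3] @ source.T).T + init[:3, 3]
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. correspondences: closest target point for every source point
        _, idx = tree.query(src)
        tgt = target[idx]
        # 2. best rigid transform for these correspondences (Kabsch / SVD)
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        # 3. apply the increment and accumulate it into the total transform
        src = (R @ src.T).T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T  # estimated transform mapping the source scan onto the target
```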

When a robot has to navigate in an unstructured, unknown environment, the terrain has to be analyzed and the maps have to be enriched with drivability information. We describe how such terrain maps are derived from various sensor data using a probabilistic method. As in the mapping stage, we fuse vision data with range data and, if available, additional sensors. All methods described operate in real time.
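
As a rough illustration of probabilistically fusing several sensors into a terrain map, the sketch below accumulates per-cell drivability evidence in log-odds form. The binary drivable/non-drivable model, the example sensor likelihoods, and all names are assumptions made for this illustration; the talk's actual terrain classification method may differ substantially.

```python
# Generic log-odds fusion sketch for a drivability grid (illustrative only).
# Each sensor reading for a cell contributes p(drivable | reading); repeated,
# independent observations are fused by adding log-odds per cell.
import numpy as np

GRID = (100, 100)                       # cells of the local terrain map
log_odds = np.zeros(GRID)               # 0 = fully unknown (p = 0.5)

def update(cells, p_drivable):
    """Fuse one observation: list of (row, col) cells and the probability
    that those cells are drivable according to that sensor."""
    l = np.log(p_drivable / (1.0 - p_drivable))
    for r, c in cells:
        log_odds[r, c] += l

# Example fusion of two (hypothetical) cues for the same patch of terrain:
update([(50, 50), (50, 51)], 0.8)       # range data: surface looks flat
update([(50, 50)], 0.3)                 # vision: texture suggests vegetation

drivability = 1.0 / (1.0 + np.exp(-log_odds))   # back to probabilities
print(drivability[50, 50], drivability[50, 51])
```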

 

Biography

Dietrich Paulus obtained a Bachelor's degree in Computer Science from the University of Western Ontario, London, Ontario, Canada, followed by a diploma (Dipl.-Inf.) in Computer Science and a PhD (Dr.-Ing.) from Friedrich-Alexander University Erlangen-Nuremberg, Germany.

He worked as a senior researcher at the Chair for Pattern Recognition (Prof. Dr. H. Niemann) at Erlangen University from 1991 to 2002 and obtained his habilitation in Erlangen in 2001. Since 2001 he has been with the Institute for Computational Visualistics at the University of Koblenz-Landau, Germany, where he became a full professor in 2002. From 2004 to 2008 he was dean of the Department of Computer Science at the University of Koblenz-Landau. Since 2012 he has been head of the computing center in Koblenz.

His primary research interests are active computer vision, object recognition, color image processing, medical image processing, vision-based autonomous systems, and software engineering for computer vision. He has published over 150 articles on these topics and is the author of three textbooks. He is a member of the Gesellschaft für Informatik (GI) and the IEEE.

 

 

Jakob Verbeek, INRIA (http://lear.inrialpes.fr/~verbeek)

The Fisher vector representation: principles and applications

 

Abstract

The Fisher vector (FV) representation is a state-of-the-art approach to aggregating local descriptor statistics into a global image or video representation. This presentation gives an overview of our work over the last few years on the principles of this representation and its application to various object and action recognition tasks. First, we consider the basic principles, how they are applied to represent images, and how to scale the representation to large datasets. Second, we consider how we can avoid the unrealistic i.i.d. assumption that underlies the basic model, and how doing so naturally leads to transformations of the FV that explain the effectiveness of the power normalization. Third, we consider approximations to both the power and L2 normalization of the FV, which enable the use of integral image techniques to speed up object and action localization methods that evaluate many (spatial or temporal) windows in images or videos. Finally, we consider applications of the FV representation in fully supervised and weakly supervised object category localization systems.
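
To make the basic construction concrete, the following sketch computes a Fisher vector from local descriptors with a diagonal-covariance GMM and then applies the power and L2 normalizations mentioned above. It follows the standard formulation (gradients with respect to the GMM means and standard deviations) rather than the specific approximations discussed in the talk; the use of scikit-learn's GaussianMixture and the toy data are assumptions made for this example.

```python
# Standard Fisher vector sketch (gradients w.r.t. GMM means and standard
# deviations, followed by power and L2 normalization). Illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """descriptors: (N, D) local descriptors; gmm: fitted diagonal-covariance
    GaussianMixture with K components. Returns a 2*K*D-dimensional FV."""
    N = descriptors.shape[0]
    gamma = gmm.predict_proba(descriptors)          # (N, K) soft assignments
    w, mu = gmm.weights_, gmm.means_                # (K,), (K, D)
    sigma = np.sqrt(gmm.covariances_)               # (K, D) standard deviations

    fv = []
    for k in range(len(w)):
        diff = (descriptors - mu[k]) / sigma[k]     # (N, D) whitened residuals
        g = gamma[:, k:k + 1]
        # gradient w.r.t. the mean of component k
        grad_mu = (g * diff).sum(axis=0) / (N * np.sqrt(w[k]))
        # gradient w.r.t. the standard deviation of component k
        grad_sigma = (g * (diff ** 2 - 1.0)).sum(axis=0) / (N * np.sqrt(2 * w[k]))
        fv.extend([grad_mu, grad_sigma])
    fv = np.concatenate(fv)

    fv = np.sign(fv) * np.sqrt(np.abs(fv))          # power (signed square-root) normalization
    return fv / (np.linalg.norm(fv) + 1e-12)        # L2 normalization

# Hypothetical usage: fit the GMM on training descriptors, then encode an image.
train = np.random.randn(1000, 64)
gmm = GaussianMixture(n_components=16, covariance_type='diag').fit(train)
image_descriptors = np.random.randn(300, 64)
print(fisher_vector(image_descriptors, gmm).shape)   # (2 * 16 * 64,)
```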

 

Biography

Jakob Verbeek received a PhD degree in computer science in 2004 from the University of Amsterdam, The Netherlands. After postdoctoral positions at the University of Amsterdam and at INRIA Rhône-Alpes, he has been a full-time researcher at INRIA Grenoble, France, since 2007. His research interests include machine learning and computer vision, with a particular interest in applications of statistical models to computer vision problems.