RobotVision is a library for techniques used at the intersection of robotics and vision. Its main focus is visual monocular SLAM. It is written in C++ -- partly using object-oriented and template metaprogramming. Thus, most techniques can easily be adapted to other applications, e.g. range-and-bearing SLAM.

Hauke Strasdat, Steven Lovegrove, Andrew J. Davison

Get the Source Code via SVN
svn co

Long Description
UPDATE: We have made a new visual SLAM library available:

RobotVision (1.1) is out. Among other improvements, the bundle adjustment implementation is much faster now.

RobotVision is a library for techniques used at the intersection of robotics and vision. The current version (1.1) comprises bundle adjustment, feature initialisation, pose-graph optimisation, and 2D/3D visualisation, among other things.

The bundle adjustment class follows the classical approach: the first-order sparseness structure is exploited using the Schur complement. However, compared to other straightforward implementations, it has the following features. The second-order sparseness structure -- the fact that not all landmarks are visible in all frames -- is exploited using sparse Cholesky factorisation (via the CSparse library). Optionally, the implementation supports robust kernels in order to guard against spurious matches. The implementation also generalises over different transformations, landmarks, and observations using template metaprogramming; among others, SE3 pose transformations with 3D Euclidean points and 2D image observations are provided as the default implementation for monocular SLAM. Furthermore, the bundle adjustment class contains an information filter for inverse-depth feature points, which can be used for efficient feature initialisation within a keyframe-based monocular SLAM framework.

The pose-graph optimisation framework uses Levenberg-Marquardt, with the sparse Hessian handled by sparse Cholesky factorisation (CSparse). Again, it generalises over different transformations: apart from the standard rigid-body transformation SE3, it also supports 3D similarity transformations Sim3. In other words, it can also deal with the scale drift that occurs in monocular SLAM.

Both the bundle adjustment class and the pose-graph optimisation class use Lie theory. Pose transformations (SE3, Sim3) are represented on a manifold, as a Lie group, while incremental updates are performed in the tangent space around the identity, i.e. the Lie algebra. In this way, we achieve a minimal representation during optimisation while ensuring that we always stay far from singularities.

2D and 3D visualisation classes are convenient C++ wrappers around OpenGL.

Example Images

Simulations using bundle adjustment, pose-graph optimisation and feature initialisation

Input Data
RobotVision is primarily designed as a library, not as a standalone application. However, it comes with some demo applications.

Logfile Format
not yet supported

Type of Map
Feature maps and pose graphs

Hardware/Software Requirements
Cross-platform design, but only tested on Linux with GCC
OpenCV (optional)

Please refer to INSTALL.txt for detailed installation instructions.

Papers Describing the Approach
Hauke Strasdat, J. M. M. Montiel, and Andrew J. Davison: Scale Drift-Aware Large Scale Monocular SLAM, Robotics: Science and Systems, 2010 (link)

License Information
This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The authors allow users to use and modify the source code for their own research. Any commercial application, redistribution, etc. has to be arranged between users and authors individually and is not covered by this license.

RobotVision is licensed under the GNU Lesser General Public License version 3 (LGPLv3).

Further Information
If you have problems installing the software, any questions, or any other comments, please do not hesitate to contact me:

Further Links
UPDATE: We have made a new visual SLAM library available.

*** Copyright and V.i.S.d.P.: Hauke Strasdat; Steven Lovegrove; Andrew J. Davison; ***