Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty
This work proposes a robust visual odometry method for structured
environments that combines point features with line and plane segments
extracted from RGB-D data. Noisy depth maps are processed by a
probabilistic depth fusion framework based on Mixtures of Gaussians to denoise
them and derive the depth uncertainty, which is then propagated throughout the
visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are
used to model the uncertainties of the feature parameters, and the pose is
estimated by combining the three types of primitives according to their uncertainties.
Performance evaluation on RGB-D sequences collected in this work and on two public
RGB-D datasets, TUM and ICL-NUIM, shows the benefit of using the proposed depth
fusion framework and of combining the three feature types, particularly in scenes
with low-textured surfaces, dynamic objects and missing depth measurements.

Comment: Major update: more results, depth filter released as open source, 34 pages
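As a rough illustration of the kind of per-pixel Mixture-of-Gaussians depth fusion described above, here is a minimal Python sketch; the function name, the gating heuristic and the parameters are hypothetical illustrations, not taken from the authors' released filter.

```python
import numpy as np

def fuse_depth_mog(measurements, sigmas, gate=2.0):
    """Fuse noisy depth measurements of a single pixel into a mixture of
    Gaussians. Hypothetical sketch: names and the gating rule are illustrative.

    measurements, sigmas: sequences of depth values and their std. deviations.
    Returns (means, variances, weights) of the resulting mixture components.
    """
    means, variances, counts = [], [], []
    for z, s in zip(measurements, sigmas):
        var = s * s
        merged = False
        for i in range(len(means)):
            # Gate: merge only if z is statistically compatible with component i.
            if abs(z - means[i]) <= gate * np.sqrt(variances[i] + var):
                # Product of two Gaussians (Kalman-style update).
                k = variances[i] / (variances[i] + var)
                means[i] += k * (z - means[i])
                variances[i] *= (1.0 - k)
                counts[i] += 1
                merged = True
                break
        if not merged:  # start a new component, e.g. for an outlier mode
            means.append(z); variances.append(var); counts.append(1)
    weights = np.asarray(counts, dtype=float) / sum(counts)
    return np.asarray(means), np.asarray(variances), weights
```

In such a scheme, the highest-weight component would give the denoised depth for a pixel, and its variance the uncertainty to propagate down the odometry pipeline.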
RGB-D Odometry and SLAM
The emergence of modern RGB-D sensors has had a significant impact on many
application fields, including robotics, augmented reality (AR) and 3D scanning.
They are low-cost, low-power and compact alternatives to traditional range
sensors such as LiDAR. Moreover, unlike RGB cameras, RGB-D sensors provide
additional depth information that removes the need for frame-by-frame
triangulation in 3D scene reconstruction. These merits have made them very
popular in mobile robotics and AR, where it is of great interest to estimate
ego-motion and 3D scene structure. Such spatial understanding can enable robots
to navigate autonomously without collisions and allow users to insert virtual
entities consistent with the image stream. In this chapter, we review common
formulations of odometry and Simultaneous Localization and Mapping (SLAM)
using RGB-D stream input. The two topics are closely related,
as the former aims to track the incremental camera motion with respect to a
local map of the scene, and the latter to jointly estimate the camera
trajectory and the global map with consistency. In both cases, the standard
approaches minimize a cost function using nonlinear optimization techniques.
This chapter consists of three main parts: In the first part, we introduce the
basic concept of odometry and SLAM and motivate the use of RGB-D sensors. We
also give mathematical preliminaries relevant to most odometry and SLAM
algorithms. In the second part, we detail the three main components of SLAM
systems: camera pose tracking, scene mapping and loop closing. For each
component, we describe different approaches proposed in the literature. In the
final part, we provide a brief discussion of advanced research topics with
references to the state of the art.

Comment: This is the pre-submission version of the manuscript that was later
edited and published as a chapter in RGB-D Image Analysis and Processing
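To make the cost-minimization formulation concrete, the following is a generic Python sketch of Gauss-Newton pose tracking from matched 3-D points, assuming known correspondences and at least three non-collinear matches; it is an illustrative instance of the nonlinear least-squares approach the chapter discusses, not any specific published system.

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix such that skew(v) @ u == cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def exp_se3(xi):
    """Map a 6-vector [translation, rotation] to a 4x4 pose.
    The translation is applied directly (small-update approximation)."""
    t, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    K = skew(w)
    if theta < 1e-9:
        R = np.eye(3) + K
    else:
        R = (np.eye(3) + np.sin(theta) / theta * K
             + (1 - np.cos(theta)) / theta**2 * (K @ K))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def track_pose(src, dst, iters=10):
    """Gauss-Newton alignment of matched 3-D points src -> dst,
    minimizing the sum of squared point-to-point residuals."""
    T = np.eye(4)
    for _ in range(iters):
        H = np.zeros((6, 6))
        b = np.zeros(6)
        for p, q in zip(src, dst):
            pw = T[:3, :3] @ p + T[:3, 3]          # point under current estimate
            r = pw - q                             # residual
            J = np.hstack([np.eye(3), -skew(pw)])  # d r / d [dt, dw]
            H += J.T @ J
            b += J.T @ r
        dx = np.linalg.solve(H, -b)                # normal equations
        T = exp_se3(dx) @ T                        # left-multiplicative update
    return T
```

Dense RGB-D methods replace the point-to-point residual with photometric or point-to-plane terms, but the iterate-linearize-solve structure of the optimization stays the same.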