NR-SLAM: Non-Rigid Monocular SLAM
In this paper we present NR-SLAM, a novel non-rigid monocular SLAM system
founded on the combination of a Dynamic Deformation Graph with a Visco-Elastic
deformation model. The former enables our system to represent the dynamics of
the deforming environment as the camera explores, while the latter allows us to
model general deformations in a simple way. The presented system is able to
automatically initialize and extend a map, modeled as a sparse point cloud, in
deforming environments; this map is refined with a sliding-window Deformable
Bundle Adjustment. The map serves as the basis for estimating the camera motion and
deformation and enables us to represent arbitrary surface topologies,
overcoming the limitations of previous methods. To assess the performance of
our system in challenging deforming scenarios, we evaluate it on several
representative medical datasets. In our experiments, NR-SLAM outperforms
previous deformable SLAM systems, achieving millimeter reconstruction accuracy
and bringing automated medical intervention closer. For the benefit of the
community, we make the source code public.
Comment: 12 pages, 7 figures, submitted to the IEEE Transactions on Robotics
(T-RO).
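The Dynamic Deformation Graph in NR-SLAM builds on the classic embedded-deformation idea: each map point is moved by a weighted blend of rigid transforms attached to nearby graph nodes. As a rough illustration (this is a generic embedded-deformation sketch, not NR-SLAM's actual formulation, which adds dynamics and a visco-elastic energy; all names and values below are hypothetical):

```python
import numpy as np

def deform_point(p, nodes, rotations, translations, weights):
    """Deform point p as a weighted blend of per-node rigid transforms
    (classic embedded-deformation style). Each node g applies R @ (p - g)
    + g + t, and the results are blended with normalized weights."""
    q = np.zeros(3)
    for g, R, t, w in zip(nodes, rotations, translations, weights):
        q += w * (R @ (p - g) + g + t)
    return q

# Hypothetical toy graph: two nodes with identity rotations and small
# upward translations; the midpoint between them blends both motions.
nodes = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
Rs = [np.eye(3), np.eye(3)]
ts = [np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 0.3])]
w = [0.5, 0.5]
p = np.array([0.5, 0.0, 0.0])
q = deform_point(p, nodes, Rs, ts, w)
```

In a SLAM setting, the node transforms are the unknowns optimized by the deformable bundle adjustment, while points are expressed relative to the graph.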
Linear time vehicle relocation in SLAM
In this paper we propose an algorithm to determine the location of a vehicle in an environment represented by a stochastic map, given a set of environment measurements obtained by a sensor mounted on the vehicle. We show that the combined use of (1) geometric constraints considering feature correlation, (2) joint compatibility, (3) random sampling, and (4) locality makes this algorithm linear in both the size of the stochastic map and the number of measurements. We demonstrate the practicality and robustness of our approach with experiments in an outdoor environment.
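The random-sampling ingredient can be pictured as a RANSAC-style loop: hypothesize the vehicle pose from one sampled measurement-feature pairing, then score the hypothesis by how many measurements pair compatibly with the map. The sketch below is a deliberately simplified, translation-only 2-D version with independent chi-square gating; the paper's algorithm additionally exploits feature correlations, *joint* compatibility, and locality, none of which are reproduced here. All names and parameters are illustrative.

```python
import numpy as np

def relocate(map_feats, measurements, sigma=0.1, trials=50, gate=9.21, rng=None):
    """Toy RANSAC-style relocation: sample one measurement-feature pairing,
    hypothesize the vehicle translation, and count measurements that land
    within a chi-square gate of some map feature (isotropic noise sigma)."""
    rng = rng or np.random.default_rng(0)
    best_pose, best_inliers = None, -1
    for _ in range(trials):
        z = measurements[rng.integers(len(measurements))]
        f = map_feats[rng.integers(len(map_feats))]
        pose = f - z                         # hypothesis: vehicle at f - z
        inliers = 0
        for m in measurements:
            d2 = np.min(np.sum((map_feats - (pose + m)) ** 2, axis=1)) / sigma**2
            inliers += d2 < gate             # normalized squared distance gate
        if inliers > best_inliers:
            best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers

# Hypothetical map of three point features; vehicle actually at (1, 1).
map_feats = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
true_pose = np.array([1.0, 1.0])
measurements = map_feats - true_pose         # noise-free observations
pose, inliers = relocate(map_feats, measurements)
```

Testing each measurement independently, as above, is weaker than joint compatibility, which gates the whole set of pairings at once and is what keeps spurious hypotheses from accumulating inliers.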
Fusing Range and Intensity Images for Mobile Robot Localization
In this paper, we present the two-dimensional (2-D) version of the symmetries and perturbation model (SPmodel), a probabilistic representation model and an EKF integration mechanism for uncertain geometric information that is suitable for sensor fusion and integration in multisensor systems. We apply the SPmodel to the problem of location estimation in indoor mobile robotics, experimenting with the mobile robot MACROBE. We have chosen two types of complementary sensory information: 1) range images and 2) intensity images, both obtained from a laser sensor. Results of these experiments show that fusing simple and computationally inexpensive sensory information can allow a mobile robot to precisely locate itself. They also demonstrate the generality of the proposed fusion and integration mechanism.
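The core mechanism behind this kind of EKF integration is that independent cues are folded into the same state estimate one update at a time, each shrinking the uncertainty. A minimal scalar sketch (not the SPmodel itself, which handles full uncertain geometric transformations; all numbers below are hypothetical):

```python
def kf_update(x, P, z, R):
    """One scalar Kalman update: fuse measurement z (variance R)
    into the current estimate (x, P). Returns the updated pair."""
    K = P / (P + R)                 # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Hypothetical prior on a robot position coordinate: 0.0 m, variance 1.0.
# A range-image cue says 1.0 m (var 0.04); an intensity-edge cue says
# 1.2 m (var 0.16). Fusing both pulls the estimate toward the cues and
# leaves a posterior variance smaller than either measurement's alone.
x, P = 0.0, 1.0
x, P = kf_update(x, P, 1.0, 0.04)   # fuse range cue
x, P = kf_update(x, P, 1.2, 0.16)   # fuse intensity cue
```

The order of the two updates does not matter for the final estimate, which is what makes sequential fusion of heterogeneous cues attractive.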
Towards Robust Data Association and Feature Modeling for Concurrent Mapping and Localization
One of the most challenging aspects of concurrent mapping and localization (CML) is the problem of data association. Because of uncertainty in the origins of sensor measurements, it is difficult to determine the correspondence between measured data and features of the scene or object being observed while rejecting spurious measurements. However, there are many important applications of mobile robots where maps of complex environments, consisting of composite features, need to be built from noisy sensor data. This paper reviews several new approaches to data association and feature modeling for CML that share the common theme of combining information from multiple uncertain vantage points while rejecting spurious data. Our results include: (1) feature-based mapping from laser data using robust segmentation, (2) map-building with sonar data using a novel application of the Hough transform for perceptual grouping, and (3) a new stochastic framework for making delayed decisions for combination of data from multiple uncertain vantage points. Experimental results are shown for CML using laser and sonar data from a B21 mobile robot.
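The Hough-transform grouping mentioned in (2) works by letting each sensor return vote for every line it could lie on, so co-linear returns (e.g. from a wall) accumulate in one (theta, rho) bin. A minimal sketch, with illustrative resolution parameters and without the sonar-specific constraints the paper exploits:

```python
import math

def hough_lines(points, n_theta=180, rho_res=0.05):
    """Vote each 2-D point into (theta, rho) line bins and return the
    winning line: rho = x*cos(theta) + y*sin(theta). The bin with the
    most votes corresponds to the best-supported line hypothesis."""
    acc = {}
    for x, y in points:
        for i in range(n_theta):
            theta = math.pi * i / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (i, round(rho / rho_res))
            acc[key] = acc.get(key, 0) + 1
    (i, r), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * i / n_theta, r * rho_res, votes

# Hypothetical returns along the vertical line x = 1: the winning bin has
# theta near 0, rho near 1, and collects a vote from every point.
pts = [(1.0, 0.1 * k) for k in range(10)]
theta, rho, votes = hough_lines(pts)
```

In the mapping context, each strong peak becomes a candidate line feature (a wall hypothesis) that is then tracked and refined from further vantage points.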