Direct Monocular Odometry Using Points and Lines
Most visual odometry algorithms for a monocular camera focus on points,
either by feature matching or by direct alignment of pixel intensities, while
ignoring a common but important geometric entity: edges. In this paper, we
propose an odometry algorithm that combines points and edges to benefit from
the advantages of both direct and feature-based methods. It works better in
texture-less environments and is also more robust to lighting changes and fast
motion, thanks to a larger convergence basin. We maintain a depth map for the
keyframe; in the tracking part, the camera pose is recovered by minimizing
both the photometric error and the geometric error to the matched edges in a
probabilistic framework. In the mapping part, edges are used to speed up
stereo matching and increase its accuracy. On various public datasets, our algorithm
achieves performance better than or comparable to state-of-the-art monocular
odometry methods. In some challenging texture-less environments, our algorithm
reduces the state estimation error by over 50%.
Comment: ICRA 201
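The tracking objective above, fusing photometric and geometric edge errors in a probabilistic framework, can be sketched as an inverse-variance-weighted sum of squared residuals. This is a minimal illustration, not the paper's implementation; the function name and noise parameters are assumptions:

```python
import numpy as np

def combined_cost(photo_residuals, edge_residuals,
                  sigma_photo=1.0, sigma_edge=1.0):
    """Weighted sum of squared photometric and geometric (edge) residuals.

    Weighting each term by its inverse variance corresponds to a
    maximum-likelihood objective under independent Gaussian noise, which is
    one common way to fuse the two error types probabilistically.
    """
    e_photo = np.sum((np.asarray(photo_residuals) / sigma_photo) ** 2)
    e_edge = np.sum((np.asarray(edge_residuals) / sigma_edge) ** 2)
    return e_photo + e_edge
```

A pose tracker would minimize this cost over candidate camera poses, with the residuals recomputed per pose.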
High-Performance and Tunable Stereo Reconstruction
Traditional stereo algorithms have focused on reconstruction quality and have
largely avoided prioritizing run-time performance. Robots, on the other hand,
require quick maneuverability and efficient computation to observe their
immediate environment and perform tasks within it. In this work, we
propose a high-performance and tunable stereo disparity estimation method, with
a peak frame-rate of 120Hz (VGA resolution, on a single CPU-thread), that can
potentially enable robots to quickly reconstruct their immediate surroundings
and maneuver at high-speeds. Our key contribution is a disparity estimation
algorithm that iteratively approximates the scene depth via a piece-wise planar
mesh from stereo imagery, with a fast depth validation step for semi-dense
reconstruction. The mesh is initially seeded with sparsely matched keypoints,
and is recursively tessellated and refined as needed (via a resampling stage),
to provide the desired stereo disparity accuracy. The inherent simplicity and
speed of our approach, together with the ability to tune it to a desired
reconstruction quality and runtime performance, make it a compelling solution
for applications in high-speed vehicles.
Comment: Accepted to the International Conference on Robotics and Automation (ICRA) 2016; 8 pages, 5 figures
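The piecewise planar mesh described above implies that disparity inside each triangle is a linear interpolation of the vertex disparities. A minimal sketch of that per-triangle interpolation via barycentric weights follows; the names are illustrative, not from the paper:

```python
import numpy as np

def planar_disparity(p, tri_pts, tri_disp):
    """Interpolate disparity at pixel p inside a triangle using barycentric
    coordinates, i.e. each mesh triangle acts as a planar disparity patch."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri_pts)
    p = np.asarray(p, dtype=float)
    # Solve for barycentric weights w1, w2 (w0 = 1 - w1 - w2).
    T = np.column_stack((b - a, c - a))
    w12 = np.linalg.solve(T, p - a)
    w = np.array([1.0 - w12.sum(), *w12])
    return float(w @ np.asarray(tri_disp, dtype=float))
```

Refinement would then compare such interpolated disparities against matched measurements and re-tessellate triangles whose error exceeds a threshold.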
Methodology to analyze the accuracy of 3D objects reconstructed with collaborative robot based monocular LSD-SLAM
SLAM systems are mainly applied to robot navigation, while research on their
feasibility for motion planning in tasks such as bin-picking is scarce.
Accurate 3D reconstruction of objects and environments is important for
planning motion and computing an optimal gripper pose to grasp objects. In this
work, we propose methods to analyze the accuracy of a 3D environment
reconstructed using an LSD-SLAM system with a monocular camera mounted on the
gripper of a collaborative robot. We discuss and propose a solution to the pose
space conversion problem. Finally, we present several criteria to analyze the
3D reconstruction accuracy. These could serve as guidelines for improving the
accuracy of 3D reconstructions with monocular LSD-SLAM and other SLAM-based
solutions.
Comment: 5 pages, 5 figures, 2018 International Conference on Intelligent Autonomous Systems (ICoIAS 2018)
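The pose space conversion problem mentioned above amounts to chaining homogeneous transforms so that a camera pose estimated by SLAM can be expressed in the robot base frame. A minimal sketch under that assumption; the transform names are illustrative, not the paper's notation:

```python
import numpy as np

def camera_pose_in_robot_frame(T_base_gripper, T_gripper_cam):
    """Chain 4x4 homogeneous transforms: the camera pose in the robot base
    frame is T_base_cam = T_base_gripper @ T_gripper_cam, where
    T_base_gripper comes from the robot's forward kinematics and
    T_gripper_cam from a hand-eye calibration."""
    return T_base_gripper @ T_gripper_cam
```

With this, map points from the SLAM frame can be brought into the base frame for motion planning, once the (typically unknown) monocular scale is also resolved.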
DPPTAM: Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence
This paper proposes a direct monocular SLAM algorithm that estimates a dense reconstruction of a scene in real-time on a CPU. Highly textured image areas are mapped using standard direct mapping techniques [1], which minimize the photometric error across different views. We make the assumption that homogeneous-color regions belong to approximately planar areas. Our contribution is a new algorithm for the estimation of such planar areas, based on the information of a superpixel segmentation and the semi-dense map from highly textured areas.
We compare our approach against several alternatives using the public TUM dataset [2] and additional live experiments with a hand-held camera. We demonstrate that our proposal for piecewise planar monocular SLAM is faster, more accurate, and more robust than the piecewise planar baseline [3]. In addition, our experimental results show how the depth regularization of monocular maps can damage their accuracy, making the piecewise planar assumption a reasonable option in indoor scenarios.
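The piecewise planar assumption for homogeneous-color superpixels can be illustrated by a least-squares plane fit to the semi-dense depth samples that fall inside a superpixel. This is a minimal sketch of the idea, not DPPTAM's actual estimator:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3D samples (N x 3), the
    per-superpixel planar model assumed for homogeneous-color regions.
    Returns the plane coefficients (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack((pts[:, 0], pts[:, 1], np.ones(len(pts))))
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs
```

The fitted plane can then be evaluated at every pixel of the superpixel to densify the semi-dense map in textureless regions.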
Loosely-Coupled Semi-Direct Monocular SLAM
We propose a novel semi-direct approach for monocular simultaneous
localization and mapping (SLAM) that combines the complementary strengths of
direct and feature-based methods. The proposed pipeline loosely couples direct
odometry and feature-based SLAM to perform three levels of parallel
optimizations: (1) photometric bundle adjustment (BA) that jointly optimizes
the local structure and motion, (2) geometric BA that refines keyframe poses
and associated feature map points, and (3) pose graph optimization to achieve
global map consistency in the presence of loop closures. This is achieved in
real-time by limiting the feature-based operations to marginalized keyframes
from the direct odometry module. Exhaustive evaluation on two benchmark
datasets demonstrates that our system outperforms the state-of-the-art
monocular odometry and SLAM systems in terms of overall accuracy and
robustness.
Comment: Accepted for publication in IEEE Robotics and Automation Letters. Watch video demo at: https://youtu.be/j7WnU7ZpZ8
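The third optimization level above, pose graph optimization in the presence of loop closures, can be illustrated on a toy 1-D pose graph solved as a linear least-squares problem. This sketch only illustrates the idea of distributing loop-closure error over the trajectory, not the system's implementation:

```python
import numpy as np

def optimize_pose_graph(odom, loop):
    """Toy 1-D pose graph: n relative odometry constraints plus one loop
    closure from the last pose back to pose 0 (fixed at the origin), solved
    in a single linear least-squares step. Returns poses 1..n."""
    n = len(odom)
    rows, rhs = [], []
    for i, d in enumerate(odom):
        r = np.zeros(n)
        if i > 0:
            r[i - 1] = -1.0  # constraint: pose_{i+1} - pose_i = d
        r[i] = 1.0
        rows.append(r)
        rhs.append(d)
    # Loop closure: pose_n - pose_0 = loop, with pose_0 = 0.
    r = np.zeros(n)
    r[n - 1] = 1.0
    rows.append(r)
    rhs.append(loop)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol
```

With odometry steps [1, 1, 1] and a loop closure of 2.7, the 0.3 of accumulated drift is spread over all three poses rather than absorbed at the end.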