720 research outputs found
Learning to Segment Dynamic Objects using SLAM Outliers
We present a method to automatically learn to segment dynamic objects using
SLAM outliers. It requires only one monocular sequence per dynamic object for
training and consists in localizing dynamic objects using SLAM outliers,
creating their masks, and using these masks to train a semantic segmentation
network. We integrate the trained network in ORB-SLAM 2 and LDSO. At runtime we
remove features on dynamic objects, making the SLAM unaffected by them. We also
propose a new stereo dataset and new metrics to evaluate SLAM robustness. Our
dataset includes consensus inversions, i.e., situations where the SLAM uses
more features on dynamic objects than on the static background. Consensus
inversions are challenging for SLAM as they may cause major SLAM failures. Our
approach performs better than the state of the art on the TUM RGB-D dataset in
monocular mode and on our dataset in both monocular and stereo modes.
Comment: Accepted to ICPR 202
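The runtime step described above, removing features that fall on dynamic objects, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes keypoints arrive as (x, y) pixel coordinates and the segmentation network outputs a binary per-pixel mask; the function name is hypothetical.

```python
import numpy as np

def filter_dynamic_features(keypoints, dynamic_mask):
    """Drop keypoints that land on pixels marked as dynamic.

    keypoints    : sequence of (x, y) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where a dynamic
                   object was segmented
    Returns the subset of keypoints on the static background.
    """
    kp = np.asarray(keypoints, dtype=int)
    h, w = dynamic_mask.shape
    # Keep only keypoints inside the image bounds.
    in_bounds = (kp[:, 0] >= 0) & (kp[:, 0] < w) & \
                (kp[:, 1] >= 0) & (kp[:, 1] < h)
    kp = kp[in_bounds]
    # Note the mask is indexed (row = y, col = x).
    static = ~dynamic_mask[kp[:, 1], kp[:, 0]]
    return kp[static]
```

The surviving keypoints are then matched and tracked as usual, so the pose estimate is computed from static-background features only.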
SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion
Active depth cameras suffer from several limitations, which cause incomplete
and noisy depth maps, and may consequently affect the performance of RGB-D
Odometry. To address this issue, this paper presents a visual odometry method
based on point and line features that leverages both measurements from a depth
sensor and depth estimates from camera motion. Depth estimates are generated
continuously by a probabilistic depth estimation framework for both types of
features to compensate for the lack of depth measurements and inaccurate
feature depth associations. The framework explicitly models the uncertainty of
triangulating depth from both point and line observations to validate and
obtain precise estimates. Furthermore, depth measurements are exploited by
propagating them through a depth map registration module and using a
frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D
reprojection errors, independently. Results on RGB-D sequences captured on
large indoor and outdoor scenes, where depth sensor limitations are critical,
show that the combination of depth measurements and estimates through our
approach is able to overcome the absence and inaccuracy of depth measurements.
Comment: IROS 201
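The core idea of combining a sensor reading with a motion-based depth estimate can be illustrated with a simple inverse-variance (Gaussian) fusion step. This is a generic sketch of probabilistic depth fusion, not the paper's exact framework; the function name and scalar-depth parameterization are assumptions.

```python
def fuse_depth(d_est, var_est, d_meas, var_meas):
    """Fuse a triangulated depth estimate with a depth-sensor
    measurement by inverse-variance (Gaussian) weighting.

    Returns (fused depth, fused variance). The lower-variance
    source dominates, so an unreliable sensor reading defers to
    the motion-based estimate, and vice versa.
    """
    w_est, w_meas = 1.0 / var_est, 1.0 / var_meas
    fused_var = 1.0 / (w_est + w_meas)
    fused_d = fused_var * (w_est * d_est + w_meas * d_meas)
    return fused_d, fused_var
```

Because the fused variance is always below either input variance, repeated observations progressively tighten the depth estimate, which is what makes the uncertainty model useful for validating triangulated depths.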
D2SLAM: Decentralized and Distributed Collaborative Visual-inertial SLAM System for Aerial Swarm
In recent years, aerial swarm technology has developed rapidly. In order to
accomplish a fully autonomous aerial swarm, a key technology is decentralized
and distributed collaborative SLAM (CSLAM) for aerial swarms, which estimates
the relative poses and consistent global trajectories. In this paper, we
propose D2SLAM: a decentralized and distributed (D2) collaborative SLAM
algorithm. This algorithm has high local accuracy and global consistency, and
the distributed architecture allows it to scale up. D2SLAM covers swarm
state estimation in two scenarios: near-field state estimation for high
real-time accuracy at close range and far-field state estimation for globally
consistent trajectory estimation at long range between UAVs. Distributed
optimization algorithms are adopted as the backend to achieve the goal.
D2SLAM is robust to transient loss of communication, network delays, and
other factors. Thanks to its flexible architecture, D2SLAM has the potential
to be applied in various scenarios.
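The distributed back-end principle, each agent refining its state using only its neighbours' states until all agents agree, can be illustrated with a toy gossip-averaging step. This is not the actual solver used for collaborative SLAM, only a minimal consensus sketch with hypothetical names.

```python
def gossip_step(states, neighbours, alpha=0.5):
    """One synchronous gossip iteration over scalar states.

    states     : list of each agent's current scalar estimate
    neighbours : neighbours[i] is the list of agent indices
                 that agent i can communicate with
    Each agent moves a fraction alpha toward the average of its
    neighbours' states; repeated steps drive the network toward
    consensus without any central node.
    """
    new_states = []
    for i, x in enumerate(states):
        nbr_avg = sum(states[j] for j in neighbours[i]) / len(neighbours[i])
        new_states.append((1 - alpha) * x + alpha * nbr_avg)
    return new_states
```

Real distributed pose-graph back-ends iterate an analogous local-exchange update over full 6-DoF poses, which is why such systems tolerate transient communication loss: missing a round only delays convergence.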
A hardware accelerator for ORB-SLAM
Simultaneous Localization And Mapping (SLAM) is a key component of self-driving cars. We study ORB-SLAM, a state-of-the-art SLAM solution, and develop a hardware accelerator for a critical part of it: ORB feature extraction. The accelerator achieves an 8x speedup and a 2000x reduction in energy consumption.
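To see what kind of per-keypoint work ORB feature extraction involves, here is the intensity-centroid orientation computation that ORB performs for every detected keypoint. This is a reference sketch of that one sub-step, not the accelerator's design; the patch-based interface is an assumption.

```python
import numpy as np

def orb_orientation(patch):
    """ORB keypoint orientation via the intensity centroid:
    angle = atan2(m01, m10), with image moments taken over a
    patch centred on the keypoint. Running this (plus FAST
    detection and BRIEF sampling) for hundreds of keypoints
    per frame is the workload a hardware accelerator targets.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    # First-order moments relative to the patch centre.
    m10 = np.sum((xs - cx) * patch)
    m01 = np.sum((ys - cy) * patch)
    return np.arctan2(m01, m10)
```

The computation is a pair of weighted sums per keypoint with no data dependencies between keypoints, which is exactly the independent, regular arithmetic that maps well onto dedicated hardware.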