13 research outputs found
Learning to Segment Dynamic Objects using SLAM Outliers
We present a method to automatically learn to segment dynamic objects using
SLAM outliers. It requires only one monocular sequence per dynamic object for
training and consists of localizing dynamic objects using SLAM outliers,
creating their masks, and using these masks to train a semantic segmentation
network. We integrate the trained network into ORB-SLAM2 and LDSO. At runtime we
remove features on dynamic objects, leaving the SLAM unaffected by them. We also
propose a new stereo dataset and new metrics to evaluate SLAM robustness. Our
dataset includes consensus inversions, i.e., situations where the SLAM uses
more features on dynamic objects than on the static background. Consensus
inversions are challenging for SLAM as they may cause major SLAM failures. Our
approach performs better than the state of the art on the TUM RGB-D dataset in
monocular mode and on our dataset in both monocular and stereo modes.
Comment: Accepted to ICPR 202
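The runtime step described above, removing features that fall on dynamic-object masks, can be sketched as follows. This is a minimal illustration under assumed array shapes, not the paper's actual implementation; the `filter_dynamic_features` helper is hypothetical:

```python
import numpy as np

def filter_dynamic_features(keypoints, dynamic_mask):
    """Keep only keypoints that do not land on a dynamic-object mask.

    keypoints: (N, 2) integer array of (x, y) pixel coordinates.
    dynamic_mask: (H, W) boolean array, True where the segmentation
    network labelled a dynamic object.
    """
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    keep = ~dynamic_mask[ys, xs]  # drop features on dynamic pixels
    return keypoints[keep]

# Toy example: a 4x4 image whose right half is a dynamic object.
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
kps = np.array([[0, 1], [3, 0], [1, 3], [2, 2]])
static_kps = filter_dynamic_features(kps, mask)
print(static_kps)  # only the two keypoints on the static half remain
```

In a system like ORB-SLAM2 this filtering would run per frame on the extracted keypoints, before matching, so dynamic objects contribute no features to tracking or mapping.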
Metric Monocular Localization Using Signed Distance Fields
Metric localization plays a critical role in vision-based navigation. To
overcome the degradation of photometric matching under appearance changes,
recent research has introduced geometric constraints from the prior scene
structure. In this paper, we present a metric localization method for the
monocular camera, using the Signed Distance Field (SDF) as a global map
representation. Leveraging the volumetric distance information from SDFs, we
aim to relax the assumption of an accurate structure from the local Bundle
Adjustment (BA) in previous methods. By tightly coupling the distance factor
with temporal visual constraints, our system corrects the odometry drift and
jointly optimizes global camera poses with the local structure. We validate the
proposed approach on both indoor and outdoor public datasets. Compared to the
state-of-the-art methods, it achieves comparable performance with a minimal
sensor configuration.
Comment: Accepted to 2019 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
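The distance factor at the heart of the method above can be illustrated with a toy voxelized SDF lookup: a point's residual is simply the signed distance the grid stores at its location, which an optimizer would drive toward the zero-level surface. This is a sketch under assumed grid conventions; a real system would use trilinear interpolation so the residual is differentiable:

```python
import numpy as np

def sdf_residual(point, sdf_grid, voxel_size):
    """Distance residual of a 3D point against a voxelized SDF.

    Nearest-neighbour lookup for brevity: convert the metric point to
    voxel indices and read off the stored signed distance.
    """
    idx = np.round(point / voxel_size).astype(int)
    return sdf_grid[tuple(idx)]

# Toy SDF of the plane z = 0.2 m on a 5x5x5 grid (voxel_size = 0.1).
grid = np.zeros((5, 5, 5))
for z in range(5):
    grid[:, :, z] = (z - 2) * 0.1  # signed distance to the plane

p_on = np.array([0.2, 0.3, 0.2])   # lies on the surface -> residual 0
p_off = np.array([0.2, 0.3, 0.4])  # 0.2 m in front of the surface
print(sdf_residual(p_on, grid, 0.1), sdf_residual(p_off, grid, 0.1))
```

Tightly coupling such residuals with temporal visual constraints, as the abstract describes, means summing them with reprojection terms in one joint optimization over camera poses and local structure.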
Fast, Robust, Accurate, Multi-Body Motion Aware SLAM
Simultaneous ego-localization and awareness of surrounding object motion are significant issues for the navigation capability of unmanned systems and for virtual-real interaction applications. Robust and accurate data association at the object and feature levels is one of the key factors in solving this problem. However, currently available solutions ignore the complementarity among different cues in front-end object association and the negative effects of poorly tracked features on back-end optimization, which makes them insufficiently robust in practical applications. Motivated by these observations, we model the rigid environment as a unified whole to assist state decoupling by integrating high-level semantic information, ultimately enabling simultaneous multi-state estimation. A filter-based multi-cue fusion object tracker is proposed to establish more stable object-level data association. Combined with the object’s motion priors, a motion-aided feature tracking algorithm is proposed to improve feature-level data association. Furthermore, a novel state estimation factor graph is designed that integrates a specific feature observation uncertainty model and the intrinsic priors of the tracked object, and is solved through sliding-window optimization. Our system is evaluated on the KITTI dataset and achieves performance comparable to state-of-the-art object pose estimation systems, both quantitatively and qualitatively. We have also validated our system in a simulation environment and on a real-world dataset to confirm its potential application value in different practical scenarios.
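The motion-aided feature tracking idea, using a tracked object's motion prior to predict where its features will reappear so the matcher starts from a better initial guess, can be sketched as below. The constant-velocity image-plane model and all names are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def predict_feature_positions(features, object_velocity, dt):
    """Shift features tracked on a rigid object by the object's
    estimated image-plane velocity (a constant-velocity prior).

    features: (N, 2) array of (u, v) pixel positions in the last frame.
    object_velocity: (2,) estimated velocity in pixels per unit time,
    e.g. from a filter-based object tracker.
    """
    return features + object_velocity * dt

feats = np.array([[100.0, 50.0], [120.0, 55.0]])
vel = np.array([10.0, -2.0])  # pixels/frame, from the object tracker
pred = predict_feature_positions(feats, vel, dt=1.0)
print(pred)  # predicted pixel positions in the next frame
```

Searching for matches around these predicted positions, rather than the previous ones, is what makes feature-level association on fast-moving objects more stable.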
A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite achieving important milestones, several
challenges are yet to be addressed. Aggregating the achievements of the robotics
community in survey papers is vital to keep track of the current
state of the art and the challenges that must be tackled in the future. This
paper tries to provide a comprehensive review of autonomous mobile robots
covering topics such as sensor types, mobile robot platforms, simulation tools,
path planning and following, sensor fusion methods, obstacle avoidance, and
SLAM. The motivation for this survey is twofold. First, the autonomous
navigation field evolves quickly, so writing survey papers regularly is crucial
to keep the research community well aware of the current status of this field.
Second, deep learning methods have revolutionized many fields, including
autonomous navigation; this paper therefore also gives an appropriate treatment
of the role of deep learning in autonomous navigation. Future work and research
gaps are also discussed.