Learning to Prevent Monocular SLAM Failure using Reinforcement Learning
Monocular SLAM refers to using a single camera to estimate robot ego-motion while building a map of the environment. While monocular SLAM is a well-studied problem, automating it by integrating it with trajectory-planning frameworks is particularly challenging. This paper presents a novel formulation based on Reinforcement Learning (RL) that generates fail-safe trajectories along which the SLAM-generated outputs do not deviate significantly from their true values. In essence, the RL framework learns the otherwise complex relation between perceptual inputs and motor actions and uses this knowledge to generate trajectories that do not cause SLAM failure. We show systematically in simulations how the quality of the SLAM estimates improves dramatically when trajectories are computed using RL. Our method scales effectively across monocular SLAM frameworks, both in simulation and in real-world experiments with a mobile robot.
Comment: Accepted at the 11th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP) 2018. More info can be found at the project page at https://robotics.iiit.ac.in/people/vignesh.prasad/SLAMSafePlanner.html and the supplementary video can be found at https://www.youtube.com/watch?v=420QmM_Z8v
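As a rough picture of the learning loop described above, the sketch below uses a tabular Q-learning agent to choose among a few candidate local trajectories, with the reward intended to penalize SLAM tracking loss or drift. The state encoding, action set, and hyperparameters are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): tabular Q-learning over
# a small set of candidate local trajectories, rewarded for keeping SLAM healthy.
import random
from collections import defaultdict

ACTIONS = ["straight", "arc_left", "arc_right"]   # hypothetical candidate local trajectories
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1             # learning rate, discount, exploration rate

q_table = defaultdict(float)                       # (state, action) -> estimated value

def perceptual_state(tracked_features, parallax_deg):
    """Discretize the perceptual input; the buckets here are illustrative only."""
    return (min(tracked_features // 50, 5), min(int(parallax_deg), 10))

def choose_action(state):
    """Epsilon-greedy selection over the candidate trajectories."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning backup; the reward should penalize SLAM tracking loss or drift."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
```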
Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in the field of monocular SLAM over the past fifteen years has yielded workable systems that have found their way into various applications in robotics and augmented reality. Although filter-based monocular SLAM systems were common at one time, the more efficient keyframe-based solutions are becoming the de facto methodology for building a monocular SLAM system. The objective of this paper is threefold: first, the paper serves as a guideline for people seeking to design their own monocular SLAM according to specific environmental constraints. Second, it presents a survey that covers the various keyframe-based monocular SLAM systems in the literature, detailing the components of their implementation and critically assessing the specific strategies adopted in each proposed solution. Third, the paper provides insight into the direction of future research in this field, to address the major limitations still facing monocular SLAM, namely illumination changes, initialization, highly dynamic motion, poorly textured scenes, repetitive textures, map maintenance, and failure recovery.
DS-SLAM: A Semantic Visual SLAM towards Dynamic Environments
Simultaneous Localization and Mapping (SLAM) is considered a fundamental capability for intelligent mobile robots. Over the past decades, many impressive SLAM systems have been developed and have achieved good performance under certain circumstances. However, some problems are still not well solved, for example how to handle moving objects in dynamic environments and how to make robots truly understand their surroundings and accomplish advanced tasks. In this paper, a robust semantic visual SLAM system for dynamic environments, named DS-SLAM, is proposed. Five threads run in parallel in DS-SLAM: tracking, semantic segmentation, local mapping, loop closing, and dense semantic map creation. DS-SLAM combines a semantic segmentation network with a moving-consistency check to reduce the impact of dynamic objects, and thus localization accuracy is greatly improved in dynamic environments. Meanwhile, a dense semantic octree map is produced, which can be employed for high-level tasks. We conduct experiments both on the TUM RGB-D dataset and in real-world environments. The results demonstrate that the absolute trajectory accuracy of DS-SLAM can be improved by one order of magnitude compared with ORB-SLAM2, making it one of the state-of-the-art SLAM systems in highly dynamic environments. The code is available at our GitHub: https://github.com/ivipsourcecode/DS-SLAM
Comment: 7 pages, accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018).
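The combination of a segmentation mask with a geometric moving-consistency check can be pictured roughly as below. This is a simplified illustration, not the DS-SLAM code: the dynamic class list, the per-point epipolar threshold, and the rule of dropping a point only when both cues agree are assumptions made for the sketch.

```python
# Illustrative sketch: drop feature matches that land on a segmented dynamic
# class AND are inconsistent with the epipolar geometry of the previous frame.
import numpy as np

DYNAMIC_CLASSES = {"person"}   # classes treated as potentially moving (assumption)
EPIPOLAR_THRESH = 1.0          # pixels; illustrative value

def point_to_epipolar_distance(pt_prev, pt_curr, F):
    """Distance from pt_curr to the epipolar line F @ pt_prev (pixel coordinates)."""
    p1 = np.array([pt_prev[0], pt_prev[1], 1.0])
    p2 = np.array([pt_curr[0], pt_curr[1], 1.0])
    line = F @ p1
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def filter_dynamic_keypoints(matches, semantic_label, F):
    """Keep matches that are on static classes or are epipolar-consistent.

    matches: list of ((u_prev, v_prev), (u_curr, v_curr)) pixel pairs
    semantic_label: function (u, v) -> class name in the current frame
    F: 3x3 fundamental matrix between previous and current frame
    """
    kept = []
    for pt_prev, pt_curr in matches:
        on_dynamic = semantic_label(*pt_curr) in DYNAMIC_CLASSES
        inconsistent = point_to_epipolar_distance(pt_prev, pt_curr, F) > EPIPOLAR_THRESH
        if not (on_dynamic and inconsistent):
            kept.append((pt_prev, pt_curr))
    return kept
```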
ORB-SLAM: a Versatile and Accurate Monocular SLAM System
This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide-baseline loop closing and relocalization, and includes fully automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival-of-the-fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation on 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.
Comment: 17 pages, 13 figures. IEEE Transactions on Robotics, 2015. Project webpage (videos, code): http://webdiis.unizar.es/~raulmur/orbslam
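The survival-of-the-fittest idea of discarding redundant keyframes can be sketched as a simple coverage test, where a keyframe is culled once most of its map points are already observed by enough other keyframes. The thresholds and data layout below are illustrative assumptions, not the paper's actual criteria.

```python
# Hypothetical keyframe-culling rule in the spirit of the abstract above:
# a keyframe is redundant if most of its map points are widely observed elsewhere.
def redundant_keyframes(keyframes, min_other_observers=3, redundancy_ratio=0.9):
    """keyframes: dict mapping keyframe id -> set of map-point ids it observes."""
    redundant = []
    for kf_id, points in keyframes.items():
        if not points:
            continue
        covered = sum(
            1 for p in points
            if sum(1 for other_id, other in keyframes.items()
                   if other_id != kf_id and p in other) >= min_other_observers
        )
        if covered / len(points) >= redundancy_ratio:
            redundant.append(kf_id)
    return redundant
```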
Benchmarking and Comparing Popular Visual SLAM Algorithms
This paper contains the performance analysis and benchmarking of two popular visual SLAM algorithms: RGBD-SLAM and RTAB-Map. The dataset used for the analysis is the TUM RGB-D dataset from the Computer Vision Group at TUM. The selected dataset has a large set of image sequences from a Microsoft Kinect RGB-D sensor with highly accurate, time-synchronized ground-truth poses from a motion-capture system. The selected test sequences depict a variety of problems and camera motions faced by Simultaneous Localization and Mapping (SLAM) algorithms, for the purpose of testing the robustness of the algorithms in different situations. The evaluation metrics used for the comparison are Absolute Trajectory Error (ATE) and Relative Pose Error (RPE). The analysis involves comparing the Root Mean Square Error (RMSE) of the two metrics and the processing time of each algorithm. This paper serves as an important aid in the selection of a SLAM algorithm for different scenes and camera motions. The analysis helps reveal the limitations of both SLAM methods, and the paper also points out some underlying flaws in the evaluation metrics used.
Comment: 7 pages, 4 figures.
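For reference, the two metrics can be computed along these lines, assuming the estimated and ground-truth trajectories are given as time-associated and pre-aligned 4x4 pose matrices (timestamp association and rigid alignment are omitted from this sketch).

```python
# Sketch of the ATE and RPE RMSE metrics on pre-associated, pre-aligned trajectories.
import numpy as np

def ate_rmse(gt_poses, est_poses):
    """Absolute Trajectory Error: RMSE of per-frame translational offsets."""
    errs = [np.linalg.norm(gt[:3, 3] - est[:3, 3]) for gt, est in zip(gt_poses, est_poses)]
    return float(np.sqrt(np.mean(np.square(errs))))

def rpe_rmse(gt_poses, est_poses, delta=1):
    """Relative Pose Error: RMSE of translational drift over a fixed frame step."""
    errs = []
    for i in range(len(gt_poses) - delta):
        gt_rel = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        est_rel = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        err = np.linalg.inv(gt_rel) @ est_rel
        errs.append(np.linalg.norm(err[:3, 3]))
    return float(np.sqrt(np.mean(np.square(errs))))
```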
