Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
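The "de-facto standard formulation" the survey refers to is maximum a posteriori estimation over a factor graph, typically solved as nonlinear least squares. A minimal illustrative sketch, using an invented 1-D pose graph (real SLAM problems are nonlinear and high-dimensional):

```python
import numpy as np

# Toy 1-D pose graph: relative measurements (i, j, z_ij) meaning x_j - x_i ≈ z_ij.
# The poses and measurement values are invented; the last edge is a loop closure
# whose slight inconsistency (2.9 vs. the 3.0 chain total) the solver distributes.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9),
         (3, 0, -2.9)]  # loop closure back to the start

n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0] = 1.0  # anchor the first pose at 0 to fix the gauge freedom

# MAP estimate under Gaussian noise = linear least squares in this toy case.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))
```

In 2-D or 3-D SLAM the same structure appears, but the residuals are nonlinear in the poses, so solvers iterate Gauss-Newton or Levenberg-Marquardt steps over this kind of sparse system.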
Active SLAM for autonomous underwater exploration
Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that addresses exploration by combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that makes use of the joint map and state entropy reduction. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle's uncertainty bounded, thus creating more consistent maps.
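The action-selection idea in this abstract can be sketched simply: maximizing the joint map-and-state entropy reduction is equivalent to choosing the action with the lowest predicted joint entropy. The candidate actions and variance predictions below are invented, with independent 1-D Gaussians standing in for the real map and vehicle-state distributions:

```python
import math

def gaussian_entropy(var):
    """Differential entropy of a 1-D Gaussian with variance `var`."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Hypothetical predicted variances after each candidate action.
candidates = {
    "revisit_known_area": {"map_var": 0.8, "pose_var": 0.1},
    "explore_frontier":   {"map_var": 0.3, "pose_var": 0.6},
    "hold_position":      {"map_var": 0.8, "pose_var": 0.4},
}

def joint_entropy(pred):
    # Joint entropy of independent map and pose estimates = sum of entropies.
    return gaussian_entropy(pred["map_var"]) + gaussian_entropy(pred["pose_var"])

best = min(candidates, key=lambda a: joint_entropy(candidates[a]))
print(best)
```

With these invented numbers the vehicle chooses to relocalize rather than push the frontier, illustrating how a joint entropy criterion trades exploration against keeping pose uncertainty bounded.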
Multimodal Hierarchical Dirichlet Process-based Active Perception
In this paper, we propose an active perception method for recognizing object
categories based on the multimodal hierarchical Dirichlet process (MHDP). The
MHDP enables a robot to form object categories using multimodal information,
e.g., visual, auditory, and haptic information, which can be observed by
performing actions on an object. However, performing many actions on a target
object requires a long time. In a real-time scenario, i.e., when the time is
limited, the robot has to determine the set of actions that is most effective
for recognizing a target object. We propose an MHDP-based active perception
method that uses the information gain (IG) maximization criterion and lazy
greedy algorithm. We show that the IG maximization criterion is optimal in the
sense that the criterion is equivalent to a minimization of the expected
Kullback--Leibler divergence between a final recognition state and the
recognition state after the next set of actions. However, a straightforward
calculation of IG is practically impossible. Therefore, we derive an efficient
Monte Carlo approximation method for IG by making use of a property of the
MHDP. We also show that the IG has submodular and non-decreasing properties as
a set function because of the structure of the graphical model of the MHDP.
Therefore, the IG maximization problem is reduced to a submodular maximization
problem. This means that greedy and lazy greedy algorithms are effective and
have a theoretical justification for their performance. We conducted an
experiment using an upper-torso humanoid robot and a second one using synthetic
data. The experimental results show that the method enables the robot to select
a set of actions that allow it to recognize target objects quickly and
accurately. The results support our theoretical outcomes.
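The lazy greedy algorithm the abstract invokes exploits submodularity (diminishing returns): a cached marginal gain can only shrink as the chosen set grows, so stale heap entries need re-evaluation only when they surface at the top. A sketch using a standard weighted-coverage set function in place of the paper's MHDP information gain, with invented action names:

```python
import heapq

def lazy_greedy(items, gain, k):
    """Select up to k items by lazy greedy maximization of a submodular gain."""
    chosen = []
    # Max-heap of (-cached_gain, item, round_when_cached).
    heap = [(-gain(chosen, it), it, 0) for it in items]
    heapq.heapify(heap)
    while heap and len(chosen) < k:
        neg_g, it, rnd = heapq.heappop(heap)
        if rnd == len(chosen):          # cached gain is current: take the item
            chosen.append(it)
        else:                           # stale cache: re-evaluate and push back
            heapq.heappush(heap, (-gain(chosen, it), it, len(chosen)))
    return chosen

# Coverage example: each hypothetical "action" observes a set of features.
coverage = {"look": {1, 2, 3}, "grasp": {3, 4}, "shake": {5}, "tap": {1, 2}}

def gain(chosen, it):
    covered = set().union(*(coverage[c] for c in chosen)) if chosen else set()
    return len(coverage[it] - covered)  # marginal number of new features

print(lazy_greedy(list(coverage), gain, 2))
```

For a non-decreasing submodular set function, greedy (and hence lazy greedy, which returns the same set) achieves at least a (1 - 1/e) fraction of the optimal value, which is the performance guarantee the abstract alludes to.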
Learning to Prevent Monocular SLAM Failure using Reinforcement Learning
Monocular SLAM refers to using a single camera to estimate robot ego-motion
while building a map of the environment. While Monocular SLAM is a well-studied
problem, automating it by integrating it with trajectory planning
frameworks is particularly challenging. This paper presents a novel formulation
based on Reinforcement Learning (RL) that generates fail-safe trajectories
wherein the SLAM-generated outputs do not deviate largely from their true
values. In essence, the RL framework successfully learns the otherwise
complex relation between perceptual inputs and motor actions and uses this
knowledge to generate trajectories that do not cause failure of SLAM. We show
systematically in simulations how the quality of the SLAM dramatically improves
when trajectories are computed using RL. Our method scales effectively across
Monocular SLAM frameworks in both simulation and in real world experiments with
a mobile robot.
Comment: Accepted at the 11th Indian Conference on Computer Vision, Graphics
and Image Processing (ICVGIP) 2018. More info can be found at the project page
at https://robotics.iiit.ac.in/people/vignesh.prasad/SLAMSafePlanner.html and
the supplementary video can be found at
https://www.youtube.com/watch?v=420QmM_Z8v
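The idea of learning actions that keep the estimator healthy can be caricatured with tabular Q-learning on a toy grid world, where one cell stands in for a SLAM-failure-prone motion (e.g. low-parallax viewing). Everything here (the MDP, rewards, and hyperparameters) is invented and far simpler than the paper's setting:

```python
import random

ROWS, COLS = 2, 3
START, GOAL, BAD = (0, 0), (0, 2), (0, 1)   # BAD: hypothetical failure-prone cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(s, a):
    r = min(max(s[0] + a[0], 0), ROWS - 1)
    c = min(max(s[1] + a[1], 0), COLS - 1)
    s2 = (r, c)
    if s2 == GOAL:
        return s2, 10.0, True
    return s2, (-5.0 if s2 == BAD else -1.0), False  # penalize risky cell

random.seed(0)
Q = {}
q = lambda s, a: Q.get((s, a), 0.0)
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(2000):                        # tabular Q-learning episodes
    s, done, steps = START, False, 0
    while not done and steps < 50:
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: q(s, a)))
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(q(s2, b) for b in ACTIONS)
        Q[(s, a)] = (1 - alpha) * q(s, a) + alpha * target
        s, steps = s2, steps + 1

# Greedy rollout: the learned path detours around the failure-prone cell.
s, path = START, [START]
while s != GOAL and len(path) < 10:
    s, _, _ = step(s, max(ACTIONS, key=lambda a: q(s, a)))
    path.append(s)
print(path)
```

The learned detour is slightly longer but avoids the penalized cell, mirroring the paper's intuition that a small motion cost is worth paying to keep SLAM outputs close to their true values.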
Active vision for dexterous grasping of novel objects
How should a robot direct active vision so as to ensure reliable grasping? We
answer this question for the case of dexterous grasping of unfamiliar objects.
By dexterous grasping we simply mean grasping by any hand with more than two
fingers, such that the robot has some choice about where to place each finger.
Such grasps typically fail in one of two ways, either unmodeled objects in the
scene cause collisions or object reconstruction is insufficient to ensure that
the grasp points provide a stable force closure. These problems can be solved
more easily if active sensing is guided by the anticipated actions. Our
approach has three stages. First, we take a single view and generate candidate
grasps from the resulting partial object reconstruction. Second, we drive the
active vision approach to maximise surface reconstruction quality around the
planned contact points. During this phase, the anticipated grasp is continually
refined. Third, we direct gaze to improve the safety of the planned reach to
grasp trajectory. We show, on a dexterous manipulator with a camera on the
wrist, that our approach (80.4% success rate) outperforms a randomised
algorithm (64.3% success rate).
Comment: IROS 2016. Supplementary video: https://youtu.be/uBSOO6tMzw
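Stage two of the pipeline (driving active vision to improve reconstruction around the planned contacts) can be sketched as next-best-view scoring: weight each not-yet-reconstructed surface point by its proximity to a planned contact, and pick the view that covers the most weighted points. The points, view names, and visibility sets below are synthetic stand-ins, not the paper's method:

```python
import math

contacts = [(0.0, 0.0), (1.0, 0.0)]   # hypothetical planned finger contacts

# Surface points not yet reconstructed, each with the views that would see it.
unseen = {
    (0.1, 0.1):  {"left", "top"},
    (0.9, -0.1): {"right", "top"},
    (3.0, 3.0):  {"left"},            # far from any contact: low relevance
}

def relevance(p):
    """Weight a point by proximity to the nearest planned contact."""
    d = min(math.dist(p, c) for c in contacts)
    return math.exp(-d)

def view_score(view):
    # Total contact-weighted coverage a view would add.
    return sum(relevance(p) for p, vis in unseen.items() if view in vis)

best = max(["left", "right", "top"], key=view_score)
print(best)
```

Here the view covering both near-contact points wins over views that also see distant, grasp-irrelevant surface, capturing the abstract's point that sensing should be guided by the anticipated grasp rather than by raw coverage.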