Human-in-the-Loop SLAM
Building large-scale, globally consistent maps is a challenging problem, made
more difficult in environments with limited access or sparse features, or when
using data collected by novice users. For such scenarios, where
state-of-the-art mapping algorithms produce globally inconsistent maps, we
introduce a systematic approach to incorporating sparse human corrections,
which we term Human-in-the-Loop Simultaneous Localization and Mapping
(HitL-SLAM). Given an initial factor graph for pose graph SLAM, HitL-SLAM
accepts approximate, potentially erroneous, and rank-deficient human input,
infers the intended correction via expectation maximization (EM),
back-propagates the extracted corrections over the pose graph, and finally
jointly optimizes the factor graph including the human inputs as human
correction factor terms, to yield globally consistent large-scale maps. We thus
contribute an EM formulation for inferring potentially rank-deficient human
corrections to mapping, and human correction factor extensions to the factor
graphs for pose graph SLAM that result in a principled approach to joint
optimization of the pose graph while simultaneously accounting for multiple
forms of human correction. We present empirical results showing the
effectiveness of HitL-SLAM at generating globally accurate and consistent maps
even when given poor initial estimates of the map.

Comment: AAAI 201
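The correction pipeline described in the abstract (accept a human constraint, weight it, and jointly optimize it with the original pose graph factors) can be illustrated with a toy 1-D pose graph. This is a minimal sketch, not the paper's implementation: the EM step for inferring the intended correction is omitted, and a human correction is modeled directly as a single extra weighted relative-position factor; all names here are hypothetical.

```python
import numpy as np

def optimize_pose_graph(odom, human_factors, weight=10.0):
    """Jointly optimize a toy 1-D pose graph with human correction factors.

    odom: relative odometry measurements between consecutive poses.
    human_factors: list of (i, j, d) constraints asserting x[j] - x[i] == d,
                   standing in for human correction factor terms.
    Returns the optimized pose vector, with x[0] anchored at 0.
    """
    n = len(odom) + 1
    rows, rhs = [], []

    # Anchor the first pose at the origin to fix the gauge freedom.
    r = np.zeros(n); r[0] = 1.0
    rows.append(r); rhs.append(0.0)

    # Odometry factors: x[i+1] - x[i] = odom[i], unit weight.
    for i, d in enumerate(odom):
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(r); rhs.append(d)

    # Human correction factors, weighted more strongly than odometry.
    for i, j, d in human_factors:
        r = np.zeros(n); r[i] = -weight; r[j] = weight
        rows.append(r); rhs.append(weight * d)

    # Linear least squares over all poses: one joint optimization of
    # odometry and human correction factors together.
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x

# Drifting odometry overestimates each unit step as 1.1; a human
# correction asserts poses 0 and 4 are actually 4.0 apart.
poses = optimize_pose_graph([1.1, 1.1, 1.1, 1.1], [(0, 4, 4.0)])
```

With no human factor the chain accumulates the full drift (the last pose lands at 4.4); with the weighted correction factor included, the joint solve pulls the last pose close to 4.0 and redistributes the residual drift over the intermediate poses, which is the intuition behind back-propagating corrections over the graph.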
Multi-SLAM Systems for Fault-Tolerant Simultaneous Localization and Mapping
Mobile robots need accurate, high-fidelity models of their operating environments in order to complete their tasks safely and efficiently. Generating these models is most often done via Simultaneous Localization and Mapping (SLAM), a paradigm in which the robot alternately estimates the most up-to-date model of the environment and its position relative to this model as it acquires new information from its sensors over time. Because robots operate in many different environments with different compute, memory, sensing, and form constraints, the nature and quality of information available to individual instances of different SLAM systems varies substantially. 'One-size-fits-all' solutions are thus exceedingly difficult to engineer, and highly specialized systems, which represent the state of the art for most types of deployments, are not robust to operating conditions in which their assumptions are not met.

This thesis investigates an alternative approach to these robustness and universality problems by incorporating existing SLAM solutions within a larger framework supported by planning and learning. The central idea is to combine learned models that estimate SLAM algorithm performance under a variety of sensory conditions (in this case, neural networks) with planners designed for planning under uncertainty and partial observability (in this case, partially observable Markov decision processes, POMDPs). Models of existing SLAM algorithms can be learned, and these models can then be used online to estimate the performance of a range of solutions to the SLAM problem at hand. The POMDP policy then selects the appropriate algorithm given the estimated performance, the cost of switching methods, and other information. This general approach may also be applicable to many other robotics problems that rely on data fusion, such as grasp planning, motion planning, or object identification.
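The selection loop the thesis describes (learned performance models feeding a policy that weighs predicted error against the cost of switching methods) can be sketched in simplified form. This is a toy illustration under stated assumptions, not the thesis's system: the learned models are stand-in functions rather than neural networks, and the full POMDP policy is replaced by a greedy one-step cost comparison; every name and feature here is hypothetical.

```python
# Hypothetical learned performance models: each maps a sensory-condition
# feature vector to a predicted map error for one SLAM algorithm. In the
# thesis these would be learned (e.g. neural networks); here they are
# hand-written stand-ins for illustration only.
PERFORMANCE_MODELS = {
    "lidar_slam":  lambda f: 0.1 + 2.0 * f["glass_walls"],
    "visual_slam": lambda f: 0.2 + 3.0 * f["low_light"],
    "wheel_odom":  lambda f: 0.5 + 1.5 * f["slippery_floor"],
}

def select_algorithm(features, current, switch_cost=0.15):
    """Greedy one-step stand-in for the POMDP policy: choose the algorithm
    with the lowest predicted error, charging a penalty for switching away
    from the currently running method."""
    def total_cost(name):
        predicted_error = PERFORMANCE_MODELS[name](features)
        return predicted_error + (switch_cost if name != current else 0.0)
    return min(PERFORMANCE_MODELS, key=total_cost)

# The robot enters a glass-walled atrium: lidar's predicted error spikes,
# so the policy switches to visual SLAM despite the switching cost.
atrium = {"glass_walls": 0.9, "low_light": 0.0, "slippery_floor": 0.2}
choice = select_algorithm(atrium, current="lidar_slam")
```

A real POMDP policy would additionally maintain a belief over the unobserved environment conditions and plan over sequences of switches, but the cost trade-off it resolves at each step has this shape.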