Novel insights into the impact of graph structure on SLAM
© 2014 IEEE. SLAM can be viewed as an estimation problem over graphs. It is well known that the topology of each dataset affects the quality of the corresponding optimal estimate. In this paper we present a formal analysis of the impact of graph structure on the reliability of the maximum likelihood estimator. In particular, we show that the number of spanning trees in the graph is closely related to the D-optimality criterion in experimental design. We also reveal that, in a special class of linear-Gaussian estimation problems over graphs, the algebraic connectivity is related to the E-optimality design criterion. Furthermore, we explain how the average node degree of the graph is related to the ratio between the minimum achievable negative log-likelihood and its value at the ground truth. These novel insights give us a deeper understanding of the SLAM problem. Finally, we discuss two important applications of our analysis: active measurement selection and graph pruning. The results obtained from simulations and experiments on real data confirm our theoretical findings.
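The two graph quantities the abstract relates to design criteria can be computed directly. The sketch below (plain NumPy, illustrative only and not code from the paper) counts spanning trees via Kirchhoff's matrix-tree theorem and computes the algebraic connectivity as the second-smallest Laplacian eigenvalue:

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def num_spanning_trees(n, edges):
    """Kirchhoff's matrix-tree theorem: the spanning-tree count equals the
    determinant of the Laplacian with any one row and column deleted."""
    return round(np.linalg.det(laplacian(n, edges)[1:, 1:]))

def algebraic_connectivity(n, edges):
    """Second-smallest Laplacian eigenvalue (the Fiedler value)."""
    return np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1]

# A 4-pose loop (a single loop closure) as a tiny example graph.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
trees = num_spanning_trees(4, cycle)       # a 4-cycle has 4 spanning trees
fiedler = algebraic_connectivity(4, cycle) # 2.0 for the 4-cycle
```

Denser, better-connected pose graphs push both quantities up, which is the intuition behind using them as reliability proxies.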
Modeling Perceptual Aliasing in SLAM via Discrete-Continuous Graphical Models
Perceptual aliasing is one of the main causes of failure for Simultaneous
Localization and Mapping (SLAM) systems operating in the wild. Perceptual
aliasing is the phenomenon where different places generate a similar visual
(or, in general, perceptual) footprint. This causes spurious measurements to be
fed to the SLAM estimator, which typically results in incorrect localization
and mapping results. The problem is exacerbated by the fact that those outliers
are highly correlated, in the sense that perceptual aliasing creates a large
number of mutually-consistent outliers. Another issue stems from the fact that
most state-of-the-art techniques rely on a given trajectory guess (e.g., from
odometry) to discern between inliers and outliers and this makes the resulting
pipeline brittle, since the accumulation of error may result in incorrect
choices and recovery from failures is far from trivial. This work provides a
unified framework to model perceptual aliasing in SLAM and provides practical
algorithms that can cope with outliers without relying on any initial guess. We
present two main contributions. The first is a Discrete-Continuous Graphical
Model (DC-GM) for SLAM: the continuous portion of the DC-GM captures the
standard SLAM problem, while the discrete portion describes the selection of
the outliers and models their correlation. The second contribution is a
semidefinite relaxation to perform inference in the DC-GM that returns
estimates with provable sub-optimality guarantees. Experimental results on
standard benchmarking datasets show that the proposed technique compares
favorably with state-of-the-art methods while not relying on an initial guess
for optimization.
Comment: 13 pages, 14 figures, 1 table
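The discrete-continuous split can be made concrete with a scalar toy problem (hypothetical numbers; this brute-force enumeration only illustrates the model class, it is not the paper's DC-GM or its semidefinite relaxation). Binary indicators form the discrete part; given an indicator assignment, the continuous part reduces to ordinary least squares:

```python
import itertools
import numpy as np

# Three mutually-consistent measurements of one scalar state plus one
# aliased outlier (hypothetical values).
z = np.array([1.0, 1.05, 0.95, 5.0])
penalty = 1.0  # cost of declaring a measurement an outlier

best = None
for mask in itertools.product([0, 1], repeat=len(z)):  # 1 = inlier
    inliers = z[np.array(mask, dtype=bool)]
    if inliers.size == 0:
        continue  # keep at least one measurement
    x_hat = inliers.mean()  # continuous part: least-squares fit to inliers
    cost = np.sum((inliers - x_hat) ** 2) + penalty * (len(z) - inliers.size)
    if best is None or cost < best[0]:
        best = (cost, mask, x_hat)

cost, mask, x_hat = best  # the aliased z = 5.0 is rejected; x_hat is ~1.0
```

Enumeration is exponential in the number of measurements, which is why the paper resorts to a semidefinite relaxation for inference at scale.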
Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM
Sparsity has been widely recognized as crucial for efficient optimization in
graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect
the set of incorporated measurements, many methods for sparsification have been
proposed in hopes of reducing computation. These methods often focus narrowly
on reducing edge count without regard for structure at a global level. Such
structurally-naive techniques can fail to produce significant computational
savings, even after aggressive pruning. In contrast, simple heuristics such as
measurement decimation and keyframing are known empirically to produce
significant computation reductions. To demonstrate why, we propose a
quantitative metric called elimination complexity (EC) that bridges the
existing analytic gap between graph structure and computation. EC quantifies
the complexity of the primary computational bottleneck: the factorization step
of a Gauss-Newton iteration. Using this metric, we show rigorously that
decimation and keyframing impose favorable global structures and therefore
achieve computation reductions that scale polynomially with the pruning rate.
We additionally present numerical results
showing EC provides a good approximation of computation in both batch and
incremental (iSAM2) optimization and demonstrate that pruning methods promoting
globally-efficient structure outperform those that do not.
Comment: Pre-print accepted to ICRA 2018
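The link between global graph structure and the factorization bottleneck can be seen in a toy proxy (not the paper's EC metric): count nonzeros in the Cholesky factor of a chain-like information matrix, with and without a long-range loop closure that induces fill-in.

```python
import numpy as np

def info_matrix(n, edges):
    """Toy information matrix: graph Laplacian plus an identity prior,
    standing in for the Gauss-Newton normal equations of a pose graph."""
    H = np.eye(n)
    for i, j in edges:
        H[i, i] += 1.0; H[j, j] += 1.0
        H[i, j] -= 1.0; H[j, i] -= 1.0
    return H

def cholesky_nnz(H):
    """Nonzeros in the Cholesky factor: a crude proxy for the cost of the
    factorization step of a Gauss-Newton iteration."""
    return int(np.count_nonzero(np.linalg.cholesky(H)))

n = 50
chain = [(i, i + 1) for i in range(n - 1)]  # odometry chain: tridiagonal H
loopy = chain + [(0, n - 1)]                # plus one long-range loop closure

nnz_chain = cholesky_nnz(info_matrix(n, chain))  # stays banded: 2n - 1
nnz_loopy = cholesky_nnz(info_matrix(n, loopy))  # fill-in spreads along the last row
```

Edge count barely differs between the two graphs, yet the factor's sparsity does, which is exactly the structurally-global effect the abstract argues edge-counting heuristics miss.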
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: "Do robots need SLAM?"
and "Is SLAM solved?"
Lagrangian Duality in 3D SLAM: Verification Techniques and Optimal Solutions
State-of-the-art techniques for simultaneous localization and mapping (SLAM)
employ iterative nonlinear optimization methods to compute an estimate for
robot poses. While these techniques often work well in practice, they do not
provide guarantees on the quality of the estimate. This paper shows that
Lagrangian duality is a powerful tool to assess the quality of a given
candidate solution. Our contribution is threefold. First, we discuss a revised
formulation of the SLAM inference problem. We show that this formulation is
probabilistically grounded and has the advantage of leading to an optimization
problem with quadratic objective. The second contribution is the derivation of
the corresponding Lagrangian dual problem. The SLAM dual problem is a (convex)
semidefinite program, which can be solved reliably and globally by
off-the-shelf solvers. The third contribution is to discuss the relation
between the original SLAM problem and its dual. We show that from the dual
problem, one can evaluate the quality (i.e., the suboptimality gap) of a
candidate SLAM solution, and ultimately provide a certificate of optimality.
Moreover, when the duality gap is zero, one can compute a guaranteed optimal
SLAM solution from the dual problem, circumventing non-convex optimization. We
present extensive (real and simulated) experiments supporting our claims and
discuss practical relevance and open problems.
Comment: 10 pages, 4 figures
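The certificate idea is easiest to see on a toy convex problem (not the paper's SLAM dual): by weak duality, the dual optimum lower-bounds the cost of every feasible candidate, so a zero suboptimality gap certifies global optimality.

```python
import numpy as np

# Toy problem: minimize 0.5 * ||x||^2  subject to  a.T @ x = b.
# It is convex, so strong duality holds and the gap closes at the optimum.
a = np.array([3.0, 4.0])
b = 10.0

def primal_cost(x):
    return 0.5 * float(x @ x)

# Dual function: g(nu) = min_x 0.5||x||^2 + nu (a.T x - b)
#              = -0.5 nu^2 ||a||^2 - nu b,
# maximized at nu* = -b / ||a||^2, giving the bound b^2 / (2 ||a||^2).
dual_bound = b ** 2 / (2.0 * float(a @ a))

x_opt = b * a / float(a @ a)    # the true minimizer (feasible: a @ x_opt = b)
x_feas = np.array([2.0, 1.0])   # feasible (3*2 + 4*1 = 10) but suboptimal

gap_opt = primal_cost(x_opt) - dual_bound    # ~0: certificate of optimality
gap_feas = primal_cost(x_feas) - dual_bound  # > 0: provably suboptimal
```

In the paper the same recipe is applied to the SLAM likelihood, with the dual being a semidefinite program rather than this closed-form scalar maximization.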
An Effective Multi-Cue Positioning System for Agricultural Robotics
The self-localization capability is a crucial component for Unmanned Ground
Vehicles (UGV) in farming applications. Approaches based solely on visual cues
or on low-cost GPS are easily prone to fail in such scenarios. In this paper,
we present a robust and accurate 3D global pose estimation framework, designed
to take full advantage of heterogeneous sensory data. By modeling the pose
estimation problem as a pose graph optimization, our approach simultaneously
mitigates the cumulative drift introduced by motion estimation systems (wheel
odometry, visual odometry, ...), and the noise introduced by raw GPS readings.
Along with a suitable motion model, our system also integrates two additional
types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random
Field assumption. We demonstrate how using these additional cues substantially
reduces the error along the altitude axis and, moreover, how this benefit
spreads to the other components of the state. We report exhaustive experiments
combining several sensor setups, showing accuracy improvements ranging from 37%
to 76% with respect to the exclusive use of a GPS sensor. We show that our
approach provides accurate results even if the GPS unexpectedly changes
positioning mode. The code of our system, along with the acquired datasets, is
released with this paper.
Comment: Accepted for publication in IEEE Robotics and Automation Letters,
2018
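A minimal 1-D analogue of the pose-graph fusion described above (hypothetical readings and weights, not the authors' system) shows how relative odometry factors and absolute GPS-like factors combine into one weighted least-squares problem:

```python
import numpy as np

# Hypothetical 1-D pose graph: four poses, relative odometry between
# consecutive poses, and noisy absolute (GPS-like) fixes at both ends.
odom = [1.0, 1.0, 1.0]        # measured displacements x[i+1] - x[i]
gps = {0: 0.1, 3: 2.6}        # absolute readings at poses 0 and 3
w_odom, w_gps = 1.0, 0.5      # information (inverse-variance) weights

n = len(odom) + 1
rows, rhs, weights = [], [], []
for i, d in enumerate(odom):  # relative factor: x[i+1] - x[i] = d
    r = np.zeros(n)
    r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(d); weights.append(w_odom)
for i, zi in gps.items():     # absolute factor: x[i] = zi
    r = np.zeros(n)
    r[i] = 1.0
    rows.append(r); rhs.append(zi); weights.append(w_gps)

# Weighted linear least squares: the absolute factors anchor the graph
# while the odometry factors keep consecutive poses consistent.
sw = np.sqrt(np.array(weights))
A = sw[:, None] * np.array(rows)
rhs_w = sw * np.array(rhs)
x, *_ = np.linalg.lstsq(A, rhs_w, rcond=None)
# The estimated span x[3] - x[0] lands between the GPS span (2.5) and the
# odometry span (3.0): each cue corrects the other's error.
```

The paper's additional cues (Digital Elevation Model, Markov Random Field) enter the same machinery as further weighted factors on the pose graph.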