A Spectral Learning Approach to Range-Only SLAM
We present a novel spectral learning algorithm for simultaneous localization
and mapping (SLAM) from range data with known correspondences. This algorithm
is an instance of a general spectral system identification framework, from
which it inherits several desirable properties, including statistical
consistency and no local optima. Compared with popular batch optimization or
multiple-hypothesis tracking (MHT) methods for range-only SLAM, our spectral
approach offers guaranteed low computational requirements and good tracking
performance. Compared with popular extended Kalman filter (EKF) or extended
information filter (EIF) approaches, and many MHT ones, our approach does not
need to linearize a transition or measurement model; such linearizations can
cause severe errors in EKFs and EIFs, and to a lesser extent MHT, particularly
for the highly non-Gaussian posteriors encountered in range-only SLAM. We
provide a theoretical analysis of our method, including finite-sample error
bounds. Finally, we demonstrate on a real-world robotic SLAM problem that our
algorithm is not only theoretically justified, but works well in practice: in a
comparison of multiple methods, the lowest errors come from a combination of
our algorithm with batch optimization, but our method alone produces nearly as
good a result at far lower computational cost.
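The abstract's spectral system identification idea can be illustrated with a minimal subspace-style sketch: stack past and future observation windows, take a rank-k SVD of their covariance to obtain latent state coordinates, and fit linear dynamics in that space. This is a generic illustration under assumed window sizes, not the paper's exact range-only SLAM algorithm.

```python
import numpy as np

def spectral_sysid(obs, k=2, past=3, future=3):
    """Hedged sketch of spectral (subspace) system identification:
    a rank-k SVD of the future-past covariance yields a latent state
    space; dynamics are then a single least-squares regression, so
    there are no local optima to get stuck in."""
    T, d = obs.shape
    n = T - past - future + 1
    # Hankel-style matrices of flattened past and future windows.
    P = np.stack([obs[t:t + past].ravel() for t in range(n)])
    F = np.stack([obs[t + past:t + past + future].ravel() for t in range(n)])
    # Rank-k SVD of the future-past covariance spans the state subspace.
    U, s, Vt = np.linalg.svd(F.T @ P / n, full_matrices=False)
    U = U[:, :k]
    states = F @ U                       # n x k learned latent states
    # One-step linear dynamics in the learned coordinates.
    A, *_ = np.linalg.lstsq(states[:-1], states[1:], rcond=None)
    return A, states
```

On a noisy 2-D oscillation (a stand-in for range readings), this recovers a 2-D state with linear one-step dynamics; the closed-form SVD and regression are what give the guaranteed low computational cost mentioned above.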
Predictive-State Decoders: Encoding the Future into Recurrent Networks
Recurrent neural networks (RNNs) are a vital modeling technique that relies on
internal states learned indirectly through optimization of a supervised,
unsupervised, or reinforcement training loss. RNNs are used to model dynamic
processes that are characterized by underlying latent states whose form is
often unknown, precluding their analytic representation inside an RNN. In the
Predictive-State Representation (PSR) literature, latent state processes are
modeled by an internal state representation that directly models the
distribution of future observations, and most recent work in this area has
relied on explicitly representing and targeting sufficient statistics of this
probability distribution. We seek to combine the advantages of RNNs and PSRs by
augmenting existing state-of-the-art recurrent neural networks with
Predictive-State Decoders (PSDs), which add supervision to the network's
internal state representation to target predicting future observations.
Predictive-State Decoders are simple to implement and easily incorporated into
existing training pipelines via additional loss regularization. We demonstrate
the effectiveness of PSDs with experimental results in three different domains:
probabilistic filtering, Imitation Learning, and Reinforcement Learning. In
each, our method improves statistical performance of state-of-the-art recurrent
baselines and does so with fewer iterations and less data.
Comment: NIPS 201
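The loss-regularization idea can be sketched in a few lines: a linear decoder maps each hidden state to the next k observations, and its squared error is added to the task loss. The plain-numpy RNN cell, layer sizes, and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def psd_loss(obs, W_in, W_rec, W_task, W_psd, targets, k=2, lam=0.5):
    """Hedged sketch of a Predictive-State Decoder regularizer:
    the hidden state is supervised to predict the next k observations,
    a statistic of the distribution over future observations, on top
    of whatever the task loss is."""
    h = np.zeros(W_rec.shape[0])
    task_loss, psd_reg, n = 0.0, 0.0, 0
    for t in range(len(obs) - k):
        h = np.tanh(W_in @ obs[t] + W_rec @ h)          # vanilla RNN cell
        task_loss += np.sum((W_task @ h - targets[t]) ** 2)
        # Predictive-state supervision: decode the next k observations
        # from the current internal state.
        future = obs[t + 1:t + 1 + k].ravel()
        psd_reg += np.sum((W_psd @ h - future) ** 2)
        n += 1
    return task_loss / n + lam * psd_reg / n
```

Because the regularizer is just an extra additive loss term, it drops into existing training pipelines without changing the recurrent architecture itself, which is the "simple to implement" claim above.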
Diffusion Maps Kalman Filter for a Class of Systems with Gradient Flows
In this paper, we propose a non-parametric method for state estimation of
high-dimensional nonlinear stochastic dynamical systems, which evolve according
to gradient flows with isotropic diffusion. We combine diffusion maps, a
manifold learning technique, with a linear Kalman filter and with concepts from
Koopman operator theory. More concretely, using diffusion maps, we construct
data-driven virtual state coordinates, which linearize the system model. Based
on these coordinates, we devise a data-driven framework for state estimation
using the Kalman filter. We demonstrate the strengths of our method with
respect to both parametric and non-parametric algorithms in three tracking
problems. In particular, applying the approach to actual recordings of
hippocampal neural activity in rodents directly yields a representation of the
position of the animals. We show that the proposed method outperforms competing
non-parametric algorithms in the examined stochastic problem formulations.
Additionally, we obtain results comparable to classical parametric algorithms,
which, in contrast to our method, are equipped with model knowledge.
Comment: 15 pages, 12 figures, submitted to IEEE TS
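The data-driven coordinate construction can be sketched with a bare-bones diffusion-maps embedding: a Gaussian kernel on pairwise distances, row-normalization into a Markov matrix, and the leading non-trivial eigenvectors as virtual state coordinates. This is a generic diffusion-maps sketch; the paper's bandwidth choices and Koopman-based linearization step are omitted.

```python
import numpy as np

def diffusion_maps(X, eps=1.0, k=2):
    """Hedged sketch of diffusion maps: the top non-trivial
    eigenvectors of a row-normalized Gaussian kernel give k
    data-driven coordinates in which a linear (Kalman) model
    can then be applied."""
    # Pairwise squared distances and Gaussian affinity kernel.
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D2 / eps)
    # Row-normalize to a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return vecs[:, order[1:k + 1]].real
```

For points sampled on a circle, the two leading coordinates recover the circular geometry from distances alone, which is how a linear filter can run on data with no parametric model.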
Data-driven model reduction and transfer operator approximation
In this review paper, we will present different data-driven dimension
reduction techniques for dynamical systems that are based on transfer operator
theory as well as methods to approximate transfer operators and their
eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out
similarities and differences between methods developed independently by the
dynamical systems, fluid dynamics, and molecular dynamics communities such as
time-lagged independent component analysis (TICA), dynamic mode decomposition
(DMD), and their respective generalizations. As a result, extensions and best
practices developed for one particular method can be carried over to other
related methods.
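Dynamic mode decomposition, one of the methods the review compares, can be sketched in a few lines: given snapshot pairs (X, Y) with Y ≈ AX, project the regression onto the leading SVD subspace of X and eigendecompose. This is the standard exact-DMD recipe, not anything specific to this review; TICA and related transfer-operator methods differ mainly in how the regression is whitened or symmetrized.

```python
import numpy as np

def dmd(X, Y, r=2):
    """Sketch of exact DMD: rank-r approximation of the linear map
    A with Y ~ A X, returning its eigenvalues (growth/oscillation
    rates) and spatial modes."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    # Project A = Y X^+ onto the leading POD subspace.
    A_tilde = U.conj().T @ Y @ Vt.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    # Lift eigenvectors back to the full space (exact DMD modes).
    modes = Y @ Vt.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```

On snapshots of a decaying rotation, the recovered eigenvalue moduli match the true decay rate, the kind of spectral information (eigenvalues, eigenfunctions, modes) the review is concerned with.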