6,753 research outputs found
Kalman Filter Tuning with Bayesian Optimization
Many state estimation algorithms must be tuned: given the state space process and observation models, the process and observation noise parameters must be chosen. Conventional tuning approaches rely on heuristic hand-tuning or gradient-based optimization techniques to minimize a performance cost function. However, the relationship between tuned noise values and estimator performance is highly nonlinear and stochastic. Therefore, the tuning solutions can easily get trapped in local minima, which can lead to poor choices of noise parameters and suboptimal estimator performance. This paper describes how Bayesian Optimization (BO) can overcome these issues. BO poses optimization as a Bayesian search problem for a stochastic "black box" cost function, where the goal is to search the solution space to maximize the probability of improving the current best solution. As such, BO offers a principled approach to optimization-based estimator tuning in the presence of local minima and performance stochasticity. While extended Kalman filters (EKFs) are the main focus of this work, BO can be similarly used to tune other related state space filters. The method presented here uses performance metrics derived from normalized innovation squared (NIS) filter residuals obtained via sensor data, which renders knowledge of ground-truth states unnecessary. The robustness, accuracy, and reliability of BO-based tuning is illustrated on practical nonlinear state estimation problems, including closed-loop aero-robotic control.
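A minimal sketch of the NIS-based objective described above, assuming a linear Kalman filter as a stand-in for the paper's EKFs; the name nis_cost and its arguments are hypothetical illustrations, not the authors' code:

```python
# Minimal sketch (assumption: a linear KF stands in for the paper's EKF;
# `nis_cost` and its arguments are hypothetical names for illustration).
import numpy as np

def nis_cost(q, r, zs, F, H, x0, P0):
    """Run a Kalman filter with candidate noise variances (q, r) over the
    measurement sequence zs and return a NIS-based consistency cost."""
    n = F.shape[0]          # state dimension
    m = H.shape[0]          # measurement dimension
    Q = q * np.eye(n)
    R = r * np.eye(m)
    x, P = x0.copy(), P0.copy()
    nis_values = []
    for z in zs:
        # Predict step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Innovation and its covariance.
        y = z - H @ x
        S = H @ P @ H.T + R
        nis_values.append(float(y @ np.linalg.solve(S, y)))
        # Measurement update.
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(n) - K @ H) @ P
    # For a consistent filter, NIS is chi-square with m degrees of
    # freedom, so its time average should be close to m.
    return abs(np.mean(nis_values) - m)
```

For a consistent filter the time-averaged NIS should equal the measurement dimension, so this cost penalizes the mismatch; a BO tuner would then minimize it over the noise variances (q, r) using only sensor data, with no ground truth required.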
Weak in the NEES?: Auto-tuning Kalman Filters with Bayesian Optimization
Kalman filters are routinely used for many data fusion applications including
navigation, tracking, and simultaneous localization and mapping problems.
However, significant time and effort is frequently required to tune various
Kalman filter model parameters, e.g. process noise covariance, pre-whitening
filter models for non-white noise, etc. Conventional optimization techniques
for tuning can get stuck in poor local minima and can be expensive to implement
with real sensor data. To address these issues, a new "black box" Bayesian
optimization strategy is developed for automatically tuning Kalman filters. In
this approach, performance is characterized by one of two stochastic objective
functions: normalized estimation error squared (NEES) when ground truth state
models are available, or the normalized innovation error squared (NIS) when
only sensor data is available. By intelligently sampling the parameter space to
both learn and exploit a nonparametric Gaussian process surrogate function for
the NEES/NIS costs, Bayesian optimization can efficiently identify multiple
local minima and provide uncertainty quantification on its results. Comment:
Final version presented at FUSION 2018 Conference, Cambridge, UK, July 2018
(submitted June 1, 2018).
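As a sketch of the "black box" loop itself, assuming scikit-optimize (skopt) as the GP-surrogate optimizer and a hypothetical stand-in objective filter_consistency_cost in place of a real NEES/NIS cost:

```python
# Minimal sketch of GP-surrogate Bayesian optimization, assuming
# scikit-optimize (skopt); `filter_consistency_cost` is a hypothetical
# stand-in for the NEES/NIS objective described above.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)

def filter_consistency_cost(params):
    """Stochastic toy objective: in practice, replace with a NEES cost
    (needs ground truth) or NIS cost (needs only sensor data) computed
    from Monte Carlo filter runs at these noise parameters."""
    log_q, log_r = params
    # Toy surface with multiple local minima plus evaluation noise.
    return (np.sin(3.0 * log_q) * np.cos(2.0 * log_r)
            + 0.1 * (log_q ** 2 + log_r ** 2)
            + 0.05 * rng.standard_normal())

# GP surrogate over log-scaled noise parameters; expected improvement
# is one common acquisition choice for sampling the parameter space.
result = gp_minimize(
    filter_consistency_cost,
    dimensions=[Real(-3.0, 3.0, name="log_q"),
                Real(-3.0, 3.0, name="log_r")],
    acq_func="EI",
    n_calls=40,
    random_state=0,
)
print("best (log_q, log_r):", result.x, "cost:", result.fun)
```

Because the GP surrogate models both the mean and the uncertainty of the cost surface, the sampled evaluations can also be used to flag multiple candidate local minima and to quantify confidence in the returned optimum, rather than reporting only a single best point.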
State-space solutions to the dynamic magnetoencephalography inverse problem using high performance computing
Determining the magnitude and location of neural sources within the brain
that are responsible for generating magnetoencephalography (MEG) signals
measured on the surface of the head is a challenging problem in functional
neuroimaging. The number of potential sources within the brain exceeds by an
order of magnitude the number of recording sites. As a consequence, the
estimates for the magnitude and location of the neural sources will be
ill-conditioned because of the underdetermined nature of the problem. One
well-known technique designed to address this imbalance is the minimum norm
estimator (MNE). This approach imposes an ℓ2 regularization constraint that
serves to stabilize and condition the source parameter estimates. However,
these classes of regularizers are static in time and do not consider the
temporal constraints inherent to the biophysics of the MEG experiment. In this
paper we propose a dynamic state-space model that accounts for both spatial and
temporal correlations within and across candidate intracortical sources. In our
model, the observation model is derived from the steady-state solution to
Maxwell's equations while the latent model representing neural dynamics is
given by a random walk process. Comment: Published in at
http://dx.doi.org/10.1214/11-AOAS483 the Annals of Applied Statistics
(http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics
(http://www.imstat.org).
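A minimal sketch of the contrast the abstract draws, assuming a hypothetical lead field G (m sensors by n sources, with n much larger than m) and sensor time series Y; static_mne and dynamic_kf are illustrative names, not the authors' code:

```python
# Minimal sketch: static minimum norm estimation versus a dynamic
# state-space (Kalman filter) solution with random-walk source dynamics.
# The lead field G and data Y are hypothetical stand-ins.
import numpy as np

def static_mne(G, y, lam):
    """Static l2-regularized minimum norm estimate for one time point:
    x_hat = G^T (G G^T + lam I)^(-1) y."""
    m = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(m), y)

def dynamic_kf(G, Y, q, r):
    """Kalman filter with random-walk source dynamics x_t = x_{t-1} + w_t
    and MEG observation y_t = G x_t + v_t, so the estimates share
    strength across time as well as space."""
    m, n = G.shape
    x, P = np.zeros(n), np.eye(n)
    Q, R = q * np.eye(n), r * np.eye(m)
    estimates = []
    for y in Y:                      # Y has shape (T, m)
        P = P + Q                    # predict; F = I for a random walk
        S = G @ P @ G.T + R          # innovation covariance
        K = P @ G.T @ np.linalg.solve(S, np.eye(m))  # Kalman gain
        x = x + K @ (y - G @ x)      # measurement update
        P = (np.eye(n) - K @ G) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

The n-by-n source covariance P is the computational bottleneck at realistic source counts, which is where the high performance computing of the title comes in.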