Data Assimilation: A Mathematical Introduction
These notes provide a systematic mathematical treatment of the subject of
data assimilation.
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since it is often computationally
intensive to implement these methods, especially in high dimensional problems,
approximate techniques such as approximating the posterior by a Dirac or a
Gaussian distribution are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system, and in
the stochastic dynamics case the subsequent states of the system, and the data
comprises partial and noisy observations of that (possibly stochastic)
dynamical system. We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
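As a minimal sketch of the sampling methods mentioned above, the following applies random-walk Metropolis, one of the MCMC methods the notes introduce, to a toy one-dimensional Bayesian inverse problem with a linear forward map, Gaussian prior, and Gaussian noise, so the posterior is known in closed form and the sampler can be checked against it. The model and step size are illustrative assumptions, not taken from the notes.

```python
import numpy as np

# Toy Bayesian inverse problem: recover u from y = G(u) + eta,
# with G(u) = u, prior u ~ N(0, 1), noise eta ~ N(0, 1), one datum y = 2.
# The posterior is then exactly N(1, 1/2), which MCMC should reproduce.
rng = np.random.default_rng(0)
y = 2.0

def log_posterior(u):
    # log prior + log likelihood (up to an additive constant)
    return -0.5 * u**2 - 0.5 * (y - u) ** 2

def rwm(n_samples, step=1.0, u0=0.0):
    # Random-walk Metropolis: propose u' = u + step * xi,
    # accept with probability min(1, pi(u') / pi(u)).
    samples = np.empty(n_samples)
    u, lp = u0, log_posterior(u0)
    for i in range(n_samples):
        u_prop = u + step * rng.standard_normal()
        lp_prop = log_posterior(u_prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            u, lp = u_prop, lp_prop
        samples[i] = u
    return samples

samples = rwm(50_000)[5_000:]          # discard burn-in
print(samples.mean(), samples.var())   # ~1.0 and ~0.5
```

In the limit of infinitely many samples the empirical mean and variance converge to the posterior values, which is the "reproduce the full posterior" property the notes describe.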
Optimal waveform estimation for classical and quantum systems via time-symmetric smoothing
Classical and quantum theories of time-symmetric smoothing, which can be used
to optimally estimate waveforms in classical and quantum systems, are derived
using a discrete-time approach, and the similarities between the two theories
are emphasized. Application of the quantum theory to homodyne phase-locked loop
design for phase estimation with narrowband squeezed optical beams is studied.
The relation between the proposed theory and Aharonov et al.'s weak value
theory is also explored.
Comment: 13 pages, 5 figures. v2: changed the title to a more descriptive one, corrected a minor mistake in Sec. IV; accepted by Physical Review
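The classical, discrete-time side of time-symmetric smoothing can be illustrated with a standard two-filter (forward-backward) recursion on a two-state Markov chain observed in Gaussian noise: the smoothed posterior at each time combines a forward filter over past data with a backward recursion over future data. The chain, noise level, and seed are illustrative toy choices, not the systems treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state Markov chain with Gaussian observations.
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])          # transition matrix (persistent chain)
mu, sigma, T = np.array([-1.0, 1.0]), 1.5, 1000

# simulate states and observations
s = np.empty(T, dtype=int); s[0] = 0
for t in range(T - 1):
    s[t + 1] = rng.choice(2, p=P[s[t]])
y = mu[s] + sigma * rng.standard_normal(T)
lik = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)  # per-state likelihoods

alpha = np.empty((T, 2)); beta = np.empty((T, 2))
alpha[0] = 0.5 * lik[0]; alpha[0] /= alpha[0].sum()
for t in range(1, T):                 # forward (filtering) pass
    alpha[t] = (alpha[t - 1] @ P) * lik[t]
    alpha[t] /= alpha[t].sum()
beta[-1] = 1.0
for t in range(T - 2, -1, -1):        # backward pass over future data
    beta[t] = P @ (lik[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

gamma = alpha * beta                  # time-symmetric combination
gamma /= gamma.sum(axis=1, keepdims=True)

acc_filter = np.mean(np.argmax(alpha, axis=1) == s)
acc_smooth = np.mean(np.argmax(gamma, axis=1) == s)
print(acc_filter, acc_smooth)
```

The product form gamma_t ∝ alpha_t · beta_t is the discrete-time, classical analogue of the time-symmetric structure the paper develops.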
A Unifying Review of Linear Gaussian Models
Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
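As a small illustration of one member of this family, the following simulates a scalar linear Gaussian state-space model and runs the Kalman filter, the exact posterior inference for that model, on the simulated data. All parameter values are arbitrary toy choices, not from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar linear Gaussian state-space model (one instance of the unifying
# generative model): x_{t+1} = a*x_t + w_t,  y_t = x_t + v_t.
a, Q, R, T = 0.95, 0.1, 1.0, 2000
x = np.empty(T); y = np.empty(T)
x[0] = rng.standard_normal()          # x_0 ~ N(0, 1)
for t in range(T):
    y[t] = x[t] + np.sqrt(R) * rng.standard_normal()
    if t + 1 < T:
        x[t + 1] = a * x[t] + np.sqrt(Q) * rng.standard_normal()

# Kalman filter: closed-form posterior p(x_t | y_1..t) for this model.
m, P = 0.0, 1.0                       # prior mean and variance for x_0
means = np.empty(T)
for t in range(T):
    if t > 0:                         # predict step (prior covers t = 0)
        m, P = a * m, a * a * P + Q
    K = P / (P + R)                   # Kalman gain
    m, P = m + K * (y[t] - m), (1 - K) * P   # update with y_t
    means[t] = m

rmse_filter = np.sqrt(np.mean((means - x) ** 2))
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
print(rmse_filter, rmse_obs)          # filtering beats raw observations
```

Swapping the continuous Gaussian state for a discrete one in the same generative template yields the hidden Markov model, which is the unification the review makes precise.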
A 4D-Var Method with Flow-Dependent Background Covariances for the Shallow-Water Equations
The 4D-Var method for filtering partially observed nonlinear chaotic
dynamical systems consists of finding the maximum a-posteriori (MAP) estimator
of the initial condition of the system given observations over a time window,
and propagating it forward to the current time via the model dynamics. This
method forms the basis of most currently operational weather forecasting
systems. In practice the optimization becomes infeasible if the time window is
too long due to the non-convexity of the cost function, the effect of model
errors, and the limited precision of the ODE solvers. Hence the window has to
be kept sufficiently short, and the observations in the previous windows can be
taken into account via a Gaussian background (prior) distribution. The choice
of the background covariance matrix is an important question that has received
much attention in the literature. In this paper, we define the background
covariances in a principled manner, based on observations in a number of
previous assimilation windows set by a tunable parameter. The method is at
most that many times more computationally expensive than using fixed
background covariances, requires little tuning, and greatly improves the
accuracy of 4D-Var. As a
concrete example, we focus on the shallow-water equations. The proposed method
is compared against state-of-the-art approaches in data assimilation and is
shown to perform favourably on simulated data. We also illustrate our approach
on data from the recent tsunami of 2011 in Fukushima, Japan.
Comment: 32 pages, 5 figures
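A drastically simplified sketch of the 4D-Var idea described above: a scalar nonlinear map stands in for the shallow-water model, the cost is the background (prior) misfit plus the observation misfit over the window, and a brute-force scan replaces the gradient-based (adjoint) optimization used operationally. Every number below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dynamics x_{t+1} = x_t + dt*sin(x_t), standing in for the model.
# Unknown: the initial condition x_0. Background (prior): N(x_b, B).
dt, T = 0.1, 20
x_b, B, R = 0.8, 1.0, 0.01

def propagate(x0):
    xs = np.empty(T)
    xs[0] = x0
    for t in range(T - 1):
        xs[t + 1] = xs[t] + dt * np.sin(xs[t])
    return xs

x0_true = 1.0
y = propagate(x0_true) + np.sqrt(R) * rng.standard_normal(T)

def cost(x0):
    # negative log posterior: background term + observation misfit
    xs = propagate(x0)
    return (x0 - x_b) ** 2 / (2 * B) + np.sum((y - xs) ** 2) / (2 * R)

# Operational 4D-Var minimizes this cost with adjoint-based gradients;
# for a scalar unknown a fine grid scan is enough for illustration.
grid = np.linspace(0.0, 2.0, 2001)
x0_map = grid[np.argmin([cost(g) for g in grid])]
print(x0_map)   # close to the true initial condition 1.0
```

The MAP estimate is then propagated forward by the model to give the analysis at the current time, exactly as the abstract describes; the background covariance B is what the paper makes flow-dependent.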
Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation
In this paper, we present the optimization formulation of the Kalman
filtering and smoothing problems, and use this perspective to develop a variety
of extensions and applications. We first formulate classic Kalman smoothing as
a least squares problem, highlight special structure, and show that the classic
filtering and smoothing algorithms are equivalent to a particular algorithm for
solving this problem. Once this equivalence is established, we present
extensions of Kalman smoothing to systems with nonlinear process and
measurement models, systems with linear and nonlinear inequality constraints,
systems with outliers in the measurements or sudden changes in the state, and
systems where the sparsity of the state sequence must be accounted for. All
extensions preserve the computational efficiency of the classic algorithms, and
most of the extensions are illustrated with numerical examples, which are part
of an open source Kalman smoothing Matlab/Octave package.
Comment: 46 pages, 11 figures
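The least-squares formulation can be sketched for a scalar model: stack the whitened prior, dynamics, and measurement residuals into one linear system and solve for the entire state trajectory at once. This toy version (in Python rather than the paper's Matlab/Octave, with dense rather than structured linear algebra) recovers the classic smoother's solution; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Model: x_0 ~ N(0, P0), x_{t+1} = a*x_t + w_t (var Q), y_t = x_t + v_t (var R).
a, Q, R, P0, T = 0.9, 0.2, 1.0, 1.0, 300

# simulate a trajectory and observations
x = np.empty(T)
x[0] = np.sqrt(P0) * rng.standard_normal()
for t in range(T - 1):
    x[t + 1] = a * x[t] + np.sqrt(Q) * rng.standard_normal()
y = x + np.sqrt(R) * rng.standard_normal(T)

# Stack residuals, each divided by its noise std so they are equally weighted.
rows, b = [], []
e = np.eye(T)                                            # unit row vectors
rows.append(e[0] / np.sqrt(P0)); b.append(0.0)           # prior on x_0
for t in range(T - 1):                                   # dynamics residuals
    rows.append((e[t + 1] - a * e[t]) / np.sqrt(Q)); b.append(0.0)
for t in range(T):                                       # measurement residuals
    rows.append(e[t] / np.sqrt(R)); b.append(y[t] / np.sqrt(R))

A = np.vstack(rows)
x_smooth, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)

rmse_smooth = np.sqrt(np.mean((x_smooth - x) ** 2))
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
print(rmse_smooth, rmse_obs)   # smoothing beats raw observations
```

The special structure the paper highlights is visible here: A'A is block tridiagonal, which is why the classic filter-smoother recursions solve this system in linear time, and why swapping the squared loss for a robust or sparsity-inducing one yields the paper's extensions.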
Active Classification for POMDPs: a Kalman-like State Estimator
The problem of state tracking with active observation control is considered
for a system modeled by a discrete-time, finite-state Markov chain observed
through conditionally Gaussian measurement vectors. The measurement model
statistics are shaped by the underlying state and an exogenous control input,
which influence the observations' quality. Exploiting an innovations approach,
an approximate minimum mean-squared error (MMSE) filter is derived to estimate
the Markov chain system state. To optimize the control strategy, the associated
mean-squared error is used as an optimization criterion in a partially
observable Markov decision process formulation. A stochastic dynamic
programming algorithm is proposed to solve for the optimal solution. To enhance
the quality of system state estimates, approximate MMSE smoothing estimators
are also derived. Finally, the performance of the proposed framework is
illustrated on the problem of physical activity detection in wireless body
sensing networks. The power of the proposed framework lies within its ability
to accommodate a broad spectrum of active classification applications including
sensor management for object classification and tracking, estimation of sparse
signals and radar scheduling.
Comment: 38 pages, 6 figures
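Stripping out the observation control, the underlying state estimator reduces to a standard recursive Bayesian filter for a finite-state Markov chain seen through state-dependent Gaussian measurements. The two-state model below is an illustrative toy, not the paper's activity-detection setup, and it omits the control input that shapes the measurement statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state Markov chain observed through Gaussian measurements whose
# mean depends on the hidden state (no control input in this sketch).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition matrix
mu = np.array([-1.0, 1.0])        # observation mean per state
sigma, T = 1.0, 2000

# simulate the chain and its observations
s = np.empty(T, dtype=int)
s[0] = 0
for t in range(T - 1):
    s[t + 1] = rng.choice(2, p=P[s[t]])
y = mu[s] + sigma * rng.standard_normal(T)

# recursive Bayesian filter: predict with P, update with the likelihood
pi = np.array([0.5, 0.5])
correct = 0
for t in range(T):
    if t > 0:
        pi = pi @ P                                  # predict
    lik = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2)  # Gaussian likelihoods
    pi = pi * lik
    pi /= pi.sum()                                   # update + normalize
    correct += int(np.argmax(pi) == s[t])

accuracy = correct / T
print(accuracy)   # well above the 0.5 chance level
```

In the paper's setting an exogenous control additionally selects the quality of each measurement, and the filter's mean-squared error drives a POMDP over those controls; the recursion above is the passive core of that framework.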