Nonlinear stability of the ensemble Kalman filter with adaptive covariance inflation
The ensemble Kalman filter and ensemble square root filters are data
assimilation methods used to combine high-dimensional nonlinear models with
observed data. These methods have proved to be indispensable tools in science
and engineering, as they allow computationally cheap, low-dimensional ensemble
state approximation of extremely high-dimensional turbulent forecast models.
From a theoretical perspective, these methods are poorly understood, with the
exception of a recently established but still incomplete nonlinear stability
theory. Moreover, recent numerical and theoretical studies of catastrophic
filter divergence have indicated that stability is a genuine mathematical
concern and cannot be taken for granted in implementation. In this article we
propose a simple modification of ensemble-based methods which resolves these
stability issues entirely. The method involves a new type of adaptive
covariance inflation, which comes with minimal additional cost. We develop a
complete nonlinear stability theory for the adaptive method, yielding Lyapunov
functions and geometric ergodicity under weak assumptions. We present numerical
evidence which suggests the adaptive methods have improved accuracy over
standard methods and completely eliminate catastrophic filter divergence. This
enhanced stability allows for the use of extremely cheap, unstable forecast
integrators, which would otherwise lead to widespread filter malfunction.
Comment: 34 pages, 4 figures
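The abstract does not specify the adaptive inflation scheme, so as a minimal sketch only: below is a standard perturbed-observation EnKF analysis step combined with a simple, hypothetical multiplicative inflation rule that grows when the innovation is large relative to the predicted spread. The function names, the inflation rule, and the `threshold` parameter are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, y, H, R, inflation=1.0):
    """One perturbed-observation EnKF analysis step with multiplicative inflation.

    ensemble: (n, N) array of N state members of dimension n.
    y: (m,) observation; H: (m, n) observation operator; R: (m, m) obs covariance.
    """
    n, N = ensemble.shape
    mean = ensemble.mean(axis=1, keepdims=True)
    # Inflate the spread about the ensemble mean before the update
    # (simple multiplicative scheme, assumed for illustration).
    ensemble = mean + np.sqrt(inflation) * (ensemble - mean)
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = A @ A.T / (N - 1)                              # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    # Perturbed observations give each member its own noisy copy of y.
    perturbed = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return ensemble + K @ (perturbed - H @ ensemble)

def adaptive_inflation(ensemble, y, H, R, threshold=2.0):
    """Toy adaptive rule: inflate when the innovation exceeds the predicted spread."""
    innov = y - H @ ensemble.mean(axis=1)
    spread = np.trace(H @ np.cov(ensemble) @ H.T + R)  # predicted innovation variance
    ratio = float(innov @ innov) / spread
    return max(1.0, ratio / threshold)
```

A typical use would compute `lam = adaptive_inflation(ens, y, H, R)` after the forecast step and pass it as `inflation` to `enkf_analysis`; because the rule never returns a factor below 1, the filter behaves like a standard EnKF when the innovation is consistent with the ensemble spread.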
Localization for MCMC: sampling high-dimensional posterior distributions with local structure
We investigate how ideas from covariance localization in numerical weather
prediction can be used in Markov chain Monte Carlo (MCMC) sampling of
high-dimensional posterior distributions arising in Bayesian inverse problems.
To localize an inverse problem is to enforce an anticipated "local" structure
by (i) neglecting small off-diagonal elements of the prior precision and
covariance matrices; and (ii) restricting the influence of observations to
their neighborhood. For linear problems we can specify the conditions under
which posterior moments of the localized problem are close to those of the
original problem. We explain physical interpretations of our assumptions about
local structure and discuss the notion of high dimensionality in local
problems, which is different from the usual notion of high dimensionality in
function space MCMC. The Gibbs sampler is a natural choice of MCMC algorithm
for localized inverse problems and we demonstrate that its convergence rate is
independent of dimension for localized linear problems. Nonlinear problems can
also be tackled efficiently by localization and, as a simple illustration of
these ideas, we present a localized Metropolis-within-Gibbs sampler. Several
linear and nonlinear numerical examples illustrate localization in the context
of MCMC samplers for inverse problems.
Comment: 33 pages, 5 figures
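To illustrate why the Gibbs sampler is natural for localized problems, consider a zero-mean Gaussian target with a sparse (banded) precision matrix: each full conditional of component i depends only on the neighbors j with a nonzero precision entry Q_ij. The sketch below is a generic Gibbs sampler for such a target, not the paper's localized Metropolis-within-Gibbs algorithm; the function name and its parameters are hypothetical.

```python
import numpy as np

def localized_gibbs(precision, n_iter=1000, seed=0):
    """Gibbs sampler for a zero-mean Gaussian with sparse precision matrix Q.

    For a Gaussian N(0, Q^{-1}), the conditional of x_i given the rest is
    N(mu_i, 1/Q_ii) with mu_i = -(1/Q_ii) * sum_{j != i} Q_ij x_j, so only
    the neighbors of i (nonzero entries in row i of Q) enter the update.
    """
    rng = np.random.default_rng(seed)
    d = precision.shape[0]
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for t in range(n_iter):
        for i in range(d):
            qi = precision[i, i]
            # Only neighbors contribute: precision[i] @ x picks up nonzero Q_ij.
            mu = -(precision[i] @ x - qi * x[i]) / qi
            x[i] = mu + rng.standard_normal() / np.sqrt(qi)
        samples[t] = x
    return samples
```

For a banded precision each sweep costs O(d * bandwidth) rather than O(d^2), which is one concrete sense in which local structure keeps the per-iteration cost and, per the abstract's linear-case result, the convergence rate from degrading with dimension.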