1,110 research outputs found

    Inverse Problems and Data Assimilation

    Full text link
    These notes are designed with the aim of providing a clear and concise introduction to the subjects of Inverse Problems and Data Assimilation, and their inter-relations, together with citations to some relevant literature in this area. The first half of the notes is dedicated to studying the Bayesian framework for inverse problems. Techniques such as importance sampling and Markov chain Monte Carlo (MCMC) methods are introduced; these methods have the desirable property that, in the limit of an infinite number of samples, they reproduce the full posterior distribution. Since it is often computationally intensive to implement these methods, especially in high-dimensional problems, approximate techniques such as approximating the posterior by a Dirac or a Gaussian distribution are discussed. The second half of the notes covers data assimilation. This refers to a particular class of inverse problems in which the unknown parameter is the initial condition of a dynamical system (and, in the stochastic dynamics case, the subsequent states of the system), and the data comprise partial and noisy observations of that (possibly stochastic) dynamical system. We also demonstrate that methods developed in data assimilation may be employed to study generic inverse problems, by introducing an artificial time to generate a sequence of probability measures interpolating from the prior to the posterior.
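    As a concrete illustration of the sampling methods mentioned above, the sketch below runs a random-walk Metropolis (MCMC) chain on a small Bayesian inverse problem. The forward map G, the noise level, and the data are illustrative placeholders rather than anything from the notes themselves.

```python
# Minimal random-walk Metropolis sketch for a Bayesian inverse-problem posterior.
# G, sigma, prior_std, and y are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def G(u):
    """Hypothetical nonlinear forward map from parameter u to observation space."""
    return np.array([u[0] ** 2 + u[1], np.sin(u[0]) * u[1]])

y = np.array([1.2, 0.3])          # synthetic observed data
sigma = 0.1                       # observational noise standard deviation
prior_std = 1.0                   # zero-mean Gaussian prior

def log_posterior(u):
    misfit = np.sum((y - G(u)) ** 2) / (2 * sigma ** 2)    # Gaussian likelihood
    prior = np.sum(u ** 2) / (2 * prior_std ** 2)          # Gaussian prior
    return -(misfit + prior)

def metropolis(n_samples, step=0.2, u0=np.zeros(2)):
    u, logp = u0.copy(), log_posterior(u0)
    samples = []
    for _ in range(n_samples):
        u_prop = u + step * rng.standard_normal(u.shape)   # symmetric proposal
        logp_prop = log_posterior(u_prop)
        if np.log(rng.uniform()) < logp_prop - logp:       # accept/reject
            u, logp = u_prop, logp_prop
        samples.append(u.copy())
    return np.array(samples)

samples = metropolis(5000)
print("posterior mean estimate:", samples.mean(axis=0))
```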

    Surrogate-Based Bayesian Inverse Modeling of the Hydrological System: An Adaptive Approach Considering Surrogate Approximation Error

    Full text link
    Bayesian inverse modeling is important for a better understanding of hydrological processes. However, this approach can be computationally demanding, as it usually requires a large number of model evaluations. To address this issue, one can take advantage of surrogate modeling techniques. Nevertheless, when the approximation error of the surrogate model is neglected, the inversion result will be biased. In this paper, we develop a surrogate-based Bayesian inversion framework that explicitly quantifies and gradually reduces the approximation error of the surrogate. Specifically, two strategies are proposed to quantify the surrogate error. The first strategy works by quantifying the surrogate prediction uncertainty with a Bayesian method, while the second strategy uses another surrogate to simulate and correct the approximation error of the primary surrogate. By adaptively refining the surrogate over the posterior distribution, we can gradually reduce the surrogate approximation error to a small level. In three case studies involving high dimensionality, multimodality, and a real-world application, it is found that both strategies can reduce the bias introduced by surrogate approximation error, and that the second strategy, which integrates two complementary methods (polynomial chaos expansion and Gaussian processes in this work), shows the best performance. Comment: 60 pages, 14 figures.
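    A minimal sketch in the spirit of the first strategy, under simplified assumptions: a Gaussian-process surrogate of an expensive forward model, whose predictive variance is added to the observational noise during MCMC, and which is refined with new exact model runs drawn at posterior samples. The forward model, refinement schedule, and error treatment below are illustrative and not the paper's exact algorithm.

```python
# Adaptive GP surrogate with explicit surrogate-error inflation (illustrative sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def forward_model(u):
    """Stand-in for an expensive hydrological simulator (scalar output)."""
    return np.sin(3 * u[0]) + 0.5 * u[1] ** 2

y_obs, sigma2 = 0.7, 0.05 ** 2

def fit_surrogate(U, F):
    return GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(U, F)

def log_post(u, gp):
    mean, std = gp.predict(u.reshape(1, -1), return_std=True)
    total_var = sigma2 + std[0] ** 2           # inflate noise by surrogate uncertainty
    return -0.5 * (y_obs - mean[0]) ** 2 / total_var - 0.5 * np.sum(u ** 2)

def sample_posterior(gp, n=1000, step=0.3):
    u, lp = np.zeros(2), log_post(np.zeros(2), gp)
    out = []
    for _ in range(n):
        v = u + step * rng.standard_normal(2)  # random-walk proposal
        lpv = log_post(v, gp)
        if np.log(rng.uniform()) < lpv - lp:
            u, lp = v, lpv
        out.append(u.copy())
    return np.array(out)

U = rng.uniform(-2, 2, size=(10, 2))           # initial space-filling design
F = np.array([forward_model(u) for u in U])
for _ in range(3):                             # adaptive refinement rounds
    gp = fit_surrogate(U, F)
    samples = sample_posterior(gp)
    new_pts = samples[rng.choice(len(samples), 5)]   # exact runs where posterior mass is
    U = np.vstack([U, new_pts])
    F = np.concatenate([F, [forward_model(u) for u in new_pts]])
gp = fit_surrogate(U, F)                       # final surrogate on the refined design
samples = sample_posterior(gp)
print("posterior mean after refinement:", samples.mean(axis=0))
```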

    Deterministic Mean-field Ensemble Kalman Filtering

    Full text link
    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland et al. (2011) is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this convergence extends to the deterministic filter approximation, which is therefore asymptotically superior to the standard EnKF when the dimension d < 2κ. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from the non-linearity/non-Gaussianity of the model, which exists for both the DMFEnKF and the standard EnKF. Numerical results support and extend the theory.
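    For reference, the stochastic (perturbed-observation) EnKF analysis step that serves as the baseline here can be sketched as below; the linear two-state toy model is illustrative and the paper's non-Gaussian setting is not reproduced.

```python
# One forecast + analysis cycle of the stochastic EnKF (perturbed observations).
import numpy as np

rng = np.random.default_rng(2)

def enkf_step(ensemble, y, H, R, model):
    """Ensemble has shape (N, d); y is the observation, H the observation matrix."""
    forecast = np.array([model(x) for x in ensemble])        # forecast step
    Hx = forecast @ H.T
    # Sample covariances from the forecast ensemble.
    X = forecast - forecast.mean(axis=0)
    Y = Hx - Hx.mean(axis=0)
    N = len(ensemble)
    P_xy = X.T @ Y / (N - 1)
    P_yy = Y.T @ Y / (N - 1) + R
    K = P_xy @ np.linalg.inv(P_yy)                           # Kalman gain
    # Perturbed observations give the stochastic EnKF its correct analysis spread.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return forecast + (y_pert - Hx) @ K.T

# Toy two-state model observed in its first component.
model = lambda x: np.array([0.9 * x[0] + 0.1 * x[1], -0.1 * x[0] + 0.9 * x[1]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
ens = rng.standard_normal((100, 2))
ens = enkf_step(ens, np.array([0.5]), H, R, model)
print("analysis mean:", ens.mean(axis=0))
```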

    Ensemble Kalman methods for high-dimensional hierarchical dynamic space-time models

    Full text link
    We propose a new class of filtering and smoothing methods for inference in high-dimensional, nonlinear, non-Gaussian, spatio-temporal state-space models. The main idea is to combine the ensemble Kalman filter and smoother, developed in the geophysics literature, with state-space algorithms from the statistics literature. Our algorithms address a variety of estimation scenarios, including on-line and off-line state and parameter estimation. We take a Bayesian perspective, for which the goal is to generate samples from the joint posterior distribution of states and parameters. The key benefit of our approach is the use of ensemble Kalman methods for dimension reduction, which allows inference for high-dimensional state vectors. We compare our methods to existing ones, including ensemble Kalman filters, particle filters, and particle MCMC. Using a real data example of cloud motion and data simulated under a number of nonlinear and non-Gaussian scenarios, we show that our approaches outperform these existing methods.
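    One simple way to fold parameter estimation into an ensemble Kalman update is state augmentation, sketched below. The paper's own algorithms combine EnKF with further state-space machinery, so this is only the most basic variant, and the scalar decay model with its unknown rate is invented for illustration.

```python
# Joint state-parameter estimation via an augmented-state EnKF (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)

def step(z, dt=0.1):
    """Propagate the augmented vector z = (state x, log-parameter log_a)."""
    x, log_a = z[0], z[1]
    x_new = x - np.exp(log_a) * x * dt + 0.05 * rng.standard_normal()
    return np.array([x_new, log_a])              # the parameter itself is static

true_a, x_true, R = 0.8, 1.0, 0.02
ens = np.column_stack([rng.normal(1.0, 0.2, 200),            # state ensemble
                       np.log(rng.uniform(0.2, 2.0, 200))])  # log-parameter ensemble
for t in range(50):
    x_true = x_true - true_a * x_true * 0.1                  # synthetic truth
    y = x_true + np.sqrt(R) * rng.standard_normal()          # noisy observation
    ens = np.array([step(z) for z in ens])                   # forecast
    # Analysis: only the state component is observed; the cross-covariance
    # between state and log-parameter carries information to the parameter.
    d = ens - ens.mean(axis=0)
    P = d.T @ d / (len(ens) - 1)
    K = P[:, 0] / (P[0, 0] + R)                              # gain for a scalar observation
    innov = y + np.sqrt(R) * rng.standard_normal(len(ens)) - ens[:, 0]
    ens = ens + np.outer(innov, K)
print("estimated decay rate:", np.exp(ens[:, 1].mean()), "truth:", true_a)
```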

    Multilevel ensemble Kalman filtering for spatio-temporal processes

    Full text link
    We design and analyse the performance of a multilevel ensemble Kalman filter method (MLEnKF) for filtering settings where the underlying state-space model is an infinite-dimensional spatio-temporal process. We consider underlying models that need to be simulated by numerical methods, with discretization in both space and time. The multilevel Monte Carlo (MLMC) sampling strategy, achieving variance reduction through pairwise coupling of ensemble particles on neighboring resolutions, is used in the sample-moment step of MLEnKF to produce an efficient hierarchical filtering method for spatio-temporal models. Under sufficient regularity, MLEnKF is proven to be more efficient for weak approximations than EnKF, asymptotically in the large-ensemble and fine-numerical-resolution limit. Numerical examples support our theoretical findings. Comment: Version 1: 39 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:1608.08558. Version 2 (this version): 52 pages, 6 figures. Revision primarily of the introduction and the numerical examples section.
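    The pairwise-coupling idea behind the MLMC sampling strategy can be sketched as follows: coarse- and fine-level simulations of the same particle share the driving noise, so the level corrections have small variance and the telescoping sum over levels is cheap to estimate. The scalar SDE, level count, and sample allocation below are illustrative and not taken from the paper.

```python
# Multilevel Monte Carlo with pairwise-coupled coarse/fine paths (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)

def simulate(level, xi):
    """Euler-Maruyama path of dX = -X dt + dW on [0,1] with 2**level steps,
    driven by pre-drawn finest-level increments xi so that levels are coupled."""
    n = 2 ** level
    dt = 1.0 / n
    dW = xi.reshape(n, -1).sum(axis=1)     # aggregate fine increments to this level
    x = 1.0
    for k in range(n):
        x = x + (-x) * dt + dW[k]
    return x

L, n_fine = 4, 2 ** 4
samples_per_level = [400, 200, 100, 50, 25]    # more samples on cheaper levels
estimate = 0.0
for level in range(L + 1):
    corrections = []
    for _ in range(samples_per_level[level]):
        xi = np.sqrt(1.0 / n_fine) * rng.standard_normal(n_fine)   # shared driving noise
        fine = simulate(level, xi)
        coarse = simulate(level - 1, xi) if level > 0 else 0.0
        corrections.append(fine - coarse)
    estimate += np.mean(corrections)            # telescoping sum of level corrections
print("MLMC estimate of E[X_1]:", estimate)
```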

    Analysing time dependent problems

    Get PDF
    Inverse analysis for time-dependent problems is discussed in this chapter. When time-dependent processes are analysed, further uncertainties come from initial conditions as well as from time-dependent boundary conditions and loads, in addition to model parameters. Inverse modelling techniques have been specifically developed for this class of problems, which exploit the availability of a set of measurement and/or monitoring data at given locations at subsequent time instants. Sequential Bayesian data assimilation is introduced, and a brief review of filtering techniques is given. In filtering, the problem unknown is the time evolution of the probability density function of the system state, described by means of appropriate time-dependent variables and time-invariant parameters, conditioned on all previous observations. Particle filtering is chosen to conceptually illustrate the methodology, by means of two simple introductory examples.
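    A minimal bootstrap particle filter, of the kind used to illustrate sequential Bayesian data assimilation here, is sketched below; the scalar random-walk state model and Gaussian observation noise are placeholder assumptions.

```python
# Bootstrap particle filter for a scalar random-walk state observed with Gaussian noise.
import numpy as np

rng = np.random.default_rng(5)

def particle_filter(observations, n_particles=500, q=0.1, r=0.2):
    """Track p(x_t | y_{1:t}) for x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2)."""
    particles = rng.standard_normal(n_particles)             # draw from the prior
    means = []
    for y in observations:
        particles = particles + q * rng.standard_normal(n_particles)   # predict
        log_w = -0.5 * (y - particles) ** 2 / r ** 2                    # weight
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)            # resample
        particles = particles[idx]
        means.append(particles.mean())
    return np.array(means)

# Synthetic data: a slowly drifting signal observed with noise.
truth = np.cumsum(0.1 * rng.standard_normal(50))
obs = truth + 0.2 * rng.standard_normal(50)
est = particle_filter(obs)
print("RMSE of filtered mean:", np.sqrt(np.mean((est - truth) ** 2)))
```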