
    Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    Polynomial chaos expansions are used to reduce the computational cost of the Bayesian solution of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the true posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos expansion, but the cost may grow too quickly for this to be cost-effective compared with Monte Carlo sampling without a surrogate posterior.
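
    Below is a minimal 1-D sketch of the effect described above, assuming a standard normal prior, a nonlinear forward map G(theta) = exp(theta), and a nearly noise-free observation generated at a value in the tail of the prior; the model, numbers, and polynomial orders are illustrative and not taken from the paper. A low-order Hermite chaos surrogate of G yields a surrogate posterior that sits in the wrong place, and the gap only closes as the order is increased.

        # Illustrative sketch (not the paper's example): compare the true posterior with
        # posteriors built from truncated Hermite polynomial chaos surrogates of G.
        import numpy as np
        from math import factorial

        G = lambda th: np.exp(th)                       # nonlinear forward map

        def pc_surrogate(order, nquad=80):
            """Truncated probabilists'-Hermite PC expansion of G over the N(0,1) prior."""
            x, w = np.polynomial.hermite_e.hermegauss(nquad)
            w = w / np.sqrt(2.0 * np.pi)                # quadrature for E[.] under N(0,1)
            coeffs = [np.sum(w * G(x) * np.polynomial.hermite_e.hermeval(x, [0]*k + [1]))
                      / factorial(k) for k in range(order + 1)]    # E[He_k^2] = k!
            return lambda th: sum(c * np.polynomial.hermite_e.hermeval(th, [0]*k + [1])
                                  for k, c in enumerate(coeffs))

        theta_true, sigma = 2.5, 0.05                   # "true" parameter in the prior tail
        d = G(theta_true)                               # informative observation

        theta = np.linspace(-4.0, 5.0, 20001)
        dtheta = theta[1] - theta[0]

        def posterior(forward):
            logp = -0.5 * theta**2 - 0.5 * ((d - forward(theta)) / sigma)**2
            p = np.exp(logp - logp.max())
            return p / (p.sum() * dtheta)

        exact = posterior(G)
        for order in (3, 6, 12):
            surr = posterior(pc_surrogate(order))
            tv = 0.5 * np.sum(np.abs(exact - surr)) * dtheta
            print(f"PC order {order:2d}: total-variation gap to the true posterior = {tv:.3f}")

    Raising the order shrinks the gap, but each additional order adds quadrature and evaluation cost, which is the trade-off the abstract refers to.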

    Parameter estimation by implicit sampling

    Implicit sampling is a weighted sampling method used in data assimilation, where one sequentially updates estimates of the state of a stochastic model based on a stream of noisy or incomplete data. Here we describe how to use implicit sampling in parameter estimation problems, where the goal is to find parameters of a numerical model, e.g. a partial differential equation (PDE), such that the output of the numerical model is compatible with (noisy) data. We use the Bayesian approach to parameter estimation, in which a posterior probability density describes the probability of the parameters conditioned on the data, and we compute an empirical estimate of this posterior with implicit sampling. Our approach generates independent samples, so that some of the practical difficulties one encounters with Markov Chain Monte Carlo methods, e.g. burn-in time or correlations among dependent samples, are avoided. We describe a new implementation of implicit sampling for parameter estimation problems that makes use of multiple grids (coarse to fine) and BFGS optimization coupled to adjoint equations for the required gradient calculations. The implementation is "dimension independent", in the sense that a well-defined finite-dimensional subspace is sampled as the mesh used for discretization of the PDE is refined. We illustrate the algorithm with an example in which we estimate a diffusion coefficient in an elliptic equation from sparse and noisy pressure measurements. In the example, dimension/mesh independence is achieved via Karhunen-Loève expansions.
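
    As a concrete illustration of the procedure, the sketch below implements the linear-map variant of implicit sampling on a toy exponential-regression model rather than the paper's elliptic PDE example; the forward model, the priors, and the shortcut of reusing the BFGS inverse-Hessian approximation as the map factor L are assumptions made here for brevity.

        # Sketch of implicit sampling with a linear map: (1) minimize F = -log posterior
        # with BFGS, (2) draw xi ~ N(0, I), (3) map theta = mu + L xi with L L^T ~ H^{-1},
        # (4) weight w ~ exp(F_min - F(theta) + 0.5 |xi|^2).  Toy model, not the paper's PDE.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Toy forward model y = a * exp(b * x), observed at a few points with noise.
        x_obs = np.linspace(0.0, 1.0, 8)
        theta_true = np.array([1.5, -0.8])              # (a, b)
        sigma = 0.05
        y_obs = theta_true[0] * np.exp(theta_true[1] * x_obs) \
                + sigma * rng.standard_normal(x_obs.size)

        def F(theta):
            """Negative log posterior: Gaussian likelihood plus N(0, 1) prior on each parameter."""
            resid = y_obs - theta[0] * np.exp(theta[1] * x_obs)
            return 0.5 * np.sum(resid**2) / sigma**2 + 0.5 * np.sum(theta**2)

        opt = minimize(F, x0=np.zeros(2), method="BFGS")          # MAP point
        mu, F_min = opt.x, opt.fun
        L = np.linalg.cholesky(opt.hess_inv)                      # linear map, L L^T ~ H^{-1}

        n = 2000
        xi = rng.standard_normal((n, 2))                          # independent reference samples
        samples = mu + xi @ L.T                                   # theta_j = mu + L xi_j
        logw = F_min - np.array([F(th) for th in samples]) + 0.5 * np.sum(xi**2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()

        print("posterior mean estimate:", w @ samples)
        print("effective sample size  :", 1.0 / np.sum(w**2))

    Because the samples are produced independently through the map, there is no burn-in period or sample autocorrelation to monitor; the quality of the approximation shows up directly in the spread of the weights.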

    An iterative implementation of the implicit nonlinear filter

    This is the published version, also available here: http://dx.doi.org/10.1051/m2an/2011055. Implicit sampling is a sampling scheme for particle filters, designed to move particles one by one so that they remain in high-probability domains. We present a new derivation of implicit sampling, as well as a new iteration method for solving the resulting algebraic equations.
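
    The sketch below illustrates the idea for a scalar state: each particle gets its own function F_j, its minimum (mu_j, phi_j) is computed, and an iteration solves the algebraic equation F_j(x) = phi_j + xi^2/2 for the new particle position. The linear model, the cubic observation operator, and the use of a plain Newton iteration (rather than the paper's particular iteration method) are assumptions made for this example.

        # One implicit-sampling analysis step for a scalar state x' = a x + N(0, q),
        # observed through d = h(x') + N(0, r).  Illustrative setup, not the paper's tests.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        a, q, r = 0.9, 0.1, 0.01
        h, hp = (lambda x: x**3), (lambda x: 3 * x**2)           # observation operator, derivative

        particles = rng.normal(1.0, 0.3, size=50)                # ensemble at time n
        d = h(a * 1.0) + np.sqrt(r) * rng.standard_normal()      # synthetic observation at n+1

        new_particles, log_w = [], []
        for xj in particles:
            Fj  = lambda x: (x - a * xj)**2 / (2*q) + (d - h(x))**2 / (2*r)
            Fjp = lambda x: (x - a * xj) / q - (d - h(x)) * hp(x) / r
            res = minimize_scalar(Fj)                            # per-particle minimum
            mu, phi = res.x, res.fun
            Fjpp = (Fjp(mu + 1e-4) - Fjp(mu - 1e-4)) / 2e-4      # curvature at the minimum

            xi = rng.standard_normal()
            x = mu + np.sqrt(1.0 / Fjpp) * xi                    # Gaussian-approximation start
            for _ in range(50):                                  # Newton iteration for
                g = Fj(x) - phi - 0.5 * xi**2                    #   F_j(x) = phi_j + xi^2/2
                if abs(g) < 1e-10:
                    break
                x -= g / Fjp(x)

            # Differentiating F_j(x(xi)) = phi_j + xi^2/2 gives dx/dxi = xi / F_j'(x),
            # and the particle weight is w_j ~ exp(-phi_j) |dx/dxi|.
            jac = abs(xi / Fjp(x)) if xi != 0.0 else np.sqrt(1.0 / Fjpp)
            new_particles.append(x)
            log_w.append(-phi + np.log(jac))

        w = np.exp(np.array(log_w) - max(log_w))
        w /= w.sum()
        print("analysis mean:", w @ np.array(new_particles))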

    Accounting for Model Error from Unresolved Scales in Ensemble Kalman Filters by Stochastic Parameterization

    The use of discrete-time stochastic parameterization to account for model error due to unresolved scales in ensemble Kalman filters is investigated by numerical experiments. The parameterization quantifies the model error and produces an improved, non-Markovian forecast model, which generates high-quality forecast ensembles and improves filter performance. Results are compared with methods that handle model error through covariance inflation and localization (IL), using the two-layer Lorenz-96 system as an example. The numerical results show that when the ensemble size is sufficiently large, the parameterization is more effective in accounting for the model error than IL; if the ensemble size is small, IL is needed to reduce sampling error, but the parameterization further improves the performance of the filter. This suggests that in real applications, where the ensemble size is relatively small, the filter can achieve better performance than with pure IL if stochastic parameterization methods are combined with IL.
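
    To make the comparison concrete, here is a compact sketch of the two ingredients: an EnKF analysis step with multiplicative inflation and Gaspari-Cohn localization, and a forecast step in which a simple discrete-time stochastic term z_{n+1} = a z_n + b x_n + noise stands in for the unresolved-scale tendency. The single-layer Lorenz-96 model, the AR(1)-type form of the parameterization, and all coefficients are illustrative stand-ins, not the paper's fitted two-layer setup.

        # Illustrative EnKF cycle: inflation + localization (IL) in the analysis, and an
        # additive discrete-time stochastic parameterization in the forecast model.
        import numpy as np

        rng = np.random.default_rng(2)

        def gaspari_cohn(dist, c):
            """Gaspari-Cohn fifth-order compactly supported correlation function."""
            r = np.abs(dist) / c
            f = np.zeros_like(r, dtype=float)
            m1 = r <= 1
            f[m1] = 1 - 5/3*r[m1]**2 + 5/8*r[m1]**3 + 1/2*r[m1]**4 - 1/4*r[m1]**5
            m2 = (r > 1) & (r <= 2)
            f[m2] = (4 - 5*r[m2] + 5/3*r[m2]**2 + 5/8*r[m2]**3
                     - 1/2*r[m2]**4 + 1/12*r[m2]**5 - 2/(3*r[m2]))
            return f

        def enkf_analysis(X, y, H, R, infl=1.05, loc=None):
            """Perturbed-observation EnKF update of an (n_state x n_ens) ensemble X."""
            xm = X.mean(axis=1, keepdims=True)
            A = infl * (X - xm)                              # multiplicative inflation
            P = A @ A.T / (A.shape[1] - 1)
            if loc is not None:
                P = loc * P                                  # Schur-product localization
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((y.size, X.shape[1]))
            return xm + A + K @ (Y - H @ (xm + A))

        def l96_step(x, F=8.0, dt=0.05):
            """One RK4 step of the single-layer Lorenz-96 model (the resolved model here)."""
            rhs = lambda v: (np.roll(v, -1, axis=0) - np.roll(v, 2, axis=0)) * np.roll(v, 1, axis=0) - v + F
            k1 = rhs(x); k2 = rhs(x + dt/2*k1); k3 = rhs(x + dt/2*k2); k4 = rhs(x + dt*k3)
            return x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

        def forecast(X, Z, a=0.9, b=0.05, sigma_z=0.02):
            """Resolved model plus an AR(1)-in-time stochastic correction Z for model error."""
            Z = a * Z + b * X + sigma_z * rng.standard_normal(X.shape)
            return l96_step(X) + Z, Z

        # One forecast/analysis cycle as a smoke test.
        nx, nens = 40, 20
        x_truth = 3.0 + rng.standard_normal(nx)
        X = (x_truth + rng.standard_normal(nx))[:, None] + rng.standard_normal((nx, nens))
        Z = np.zeros((nx, nens))
        H = np.eye(nx)[::2]                                  # observe every other variable
        R = 0.5 * np.eye(H.shape[0])
        idx = np.arange(nx)
        dist = np.abs(idx[:, None] - idx[None, :])
        loc = gaspari_cohn(np.minimum(dist, nx - dist), c=4.0)

        x_truth = l96_step(x_truth)
        y = H @ x_truth + np.sqrt(0.5) * rng.standard_normal(H.shape[0])
        Xf, Z = forecast(X, Z)
        Xa = enkf_analysis(Xf, y, H, R, infl=1.05, loc=loc)
        print("forecast RMSE:", np.sqrt(np.mean((Xf.mean(axis=1) - x_truth)**2)))
        print("analysis RMSE:", np.sqrt(np.mean((Xa.mean(axis=1) - x_truth)**2)))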