8 research outputs found

    Adding Constraints to Bayesian Inverse Problems

    Using observation data to estimate unknown parameters in computational models is broadly important. This task is often challenging because solutions are non-unique, owing to model complexity and limited observation data. However, the parameters or states of the model are often known to satisfy additional constraints beyond the model itself. We therefore propose an approach that improves parameter estimation in such inverse problems by incorporating constraints into a Bayesian inference framework. Constraints are imposed by constructing a likelihood function based on how well a solution satisfies them. The posterior distribution of the parameters conditioned on (1) the observed data and (2) satisfaction of the constraints is obtained, and the parameter estimate is given by the maximum a posteriori (MAP) estimate or the posterior mean. Both equality and inequality constraints can be handled by this framework, and the strictness of each constraint is controlled by a constraint uncertainty that encodes confidence in its correctness. Furthermore, we extend this framework to approximate Bayesian inference via the ensemble Kalman filter, where the constraint is imposed by re-weighting the ensemble members according to the likelihood function. A synthetic model demonstrates the effectiveness of the proposed method: in both the exact Bayesian inference and ensemble Kalman filter scenarios, numerical simulations show that imposing constraints with the proposed method improves identification of the true parameter solution among multiple local minima.
    Comment: Accepted by the 2019 AAAI conference
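
    To make the idea concrete, here is a minimal, self-contained sketch (not the paper's code; the toy model, names, and numbers are our assumptions) of constraint-augmented Bayesian inference on a deliberately non-unique problem: the observation y = theta^2 + noise has two posterior modes at theta = +/-2, and an inequality constraint theta >= 0 is imposed through an extra likelihood term whose width sigma_c encodes confidence in the constraint. The ensemble part illustrates only the re-weighting idea from the abstract, not a full ensemble Kalman filter.

```python
# Sketch of constrained Bayesian inference (illustrative; all names and
# numbers are assumptions, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

y_obs = 4.0      # observation produced by the "true" parameter theta = 2
sigma_y = 0.5    # observation noise std
sigma_c = 0.1    # constraint uncertainty: smaller = stricter constraint

def log_data_likelihood(theta):
    # Gaussian likelihood of the data under the forward model y = theta^2.
    return -0.5 * ((y_obs - theta**2) / sigma_y) ** 2

def log_constraint_likelihood(theta):
    # Inequality constraint theta >= 0: only violations are penalized,
    # so the constraint acts one-sidedly, as described in the abstract.
    violation = np.maximum(0.0, -theta)
    return -0.5 * (violation / sigma_c) ** 2

# Exact inference on a grid: posterior ~ prior * data term * constraint term
# (flat prior here); the estimate is the MAP point.
grid = np.linspace(-4.0, 4.0, 2001)
log_post = log_data_likelihood(grid) + log_constraint_likelihood(grid)
print(f"MAP estimate: {grid[np.argmax(log_post)]:.3f}")  # near +2, not -2

# Ensemble variant: re-weight members by the combined likelihood, then
# take the weighted posterior mean.
ensemble = rng.normal(0.0, 2.0, 5000)
w = np.exp(log_data_likelihood(ensemble) + log_constraint_likelihood(ensemble))
w /= w.sum()
print(f"weighted posterior mean: {w @ ensemble:.3f}")
```

    Note how shrinking sigma_c toward zero recovers a hard constraint, while a large sigma_c lets the data dominate; this is the "strictness" knob the abstract describes.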

    Ensemble Kalman Inversion for Sparse Learning of Dynamical Systems from Time-Averaged Data

    Enforcing sparse structure within learning has led to significant advances in the field of data-driven discovery of dynamical systems. However, such methods require access not only to the time series of the state of the dynamical system but also to its time derivative. In many applications, the data are available only in the form of time-averages such as moments and autocorrelation functions. We propose a sparse learning methodology to discover the vector fields defining a (possibly stochastic or partial) differential equation using only time-averaged statistics. Such a formulation of sparse learning naturally leads to a nonlinear inverse problem, to which we apply the methodology of ensemble Kalman inversion (EKI). EKI is chosen because it may be formulated in terms of the iterative solution of quadratic optimization problems, on which sparsity is easily imposed. We then apply the EKI-based sparse learning methodology to examples governed by stochastic differential equations (a noisy Lorenz 63 system), ordinary differential equations (the Lorenz 96 system and coalescence equations), and a partial differential equation (the Kuramoto-Sivashinsky equation). The results demonstrate that time-averaged statistics can be used for data-driven discovery of differential equations using sparse EKI. The proposed sparse learning methodology extends the scope of data-driven discovery of differential equations to previously challenging applications and data-acquisition scenarios.
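
    As a concrete illustration of the EKI loop described above, the sketch below runs the basic ensemble Kalman inversion update and imposes sparsity by soft-thresholding the ensemble after each step. This is a simplification of the paper's approach (which imposes sparsity through the quadratic subproblems themselves); the linear forward map G, the dimensions, and all names are our assumptions standing in for the map from unknown coefficients to time-averaged statistics.

```python
# Sketch of sparsity-promoting ensemble Kalman inversion (illustrative;
# the forward map and the thresholding step are our simplifications).
import numpy as np

rng = np.random.default_rng(1)

d, m, J = 10, 20, 200                 # unknowns, observations, ensemble size
u_true = np.zeros(d)
u_true[[1, 4]] = [1.5, -2.0]          # sparse ground-truth coefficients
A = rng.normal(size=(m, d))

def G(u):
    # Linear stand-in for the coefficients -> time-averaged-statistics map;
    # EKI is derivative-free, so any black-box G could be used here.
    return A @ u

gamma = 0.05                          # observation noise std
y = G(u_true) + gamma * rng.normal(size=m)
Gamma = gamma**2 * np.eye(m)

def soft_threshold(u, lam):
    # Proximal step for an l1 penalty: a standard way to impose sparsity.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

U = rng.normal(size=(J, d))           # initial ensemble, one member per row
for _ in range(30):
    Gs = np.array([G(u) for u in U])
    du = U - U.mean(axis=0)
    dg = Gs - Gs.mean(axis=0)
    C_ug = du.T @ dg / J              # parameter-output cross-covariance
    C_gg = dg.T @ dg / J              # output covariance
    K = C_ug @ np.linalg.solve(C_gg + Gamma, np.eye(m))
    Y = y + gamma * rng.normal(size=(J, m))   # perturbed observations
    U = U + (Y - Gs) @ K.T            # Kalman-style update of each member
    U = soft_threshold(U, 0.02)       # sparsify after each EKI step

print("estimate:", np.round(U.mean(axis=0), 2))
print("truth:   ", u_true)
```

    The update uses only ensemble covariances and forward-map evaluations, which is what makes EKI attractive here: the map from coefficients to time-averaged statistics need not be differentiated.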
