14 research outputs found

    Optimal projection of observations in a Bayesian setting

    Full text link
    Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in the presence of overabundant data. Three optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback-Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback-Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace, and their solutions are computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model, that a basis of this subspace can be computed as the solution to a generalized eigenvalue problem, that an a priori error estimate on the mutual information is available for this particular solution, and that the dimensionality of the subspace needed to exactly conserve the mutual information between the input and the output of the models is at most the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches and to highlight their advantages over standard approaches based on the principal component analysis of the observations.
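
    For the Gaussian linear setting above, an information-preserving projection can indeed be obtained from a generalized eigenvalue problem. The following is a minimal illustrative sketch, not the paper's implementation: the forward matrix G, prior covariance Sigma_x, noise covariance Sigma_eps and reduced dimension k are arbitrary assumptions chosen for the example. It also checks that a projection whose dimension equals the number of parameters conserves the mutual information.
```python
# Sketch (illustrative assumptions, not the paper's code): mutual-information-based
# projection of observations for a Gaussian linear model y = G x + noise.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, m, k = 4, 200, 4                       # parameters, (overabundant) observations, reduced dim
G = rng.standard_normal((m, n))
Sigma_x = np.eye(n)                       # prior covariance of the parameters
Sigma_eps = 0.1 * np.eye(m)               # observation noise covariance

# Signal covariance seen in the observation space.
S = G @ Sigma_x @ G.T

# Generalized eigenvalue problem S v = lambda Sigma_eps v; the leading eigenvectors
# span a subspace whose projection V^T y preserves the mutual information with x.
lam, V = eigh(S, Sigma_eps)
idx = np.argsort(lam)[::-1][:k]
Vk, lam_k = V[:, idx], lam[idx]

# Mutual information carried by the k-dimensional projection (in nats).
mi_reduced = 0.5 * np.sum(np.log1p(lam_k))
mi_full = 0.5 * np.sum(np.log1p(lam[lam > 1e-12]))
print(f"I(x; V^T y) = {mi_reduced:.4f} vs I(x; y) = {mi_full:.4f}")

# Posterior covariance using only the projected observations z = V^T y.
Gz = Vk.T @ G
Sz = Vk.T @ Sigma_eps @ Vk
post_prec = np.linalg.inv(Sigma_x) + Gz.T @ np.linalg.solve(Sz, Gz)
post_cov = np.linalg.inv(post_prec)
print("posterior covariance from k projected observations:\n", post_cov)
```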

    Coordinate Transformation and Polynomial Chaos for the Bayesian Inference of a Gaussian Process with Parametrized Prior Covariance Function

    Full text link
    This paper addresses model dimensionality reduction for Bayesian inference based on prior Gaussian fields with uncertainty in the covariance function hyper-parameters. The dimensionality reduction is traditionally achieved using the Karhunen-Loève expansion of a prior Gaussian process, assuming a covariance function with fixed hyper-parameters, despite the fact that these are uncertain in nature. The posterior distribution of the Karhunen-Loève coordinates is then inferred from the available observations, so the resulting inferred field depends on the assumed hyper-parameters. Here, we seek to efficiently estimate both the field and the covariance hyper-parameters using Bayesian inference. To this end, a generalized Karhunen-Loève expansion is derived using a coordinate transformation that accounts for the dependence on the covariance hyper-parameters. Polynomial Chaos expansions are employed to accelerate the Bayesian inference using similar coordinate transformations, avoiding an explicit expansion of the solution's dependence on the uncertain hyper-parameters. We demonstrate the feasibility of the proposed method on a transient diffusion equation by inferring spatially varying log-diffusivity fields from noisy data. The inferred profiles are found to be closer to the true profiles when the hyper-parameters' uncertainty is included in the inference formulation.
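
    To see why a fixed-hyper-parameter truncation is limiting, the sketch below computes a standard (non-generalized) discrete Karhunen-Loève expansion for two values of an assumed squared-exponential correlation length; the grid, kernel and truncation level are illustrative choices, not taken from the paper. The captured-energy fraction changes with the hyper-parameter, which is the dependence the generalized expansion is designed to handle.
```python
# Sketch (illustrative assumptions): truncated KL expansion of a Gaussian process
# on [0, 1] with a squared-exponential covariance, for two correlation lengths.
import numpy as np

def kl_basis(x, corr_length, sigma=1.0, n_modes=10):
    """Discrete KL decomposition of C(x, x') = sigma^2 exp(-(x - x')^2 / (2 L^2))."""
    C = sigma**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / corr_length) ** 2)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_modes]
    return vals[order], vecs[:, order]

x = np.linspace(0.0, 1.0, 200)
for L in (0.1, 0.4):                            # two hyper-parameter values
    vals, _ = kl_basis(x, corr_length=L)
    frac = vals.sum() / x.size                  # trace(C) = sigma^2 * N with sigma = 1
    print(f"correlation length {L}: energy captured by 10 modes = {frac:.3f}")

# One realization of the truncated field for a fixed hyper-parameter value.
rng = np.random.default_rng(0)
vals, vecs = kl_basis(x, corr_length=0.2)
xi = rng.standard_normal(vals.size)             # KL coordinates (standard normals)
field = vecs @ (np.sqrt(vals) * xi)
```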

    Cardiovascular Modeling With Adapted Parametric Inference

    No full text
    Computational modeling of the cardiovascular system, promoted by the advance of fluid-structure interaction numerical methods, has made great progress towards the development of patient-specific numerical aids to diagnosis, risk prediction, intervention, and clinical treatment. Nevertheless, the reliability of these models is inevitably impacted by rough modeling assumptions. A strong integration of patient-specific data into the numerical modeling is therefore needed to improve the accuracy of the predictions through the calibration of important physiological parameters. The Bayesian statistical framework for inverse problems is a powerful approach that relies on posterior sampling techniques, such as Markov chain Monte Carlo algorithms. The generation of samples requires many evaluations of the cardiovascular parameter-to-observable model. In practice, the use of a full cardiovascular numerical model is prohibitively expensive, and a computational strategy based on approximations of the system response, or surrogate models, is needed to perform the data assimilation. As the support of the parameter distribution typically concentrates on a small fraction of the initial prior distribution, a worthwhile improvement consists in gradually adapting the surrogate model to minimize the approximation error for parameter values corresponding to high posterior density. We introduce a novel numerical pathway that constructs a sequence of polynomial surrogate models, by regression, using samples drawn from a sequence of distributions likely to converge to the posterior distribution. The approach yields substantial gains in efficiency and accuracy over direct prior-based surrogate models, as demonstrated via application to the identification of pulse wave velocities in a human lower-limb arterial network.
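
    A minimal sketch of the adaptive-surrogate idea is given below, with a cheap analytical stand-in for the expensive forward model, a Gaussian prior, and a total-degree-2 polynomial regression surrogate; all of these are illustrative assumptions rather than the paper's actual cardiovascular pipeline. Each cycle refits the surrogate on a design enriched with high-posterior samples from a short Metropolis chain.
```python
# Sketch (illustrative assumptions): adaptively refitted polynomial surrogate
# for Bayesian inference of a 2-parameter model.
import numpy as np

rng = np.random.default_rng(1)

def forward_model(theta):                  # stand-in for an expensive simulator
    return np.array([np.exp(-theta[0]) + theta[1] ** 2, np.sin(theta[0] * theta[1])])

theta_true = np.array([0.6, -0.3])
data = forward_model(theta_true) + 0.01 * rng.standard_normal(2)
sigma_noise, prior_std = 0.01, 1.0

def features(T):                           # total-degree-2 polynomial basis in 2D
    t1, t2 = T[:, 0], T[:, 1]
    return np.column_stack([np.ones_like(t1), t1, t2, t1 * t2, t1**2, t2**2])

def log_post(theta, coef):                 # surrogate likelihood + Gaussian prior
    pred = features(theta[None, :]) @ coef
    return (-0.5 * np.sum((pred.ravel() - data) ** 2) / sigma_noise**2
            - 0.5 * np.sum(theta**2) / prior_std**2)

# Initial design drawn from the prior, then a few adaptation cycles.
design = prior_std * rng.standard_normal((20, 2))
for cycle in range(4):
    Y = np.array([forward_model(t) for t in design])
    coef, *_ = np.linalg.lstsq(features(design), Y, rcond=None)   # regression fit

    # Short random-walk Metropolis using the current surrogate.
    theta, lp = np.zeros(2), log_post(np.zeros(2), coef)
    chain = []
    for _ in range(3000):
        prop = theta + 0.1 * rng.standard_normal(2)
        lp_prop = log_post(prop, coef)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    chain = np.array(chain[500:])

    # Enrich the design with high-posterior samples and refit in the next cycle.
    design = np.vstack([design, chain[rng.choice(len(chain), 10, replace=False)]])
    print(f"cycle {cycle}: surrogate posterior mean = {chain.mean(axis=0)}")
```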

    Variance reduction methods and multilevel Monte Carlo strategy for the density estimation of random second order linear differential equations solutions

    Full text link
    This paper concerns the estimation of the density function of the solution to a random non-autonomous second-order linear differential equation with analytic data processes. In a recent contribution, we proposed to express the density function as an expectation and used a standard Monte Carlo algorithm to approximate it. Although the algorithm worked satisfactorily for most test problems, numerical challenges emerged for others due to large statistical errors. In these situations, the convergence of the Monte Carlo simulation slows down severely and noisy features plague the estimates. In this paper, we focus on computational aspects and propose several variance reduction methods to remedy these issues and speed up the convergence. First, we introduce a pathwise selection of the approximating processes that aims at controlling the variance of the estimator. Second, we propose a hybrid method, combining Monte Carlo sampling and deterministic quadrature rules, to estimate the expectation. Third, we exploit the series expansions of the solutions to design a multilevel Monte Carlo estimator. The proposed methods are implemented and tested on several numerical examples to highlight the theoretical discussions and demonstrate the significant improvements achieved.
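
    The multilevel Monte Carlo idea, in its generic telescoping form, is sketched below for a damped oscillator with a random natural frequency, using time-step refinement to define the levels; the solver, sample allocation and quantity of interest are illustrative assumptions and differ from the paper's series-expansion-based density estimator.
```python
# Sketch (illustrative assumptions): generic MLMC estimator
# E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}] for Q = x(T), where x solves
# x'' + 2*zeta*w*x' + w^2*x = 0 with a random natural frequency w.
import numpy as np

rng = np.random.default_rng(2)
zeta, T = 0.1, 2.0

def solve(w, n_steps):
    """Semi-implicit Euler for the damped oscillator, x(0) = 1, x'(0) = 0."""
    dt = T / n_steps
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        v += dt * (-2.0 * zeta * w * v - w**2 * x)
        x += dt * v
    return x

def level_estimator(level, n_samples):
    """Monte Carlo estimate of E[Q_l - Q_{l-1}] with coupled samples (same w)."""
    n_fine = 16 * 2**level
    diffs = np.empty(n_samples)
    for i in range(n_samples):
        w = rng.uniform(4.0, 6.0)          # random natural frequency
        q_fine = solve(w, n_fine)
        q_coarse = solve(w, n_fine // 2) if level > 0 else 0.0
        diffs[i] = q_fine - q_coarse
    return diffs.mean(), diffs.var()

# Decreasing sample counts per level (crude stand-in for the optimal allocation).
samples_per_level = [4000, 1000, 250, 60]
estimate = 0.0
for level, n in enumerate(samples_per_level):
    mean_l, var_l = level_estimator(level, n)
    estimate += mean_l
    print(f"level {level}: correction mean = {mean_l:+.4f}, variance = {var_l:.2e}")
print(f"MLMC estimate of E[x(T)] = {estimate:.4f}")
```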

    A Stochastic Projection Method for Fluid Flow I. Basic Formulation

    No full text
    We describe the construction and implementation of a stochastic Navier-Stokes…