
    Discretizing Distributions with Exact Moments: Error Estimate and Convergence Analysis

    The maximum entropy principle is a powerful tool for solving underdetermined inverse problems. This paper considers the problem of discretizing a continuous distribution, which arises in various applied fields. We obtain the approximating distribution by minimizing the Kullback-Leibler information (relative entropy) of the unknown discrete distribution relative to an initial discretization based on a quadrature formula, subject to moment constraints. We study the theoretical error bound and the convergence of this approximation method as the number of discrete points increases. We prove that (i) the theoretical error bound of the approximate expectation of any bounded continuous function is of at most the same order as that of the quadrature formula we start with, and (ii) the approximate discrete distribution converges weakly to the given continuous distribution. Moreover, we present numerical examples that demonstrate the advantages of the method, and we apply it to the numerical solution of an optimal portfolio problem.

    Comment: 20 pages, 14 figures
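
    The construction can be sketched in a few lines. Below is a minimal Python illustration, not the paper's implementation: it reweights a crude uniform-grid discretization of the standard normal so that its first four moments become exact, by minimizing the convex dual of the KL-minimization problem. The grid, the number of points, and the optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch (not the paper's code): reweight a crude grid
# discretization q of N(0,1) so its first K moments are exact, by solving
# min_w KL(w || q) s.t. sum_i w_i x_i^k = m_k, via the convex dual problem.
n, K = 25, 4
x = np.linspace(-4.0, 4.0, n)              # illustrative support points
q = np.exp(-0.5 * x**2)
q /= q.sum()                               # initial (inexact) discretization

m = np.array([0.0, 1.0, 0.0, 3.0])         # exact N(0,1) moments, k = 1..4
V = np.vstack([x**k for k in range(1, K + 1)])   # moment features, shape (K, n)

def dual(lam):
    # Dual objective log Z(lam) - lam . m; its minimizer gives the
    # Lagrange multipliers of the moment constraints.
    return np.log(np.sum(q * np.exp(lam @ V))) - lam @ m

lam = minimize(dual, np.zeros(K), method="BFGS").x
w = q * np.exp(lam @ V)
w /= w.sum()                               # maximum-entropy reweighting

print(V @ w)                               # ~ [0, 1, 0, 3]: moments now exact
```

    The dual variables act as Lagrange multipliers for the moment constraints; the paper's analysis concerns how the error of expectations under the reweighted discrete distribution decays as the number of points n grows.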

    Multilevel Double Loop Monte Carlo and Stochastic Collocation Methods with Importance Sampling for Bayesian Optimal Experimental Design

    An optimal experimental set-up maximizes the value of data for statistical inferences and predictions. The efficiency of strategies for finding optimal experimental set-ups is particularly important for experiments that are time-consuming or expensive to perform. For instance, when the experiments are modeled by partial differential equations (PDEs), multilevel methods have been shown to reduce dramatically the computational complexity of their single-level counterparts when estimating expected values. For such a setting, we propose two multilevel methods for estimating a popular design criterion known as the expected information gain in simulation-based Bayesian optimal experimental design. The expected information gain criterion has a nested expectation form, and only a handful of multilevel methods have been proposed for problems of that form. We propose a Multilevel Double Loop Monte Carlo (MLDLMC) method, which combines a multilevel strategy with Double Loop Monte Carlo (DLMC), and a Multilevel Double Loop Stochastic Collocation (MLDLSC) method, which performs high-dimensional integration by deterministic quadrature on sparse grids. For both methods, the Laplace approximation is used for importance sampling, which significantly reduces the computational work of estimating the inner expectations. The optimal values of the method parameters are determined by minimizing the average computational work, subject to satisfying the desired error tolerance. The computational efficiencies of the methods are demonstrated by estimating the expected information gain for Bayesian inference of the fiber orientation in composite laminate materials from an electrical impedance tomography experiment. MLDLSC performs better than MLDLMC when the regularity of the quantity of interest, with respect to the additive noise and the unknown parameters, can be exploited.
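
    For concreteness, here is a minimal sketch of the plain (single-level) double-loop Monte Carlo estimator of the expected information gain, on a toy scalar model rather than a PDE. The multilevel hierarchy and the Laplace-based importance sampling that the paper adds on top are omitted, and the forward map g, noise level, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experiment (hypothetical, stands in for the paper's PDE model):
# y = g(theta) + eps,  theta ~ N(0,1),  eps ~ N(0, sigma^2).
sigma = 0.5
g = lambda theta: np.sin(theta)          # illustrative forward model

def log_lik(y, theta):
    # Gaussian log-likelihood of observation y given parameter theta.
    return -0.5 * ((y - g(theta)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def eig_dlmc(N=2000, M=2000):
    """Double Loop Monte Carlo estimator of the expected information gain:
    EIG ~ (1/N) sum_i [ log p(y_i|theta_i) - log (1/M) sum_j p(y_i|theta_ij) ]."""
    theta = rng.standard_normal(N)                 # outer prior samples
    y = g(theta) + sigma * rng.standard_normal(N)  # synthetic observations
    inner = rng.standard_normal((N, M))            # inner prior samples
    # Inner loop: Monte Carlo estimate of the evidence p(y_i) for each y_i.
    log_evid = np.log(np.mean(np.exp(log_lik(y[:, None], inner)), axis=1))
    return np.mean(log_lik(y, theta) - log_evid)

print("EIG estimate:", eig_dlmc())
```

    The nested structure is visible in the two sample sizes N and M: the inner evidence estimate is itself a Monte Carlo average, which is exactly where the paper's Laplace-approximation importance sampling and multilevel constructions concentrate their computational savings.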