
    Estimation of Distribution Overlap of Urn Models

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in k draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of k. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over k, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria under which it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where the dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
    Comment: 27 pages, 4 figures
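    The dissimilarity parameter described in this abstract has a simple closed form when both distributions are known: the probability that a draw from p is unseen in k draws from q is the sum over categories of p(c)·(1 − q(c))^k. The sketch below computes that parameter exactly and adds a naive plug-in Monte Carlo estimate from two observed samples; it is an illustration only (the function names and the plug-in estimator are hypothetical, not the paper's unbiased U-statistic):

    ```python
    import random

    def dissimilarity(p, q, k):
        """Exact dissimilarity parameter: probability that one draw from
        distribution p is not observed in k i.i.d. draws from q.
        p and q map categories to probabilities."""
        return sum(pc * (1.0 - q.get(c, 0.0)) ** k for c, pc in p.items())

    def empirical_dissimilarity(sample_p, sample_q, k, trials=10_000, seed=0):
        """Naive Monte Carlo plug-in estimate from two observed samples
        (a sketch, not the paper's minimum-variance unbiased estimator)."""
        rng = random.Random(seed)
        misses = 0
        for _ in range(trials):
            draw = rng.choice(sample_p)          # one draw from the first urn
            window = set(rng.choices(sample_q, k=k))  # k draws from the second
            misses += draw not in window
        return misses / trials
    ```

    For example, with p uniform on {a, b} and q concentrated on {a}, a draw of b is never seen in q, so the dissimilarity is 0.5 for any k.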

    A Massively Parallel Implementation of Multilevel Monte Carlo for Finite Element Models

    The Multilevel Monte Carlo (MLMC) method has proven to be an effective variance-reduction statistical method for Uncertainty Quantification (UQ) in Partial Differential Equation (PDE) models, combining model computations at different levels to create an accurate estimate. Still, the computational complexity of the resulting method is extremely high, particularly for 3D models, which requires advanced algorithms for the efficient exploitation of High Performance Computing (HPC). In this article we present a new implementation of MLMC on massively parallel computer architectures, exploiting parallelism within and between each level of the hierarchy. The numerical approximation of the PDE is performed using the finite element method, but the algorithm is quite general and could be applied to other discretization methods as well, although the focus is on parallel sampling. The two key ingredients of an efficient parallel implementation are a good processor partition scheme together with a good scheduling algorithm to assign work to different processors. We introduce a multiple partition of the set of processors that permits the simultaneous execution of different levels, and we develop a dynamic scheduling algorithm to exploit it. The problem of finding the optimal scheduling of distributed tasks in a parallel computer is NP-complete. We propose and analyze a new greedy scheduling algorithm to assign samples, and we show that it is a 2-approximation, which is the best that may be expected under general assumptions. On top of this result we design a distributed memory implementation using the Message Passing Interface (MPI) standard. Finally we present a set of numerical experiments illustrating its scalability properties.
    Comment: 21 pages, 13 figures
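    The greedy scheduling idea mentioned in the abstract can be sketched in a few lines: assign each task to the currently least-loaded worker, which is the classic list-scheduling heuristic whose makespan is within a factor of 2 of optimal. This is a minimal serial illustration of the scheduling principle, not the paper's MPI algorithm; all names are hypothetical:

    ```python
    import heapq

    def greedy_schedule(costs, n_workers):
        """List scheduling: place each task (largest cost first) on the
        worker with the smallest current load. Returns the assignment
        and the resulting makespan (max worker load)."""
        loads = [(0.0, w) for w in range(n_workers)]  # (load, worker) min-heap
        heapq.heapify(loads)
        assignment = {w: [] for w in range(n_workers)}
        for task, c in sorted(enumerate(costs), key=lambda t: -t[1]):
            load, w = heapq.heappop(loads)   # least-loaded worker
            assignment[w].append(task)
            heapq.heappush(loads, (load + c, w))
        makespan = max(load for load, _ in loads)
        return assignment, makespan
    ```

    For instance, tasks with costs [5, 4, 3, 2, 1] on two workers yield a makespan of 8, which is optimal here since the total work is 15.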

    Embedded multilevel Monte Carlo for uncertainty quantification in random domains

    The multilevel Monte Carlo (MLMC) method has proven to be an effective variance-reduction statistical method for uncertainty quantification in PDE models. It combines approximations at different levels of accuracy using a hierarchy of meshes, in a similar way to multigrid. The generation of body-fitted mesh hierarchies is only possible for simple geometries. On top of that, MLMC for random domains involves the generation of a mesh for every sample. Instead, here we consider embedded methods, which use simple background meshes of an artificial domain (a bounding box) for which it is easy to define a mesh hierarchy, thus eliminating the need for body-fitted unstructured meshes; however, embedded methods can produce ill-conditioned discrete problems. To avoid this complication, we consider the recent aggregated finite element method (AgFEM). In particular, we design an embedded MLMC framework for (geometrically and topologically) random domains implicitly defined through a random level-set function, which makes use of a set of hierarchical background meshes and the AgFEM. Performance predictions from existing theory are verified statistically in three numerical experiments, namely the solution of the Poisson equation on a circular domain of random radius, the solution of the Poisson equation on a topologically identical but more complex domain, and the solution of a heat-transfer problem in a domain that has geometric and topological uncertainties. Finally, the use of AgFEM is statistically demonstrated to be crucial for complex and uncertain geometries in terms of robustness and computational cost.
    Date: November 28, 2019
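    The core MLMC idea used by both abstracts above, combining approximations at different accuracy levels through the telescoping sum E[P_L] = E[P_0] + Σ E[P_l − P_{l−1}], can be sketched abstractly as follows. This is a toy serial illustration under an assumed `sampler` interface (nothing here reflects the finite element or AgFEM machinery of the papers):

    ```python
    import random

    def mlmc_estimate(sampler, n_levels, samples_per_level, seed=0):
        """Minimal MLMC sketch: estimate E[P_L] via the telescoping sum
        E[P_0] + sum_{l>=1} E[P_l - P_{l-1}].

        `sampler(level, rng)` must return (fine, coarse), where both values
        are computed from the SAME random input; `coarse` is None at level 0.
        This interface is hypothetical, chosen for the illustration."""
        rng = random.Random(seed)
        total = 0.0
        for level in range(n_levels):
            n = samples_per_level[level]
            s = 0.0
            for _ in range(n):
                fine, coarse = sampler(level, rng)
                # Level 0 contributes P_0 itself; higher levels contribute
                # the correction P_l - P_{l-1}.
                s += fine if coarse is None else fine - coarse
            total += s / n
        return total
    ```

    Because the level corrections P_l − P_{l−1} shrink as levels refine, most samples can be taken on cheap coarse levels, which is the source of MLMC's cost savings.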
