5 research outputs found

    A Massively Parallel Implementation of Multilevel Monte Carlo for Finite Element Models

    The Multilevel Monte Carlo (MLMC) method has proven to be an effective variance-reduction technique for Uncertainty Quantification (UQ) in Partial Differential Equation (PDE) models, combining model computations at different levels to create an accurate estimate. Still, the computational complexity of the resulting method is extremely high, particularly for 3D models, demanding advanced algorithms for the efficient exploitation of High Performance Computing (HPC). In this article we present a new implementation of MLMC on massively parallel computer architectures, exploiting parallelism within and between the levels of the hierarchy. The numerical approximation of the PDE is performed using the finite element method, but the algorithm is quite general and could be applied to other discretization methods as well, since the focus is on parallel sampling. The two key ingredients of an efficient parallel implementation are a good processor partition scheme and a good scheduling algorithm to assign work to the processors. We introduce a multiple partition of the set of processors that permits the simultaneous execution of different levels, and we develop a dynamic scheduling algorithm to exploit it. Finding the optimal schedule for distributed tasks on a parallel computer is NP-complete. We propose and analyze a new greedy scheduling algorithm for assigning samples and show that it is a 2-approximation, the best that may be expected under general assumptions. On top of this result we design a distributed-memory implementation using the Message Passing Interface (MPI) standard. Finally, we present a set of numerical experiments illustrating its scalability properties. (21 pages, 13 figures)
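
    To make the scheduling result concrete, the sketch below shows greedy list scheduling of the kind the abstract describes: each sample, taken in decreasing order of estimated cost, is assigned to the currently least-loaded processor partition. This is a minimal Python illustration; the function name, the cost dictionary, and the one-partition-per-sample simplification are assumptions, not the paper's actual interface.

    ```python
    import heapq

    def greedy_schedule(sample_costs, num_partitions):
        """Greedily assign samples to processor partitions.

        Plain greedy list scheduling is a (2 - 1/m)-approximation of the
        optimal makespan on m identical partitions, consistent with the
        2-approximation regime cited in the abstract; taking samples in
        decreasing cost order (LPT) tightens the bound in practice.
        """
        loads = [(0.0, p) for p in range(num_partitions)]  # (load, partition)
        heapq.heapify(loads)
        assignment = {}
        for sample, cost in sorted(sample_costs.items(), key=lambda kv: -kv[1]):
            load, part = heapq.heappop(loads)  # least-loaded partition
            assignment[sample] = part
            heapq.heappush(loads, (load + cost, part))
        return assignment
    ```

    For example, greedy_schedule({"s0": 8.0, "s1": 6.0, "s2": 3.0, "s3": 3.0}, 2) yields a makespan of 11, which happens to be optimal for these costs.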

    A Fully Parallelized and Budgeted Multi-Level Monte Carlo Method and the Application to Acoustic Waves

    We present a novel variant of the multi-level Monte Carlo method that effectively utilizes a reserved computational budget on a high-performance computing system to minimize the mean squared error. Our approach combines concepts of the continuation multi-level Monte Carlo method with dynamic programming techniques following Bellman's optimality principle, and a new parallelization strategy based on a single distributed data structure. Additionally, we establish a theoretical bound on the error reduction on a parallel computing cluster and provide empirical evidence that the proposed method adheres to this bound. We implement, test, and benchmark the approach on computationally demanding problems, focusing on its application to acoustic wave propagation in high-dimensional random media.
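
    The budget constraint can be made concrete with the classical MLMC sample-allocation result, which is the starting point that budgeted methods like this one refine (the paper's dynamic program re-optimizes as estimates improve; this sketch is not it): minimizing the estimator variance subject to a total cost budget gives per-level sample counts proportional to sqrt(V_l / C_l). The names below are illustrative.

    ```python
    import math

    def budgeted_sample_counts(variances, costs, budget):
        """Minimize sum_l V_l / n_l subject to sum_l n_l * C_l <= budget.

        Lagrange multipliers give n_l = budget * sqrt(V_l / C_l) / W with
        W = sum_k sqrt(V_k * C_k); we floor and keep at least one sample.
        """
        w = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
        return [max(1, math.floor(budget * math.sqrt(v / c) / w))
                for v, c in zip(variances, costs)]
    ```

    A continuation-style scheme would re-run this allocation whenever the variance and cost estimates are updated from completed samples, investing whatever budget remains.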

    Scheduling Massively Parallel Multigrid for Multilevel Monte Carlo Methods

    The computational complexity of naive, sampling-based uncertainty quantification for 3D partial differential equations is extremely high. Multilevel approaches, such as multilevel Monte Carlo (MLMC), can reduce the complexity significantly when they are combined with a fast multigrid solver, but to exploit them fully in a parallel environment, sophisticated scheduling strategies are needed. We optimize the concurrent execution across the three layers of the MLMC method: parallelization across levels, across samples, and across the spatial grid. In a series of numerical tests, we illustrate the influence of the multigrid solver's “scalability window” (i.e., the range of processor numbers over which good parallel efficiency can be maintained) on the overall performance. Different homogeneous and heterogeneous scheduling strategies are proposed and discussed. Finally, large 3D scaling experiments are carried out, including adaptivity.
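
    As a minimal illustration of the sample layer, the mpi4py sketch below splits the global communicator into equally sized groups, assuming procs_per_sample is chosen inside the solver's scalability window; the homogeneous and heterogeneous strategies discussed in the paper go further and interleave several such partitions across levels.

    ```python
    from mpi4py import MPI

    def split_for_samples(comm, procs_per_sample):
        """Partition comm into groups of procs_per_sample ranks.

        Each group runs the multigrid solver for one sample (grid-layer
        parallelism inside the group), while comm.size // procs_per_sample
        samples of the level execute concurrently (sample layer).
        """
        color = comm.Get_rank() // procs_per_sample  # concurrent sample slot
        return comm.Split(color, comm.Get_rank())
    ```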

    Adaptive Load Balancing for Massively Parallel Multi-Level Monte Carlo Solvers

    No full text

    A Fully Parallelized and Budgeted Multi-level Monte Carlo Framework for Partial Differential Equations: From Mathematical Theory to Automated Large-Scale Computations

    All collected data on any physical, technical, or economic process is subject to uncertainty. By incorporating this uncertainty into the model and propagating it through the system, the data error can be controlled, making the system's predictions more trustworthy and reliable. The multi-level Monte Carlo (MLMC) method has proven to be an effective uncertainty quantification tool, requiring little knowledge about the problem while being highly performant. In this doctoral thesis we analyze, implement, develop, and apply the MLMC method to partial differential equations (PDEs) subject to high-dimensional random input data. We set up a unified framework based on the software M++ to approximate solutions to elliptic and hyperbolic PDEs with a large selection of finite element methods. We combine this setup with a new variant of the MLMC method. In particular, we propose a budgeted MLMC (BMLMC) method which is capable of optimally investing reserved computing resources in order to minimize the model error while exhausting a given computational budget. This is achieved by developing a new parallelism based on a single distributed data structure, employing ideas of the continuation MLMC method, and utilizing dynamic programming techniques. The final method is theoretically motivated, analyzed, and numerically well-tested in an automated benchmarking workflow for highly challenging problems such as the approximation of wave equations in randomized media.
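
    For reference, the telescoping estimator that all of the work above parallelizes fits in a few lines. The callback sample_pair below is a hypothetical stand-in for the framework's finite element solves; the essential requirement is that the fine and coarse quantities of interest be evaluated on the same random input.

    ```python
    def mlmc_estimate(sample_pair, counts):
        """Telescoping MLMC estimator:

            E[Q_L] ~= sum_{l=0}^{L} (1/n_l) * sum_{i=1}^{n_l} (Q_l^i - Q_{l-1}^i)

        where sample_pair(l) draws one random input omega and returns the
        correlated pair (Q_l(omega), Q_{l-1}(omega)), with Q_{-1} := 0.
        """
        estimate = 0.0
        for level, n in enumerate(counts):
            diffs = [fine - coarse for fine, coarse in
                     (sample_pair(level) for _ in range(n))]
            estimate += sum(diffs) / n
        return estimate
    ```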