
    On local Fourier analysis of multigrid methods for PDEs with jumping and random coefficients

    In this paper, we propose a novel non-standard Local Fourier Analysis (LFA) variant for accurately predicting the multigrid convergence of problems with random and jumping coefficients. This LFA method is based on a specific basis of the Fourier space rather than the commonly used Fourier modes. To show the utility of this analysis, we consider, as an example, a simple cell-centered multigrid method for solving a steady-state single-phase flow problem in a random porous medium. We successfully demonstrate the prediction capability of the proposed LFA using a number of challenging benchmark problems. The information provided by this analysis helps us estimate a priori the time needed to solve certain uncertainty quantification problems by means of a multigrid multilevel Monte Carlo method.
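
    As a point of reference for how LFA predicts multigrid convergence, here is a minimal sketch (Python with NumPy) of classical LFA: the smoothing factor of weighted Jacobi for the constant-coefficient 2D Laplacian. It shows only the standard Fourier-mode analysis that the paper's non-standard variant generalizes to jumping and random coefficients; the function name and parameter values are illustrative, not taken from the paper.

    import numpy as np

    def jacobi_smoothing_factor(omega=0.8, n=256):
        # Sample the Fourier frequencies theta in [-pi, pi)^2
        theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
        tx, ty = np.meshgrid(theta, theta, indexing="ij")
        # Symbol of the 5-point Laplacian stencil (scaled by h^2)
        symbol_A = 4.0 - 2.0 * np.cos(tx) - 2.0 * np.cos(ty)
        # Symbol of the omega-Jacobi error propagator: I - omega * D^{-1} A
        symbol_S = 1.0 - omega * symbol_A / 4.0
        # High frequencies are those not representable on the standard coarse grid
        high = (np.abs(tx) >= np.pi / 2) | (np.abs(ty) >= np.pi / 2)
        return np.max(np.abs(symbol_S[high]))

    print(jacobi_smoothing_factor())  # about 0.6 for omega = 0.8

    The smoothing factor bounds how strongly one relaxation sweep damps the error components the coarse grid cannot represent; a coefficient-dependent analysis, as in the paper, replaces the plain Fourier modes with a basis adapted to the coefficient structure.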

    Multilevel Sparse Grid Methods for Elliptic Partial Differential Equations with Random Coefficients

    Stochastic sampling methods are arguably the most direct and least intrusive means of incorporating parametric uncertainty into numerical simulations of partial differential equations with random inputs. However, to achieve an overall error that is within a desired tolerance, a large number of sample simulations may be required (to control the sampling error), each of which may need to be run at high levels of spatial fidelity (to control the spatial error). Multilevel sampling methods aim to achieve the same accuracy as traditional sampling methods, but at a reduced computational cost, through the use of a hierarchy of spatial discretization models. Multilevel algorithms coordinate the number of samples needed at each discretization level by minimizing the computational cost subject to a given error tolerance. They can be applied to a variety of sampling schemes, exploit nesting when available, can be implemented in parallel, and can be used to inform adaptive spatial refinement strategies. We extend the multilevel sampling algorithm to sparse grid stochastic collocation methods, discuss its numerical implementation, and demonstrate its efficiency both theoretically and by means of numerical examples.
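
    For concreteness, the cost minimization mentioned above is, in the standard multilevel setting, a small constrained optimization with a closed-form solution: minimize the total cost subject to a bound on the sampling error, which gives per-level sample counts proportional to sqrt(V_l / C_l). The Python sketch below implements that rule; the variance and cost values are illustrative only and are not taken from the paper.

    import math

    def mlmc_sample_counts(V, C, tol):
        # Minimize total cost sum_l N_l * C_l subject to the sampling-error budget
        # sum_l V_l / N_l <= tol**2 / 2; the Lagrangian optimum gives
        # N_l proportional to sqrt(V_l / C_l).
        lam = sum(math.sqrt(v * c) for v, c in zip(V, C)) / (0.5 * tol ** 2)
        return [max(1, math.ceil(lam * math.sqrt(v / c))) for v, c in zip(V, C)]

    # Illustrative per-level correction variances and per-sample costs
    V = [1.0, 2.5e-1, 6.25e-2, 1.5625e-2]
    C = [1.0, 4.0, 16.0, 64.0]
    print(mlmc_sample_counts(V, C, tol=1e-2))

    In practice the variances and costs would typically be estimated from a small number of pilot samples on each level and the counts updated as the estimates improve.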

    Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and, naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and the parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grid and Multi-index Monte Carlo methods. We use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument, showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality, and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite-dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis.
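
    The knapsack-style greedy selection described above can be sketched in a few lines of Python: multi-indices are added one at a time in order of estimated profit (error contribution per unit work) while keeping the index set downward closed. The error and work models below are illustrative stand-ins, not the paper's quasi-optimal profit estimates, and the function names are invented for this sketch.

    def is_admissible(beta, index_set):
        # beta keeps the set downward closed if all its backward neighbours are present
        for d in range(len(beta)):
            if beta[d] > 0:
                neighbour = beta[:d] + (beta[d] - 1,) + beta[d + 1:]
                if neighbour not in index_set:
                    return False
        return True

    def greedy_index_set(dim, budget, error, work):
        # Grow a downward-closed multi-index set by maximum profit = error / work
        index_set = {(0,) * dim}
        total_work = work((0,) * dim)
        while True:
            candidates = set()
            for alpha in index_set:
                for d in range(dim):
                    beta = alpha[:d] + (alpha[d] + 1,) + alpha[d + 1:]
                    if beta not in index_set and is_admissible(beta, index_set):
                        candidates.add(beta)
            candidates = [b for b in candidates if total_work + work(b) <= budget]
            if not candidates:
                return index_set
            best = max(candidates, key=lambda b: error(b) / work(b))
            index_set.add(best)
            total_work += work(best)

    def error_model(beta):   # illustrative: error contribution decays geometrically
        return 2.0 ** (-2 * sum(beta))

    def work_model(beta):    # illustrative: work grows geometrically
        return 2.0 ** sum(beta)

    print(sorted(greedy_index_set(dim=2, budget=64.0, error=error_model, work=work_model)))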

    Multi-Index Monte Carlo: When Sparsity Meets Sampling

    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, in MIMC we use high-order mixed differences instead of the first-order differences used in MLMC, which dramatically reduces the variance of the hierarchical differences. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis and which enlarge the domain of the problem parameters for which we achieve the optimal convergence, $\mathcal{O}(\text{TOL}^{-2})$. Moreover, in MIMC, the rate of increase of required memory with respect to $\text{TOL}$ is independent of the number of directions up to a logarithmic term, which allows far more accurate solutions to be calculated for higher dimensions than is possible when using MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal sets of indices for MIMC based on properly defined profits that in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance, and work per sample, the optimal index set turns out to be of total degree (TD) type. In some cases, using optimal index sets, MIMC achieves a better rate for the computational complexity than the corresponding rate when using full tensor index sets.
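
    The key ingredient, the mixed difference over a multi-index (a first-order difference applied in each discretization direction and then composed), can be written down in a few lines. The Python sketch below uses a toy quantity of interest Q, invented for illustration; in MIMC each evaluation would instead be a correlated sample of the discretized model. It also checks the telescoping identity that makes the full-tensor estimator reproduce the finest single approximation.

    import itertools

    def mixed_difference(Q, alpha):
        # Tensorized first-order backward difference of Q at multi-index alpha:
        # combine Q at the 2^d backward corners of alpha with alternating signs.
        d = len(alpha)
        total = 0.0
        for shifts in itertools.product((0, 1), repeat=d):
            beta = tuple(a - s for a, s in zip(alpha, shifts))
            if any(b < 0 for b in beta):
                continue  # terms below level 0 are treated as zero by convention
            total += (-1) ** sum(shifts) * Q(beta)
        return total

    def Q(alpha):
        # Toy quantity of interest: converges to 1 with an algebraic rate per direction
        return 1.0 - sum(2.0 ** (-2 * (a + 1)) for a in alpha)

    # Telescoping check: summing mixed differences over a full tensor index set
    # reproduces the finest single approximation Q(L).
    L = (3, 3)
    estimate = sum(mixed_difference(Q, alpha)
                   for alpha in itertools.product(range(L[0] + 1), range(L[1] + 1)))
    print(estimate, Q(L))  # agree up to round-off

    In MIMC, as described in the abstract, the full tensor set is replaced by a profit-optimized (typically total-degree) index set, and each mixed difference is averaged over a number of Monte Carlo samples chosen from its variance and cost.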