    Multilevel Methods for Uncertainty Quantification of Elliptic PDEs with Random Anisotropic Diffusion

    We consider elliptic diffusion problems with a random anisotropic diffusion coefficient, where, in a notable direction given by a random vector field, the diffusion strength differs from the diffusion strength perpendicular to this direction. The Karhunen-Loève expansion then yields a parametrisation of the random vector field and, therefore, also of the solution of the elliptic diffusion problem. We show that, given regularity of the elliptic diffusion problem, the decay of the Karhunen-Loève expansion entirely determines the regularity of the solution's dependence on the random parameter, even when this higher spatial regularity is taken into account. This result implies that multilevel collocation and multilevel quadrature methods may be used to lessen the computational complexity when approximating quantities of interest, such as the solution's mean or its second moment, while still yielding the expected rates of convergence. Numerical examples in three spatial dimensions are provided to validate the presented theory.
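The Karhunen-Loève parametrisation mentioned above can be sketched numerically: on a discrete grid, the KL eigenpairs are the eigenpairs of the covariance matrix, and a field sample is a linear combination of eigenvectors weighted by i.i.d. Gaussian parameters. The squared-exponential covariance below is an illustrative choice, not the covariance from the paper.

```python
import numpy as np

def kl_sample(n_points=200, n_terms=20, corr_len=0.2, seed=None):
    """Sample a Gaussian random field on [0,1] via a truncated KL expansion."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)
    # Covariance matrix C_ij = exp(-|x_i - x_j|^2 / (2 l^2))
    C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / corr_len**2)
    # Discrete KL eigenpairs = eigenpairs of the covariance matrix,
    # sorted by decreasing eigenvalue (their decay drives the theory).
    lam, phi = np.linalg.eigh(C)
    lam, phi = lam[::-1], phi[:, ::-1]
    xi = rng.standard_normal(n_terms)  # i.i.d. N(0,1) random parameters
    # Truncated expansion: a(x) = sum_k sqrt(lam_k) * xi_k * phi_k(x)
    field = phi[:, :n_terms] @ (np.sqrt(np.clip(lam[:n_terms], 0.0, None)) * xi)
    return x, field, lam

x, field, lam = kl_sample(seed=0)
```

The truncation level `n_terms` plays the role of the parametric dimension; the faster `lam` decays, the fewer terms are needed for a given accuracy.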

    Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method

    This paper addresses optimization problems constrained by partial differential equations with uncertain coefficients. In particular, the robust control problem and the average control problem are considered for a tracking type cost functional with an additional penalty on the variance of the state. The expressions for the gradient and Hessian corresponding to either problem contain expected value operators. Due to the large number of uncertainties considered in our model, we suggest evaluating these expectations using a multilevel Monte Carlo (MLMC) method. Under mild assumptions, it is shown that this results in the gradient and Hessian corresponding to the MLMC estimator of the original cost functional. Furthermore, we show that the use of certain correlated samples yields a reduction in the total number of samples required. Two optimization methods are investigated: the nonlinear conjugate gradient method and the Newton method. For both, a specific algorithm is provided that dynamically decides which and how many samples should be taken in each iteration. The cost of the optimization up to some specified tolerance τ is shown to be proportional to the cost of a gradient evaluation with requested root mean square error τ. The algorithms are tested on a model elliptic diffusion problem with lognormal diffusion coefficient. An additional nonlinear term is also considered. Comment: This work was presented at the IMG 2016 conference (Dec 5 - Dec 9, 2016), at the Copper Mountain conference (Mar 26 - Mar 30, 2017), and at the FrontUQ conference (Sept 5 - Sept 8, 2017).
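The MLMC idea behind the estimator above is the telescoping sum E[Q_L] = E[Q_0] + Σ_l E[Q_l − Q_{l−1}], where each level-l correction uses the *same* random samples on the fine and coarse discretisation, so that the corrections have small variance and need few samples. The sketch below uses a toy surrogate Q_l (not the PDE-based quantity of interest from the paper) whose discretisation error decays like 2^{−l}.

```python
import numpy as np

def Q(level, omega):
    """Toy level-dependent quantity of interest with O(2^-level) bias."""
    return np.sin(omega) + 2.0 ** (-level) * np.cos(3 * omega)

def mlmc_estimate(n_samples, seed=None):
    """Telescoping MLMC estimator; n_samples[l] = samples on level l."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level, n in enumerate(n_samples):
        omega = rng.standard_normal(n)
        fine = Q(level, omega)
        # Correlated samples: evaluate coarse level with the SAME omega,
        # which is what keeps the variance of the correction small.
        coarse = Q(level - 1, omega) if level > 0 else 0.0
        total += np.mean(fine - coarse)  # per-level correction E[Q_l - Q_{l-1}]
    return total

est = mlmc_estimate([4000, 1000, 250], seed=0)
```

Note how the sample counts shrink with the level: most work goes into the cheap coarse estimator, while the expensive fine levels only correct its bias.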

    Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format

    We apply the Tensor Train (TT) decomposition to construct the tensor product Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an analytic TT representation. To construct its TT approximation numerically, we develop the new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to the adaptive cross approximation in the TT format, but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin matrix and to compute the solution of the elliptic equation and its post-processing, staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and the Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, the computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required. Comment: This is a major revision of the manuscript arXiv:1406.2816 with significantly extended numerical experiments. Some unused material is removed.
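For orientation, the TT format itself can be illustrated with the classical TT-SVD construction, which factors a full tensor into a chain of three-dimensional cores by successive truncated SVDs. This is the textbook algorithm, not the block TT cross algorithm from the paper, which avoids ever forming the full tensor.

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Decompose a full tensor A into TT cores via successive truncated SVDs."""
    dims = A.shape
    d = len(dims)
    cores, r = [], 1
    M = A.reshape(r * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = int(np.sum(s > eps * s[0])) or 1     # truncation rank on level k
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        M = s[:rk, None] * Vt[:rk]                # carry the remainder forward
        r = rk
        if k < d - 2:
            M = M.reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor (for verification only)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T[0, ..., 0]

rng = np.random.default_rng(0)
# Rank-1 test tensor: TT-SVD recovers it exactly with all TT ranks equal to 1.
A = np.einsum('i,j,k->ijk', rng.standard_normal(4),
              rng.standard_normal(5), rng.standard_normal(6))
cores = tt_svd(A)
```

The storage cost of the cores grows linearly in the number of dimensions (at fixed TT rank), which is what makes full tensor product polynomial sets tractable in this format.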