
    High-Dimensional Adaptive Sparse Polynomial Interpolation and Applications to Parametric PDEs

    We consider the problem of Lagrange polynomial interpolation in high or countably infinite dimension, motivated by the fast computation of solutions to partial differential equations (PDEs) depending on a possibly large number of parameters, which result from the application of generalised polynomial chaos discretisations to random and stochastic PDEs. In such applications there is a substantial advantage in considering polynomial spaces that are sparse and anisotropic with respect to the different parametric variables. In an adaptive context, the polynomial space is enriched at different stages of the computation. In this paper, we study an interpolation technique in which the sample set is incremented as the polynomial dimension increases, leading therefore to a minimal amount of PDE solving. This construction is based on the standard principle of tensorisation of a one-dimensional interpolation scheme and sparsification. We derive bounds on the Lebesgue constants for this interpolation process in terms of their univariate counterpart. For a class of model elliptic parametric PDEs, we have shown in Chkifa et al. (Modél. Math. Anal. Numér. 47(1):253-280, 2013) that certain polynomial approximations based on Taylor expansions converge in terms of the polynomial dimension with an algebraic rate that is robust with respect to the parametric dimension. We show that this rate is preserved when using our interpolation algorithm. We also propose a greedy algorithm for the adaptive selection of the polynomial spaces based on our interpolation scheme, and illustrate its performance both on scalar-valued functions and on parametric elliptic PDEs.
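    As a rough illustration of the kind of construction discussed here, the sketch below interpolates a toy two-parameter function on a downward-closed (lower) multi-index set, using one tensorized Leja node per index so that enlarging the set only adds new samples. The target function, the Leja construction, and the direct solve of a small Vandermonde system are illustrative assumptions; the paper's hierarchical update and Lebesgue-constant analysis are not reproduced.

```python
# Sketch only: sparse interpolation on a lower multi-index set with one
# tensorized Leja node per index (toy 2D example, not the paper's algorithm).
import numpy as np

def leja_points(n, grid=np.linspace(-1.0, 1.0, 2001)):
    """Greedy (approximate) Leja sequence on [-1, 1]."""
    pts = [1.0]
    for _ in range(n - 1):
        dist = np.ones_like(grid)
        for p in pts:
            dist *= np.abs(grid - p)
        pts.append(grid[np.argmax(dist)])
    return np.array(pts)

# Lower (downward-closed) set: all (i, j) with i + j <= 3.
lower_set = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]

f = lambda y1, y2: 1.0 / (1.0 + 0.5 * y1 + 0.25 * y2)   # toy parametric quantity

pts = leja_points(4)
nodes = np.array([(pts[i], pts[j]) for (i, j) in lower_set])  # one sample per index
values = np.array([f(*y) for y in nodes])

# Tensor-product monomial basis restricted to the lower set (unisolvent for lower sets).
V = np.array([[y[0] ** i * y[1] ** j for (i, j) in lower_set] for y in nodes])
coef = np.linalg.solve(V, values)

def interpolant(y1, y2):
    return sum(c * y1 ** i * y2 ** j for c, (i, j) in zip(coef, lower_set))

print(abs(interpolant(0.3, -0.7) - f(0.3, -0.7)))  # interpolation error at a test point
```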

    Stochastic methods for solving high-dimensional partial differential equations

    We propose algorithms for solving high-dimensional partial differential equations (PDEs) that combine a probabilistic interpretation of PDEs, through the Feynman-Kac representation, with sparse interpolation. Monte Carlo methods and time-integration schemes are used to estimate pointwise evaluations of the solution of a PDE. We use a sequential control variates algorithm, where control variates are constructed from successive approximations of the solution of the PDE. Two different algorithms are proposed, combining in different ways the sequential control variates algorithm and adaptive sparse interpolation. Numerical examples illustrate the behavior of these algorithms.
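    A minimal sketch of the probabilistic ingredient, under simplifying assumptions (a 1D heat equation with a known initial datum, no control variates and no sparse interpolation): the Feynman-Kac representation lets one estimate the solution pointwise by averaging over Brownian paths.

```python
# Sketch only: pointwise Feynman-Kac / Monte Carlo evaluation for the 1D heat
# equation u_t = 0.5 * u_xx, u(0, x) = g(x); not the paper's sequential
# control variates algorithm.
import numpy as np

def heat_mc(g, x, t, n_samples=200_000, rng=np.random.default_rng(0)):
    """u(t, x) = E[g(x + W_t)] for standard Brownian motion W."""
    w = rng.normal(0.0, np.sqrt(t), size=n_samples)
    return g(x + w).mean()

g = lambda x: np.sin(x)
x, t = 0.4, 0.5
exact = np.exp(-0.5 * t) * np.sin(x)      # closed-form solution for this datum
print(heat_mc(g, x, t), exact)            # Monte Carlo estimate vs. exact value
```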

    Sparse-grid polynomial interpolation approximation and integration for parametric and stochastic elliptic PDEs with lognormal inputs

    By combining a certain approximation property in the spatial domain with the weighted $\ell_2$-summability of the Hermite polynomial expansion coefficients in the parametric domain obtained in [M. Bachmayr, A. Cohen, R. DeVore and G. Migliorati, ESAIM Math. Model. Numer. Anal. 51 (2017), 341-363] and [M. Bachmayr, A. Cohen, D. Dũng and C. Schwab, SIAM J. Numer. Anal. 55 (2017), 2151-2186], we investigate linear non-adaptive methods of fully discrete polynomial interpolation approximation as well as fully discrete weighted quadrature methods of integration for parametric and stochastic elliptic PDEs with lognormal inputs. We explicitly construct such methods and prove corresponding convergence rates in $n$ of the approximations, where $n$ is a number characterizing the computational complexity. The linear non-adaptive methods of fully discrete polynomial interpolation approximation are sparse-grid collocation methods. Moreover, they generate in a natural way discrete weighted quadrature formulas for integration of the solution to parametric and stochastic elliptic PDEs and its linear functionals, and the error of the corresponding integration can be estimated via the error, in the norm of the Bochner space $L_1(\mathbb{R}^\infty, V, \gamma)$, of the generating methods, where $\gamma$ is the Gaussian probability measure on $\mathbb{R}^\infty$ and $V$ is the energy space. We also briefly consider similar problems for parametric and stochastic elliptic PDEs with affine inputs, and the by-product problems of non-fully discrete polynomial interpolation approximation and integration. In particular, the convergence rate of the non-fully discrete methods obtained in this paper improves on the known one.
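    The one-dimensional building block of such Gaussian-weighted quadratures can be sketched as follows, under toy assumptions (a single lognormal coefficient and a quantity of interest with a closed form); the sparse tensorization over infinitely many parameters is what the paper actually constructs and analyzes.

```python
# Sketch only: Gauss-Hermite quadrature for an expectation over one standard
# Gaussian parameter y, with a toy lognormal diffusion problem
# -(a u')' = 1 on (0,1), u(0)=u(1)=0, a = exp(y) constant, so u(1/2) = 1/(8a).
import numpy as np

def qoi(y):
    return 1.0 / (8.0 * np.exp(y))        # midpoint value of the toy solution

# Gauss-Hermite nodes/weights (weight exp(-t^2)), rescaled to N(0, 1).
nodes, weights = np.polynomial.hermite.hermgauss(20)
y = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

approx = np.dot(w, qoi(y))
exact = np.exp(0.5) / 8.0                 # E[exp(-y)] / 8 with y ~ N(0, 1)
print(approx, exact)
```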

    A mixed $\ell_1$ regularization approach for sparse simultaneous approximation of parameterized PDEs

    We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem through a reformulation of standard basis pursuit denoising in which the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm based $\ell_1$ regularization method that exploits both energy and sparsity. In addition, we prove that, with minimal sample complexity, error estimates comparable to the best $s$-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on the polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
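    For orientation, the sketch below runs the standard (scalar, unmixed) $\ell_1$ recovery that the paper generalizes: a sparse Legendre coefficient vector is recovered from a few random samples by iterative soft thresholding. All sizes, and the use of ISTA instead of the paper's mixed-norm solver, are assumptions made for illustration.

```python
# Sketch only: standard l1 recovery (ISTA) of a sparse Legendre expansion from
# random samples; the paper's method couples many such problems via a mixed norm.
import numpy as np

rng = np.random.default_rng(1)
n_basis, n_samples, sparsity = 60, 25, 4

# Sampling matrix: orthonormal Legendre polynomials at uniform random points.
pts = rng.uniform(-1.0, 1.0, n_samples)
A = np.polynomial.legendre.legvander(pts, n_basis - 1)
A *= np.sqrt(2.0 * np.arange(n_basis) + 1.0)   # normalize w.r.t. uniform measure
A /= np.sqrt(n_samples)

c_true = np.zeros(n_basis)
c_true[rng.choice(n_basis, sparsity, replace=False)] = rng.normal(size=sparsity)
b = A @ c_true

def ista(A, b, lam=1e-3, n_iter=5000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = c - step * (A.T @ (A @ c - b))
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return c

print(np.linalg.norm(ista(A, b) - c_true))   # recovery error, typically small here
```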

    A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions

    In this work we develop a dynamically adaptive sparse grid (SG) method for quasi-optimal interpolation of multidimensional analytic functions defined over a product of one-dimensional bounded domains. The goal of such an approach is to construct an interpolant in space that corresponds to the "best $M$-terms" based on sharp a priori estimates of the polynomial coefficients. In the past, SG methods have achieved this with a traditional construction that relies on the solution of a knapsack problem: only the most profitable hierarchical surpluses are added to the SG. However, this approach requires additional sharp estimates related to the size of the analytic region and the norm of the interpolation operator, i.e., the Lebesgue constant. Instead, we present an iterative SG procedure that adaptively refines an estimate of the region and accounts for the effects of the Lebesgue constant. Our approach does not require any a priori knowledge of the analyticity or operator norm, is easily generalized to both affine and non-affine analytic functions, and can be applied to sparse grids built from one-dimensional rules with arbitrary growth of the number of nodes. In several numerical examples, we use our dynamically adaptive SG to interpolate quantities of interest related to the solutions of parametrized elliptic and hyperbolic PDEs, and compare the performance of our quasi-optimal interpolant to several alternative SG schemes.
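    The greedy refinement loop at the heart of such adaptive SG constructions can be sketched abstractly; here the true hierarchical surpluses are replaced by an assumed anisotropic decay model, so the snippet only illustrates the admissibility bookkeeping and the "most profitable index first" selection.

```python
# Sketch only: greedy enrichment of a downward-closed multi-index set, with a
# surrogate "profit" standing in for the hierarchical surplus estimates.
import numpy as np

def profit(idx, rho=(2.0, 4.0)):
    # Assumed anisotropic decay model (faster decay in the second parameter).
    return np.prod([r ** (-k) for r, k in zip(rho, idx)])

def forward_neighbours(idx):
    return [tuple(k + (1 if d == j else 0) for d, k in enumerate(idx))
            for j in range(len(idx))]

def admissible(idx, index_set):
    # Keep the set downward closed: all backward neighbours must already be present.
    return all(tuple(k - (1 if d == j else 0) for d, k in enumerate(idx)) in index_set
               for j in range(len(idx)) if idx[j] > 0)

index_set = {(0, 0)}
for _ in range(10):
    candidates = {n for idx in index_set for n in forward_neighbours(idx)
                  if n not in index_set and admissible(n, index_set)}
    index_set.add(max(candidates, key=profit))

print(sorted(index_set))   # anisotropic lower set, refined mostly in the first parameter
```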

    Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format

    We apply the Tensor Train (TT) decomposition to construct the tensor-product Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula which lacks an analytic TT representation. To construct its TT approximation numerically, we develop a new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to adaptive cross approximation in the TT format, but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. Besides, we demonstrate how to assemble the stochastic Galerkin matrix and to compute the solution of the elliptic equation and its post-processing while staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and Monte Carlo approaches. In the tensor-product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor-product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, computations with the full tensor-product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required.
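    The TT format itself can be illustrated with a plain TT-SVD on a small full tensor; the paper's block cross algorithm is designed precisely to avoid forming the full tensor, so this is only a format demonstration on an assumed separable example.

```python
# Sketch only: TT-SVD compression of a small separable 4-way tensor into
# tensor-train cores; the paper's block TT cross never builds the full tensor.
import numpy as np

def tt_svd(tensor, tol=1e-10):
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :new_rank].reshape(rank, dims[k], new_rank))
        mat = (np.diag(s[:new_rank]) @ vt[:new_rank]).reshape(new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# Rank-1 test tensor  t[i,j,k,l] = sin(x_i) * cos(x_j) * x_k * exp(x_l).
x = np.linspace(0.0, 1.0, 8)
t = np.einsum('i,j,k,l->ijkl', np.sin(x), np.cos(x), x, np.exp(x))

cores = tt_svd(t)
print([c.shape for c in cores])                        # small TT ranks
t_rec = np.einsum('aib,bjc,ckd,dle->ijkl', *cores)
print(np.linalg.norm(t_rec - t))                       # ~ machine precision
```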

    Approximation of high-dimensional parametric PDEs

    Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality. The first part of this article studies the smoothness and approximability of the solution map, that is, the map $a \mapsto u(a)$ where $a$ is the parameter value and $u(a)$ is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs, this map is typically holomorphic in the parameters and also highly anisotropic, in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of $n$-term approximations to the solution map for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best $n$-term approximation, sparsity, and $n$-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms. The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second searches for suitable low-dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.
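    The second category of methods can be caricatured with a strong greedy selection on synthetic snapshot vectors standing in for the PDE solutions u(a); the snapshots, their dimension, and the decay built into them are assumptions made purely for illustration.

```python
# Sketch only: strong greedy construction of a low-dimensional space that
# approximates a whole (synthetic) parametric family of "solutions" at once.
import numpy as np

rng = np.random.default_rng(0)
m, n_params = 200, 100
params = np.linspace(0.0, 1.0, n_params)

# Synthetic family u(a) = sum_k (a/2)^k * mode_k, so its n-width decays geometrically.
modes = rng.normal(size=(6, m))
snapshots = np.array([sum((0.5 * a) ** k * modes[k] for k in range(6)) for a in params])

def strong_greedy(snaps, n_basis):
    basis = np.zeros((0, snaps.shape[1]))
    for _ in range(n_basis):
        proj = snaps @ basis.T @ basis if basis.size else np.zeros_like(snaps)
        errs = np.linalg.norm(snaps - proj, axis=1)          # projection errors
        worst = np.argmax(errs)                              # worst-approximated snapshot
        new = snaps[worst] - proj[worst]
        basis = np.vstack([basis, new / np.linalg.norm(new)])
    return basis

for n in (1, 2, 4, 6):
    B = strong_greedy(snapshots, n)
    err = np.linalg.norm(snapshots - snapshots @ B.T @ B, axis=1).max()
    print(n, err)   # worst-case error decreases as the space is enlarged
```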