
    Randomized least-squares with minimal oversampling and interpolation in general spaces

    In approximation of functions based on point values, least-squares methods provide more stability than interpolation, at the expense of increasing the sampling budget. We show that near-optimal approximation error can nevertheless be achieved, in an expected L^2 sense, as soon as the sample size m is larger than the dimension n of the approximation space by a constant ratio. On the other hand, for m = n, we obtain an interpolation strategy with a stability factor of order n. The proposed sampling algorithms are greedy procedures based on arXiv:0808.0163 and arXiv:1508.03261, with polynomial computational complexity.
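    As a rough illustration of the setup above (not the paper's greedy sampling algorithm), the following sketch fits a Legendre space of dimension n by least squares from m = 2n random point values; the target function, sample density, and oversampling ratio are all illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda y: np.exp(y)          # illustrative target function on [-1, 1]

    n = 10                           # dimension of the approximation space
    m = 2 * n                        # constant oversampling ratio m/n = 2

    y = rng.uniform(-1.0, 1.0, m)    # i.i.d. uniform sample (not the optimal density)
    V = np.polynomial.legendre.legvander(y, n - 1)   # m x n design matrix

    # Least-squares coefficients in the Legendre basis
    coef, *_ = np.linalg.lstsq(V, f(y), rcond=None)

    # Maximum error on a fine grid
    t = np.linspace(-1.0, 1.0, 1000)
    err = np.max(np.abs(np.polynomial.legendre.legval(t, coef) - f(t)))
    print(err)
    ```

    With m = n the same system becomes square, i.e. interpolation, and the stability of the fit degrades; oversampling by a constant factor keeps the problem well conditioned in expectation.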

    Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs

    The numerical approximation of parametric partial differential equations D(u,y)=0 is a computational challenge when the dimension d of the parameter vector y is large, due to the so-called curse of dimensionality. It was recently shown that, for a certain class of elliptic PDEs with diffusion coefficients depending on the parameters in an affine manner, there exist polynomial approximations to the solution map y -> u(y) with an algebraic convergence rate that is independent of the parametric dimension d. The analysis used, however, the affine parameter dependence of the operator. The present paper proposes a strategy for establishing similar results for some classes of parametric PDEs that do not necessarily fall in this category. Our approach is based on building an analytic extension z -> u(z) of the solution map on certain tensor products of ellipses in the complex domain, and using this extension to estimate the Legendre coefficients of u. The varying radii of the ellipses in each coordinate z_j reflect the anisotropy of the solution map with respect to the corresponding parametric variables y_j. This allows us to derive algebraic convergence rates for tensorized Legendre expansions in the case where d is infinite. We also show that such rates are preserved when using certain interpolation procedures, which is an instance of a non-intrusive method. As examples of parametric PDEs that are covered by this approach, we consider (i) elliptic diffusion equations with coefficients that depend on the parameter vector y in a not necessarily affine manner, (ii) parabolic diffusion equations with similar dependence of the coefficient on y, (iii) nonlinear, monotone parametric elliptic PDEs, and (iv) elliptic equations set on a domain that is parametrized by the vector y. We give general strategies that allow us to derive the analytic extension in a unified abstract way for all these examples, in particular based on the holomorphic version of the implicit function theorem in Banach spaces. We expect that this approach can be applied to a large variety of parametric PDEs, showing that the curse of dimensionality can be overcome under mild assumptions.
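    A one-dimensional toy computation (not the paper's PDE setting) shows the mechanism: a map analytic in an ellipse containing [-1, 1] has geometrically decaying Legendre coefficients. Here u(y) = 1/(2 - y), an illustrative stand-in for a solution map, and the coefficients are computed by Gauss-Legendre quadrature.

    ```python
    import numpy as np

    u = lambda y: 1.0 / (2.0 - y)    # analytic for |y| < 2, hence in an ellipse around [-1, 1]

    nodes, weights = np.polynomial.legendre.leggauss(64)   # Gauss-Legendre rule

    def legendre_coeff(k):
        # k-th coefficient of the expansion: c_k = (2k+1)/2 * integral of u * P_k
        Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
        return (2 * k + 1) / 2.0 * np.sum(weights * u(nodes) * Pk)

    c = np.array([legendre_coeff(k) for k in range(20)])
    ratios = np.abs(c[1:] / c[:-1])
    print(ratios)    # ratios approach a constant < 1: geometric decay
    ```

    The decay rate is governed by the largest Bernstein ellipse in which u is analytic; in the parametric setting, a different ellipse radius per coordinate z_j yields the anisotropic rates described above.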

    Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs

    The numerical approximation of parametric partial differential equations is a computational challenge, in particular when the number of parameters involved is large. This paper considers a model class of second-order, linear, parametric, elliptic PDEs on a bounded domain D with diffusion coefficients depending on the parameters in an affine manner. For such models, it was shown in [

    A comparative study between kriging and adaptive sparse tensor-product methods for multi-dimensional approximation problems in aerodynamics design

    The performances of two multivariate interpolation procedures are compared using functions that are either synthetic or coming from a shape optimization problem in aerodynamics. The aim is to evaluate the efficiency of adaptive sparse interpolation algorithms [2] and compare them with the kriging approach developed for the design and analysis of computer experiments (DACE) [21]. The accuracy and computational time of the two methods are examined as the number N of samples used in the interpolation increases. In our test cases, both methods perform equivalently in terms of precision. However, as the dimension d increases, the computational time involved in the enrichment of the kriging sample becomes intractable for large values of N. This problem is circumvented in the case of the sparse interpolation procedure, for which the computational time scales linearly with N and d.
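    A minimal kriging-type predictor, sketched below in one dimension with a squared-exponential kernel, makes the cost structure concrete: each prediction requires solving an N x N linear system, which is the step that becomes expensive as the sample is enriched. The target function, kernel length scale, and sample size are illustrative; a full DACE implementation would also estimate the hyperparameters.

    ```python
    import numpy as np

    f = lambda x: np.sin(3 * x)             # illustrative target

    X = np.linspace(0.0, 1.0, 15)           # N = 15 training samples
    y = f(X)

    def kernel(a, b, ell=0.2):
        # Squared-exponential covariance with fixed length scale ell
        return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell ** 2))

    K = kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, y)              # O(N^3) solve: the cost that grows
                                               # quickly as the sample is enriched

    Xs = np.linspace(0.0, 1.0, 200)
    pred = kernel(Xs, X) @ alpha
    err = np.max(np.abs(pred - f(Xs)))
    print(err)
    ```

    By contrast, the adaptive sparse interpolation procedure adds one basis function per new sample, which is why its cost scales linearly with N and d.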

    Sparse approximation of multivariate functions from small datasets via weighted orthogonal matching pursuit

    We show the potential of greedy recovery strategies for the sparse approximation of multivariate functions from a small dataset of pointwise evaluations by considering an extension of the orthogonal matching pursuit to the setting of weighted sparsity. The proposed recovery strategy is based on a formal derivation of the greedy index selection rule. Numerical experiments show that the proposed weighted orthogonal matching pursuit algorithm is able to reach accuracy levels similar to those of weighted ℓ^1 minimization programs while considerably improving the computational efficiency for small values of the sparsity level.
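    A hypothetical sketch of the weighted variant, not the paper's exact selection rule: standard orthogonal matching pursuit, except that each column's correlation with the residual is divided by its weight, biasing the greedy selection toward low-weight indices. The sampling matrix, weights, and sparsity level below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    m, n, s = 30, 80, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling matrix
    w = 1.0 + 0.1 * np.arange(n)                   # weights, increasing with index
    x_true = np.zeros(n)
    x_true[[2, 7, 11, 20, 33]] = rng.standard_normal(5)
    b = A @ x_true

    support, r = [], b.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r) / w))    # weighted greedy index selection
        support.append(j)
        # Orthogonal projection step: refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef

    x_hat = np.zeros(n)
    x_hat[support] = coef
    print(support, np.linalg.norm(r))
    ```

    Each iteration costs one small least-squares solve, which is where the efficiency gain over solving a full weighted ℓ^1 program comes from when the sparsity level s is small.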

    High-Dimensional Adaptive Sparse Polynomial Interpolation and Applications to Parametric PDEs

    We consider the problem of Lagrange polynomial interpolation in high or countably infinite dimension, motivated by the fast computation of solutions to partial differential equations (PDEs) depending on a possibly large number of parameters which result from the application of generalised polynomial chaos discretisations to random and stochastic PDEs. In such applications there is a substantial advantage in considering polynomial spaces that are sparse and anisotropic with respect to the different parametric variables. In an adaptive context, the polynomial space is enriched at different stages of the computation. In this paper, we study an interpolation technique in which the sample set is incremented as the polynomial dimension increases, leading therefore to a minimal amount of PDE solving. This construction is based on the standard principle of tensorisation of a one-dimensional interpolation scheme and sparsification. We derive bounds on the Lebesgue constants for this interpolation process in terms of their univariate counterpart. For a class of model elliptic parametric PDEs, we have shown in Chkifa et al. (Modél. Math. Anal. Numér. 47(1):253-280, 2013) that certain polynomial approximations based on Taylor expansions converge in terms of the polynomial dimension with an algebraic rate that is robust with respect to the parametric dimension. We show that this rate is preserved when using our interpolation algorithm. We also propose a greedy algorithm for the adaptive selection of the polynomial spaces based on our interpolation scheme, and illustrate its performance both on scalar-valued functions and on parametric elliptic PDEs.
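    The Lebesgue constant mentioned above is the stability factor of an interpolation operator, and it can be estimated numerically as a side computation (this is not the paper's algorithm): it is the maximum over the domain of the sum of absolute values of the Lagrange basis functions. The sketch below compares equispaced and Chebyshev nodes, whose univariate constants are the building blocks of the tensorized bounds.

    ```python
    import numpy as np

    def lebesgue_constant(nodes, grid=np.linspace(-1, 1, 5001)):
        # max over the grid of sum_j |l_j(t)|, l_j the Lagrange basis polynomials
        L = np.zeros_like(grid)
        for j, xj in enumerate(nodes):
            lj = np.ones_like(grid)
            for k, xk in enumerate(nodes):
                if k != j:
                    lj *= (grid - xk) / (xj - xk)
            L += np.abs(lj)
        return L.max()

    n = 12
    equi = np.linspace(-1, 1, n)                              # grows exponentially
    cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # grows like log n

    print(lebesgue_constant(equi), lebesgue_constant(cheb))
    ```

    The gap between the two node families is why the choice of univariate scheme matters so much once it is tensorized across many parametric dimensions.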

    Sparse Deterministic Approximation of Bayesian Inverse Problems

    We present a parametric deterministic formulation of Bayesian inverse problems with input parameter from infinite dimensional, separable Banach spaces. In this formulation, the forward problems are parametric, deterministic elliptic partial differential equations, and the inverse problem is to determine the unknown, parametric deterministic coefficients from noisy observations comprising linear functionals of the solution. We prove a generalized polynomial chaos representation of the posterior density with respect to the prior measure, given noisy observational data. We analyze the sparsity of the posterior density in terms of the summability of the input data's coefficient sequence. To this end, we estimate the fluctuations in the prior. We exhibit sufficient conditions on the prior model in order for approximations of the posterior density to converge at a given algebraic rate, in terms of the number N of unknowns appearing in the parametric representation of the prior measure. Similar sparsity and approximation results are also exhibited for the solution and covariance of the elliptic partial differential equation under the posterior. These results then form the basis for efficient uncertainty quantification, in the presence of data with noise.
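    A scalar toy version of this setup (with a hypothetical algebraic forward map G standing in for a PDE solve) shows the object being approximated: the posterior density with respect to the prior is proportional to the likelihood exp(-|delta - G(y)|^2 / (2 sigma^2)), a deterministic function of the parameter y.

    ```python
    import numpy as np

    G = lambda y: y ** 3 + y          # hypothetical forward map (stands in for a PDE solve)
    delta, sigma = 0.5, 0.1           # noisy observation and noise level

    y = np.linspace(-1.0, 1.0, 2001)  # grid over the parameter domain, uniform prior on [-1, 1]
    dy = y[1] - y[0]

    likelihood = np.exp(-((delta - G(y)) ** 2) / (2 * sigma ** 2))
    Z = likelihood.sum() * dy / 2.0            # normalization against the uniform prior
    posterior = likelihood / Z                 # density w.r.t. the prior measure

    # Posterior mean of the parameter, a typical quantity of interest
    mean = (y * likelihood).sum() / likelihood.sum()
    print(mean)
    ```

    In the paper's infinite-dimensional setting y is a sequence, and the point of the sparsity analysis is that this density admits an N-term polynomial chaos approximation converging algebraically in N.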

    A Dimension-Adaptive Multi-Index Monte Carlo Method Applied to a Model of a Heat Exchanger

    We present an adaptive version of the Multi-Index Monte Carlo method, introduced by Haji-Ali, Nobile and Tempone (2016), for simulating PDEs with coefficients that are random fields. A classical technique for sampling from these random fields is the Karhunen-Loève expansion. Our adaptive algorithm is based on the adaptive algorithm used in sparse grid cubature as introduced by Gerstner and Griebel (2003), and automatically chooses the number of terms needed in this expansion, as well as the required spatial discretizations of the PDE model. We apply the method to a simplified model of a heat exchanger with random insulator material, where the stochastic characteristics are modeled as a lognormal random field, and we show consistent computational savings.
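    The sketch below samples a lognormal field on [0, 1] from a truncated Karhunen-Loève-type expansion; the sine eigenfunctions and algebraically decaying coefficients are a common illustrative stand-in, not the paper's model, and the truncation level s is fixed by hand here where the adaptive algorithm would choose it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    x = np.linspace(0.0, 1.0, 200)
    s = 10                                     # truncation level (number of KL terms)
    xi = rng.standard_normal(s)                # i.i.d. standard normal coefficients

    # Truncated KL-type expansion of the underlying Gaussian field
    gaussian_field = sum(
        xi[k] * (k + 1) ** -1.5 * np.sqrt(2) * np.sin((k + 1) * np.pi * x)
        for k in range(s)
    )
    a = np.exp(gaussian_field)                 # lognormal diffusion coefficient
    print(a.min(), a.max())                    # strictly positive, as a coefficient must be
    ```

    In a multi-index method, each index pair couples a truncation level s with a spatial mesh size, and the adaptive algorithm balances the two error contributions against their cost.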