
    Error analysis of truncated expansion solutions to high-dimensional parabolic PDEs

    We study an expansion method for high-dimensional parabolic PDEs which constructs accurate approximate solutions by decomposition into solutions to lower-dimensional PDEs, and which is particularly effective if there is a small number of dominant principal components. The focus of the present article is the derivation of sharp error bounds for the constant-coefficient case and for first- and second-order approximations. We give a precise characterisation of when these bounds hold for (non-smooth) option pricing applications and provide numerical results demonstrating that the practically observed convergence speed agrees with the theoretical predictions.
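    The premise that a handful of dominant principal components drives the expansion's accuracy can be checked numerically. A minimal sketch (synthetic covariance of my own, not from the paper) that measures how much variance the two leading components capture:

```python
import numpy as np

# synthetic d-dimensional covariance with two dominant directions
rng = np.random.default_rng(0)
d = 10
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthonormal basis
eigvals = np.array([5.0, 2.0] + [0.01] * (d - 2))  # two large modes, rest tiny
Sigma = Q @ np.diag(eigvals) @ Q.T

# spectral decomposition; fraction of total variance in the top two modes
w = np.linalg.eigvalsh(Sigma)[::-1]
captured = w[:2].sum() / w.sum()
print(f"variance captured by 2 of {d} components: {captured:.4f}")
```

    When `captured` is close to one, a low-order expansion along the dominant components can be expected to approximate the full high-dimensional solution well.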

    An adaptive ANOVA stochastic Galerkin method for partial differential equations with random inputs

    It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, which are typically caused by the large number of stochastic basis functions required. It becomes crucial to choose effective basis functions properly, so that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of the proposed adaptive ANOVA stochastic Galerkin method.
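    The ANOVA decomposition behind this kind of adaptive procedure splits a function into orthogonal component functions whose variances sum to the total variance; an adaptive scheme then keeps only the components with a non-negligible variance share. A minimal sketch on a two-variable toy function (my example, not the paper's), using midpoint quadrature on the unit square:

```python
import numpy as np

def f(x1, x2):
    # toy model: x1 dominates, with a weak x1-x2 interaction
    return x1 + x1 * x2

# midpoint quadrature grid on [0, 1]^2
m = 400
t = (np.arange(m) + 0.5) / m
X1, X2 = np.meshgrid(t, t, indexing="ij")
F = f(X1, X2)

# ANOVA components: f = f0 + f1(x1) + f2(x2) + f12(x1, x2)
f0 = F.mean()                        # mean term E[f]
f1 = F.mean(axis=1) - f0             # E[f | x1] - f0
f2 = F.mean(axis=0) - f0             # E[f | x2] - f0
f12 = F - f0 - f1[:, None] - f2[None, :]

# orthogonality of the components makes their variances additive
D1, D2, D12 = (f1**2).mean(), (f2**2).mean(), (f12**2).mean()
D = F.var()
shares = {k: v / D for k, v in {"x1": D1, "x2": D2, "x1x2": D12}.items()}
active = [k for k, s in shares.items() if s > 0.05]  # adaptive retention
```

    Here the x1 component carries roughly 87% of the variance, so an adaptive procedure would retain the x1 (and possibly x2) terms and drop the interaction, shrinking the stochastic approximation space.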

    Clustering based Multiple Anchors High-Dimensional Model Representation

    In this work, a cut high-dimensional model representation (cut-HDMR) expansion based on multiple anchors is constructed via a clustering method. Specifically, a set of random input realizations is drawn from the parameter space and grouped by the centroidal Voronoi tessellation (CVT) method. For each cluster, the centroid is taken as the reference point, so that the corresponding zeroth-order term can be determined directly. For the non-zero-order terms of each cut-HDMR, a set of discrete points is selected for each input component and the Lagrange interpolation method is applied. For a new input, the cut-HDMR corresponding to the nearest centroid is used to compute its response. Numerical experiments with a high-dimensional integral and an elliptic stochastic partial differential equation as test problems show that the CVT-based multiple-anchor cut-HDMR can alleviate the negative impact of a single inappropriate anchor point, and has higher accuracy than the average of several expansions.
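    A minimal sketch of the overall idea, under my own simplifications (plain Lloyd iterations standing in for the CVT construction, and direct evaluation of the first-order cut-HDMR instead of Lagrange interpolation):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # illustrative test function, not the paper's benchmark
    return float(np.exp(-np.sum((x - 0.3) ** 2)))

def cut_hdmr_first_order(f, anchor, x):
    # f(a) + sum_i [ f(a with component i replaced by x_i) - f(a) ]
    fa = f(anchor)
    val = fa
    for i in range(len(x)):
        y = anchor.copy()
        y[i] = x[i]
        val += f(y) - fa
    return val

# cluster input realizations into k Voronoi cells; each centroid is an anchor
d, k = 4, 3
samples = rng.uniform(0, 1, (500, d))
centroids = samples[rng.choice(len(samples), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((samples[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([samples[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

# a new input is evaluated with the expansion anchored at the nearest centroid
x_new = rng.uniform(0, 1, d)
nearest = np.argmin(((centroids - x_new) ** 2).sum(-1))
approx = cut_hdmr_first_order(f, centroids[nearest], x_new)
```

    For purely additive functions the first-order cut-HDMR is exact regardless of the anchor; the anchor choice matters precisely when interactions are present, which is what the multiple-anchor construction mitigates.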

    Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluids simulation

    The polynomial dimensional decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate structural connection between the PDD and the analysis of variance (ANOVA) approach, the PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the polynomial chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
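    The Sobol' indices drop out of a PDD-type expansion almost for free: with an orthonormal polynomial basis, each squared regression coefficient is a variance contribution. A minimal sketch (two uniform inputs on [-1, 1], my own toy model, and ordinary least squares standing in for the paper's stepwise regression):

```python
from itertools import product

import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(2)

def f(x):
    # toy model; its variance concentrates in very few terms
    return 1.0 + 2.0 * x[:, 0] + 0.5 * x[:, 0] * x[:, 1]

def psi(n, x):
    # Legendre polynomial of degree n, orthonormal for U(-1, 1) inputs
    c = np.zeros(n + 1); c[n] = 1.0
    return legval(x, c) * np.sqrt(2 * n + 1)

# candidate basis: tensor-product multi-indices up to total degree 2
alphas = [a for a in product(range(3), repeat=2) if sum(a) <= 2]

# regression for the expansion coefficients
x = rng.uniform(-1, 1, (2000, 2))
A = np.column_stack([psi(a[0], x[:, 0]) * psi(a[1], x[:, 1]) for a in alphas])
coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)

# variance decomposition: squared coefficients, mean term excluded
D = sum(c**2 for a, c in zip(alphas, coef) if a != (0, 0))
S1 = sum(c**2 for a, c in zip(alphas, coef) if a[0] > 0 and a[1] == 0) / D
S12 = sum(c**2 for a, c in zip(alphas, coef) if a[0] > 0 and a[1] > 0) / D

# sparse retention: keep only terms with a visible variance share
kept = [a for a, c in zip(alphas, coef) if a == (0, 0) or c**2 / D > 1e-3]
```

    Only three of the six candidate terms survive, mirroring how the sparse PDD keeps the surrogate small while the discarded coefficients are numerically negligible.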

    On the approximation error in high dimensional model representation

    Mathematical models are often described by multivariate functions, which are usually approximated by a sum of lower-dimensional functions. A major problem is the approximation error introduced and the factors that affect it. This paper investigates the error of approximating a multivariate function by a sum of lower-dimensional functions in the setting of high-dimensional model representations. Two kinds of approximations are studied, namely the approximation based on the ANOVA (analysis of variance) decomposition and the approximation based on the anchored decomposition. We prove new theorems for the expected errors of approximations based on the anchored decomposition when the anchor is chosen randomly, and establish the relationship of the expected approximation errors with the global sensitivity indices of Sobol'. The expected approximation errors indicate how good or bad an approximation based on the anchored decomposition can be, and when it is good or bad. The influence of the anchor on the quality of the approximation is studied, and methods for choosing good anchors are presented.
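    The anchor's influence is easy to observe numerically. A minimal sketch (my own toy interaction model, not one of the paper's examples) comparing the mean squared error of the first-order anchored approximation for a well-placed anchor versus a corner anchor:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x1, x2):
    return x1 * x2  # pure interaction, so anchor placement matters

def anchored_first_order(a, x1, x2):
    # f(a) + [f(x1, a2) - f(a)] + [f(a1, x2) - f(a)]
    fa = f(a[0], a[1])
    return fa + (f(x1, a[1]) - fa) + (f(a[0], x2) - fa)

# Monte Carlo estimate of the mean squared error over U(0, 1)^2 inputs
x1 = rng.uniform(0, 1, 100_000)
x2 = rng.uniform(0, 1, 100_000)

def mse(a):
    return float(np.mean((f(x1, x2) - anchored_first_order(a, x1, x2)) ** 2))

err_center = mse((0.5, 0.5))  # anchor at the input mean
err_corner = mse((0.0, 0.0))  # poorly placed anchor
```

    For this model the pointwise error is (x1 - a1)(x2 - a2), so the centred anchor gives a mean squared error of 1/144 against 1/9 for the corner anchor, a factor of sixteen, consistent with the paper's point that the anchor choice can make or break the anchored decomposition.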