7 research outputs found

    Determination of Bootstrap confidence intervals on sensitivity indices obtained by polynomial chaos expansion

    Sensitivity analysis aims to evaluate the influence of the variability of one or more input parameters of a model on the variability of one or more responses. Among approximation methods, expansion on a polynomial chaos basis is one of the most efficient for computing sensitivity indices, because they are obtained analytically from the coefficients of the decomposition (Sudret (2008)). The indices are therefore approximations, and it is difficult to evaluate the error due to this approximation. To assess the confidence that can be placed in them, we propose to construct confidence intervals by bootstrap resampling (Efron, Tibshirani (1993)) of the experimental design used to build the polynomial chaos approximation. These confidence intervals make it possible to find an optimal experimental design that guarantees the computation of the sensitivity indices with a given accuracy.
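
    The procedure can be sketched in a few lines: fit a polynomial chaos surrogate by least squares, read the Sobol' indices off the coefficients, and bootstrap-resample the experimental design. Everything below (the toy model, the truncated Legendre basis, the noise level) is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # hypothetical toy model on [-1, 1]^2 (exact first-order indices: 12/19 and 3/19)
    return x[:, 0] + 0.5 * x[:, 1] + x[:, 0] * x[:, 1]

def pce_sobol(x, y):
    # least-squares PC fit on an orthonormal Legendre basis for U(-1, 1):
    # psi = [1, sqrt(3)x1, sqrt(3)x2, 3x1x2]; Sobol' indices follow from coefficients
    A = np.column_stack([np.ones(len(x)),
                         np.sqrt(3.0) * x[:, 0],
                         np.sqrt(3.0) * x[:, 1],
                         3.0 * x[:, 0] * x[:, 1]])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    var = np.sum(c[1:] ** 2)                       # total variance from PC coefficients
    return np.array([c[1] ** 2, c[2] ** 2]) / var  # first-order indices S1, S2

n = 200
x = rng.uniform(-1.0, 1.0, size=(n, 2))
y = model(x) + 0.1 * rng.normal(size=n)            # mild observation noise (assumed)
s_hat = pce_sobol(x, y)

# bootstrap resampling of the experimental design
boot = np.array([pce_sobol(x[idx], y[idx])
                 for idx in rng.integers(0, n, size=(500, n))])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

    The key point is that each bootstrap replicate refits the whole surrogate on a resampled design, so the interval width reflects the sensitivity of the indices to the design, which is what drives the optimal-design criterion described above.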

    Global sensitivity analysis in the limited data setting with application to char combustion

    In uncertainty quantification, variance-based global sensitivity analysis quantitatively determines the effect of each input random variable on the output by partitioning the total output variance into contributions from each input. However, computing the conditional expectations involved can be prohibitively costly for expensive-to-evaluate models. Surrogate models can accelerate the computation, yet their accuracy depends on the quality and quantity of training data, which are expensive to generate (experimentally or computationally) for complex engineering systems. Methods that work with limited data are therefore desirable. We propose a diffeomorphic modulation under observable response preserving homotopy (D-MORPH) regression to train a polynomial dimensional decomposition surrogate of the output that minimizes the amount of training data required. The new method first computes a sparse Lasso solution and uses it to define the cost function. A subsequent D-MORPH regression minimizes the difference between the D-MORPH and Lasso solutions. The resulting D-MORPH surrogate is more robust to input variations and more accurate with limited training data. We illustrate the accuracy and computational efficiency of the new surrogate for global sensitivity analysis using mathematical functions and an expensive-to-simulate model of char combustion. The new method is highly efficient, requiring only 15% of the training data compared to conventional regression. (26 pages, 11 figures)
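
    The first step described above, a sparse Lasso solution over a polynomial dictionary, can be sketched with a plain proximal-gradient (ISTA) solver; the D-MORPH refinement itself is not shown. The dictionary, data, and penalty are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def lasso_ista(A, y, lam, iters=3000):
    # proximal gradient (ISTA) for (1/2n)||Ac - y||^2 + lam * ||c||_1
    n = len(y)
    L = np.linalg.norm(A, 2) ** 2 / n          # Lipschitz constant of the smooth part
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        z = c - (A.T @ (A @ c - y)) / (n * L)  # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

# toy data: only two of five orthonormal Legendre terms are truly active
n = 300
x = rng.uniform(-1.0, 1.0, size=(n, 2))
y = 2.0 * (np.sqrt(3.0) * x[:, 0]) + 0.5 * (3.0 * x[:, 0] * x[:, 1]) \
    + 0.05 * rng.normal(size=n)

A = np.column_stack([np.ones(n),
                     np.sqrt(3.0) * x[:, 0],
                     np.sqrt(3.0) * x[:, 1],
                     3.0 * x[:, 0] * x[:, 1],
                     np.sqrt(5.0) * (3.0 * x[:, 0] ** 2 - 1.0) / 2.0])

c = lasso_ista(A, y, lam=0.05)
support = np.flatnonzero(np.abs(c) > 1e-8)     # retained basis terms
```

    The recovered support defines which basis terms the subsequent regression needs to fit, which is what keeps the training-data requirement low.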

    High-Dimensional Reliability Method Accounting for Important and Unimportant Input Variables

    Reliability analysis is a core element of engineering design and can be performed with physical models (limit-state functions). It becomes computationally expensive when the dimensionality of the input random variables is high. This work develops a high-dimensional reliability analysis method based on a new dimension reduction strategy in which the contributions of unimportant input variables are still accommodated after the reduction. Dimension reduction is performed with the first iteration of the first-order reliability method (FORM), which identifies important and unimportant input variables. A higher-order reliability analysis is then performed in the reduced space of only the important input variables. The reliability obtained in the reduced space is then integrated with the contributions of the unimportant input variables, resulting in a final reliability prediction that accounts for both types of variables. Consequently, the new method is more accurate than the traditional approach, which fixes unimportant input variables at their means. The accuracy is demonstrated on three examples.
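
    The classification step from the first FORM iteration can be sketched as follows: at the mean point in standard normal space, the normalized gradient of the limit-state function gives direction cosines, and their squares rank each variable's contribution. The limit state and the importance threshold below are assumptions for illustration.

```python
import numpy as np

# hypothetical linear limit state in standard normal space
def g(u):
    return 6.0 - 2.0 * u[0] - 0.5 * u[1] - 0.05 * u[2]

def grad_g(u, h=1e-6):
    # central finite-difference gradient
    u = np.asarray(u, dtype=float)
    return np.array([(g(u + h * e) - g(u - h * e)) / (2.0 * h)
                     for e in np.eye(len(u))])

u0 = np.zeros(3)                      # start at the mean (origin in u-space)
grad = grad_g(u0)
alpha = grad / np.linalg.norm(grad)   # direction cosines from the first FORM iteration
importance = alpha ** 2               # fraction of beta^2 attributed to each variable

important = importance > 0.05         # cutoff is a modelling choice, not from the paper
beta = g(u0) / np.linalg.norm(grad)   # reliability index (exact here since g is linear)
```

    Here the third variable falls below the cutoff and would be treated as unimportant; the higher-order analysis then runs only over the first two, with the third variable's contribution folded back in afterwards.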

    Active learning with generalized sliced inverse regression for high-dimensional reliability analysis

    It is computationally expensive to predict reliability with physical models at the design stage when many random input variables exist. This work introduces a dimension reduction technique based on generalized sliced inverse regression (GSIR) to mitigate the curse of dimensionality. The proposed high-dimensional reliability method uses active learning to integrate GSIR, Gaussian process (GP) modeling, and importance sampling (IS), resulting in accurate reliability predictions at reduced computational cost. The new method consists of three core steps: 1) identification of the importance sampling region, 2) dimension reduction by GSIR to produce a sufficient predictor, and 3) construction of a GP model for the true response with respect to the sufficient predictor in the reduced-dimension space. High accuracy and efficiency are achieved by iterating these three steps within an active learning loop that adds new training points one by one in the region with a high chance of failure.
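
    The active learning loop (GP model plus one-by-one enrichment near the limit state) can be sketched in one dimension; GSIR and the importance-sampling density are omitted, and the limit state, kernel, and U-function acquisition below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x):
    # hypothetical 1-D limit state; failure when g(x) < 0
    return 3.0 - x - 0.2 * np.sin(4.0 * x)

def gp_fit_predict(xt, yt, xq, ls=1.0, nugget=1e-6):
    # zero-mean GP with a squared-exponential kernel (unit prior variance)
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    Ki = np.linalg.inv(k(xt, xt) + nugget * np.eye(len(xt)))
    kq = k(xq, xt)
    mu = kq @ Ki @ yt
    var = 1.0 - np.einsum('ij,jk,ik->i', kq, Ki, kq)
    return mu, np.sqrt(np.maximum(var, 1e-12))

cand = rng.normal(size=2000)          # candidate pool drawn from the input distribution
xt = np.array([-2.0, 0.0, 2.0])       # small initial design
yt = g(xt)
for _ in range(10):                   # one-by-one enrichment near the limit state
    mu, sd = gp_fit_predict(xt, yt, cand)
    xnew = cand[np.argmin(np.abs(mu) / sd)]   # U-function: most ambiguous sign
    xt, yt = np.append(xt, xnew), np.append(yt, g(xnew))

mu, _ = gp_fit_predict(xt, yt, cand)
pf = np.mean(mu < 0.0)                # failure probability estimate over the pool
```

    The acquisition criterion concentrates new evaluations where the predicted sign of g is least certain, which is exactly the "region with a high chance of failure" the abstract refers to.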

    Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluids simulation

    The polynomial dimensional decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Owing to the intimate connection between the PDD and the analysis of variance (ANOVA) approach, the PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the polynomial chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes variance-based adaptive strategies for building a cheap meta-model (i.e., surrogate model) using the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) truncation of the dimensionality of the ANOVA component functions, 2) an active dimension technique, especially for second- and higher-order parameter interactions, and 3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. Throughout this adaptive procedure featuring stepwise regressions, the surrogate model contains only a few terms, so the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the final sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
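
    The third level of adaptivity, stepwise retention of the most influential polynomials, can be sketched as a greedy forward selection: repeatedly add the dictionary term most correlated with the current residual and refit by least squares. The dictionary, toy model, and stopping tolerance are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy model: one main effect and one second-order interaction
n = 250
x = rng.uniform(-1.0, 1.0, size=(n, 3))
y = x[:, 0] + 2.0 * x[:, 1] * x[:, 2] + 0.02 * rng.normal(size=n)

# candidate dictionary: first-order terms and all second-order interactions
names = ['x0', 'x1', 'x2', 'x0*x1', 'x0*x2', 'x1*x2']
A = np.column_stack([x[:, 0], x[:, 1], x[:, 2],
                     x[:, 0] * x[:, 1], x[:, 0] * x[:, 2], x[:, 1] * x[:, 2]])

active, r = [], y - y.mean()
while len(active) < A.shape[1]:
    j = int(np.argmax(np.abs(A.T @ r)))    # term most correlated with the residual
    active.append(j)
    sub = A[:, active]
    c, *_ = np.linalg.lstsq(sub, y, rcond=None)
    r = y - sub @ c                        # refit and update the residual
    if np.var(r) / np.var(y) < 1e-3:       # stop once the fit explains ~all variance
        break

selected = sorted(names[j] for j in active)
```

    Because the retained set stays small at every step, each least-squares refit involves only a few columns, which is why the repeated regressions are cheap, as noted above.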

    Efficient Computational Methods for Structural Reliability and Global Sensitivity Analyses

    Uncertainty analysis of a system response is an important part of engineering probabilistic analysis. It includes: (a) evaluating moments of the response; (b) performing reliability analysis of the system; (c) assessing the complete probability distribution of the response; and (d) conducting parametric sensitivity analysis of the output. The actual model of the system response is usually a high-dimensional function of the input variables. Although Monte Carlo simulation is a quite general approach for this purpose, it may require an inordinate amount of resources to achieve an acceptable level of accuracy; development of a computationally efficient method is hence of great importance. First, the study proposed a moment method for uncertainty quantification of structural systems. A key departure is the use of fractional moments of the response function, as opposed to the integer moments used so far in the literature. The advantage of fractional moments over integer moments was illustrated by the relation of one fractional moment to a couple of integer moments. With a small number of samples used to compute the fractional moments, the output distribution was estimated with the principle of maximum entropy (MaxEnt) subject to constraints specified in terms of fractional moments. Compared to classical MaxEnt, a novel feature of the proposed method is that the fractional exponent of the MaxEnt distribution is determined through the entropy maximization process, instead of being assigned by the analyst in advance. To further reduce the computational cost of the simulation-based entropy method, a multiplicative dimensional reduction method (M-DRM) was proposed to compute the fractional (and integer) moments of a generic function of multiple input variables. The M-DRM can accurately approximate a high-dimensional function as the product of a series of low-dimensional functions.
    Together with the principle of maximum entropy, a novel computational approach was proposed to assess the complete probability distribution of a system output. The accuracy and efficiency of the proposed method for structural reliability analysis were verified against crude Monte Carlo simulation on several examples. Application of the M-DRM was further extended to variance-based global sensitivity analysis. Compared to local sensitivity analysis, the variance-based sensitivity index provides significance information about an input random variable. Since each component variance is defined as a conditional expectation with respect to the system model function, the separable nature of the M-DRM approximation simplifies the high-dimensional integrations in sensitivity analysis. Several examples were presented to illustrate the numerical accuracy and efficiency of the proposed method in comparison with Monte Carlo simulation. The last contribution of this study is a computationally efficient method for polynomial chaos expansion (PCE) of a system's response; the PCE model can later be used for uncertainty analysis. Evaluating the coefficients of a PCE meta-model is a computationally demanding task due to the high-dimensional integrations involved. With the proposed M-DRM, this computational cost can be markedly reduced compared to the classical methods in the literature (simulation or tensor-product Gauss quadrature). The accuracy and efficiency of the proposed method for polynomial chaos expansion were verified on several practical examples.
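
    The multiplicative reduction idea can be sketched for moments: a fractional moment E[h(X)^a] of a d-dimensional response is approximated by a product of one-dimensional expectations, each computed by Gauss-Hermite quadrature. The toy response below is chosen to be exactly multiplicative (so the approximation is exact up to quadrature error); it and the Gaussian inputs are assumptions for illustration.

```python
import numpy as np

def h(x):
    # toy response; exactly multiplicative across inputs
    return np.exp(0.3 * x[0]) * (2.0 + 0.5 * np.tanh(x[1]))

c = np.zeros(2)                                  # reference point (input means)
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
w = weights / np.sqrt(2.0 * np.pi)               # normalize for N(0, 1) expectations

def frac_moment(a):
    # M-DRM: E[h(X)^a] ~ h(c)^{a(1-d)} * prod_i E[h(c_1,...,X_i,...,c_d)^a]
    d = len(c)
    prod = 1.0
    for i in range(d):
        pts = np.tile(c, (len(nodes), 1))
        pts[:, i] = nodes                        # vary one coordinate at a time
        vals = np.array([h(p) for p in pts])
        prod *= np.sum(w * vals ** a)            # one-dimensional quadrature
    return h(c) ** (a * (1.0 - d)) * prod

m1 = frac_moment(1.0)        # ordinary mean
m_half = frac_moment(0.5)    # fractional moment of order 1/2
```

    Each moment costs only d one-dimensional quadratures instead of a d-dimensional one, which is the source of the efficiency gains claimed for both the MaxEnt distribution fitting and the PCE coefficient evaluation.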