
    Meta-models for structural reliability and uncertainty quantification

    A meta-model (or surrogate model) is the modern name for what was traditionally called a response surface. It is intended to mimic the behaviour of a computational model M (e.g. a finite element model in mechanics) while being inexpensive to evaluate, in contrast to the original model, which may take hours or even days of computer processing time. In this paper, the various types of meta-models that have been used in the last decade in the context of structural reliability are reviewed. More specifically, classical polynomial response surfaces, polynomial chaos expansions and kriging are addressed. It is shown how the need for error estimates and adaptivity in their construction has brought this type of approach to a high level of efficiency. Finally, a new technique is presented that removes the potential bias in the estimation of a probability of failure through the use of meta-models.
    Comment: Keynote lecture, Fifth Asian-Pacific Symposium on Structural Reliability and its Applications (5th APSSRA), May 2012, Singapore
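    To make the workflow concrete, the sketch below replaces an expensive limit-state function with a Kriging (Gaussian-process) surrogate inside a crude Monte Carlo estimate of the probability of failure. The toy limit state g, the sample sizes, and the scikit-learn kernel are illustrative assumptions, not taken from the paper; in particular, the adaptivity and bias-correction ideas reviewed above are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):
    # Toy limit-state function of two standard normal variables;
    # failure is the event g(X) <= 0.
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

# Small experimental design: the only "expensive" model evaluations.
X_train = rng.standard_normal((30, 2))
y_train = g(X_train)

surrogate = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0), normalize_y=True).fit(X_train, y_train)

# Cheap Monte Carlo on the surrogate instead of the original model.
X_mc = rng.standard_normal((100_000, 2))
pf_hat = np.mean(surrogate.predict(X_mc) <= 0.0)
print(f"estimated probability of failure: {pf_hat:.4f}")
```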

    Polynomial-Chaos-based Kriging

    Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. To cope with demanding analyses such as optimization and reliability assessment, surrogate models (a.k.a. meta-models) have been increasingly investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive meta-modeling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables, where the polynomials are chosen consistently with the probability distributions of those input variables. Kriging, on the other hand, assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. These two techniques have so far been developed more or less in parallel, with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging. A sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model, whereas Kriging manages the local variability of the model output. An adaptive algorithm similar to the least angle regression algorithm determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions which are easy to sample for reference results. From the numerical investigations it is concluded that PC-Kriging performs better than, or at least as well as, the two distinct meta-modeling techniques. A larger gain in accuracy is obtained when the experimental design has a limited size, which is an asset when dealing with demanding computational models.
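    The sketch below conveys the PC-Kriging decomposition in its simplest form: a low-order Hermite polynomial trend captures the global behavior, and a Gaussian process models the residual local variability. Fitting the two parts in successive stages with a fixed basis is a deliberate simplification; the actual method selects a sparse basis adaptively (via least angle regression) and calibrates the trend and the Gaussian process jointly as universal Kriging. The toy model and all parameter choices are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def model(x):
    # Toy 1-D computational model with a global trend and local wiggles.
    return 0.3 * x ** 2 + np.sin(3 * x)

x_train = rng.standard_normal(20)
y_train = model(x_train)

# Global part: probabilists' Hermite polynomials up to degree 3,
# orthogonal with respect to the standard normal input distribution.
degree = 3
Phi = hermevander(x_train, degree)
coeffs, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# Local part: a Gaussian process fitted to the trend residuals.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5))
gp.fit(x_train[:, None], y_train - Phi @ coeffs)

def pck_predict(x):
    # PCE trend + Kriging correction.
    return hermevander(x, degree) @ coeffs + gp.predict(x[:, None])

x_test = np.linspace(-2.0, 2.0, 5)
print(pck_predict(x_test))
print(model(x_test))
```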

    On Bayesian Search for the Feasible Space Under Computationally Expensive Constraints

    We are often interested in identifying the feasible subset of a decision space under multiple constraints to permit effective design exploration. If determining feasibility required computationally expensive simulations, the cost of exploration would be prohibitive. Bayesian search is data-efficient for such problems: starting from a small dataset, the central concept is to use Bayesian models of the constraints together with an acquisition function to locate promising solutions that may improve predictions of feasibility when the dataset is augmented. At the end of this sequential active-learning approach with a limited number of expensive evaluations, the models can accurately predict the feasibility of any solution, obviating the need for full simulations. In this paper, we propose a novel acquisition function that combines the probability that a solution lies at the boundary between the feasible and infeasible spaces (representing exploitation) and the entropy in the predictions (representing exploration). Experiments confirmed the efficacy of the proposed function.
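    A hedged sketch of such an acquisition function is given below for a single constraint c(x) <= 0 modeled by a Gaussian process: the boundary term rewards candidates whose posterior puts mass at c(x) = 0, and the entropy term rewards uncertain feasibility predictions. The exact functional form and the product combination are illustrative guesses, not the paper's definition.

```python
import numpy as np
from scipy.stats import norm

def acquisition(mu, sigma):
    """Score candidates given the GP posterior mean and std of c(x),
    where the feasible space is {x : c(x) <= 0}."""
    p = np.clip(norm.cdf(-mu / sigma), 1e-12, 1 - 1e-12)  # P[c(x) <= 0]
    # Exploitation: posterior density at the feasibility boundary c(x) = 0.
    boundary = norm.pdf(mu / sigma) / sigma
    # Exploration: Shannon entropy of the Bernoulli feasibility prediction.
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return boundary * entropy

# Choose the next expensive simulation among four candidate designs.
mu = np.array([1.5, 0.1, -0.2, 2.0])      # posterior means of c
sigma = np.array([0.5, 0.4, 0.6, 0.1])    # posterior standard deviations
print(np.argmax(acquisition(mu, sigma)))  # index of the most informative point
```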

    Efficient time-dependent system reliability analysis

    Engineering systems are usually subjected to time-variant loads and operate under time-dependent uncertainty; system performance is therefore time-dependent. Accurate and efficient estimation of system reliability is crucial for decision making on system design, lifetime cost estimation, maintenance strategy, etc. Although significant progress has been made in time-independent reliability analysis for components and systems, time-dependent system reliability methodologies are still limited. This dissertation is motivated by the need for accurate and efficient reliability prediction for engineering systems under time-dependent uncertainty. Based on the classic First and Second Order Reliability Methods (FORM and SORM), a system reliability method is developed for multidisciplinary systems involving stationary stochastic processes. A dependent Kriging method is also developed for general components. This method accounts for the dependence between responses of the surrogate models and is therefore more accurate than existing Kriging Monte Carlo simulation methods that neglect this dependence. The extension of the dependent Kriging method to systems is also a contribution of this dissertation. To overcome the difficulty of obtaining extreme value distributions and to avoid the double-loop global optimization procedure, a Kriging surrogate modeling method is also proposed. This method provides a new perspective on surrogate modeling for time-dependent systems and is applicable to general systems involving random variables, time, and stochastic processes. The proposed methods are evaluated on a wide range of engineering systems, including a compound cylinder system, a liquid hydrogen fuel tank, function generator mechanisms, slider-crank mechanisms, and a Daniels system --Abstract, page iv
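    The following sketch illustrates the last of these ideas, a single surrogate built over the augmented space of random variables and time, on a one-dimensional toy problem: a trajectory fails if the limit state dips below zero anywhere on a time grid. The limit-state function, discretization, and kernel are illustrative assumptions; the dissertation's dependent-Kriging treatment of correlated surrogate responses is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def g(x, t):
    # Toy time-variant limit state; failure if g <= 0 at any time in [0, 1].
    return 2.5 - x * np.sin(2.0 * np.pi * t)

# One Kriging model on the augmented (random variable, time) space.
x_tr = rng.standard_normal(40)
t_tr = rng.uniform(0.0, 1.0, 40)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 0.2]))
gp.fit(np.column_stack([x_tr, t_tr]), g(x_tr, t_tr))

# Monte Carlo over whole trajectories: a sample fails if the surrogate
# goes below zero anywhere on the time grid.
t_grid = np.linspace(0.0, 1.0, 50)
x_mc = rng.standard_normal(2_000)
X_aug = np.column_stack([np.repeat(x_mc, t_grid.size),
                         np.tile(t_grid, x_mc.size)])
g_hat = gp.predict(X_aug).reshape(x_mc.size, t_grid.size)
pf = np.mean(g_hat.min(axis=1) <= 0.0)
print(f"time-dependent probability of failure: {pf:.4f}")
```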

    Bayesian subset simulation

    We consider the problem of estimating a probability of failure $\alpha$, defined as the volume of the excursion set of a function $f: \mathbb{X} \subseteq \mathbb{R}^d \to \mathbb{R}$ above a given threshold, under a given probability measure on $\mathbb{X}$. In this article, we combine the popular subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our sequential Bayesian approach for the estimation of a probability of failure (Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it possible to estimate $\alpha$ when the number of evaluations of $f$ is very limited and $\alpha$ is very small. The resulting algorithm is called Bayesian subset simulation (BSS). A key idea, as in the subset simulation algorithm, is to estimate the probabilities of a sequence of excursion sets of $f$ above intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A Gaussian process prior on $f$ is used to define the sequence of densities targeted by the SMC algorithm, and drive the selection of evaluation points of $f$ to estimate the intermediate probabilities. Adaptive procedures are proposed to determine the intermediate thresholds and the number of evaluations to be carried out at each stage of the algorithm. Numerical experiments illustrate that BSS achieves significant savings in the number of function evaluations with respect to other Monte Carlo approaches.
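    For orientation, here is a bare-bones version of the underlying subset simulation algorithm of Au and Beck, without the Gaussian-process layer that BSS adds on top: each level conditions the population on an intermediate threshold estimated as an empirical quantile, and the failure probability is the product of the conditional level probabilities. The performance function, level probability p0, and proposal scale are conventional illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # Toy performance function; the rare event is f(X) >= u.
    return x[:, 0] + x[:, 1]

u, p0, n = 7.0, 0.1, 2_000   # final threshold, level probability, samples/level
x = rng.standard_normal((n, 2))
y = f(x)
p_hat = 1.0

for _ in range(20):
    thr = min(np.quantile(y, 1.0 - p0), u)   # intermediate threshold
    p_hat *= np.mean(y >= thr)
    if thr >= u:
        break
    # Restart the population from the seeds above the threshold, then run a
    # few modified-Metropolis sweeps targeting N(0, I) restricted to f >= thr.
    seeds = x[y >= thr]
    x = seeds[rng.integers(len(seeds), size=n)]
    y = f(x)
    for _ in range(5):
        cand = x + 0.8 * rng.standard_normal(x.shape)
        y_cand = f(cand)
        ratio = np.exp(0.5 * (np.sum(x ** 2, axis=1)
                              - np.sum(cand ** 2, axis=1)))
        accept = (rng.random(n) < ratio) & (y_cand >= thr)
        x[accept] = cand[accept]
        y[accept] = y_cand[accept]

print(f"P[f(X) >= {u}] ~ {p_hat:.2e}")  # exact: Phi(-7/sqrt(2)) ~ 3.7e-7
```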

    Time- and space-dependent uncertainty analysis and its application in lunar plasma environment modeling

    "During an engineering system design, engineers usually encounter uncertainties that exist ubiquitously, such as material properties, dimensions of components, and random loads. Some of these parameters do not change with time or space and hence are time- and space-independent. However, in many engineering applications, the more general time- and space-dependent uncertainty is frequently encountered. Consequently, the system exhibits random time- and space-dependent behaviors, which may result in a higher probability of failure, a lower average lifetime, and/or worse robustness. Therefore, it is critical to quantify uncertainty and predict how the system behaves under time- and space-dependent uncertainty. The objective of this study is to develop accurate and efficient methods for uncertainty analysis. This study contains five works. In the first work, an accurate method based on a series expansion, Gauss-Hermite quadrature, and the saddlepoint approximation is developed to calculate high-dimensional normal probabilities. The method is then applied to estimate time-dependent reliability. In the second work, we develop an adaptive Kriging method to estimate the average product lifetime. In the third work, a time- and space-dependent reliability analysis method based on the first-order and second-order methods is proposed. In the fourth work, we extend existing robustness analysis to time- and space-dependent problems and develop an adaptive Kriging method to evaluate time- and space-dependent robustness. In the fifth work, we develop an adaptive Kriging method to efficiently estimate the lower and upper bounds of the electric potentials of the photoelectron sheaths near the lunar surface" --Abstract, page iv
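    As a small illustration of one ingredient named in the first work, the snippet below uses Gauss-Hermite quadrature to compute expectations under a normal distribution. The surrounding series expansion and saddlepoint machinery are not reproduced; the test functions and node count are assumptions for the example.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(h, mu=0.0, sigma=1.0, n=20):
    # Nodes/weights are for the weight function exp(-t^2); the change of
    # variables x = mu + sqrt(2)*sigma*t turns the weighted sum into
    # E[h(X)] for X ~ N(mu, sigma^2).
    nodes, weights = hermgauss(n)
    x = mu + np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * h(x)) / np.sqrt(np.pi)

# Sanity checks: E[X^2] = 1 and E[cos X] = exp(-1/2) for X ~ N(0, 1).
print(gauss_hermite_expectation(lambda x: x ** 2))      # ~ 1.0
print(gauss_hermite_expectation(np.cos), np.exp(-0.5))  # both ~ 0.6065
```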

    Active Learning-based Domain Adaptive Localized Polynomial Chaos Expansion

    The paper presents a novel methodology for building surrogate models of complicated functions by an active learning-based sequential decomposition of the input random space and the construction of localized polynomial chaos expansions, referred to as domain adaptive localized polynomial chaos expansion (DAL-PCE). The approach utilizes sequential decomposition of the input random space into smaller sub-domains approximated by low-order polynomial expansions. This allows the approximation of functions with strong nonlinearities, discontinuities, and/or singularities. Decomposition of the input random space and local approximations alleviate the Gibbs phenomenon for these types of problems and confine the error to a very small vicinity near the nonlinearity. The global behavior of the surrogate model is therefore significantly better than that of existing methods, as shown in numerical examples. The whole process is driven by an active learning routine that uses the recently proposed $\Theta$ criterion to assess local variance contributions. The proposed approach balances both exploitation of the surrogate model and exploration of the input random space and thus leads to an efficient and accurate approximation of the original mathematical model. The numerical results show the superiority of the DAL-PCE in comparison to (i) a single global polynomial chaos expansion and (ii) the recently proposed stochastic spectral embedding (SSE) method, which was developed as an accurate surrogate model and is based on a similar domain decomposition process. The method represents a general framework upon which further extensions and refinements can be based, and which can be combined with any technique for non-intrusive polynomial chaos expansion construction.
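    The core idea can be caricatured in a few lines: split the input domain into sub-domains and fit a low-order polynomial expansion in each, so that a discontinuity degrades only the piece that contains it. The uniform four-way split, polynomial degree, and toy function below are illustrative assumptions; DAL-PCE instead decomposes sequentially and adaptively, driven by the $\Theta$ criterion.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    # Discontinuous toy model: a jump at x = 0.4 defeats a single
    # global polynomial expansion (Gibbs phenomenon).
    return np.where(x < 0.4, np.sin(6.0 * x), 2.0 + 0.5 * x)

x = rng.uniform(0.0, 1.0, 200)
y = f(x)

# Fixed uniform split into four sub-domains; the adaptive method would
# instead refine where the local variance contribution is large.
edges = np.linspace(0.0, 1.0, 5)
local_fits = []
for a, b in zip(edges[:-1], edges[1:]):
    m = (x >= a) & (x < b)
    # Low-order (degree-2) Legendre expansion on each sub-domain.
    local_fits.append(np.polynomial.legendre.Legendre.fit(x[m], y[m], 2))

def surrogate(xq):
    idx = np.clip(np.searchsorted(edges, xq, side="right") - 1, 0, 3)
    return np.array([local_fits[i](v) for i, v in zip(idx, xq)])

xq = np.array([0.10, 0.35, 0.45, 0.90])
print(surrogate(xq))  # error stays confined to the sub-domain with the jump
print(f(xq))
```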