
    Gaussian process surrogates for failure detection: a Bayesian experimental design approach

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
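
    A minimal sketch of the general idea, not of the paper's specific design criterion: a Gaussian-process surrogate of a toy limit-state function g is refined by sequentially adding the pool point where the sign of g is most uncertain (the "U" criterion from the related AK-MCS literature), and the failure probability is then read off the surrogate. The limit-state function, input distribution, and acquisition rule are illustrative assumptions; the paper's method additionally selects multiple points per iteration so that simulations can run in parallel.

```python
# Sketch only: GP surrogate + sequential design targeting the failure boundary.
# g, the input law, and the U-style acquisition are stand-ins, not the paper's scheme.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):  # toy limit-state function: failure when g(x) < 0
    return 5.0 - x[:, 0] ** 2 - x[:, 1]

X_pool = rng.normal(size=(5000, 2))          # candidate pool drawn from the input law
X_train = rng.normal(size=(10, 2))           # small initial design
y_train = g(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

for _ in range(30):                          # sequential design loop
    gp.fit(X_train, y_train)
    mu, sd = gp.predict(X_pool, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)   # small U = close to boundary and uncertain
    x_new = X_pool[np.argmin(U)][None, :]    # next design point (one at a time here)
    X_train = np.vstack([X_train, x_new])
    y_train = np.append(y_train, g(x_new))

mu, _ = gp.predict(X_pool, return_std=True)
print("surrogate failure-probability estimate:", np.mean(mu < 0))
```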

    A subset multicanonical Monte Carlo method for simulating rare failure events

    Estimating failure probabilities of engineering systems is an important problem in many engineering fields. In this work we consider such problems where the failure probability is extremely small (e.g., ≤ 10^{-10}). In this case, standard Monte Carlo methods are not feasible due to the extraordinarily large number of samples required. To address these problems, we propose an algorithm that combines the main ideas of two very powerful failure probability estimation approaches: the subset simulation (SS) and the multicanonical Monte Carlo (MMC) methods. Unlike the standard MMC, which samples in the entire domain of the input parameter in each iteration, the proposed subset MMC algorithm adaptively performs MMC simulations in a subset of the state space and thus improves the sampling efficiency. With numerical examples we demonstrate that the proposed method is significantly more efficient than both the SS and the MMC methods. Moreover, the proposed algorithm can reconstruct the complete distribution function of the parameter of interest and thus can provide more information than just the failure probabilities of the systems.
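
    For context, a minimal sketch of plain subset simulation (SS), one of the two ingredients the paper combines; the subset-MMC hybrid and the reconstruction of the full distribution function are not reproduced here. The toy limit-state function, the standard-normal input law, and the random-walk Metropolis move are illustrative choices.

```python
# Sketch of plain subset simulation: failure P(g(X) <= 0) is reached through
# intermediate levels, each holding a fixed conditional probability p0.
import numpy as np

rng = np.random.default_rng(1)
d, N, p0 = 2, 2000, 0.1                       # dimension, samples per level, level prob.

def g(x):                                     # toy limit state: failure when g <= 0
    return 4.5 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

X = rng.normal(size=(N, d))                   # plain Monte Carlo at level 0
Y = g(X)
prob, level = 1.0, np.quantile(Y, p0)

while level > 0:                              # add intermediate levels until g <= 0
    prob *= p0
    seeds = X[Y <= level]                     # seeds for the next conditional level
    chains = int(np.ceil(N / len(seeds)))
    Xs, Ys = [], []
    for x0 in seeds:                          # random-walk Metropolis within the level set
        x = x0.copy()
        y = g(x[None])[0]
        for _ in range(chains):
            cand = x + 0.8 * rng.normal(size=d)
            # accept w.r.t. the standard-normal prior, restricted to {g <= level}
            if rng.random() < np.exp(0.5 * (x @ x - cand @ cand)) and g(cand[None])[0] <= level:
                x = cand
                y = g(cand[None])[0]
            Xs.append(x.copy())
            Ys.append(y)
    X, Y = np.array(Xs)[:N], np.array(Ys)[:N]
    level = np.quantile(Y, p0)

print("P(failure) ≈", prob * np.mean(Y <= 0))
```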

    Adaptive design of experiment via normalizing flows for failure probability estimation

    Failure probability estimation is a crucial task in engineering. In this work we consider this problem in the situation, often arising in practice, that the underlying computer models are extremely expensive; in this setting, reducing the number of calls to the computer model is of essential importance. We formulate the estimation of the failure probability with expensive computer models as a sequential experimental design for the limit state (i.e., the failure boundary) and propose a series of efficient adaptive design criteria to solve the design of experiment (DOE). In particular, the proposed method employs a deep neural network (DNN) as the surrogate of the limit-state function to efficiently reduce the number of calls to the expensive computer experiment. A map from the Gaussian distribution to the posterior approximation of the limit state is learned by normalizing flows for ease of experimental design. Three normalizing-flow-based design criteria are proposed in this work for deciding the design locations, based on different assumptions about the generalization error. The accuracy and performance of the proposed method are demonstrated by both theoretical analysis and practical examples.
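
    A minimal sketch of the outer adaptive-design loop only, under simplifying assumptions: a small DNN surrogate of a toy limit-state function is refitted as design points are added near its predicted failure boundary. In the paper the candidate points come from a normalizing flow trained to map a Gaussian to the posterior approximation of the limit state, and three flow-based criteria rank them; here a plain scored sample pool stands in for the flow.

```python
# Sketch only: DNN surrogate + adaptive batch design near the predicted boundary.
# The flow-based proposal and design criteria of the paper are replaced by a
# simple |g_hat| score over a random pool; g is a toy example.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def g(x):                                     # toy limit state: failure when g < 0
    return 3.0 - np.linalg.norm(x, axis=1)

X_design = rng.normal(size=(20, 2))
y_design = g(X_design)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)

for _ in range(10):                           # adaptive design-of-experiment loop
    net.fit(X_design, y_design)
    pool = rng.normal(size=(4000, 2))
    score = np.abs(net.predict(pool))         # small |g_hat| = near the predicted boundary
    x_new = pool[np.argsort(score)[:5]]       # batch of 5 new design points
    X_design = np.vstack([X_design, x_new])
    y_design = np.append(y_design, g(x_new))

net.fit(X_design, y_design)                   # refit on the full design
pred = net.predict(rng.normal(size=(200000, 2)))
print("surrogate failure-probability estimate:", np.mean(pred < 0))
```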

    Computationally Efficient Reliability Evaluation With Stochastic Simulation Models.

    Thanks to advanced computing and modeling technologies, computer simulations are becoming more widely used for the reliability evaluation of complex systems. Yet, as simulation models represent physical systems more accurately and utilize a large number of random variables to reflect various uncertainties, high computational costs remain a major challenge in analyzing the system reliability. The objective of this dissertation research is to provide new solutions for saving computational time in simulation-based reliability evaluation that considers large uncertainties within the simulation. This dissertation develops (a) a variance reduction technique for stochastic simulation models, (b) an uncertainty quantification method for the variance reduction technique, and (c) an adaptive approach for the variance reduction technique.

    First, among several variance reduction techniques, importance sampling has been widely used to improve the efficiency of simulation-based reliability evaluation using deterministic simulation models. In contrast to deterministic simulation models, whose simulation output is uniquely determined given a fixed input, stochastic simulation models produce random outputs. We extend the theory of importance sampling to efficiently estimate a system's reliability with stochastic simulation models.

    Second, to quantify the uncertainty of the reliability estimation with stochastic simulation models, we can repeat the simulation experiment multiple times; this, however, multiplies the computational burden. To overcome this, we establish the central limit theorem for the reliability estimator with stochastic simulation models and construct an asymptotically valid confidence interval using data from a single simulation experiment.

    Lastly, theoretically optimal importance sampling densities require approximations in practice. As a candidate density to approximate the optimal density, a mixture of parametric densities can be used in the cross-entropy method, which aims to minimize the cross-entropy between the optimal density and the candidate density. We propose an information criterion to identify an appropriate number of mixture densities. This criterion enables us to adaptively find the importance sampling density as we gather data through an iterative procedure.

    Case studies, using computationally intensive aeroelastic wind turbine simulators developed by the U.S. Department of Energy (DOE)'s National Renewable Energy Laboratory (NREL), demonstrate the superiority of the proposed approaches over alternative methods in estimating the system reliability using stochastic simulation models.

    PhD dissertation, Industrial and Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/120894/1/yjchoe_1.pd
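
    A minimal sketch of the first ingredient, importance sampling with a stochastic simulator: for a fixed input the simulator returns a random response, so each importance sample contributes a failure indicator weighted by the input-density likelihood ratio, and a single-experiment standard error gives the CLT-based interval discussed above. The shifted-normal IS density, the toy simulator, and the failure threshold are illustrative assumptions, not the dissertation's cross-entropy mixture or NREL wind-turbine setup.

```python
# Sketch only: importance sampling for P(failure) with a stochastic simulator,
# plus a single-run normal-approximation confidence interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, shift, thresh = 20000, 5.0, 6.0             # samples, IS mean shift, failure threshold

def simulator(x):                              # stochastic response for a fixed input x
    return x + rng.normal(scale=0.5, size=np.shape(x))

x_is = rng.normal(loc=shift, size=n)           # samples from the IS density q
w = stats.norm.pdf(x_is) / stats.norm.pdf(x_is, loc=shift)   # likelihood ratios f/q
z = (simulator(x_is) > thresh) * w             # weighted failure indicators

p_hat = z.mean()
se = z.std(ddof=1) / np.sqrt(n)                # CLT-based std. error from one experiment
print(f"P(failure) ≈ {p_hat:.3e}  (95% CI ± {1.96 * se:.1e})")
```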