Bayesian subset simulation
We consider the problem of estimating a probability of failure,
defined as the volume of the excursion set of a function above a given
threshold, under a given probability measure on the input space. In this
article, we combine the popular
subset simulation algorithm (Au and Beck, Probab. Eng. Mech. 2001) and our
sequential Bayesian approach for the estimation of a probability of failure
(Bect, Ginsbourger, Li, Picheny and Vazquez, Stat. Comput. 2012). This makes it
possible to estimate this probability when the number of evaluations of the
function is very limited and the probability itself is very small. The
resulting algorithm is called Bayesian
subset simulation (BSS). A key idea, as in the subset simulation algorithm, is
to estimate the probabilities of a sequence of excursion sets of the function above
intermediate thresholds, using a sequential Monte Carlo (SMC) approach. A
Gaussian process prior on the function is used to define the sequence of
densities targeted by the SMC algorithm, and to drive the selection of the
points where the function is evaluated to estimate the intermediate
probabilities. Adaptive procedures are
proposed to determine the intermediate thresholds and the number of evaluations
to be carried out at each stage of the algorithm. Numerical experiments
illustrate that BSS achieves significant savings in the number of function
evaluations with respect to other Monte Carlo approaches.
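The classical subset simulation mechanism that BSS builds on can be sketched as follows. This is the plain Monte Carlo version, without the Gaussian process layer that distinguishes BSS; the function name, the proposal scale, and the restart-from-a-random-seed Markov step are illustrative simplifications, not the algorithm of the paper:

```python
import math
import random

def subset_simulation(g, dim, threshold, n=1000, p0=0.1, seed=0):
    """Estimate P(g(X) > threshold) for standard-normal X by the
    subset simulation idea: split the rare event into a chain of
    adaptively chosen intermediate thresholds, each reached by
    conditional sampling from the previous level's survivors."""
    rng = random.Random(seed)
    X = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n)]
    prob = 1.0
    while True:
        vals = sorted((g(x), i) for i, x in enumerate(X))
        t = vals[int((1 - p0) * n)][0]      # adaptive (1 - p0) quantile
        if t >= threshold:                  # final level reached
            return prob * sum(1 for v, _ in vals if v > threshold) / n
        prob *= p0
        seeds = [X[i] for v, i in vals if v >= t]
        # Repopulate the level: one Metropolis step from a random seed,
        # targeting the standard normal restricted to {g > t}.
        X = []
        while len(X) < n:
            x = list(rng.choice(seeds))
            cand = [xi + 0.5 * rng.gauss(0.0, 1.0) for xi in x]
            logr = sum(0.5 * (xi * xi - ci * ci) for xi, ci in zip(x, cand))
            if math.log(rng.random() + 1e-300) < logr and g(cand) > t:
                x = cand
            X.append(x)
```

Each level multiplies the estimate by roughly p0, so a probability of 10^-6 needs only a handful of levels of moderately rare conditional events rather than millions of direct samples.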
Bayesian Subset Simulation: a kriging-based subset simulation algorithm for the estimation of small probabilities of failure
The estimation of small probabilities of failure from computer simulations is
a classical problem in engineering, and the Subset Simulation algorithm
proposed by Au & Beck (Prob. Eng. Mech., 2001) has become one of the most
popular methods to solve it. Subset simulation has been shown to provide
significant savings in the number of simulations to achieve a given accuracy of
estimation, with respect to many other Monte Carlo approaches. The number of
simulations remains quite high, however, and this method can be
impractical for applications where an expensive-to-evaluate computer model is
involved. We propose a new algorithm, called Bayesian Subset Simulation, that
takes the best from the Subset Simulation algorithm and from sequential
Bayesian methods based on kriging (also known as Gaussian process modeling).
The performance of this new algorithm is illustrated using a test case from the
literature. We are able to report promising results. In addition, we provide a
numerical study of the statistical properties of the estimator.
Comment: 11th International Probabilistic Safety Assessment and Management Conference (PSAM11) and the Annual European Safety and Reliability Conference (ESREL 2012), Helsinki, Finland, 2012.
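The kriging model at the heart of this approach predicts, at any untried point, a Gaussian posterior for the function value, from which the probability of exceeding the failure threshold follows in closed form. A minimal one-dimensional sketch, assuming a zero-mean GP with an RBF kernel and noise-free observations (all names and parameter values are illustrative, not the paper's implementation):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) covariance between inputs a and b."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def gp_posterior(xs, ys, x, ell=1.0, nugget=1e-9):
    """Posterior mean and std at x of a zero-mean GP (kriging model)
    conditioned on noise-free observations (xs, ys)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], ell) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k = [rbf(xi, x, ell) for xi in xs]

    def solve(A, b):
        # Gauss-Jordan elimination with partial pivoting (n is tiny).
        A = [row[:] + [bi] for row, bi in zip(A, b)]
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[piv] = A[piv], A[c]
            for r in range(n):
                if r != c:
                    f = A[r][c] / A[c][c]
                    A[r] = [u - f * v for u, v in zip(A[r], A[c])]
        return [A[i][n] / A[i][i] for i in range(n)]

    alpha = solve(K, ys)        # K @ alpha = y
    v = solve(K, k)             # K @ v = k
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    var = max(rbf(x, x, ell) - sum(ki * vi for ki, vi in zip(k, v)), 0.0)
    return mean, math.sqrt(var)

def prob_excursion(xs, ys, x, u, ell=1.0):
    """Posterior probability that f(x) > u under the GP model."""
    m, s = gp_posterior(xs, ys, x, ell)
    if s == 0.0:
        return float(m > u)
    return 0.5 * math.erfc(-(m - u) / (s * math.sqrt(2)))  # Phi((m-u)/s)
```

At observed points the posterior std collapses and the excursion probability becomes 0 or 1; away from the data it spreads out, which is what lets a sequential algorithm decide where the next expensive simulation is most informative.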
Gaussian process surrogates for failure detection: a Bayesian experimental design approach
An important task of uncertainty quantification is to identify the
probability of undesired events, in particular, system failures, caused by
various sources of uncertainties. In this work we consider the construction of
Gaussian process surrogates for failure detection and failure probability
estimation. In particular, we consider the situation where the underlying
computer models are extremely expensive, and in this setting, determining the
sampling points in the state space is of essential importance. We formulate the
problem as an optimal experimental design for Bayesian inference of the limit
state (i.e., the failure boundary) and propose an efficient numerical scheme to
solve the resulting optimization problem. In particular, the proposed
limit-state inference method is capable of determining multiple sampling points
at a time, and thus it is well suited for problems where multiple computer
simulations can be performed in parallel. The accuracy and performance of the
proposed method are demonstrated by both academic and practical examples.
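One common way to make such a design concrete is to score candidate points by the posterior probability that the surrogate misclassifies the sign of the limit-state function, then pick several well-separated high-scoring points for parallel evaluation. The sketch below is a crude stand-in for the paper's batch criterion, under exactly that assumption; `posterior`, `select_batch`, and `min_dist` are names invented here for illustration:

```python
import math

def misclassification_prob(mean, std):
    """Probability that the surrogate's sign prediction for the limit
    state g(x) = 0 is wrong: Phi(-|mean| / std) under a Gaussian posterior."""
    if std == 0.0:
        return 0.0
    return 0.5 * math.erfc(abs(mean) / (std * math.sqrt(2)))

def select_batch(candidates, posterior, q=3, min_dist=0.2):
    """Pick q design points with the highest misclassification
    probability, thinned so no two chosen points are closer than
    min_dist (so the batch does not cluster at a single spot).
    posterior(x) returns the surrogate's (mean, std) at x."""
    scored = sorted(candidates,
                    key=lambda x: -misclassification_prob(*posterior(x)))
    batch = []
    for x in scored:
        if all(abs(x - b) >= min_dist for b in batch):
            batch.append(x)
            if len(batch) == q:
                break
    return batch
```

The score peaks where the posterior mean is near zero relative to its uncertainty, i.e. exactly along the estimated failure boundary, which matches the intuition that those are the points worth spending expensive simulations on.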
Partially Bayesian active learning cubature for structural reliability analysis with extremely small failure probabilities
The Bayesian failure probability inference (BFPI) framework provides a well-established Bayesian approach to quantifying our epistemic uncertainty about the failure probability resulting from a limited number of performance function evaluations. However, it is still challenging to perform Bayesian active learning of the failure probability by taking advantage of the BFPI framework. In this work, three Bayesian active learning methods are proposed under the name ‘partially Bayesian active learning cubature’ (PBALC), based on a clever use of the BFPI framework for structural reliability analysis, especially when small failure probabilities are involved. Since the posterior variance of the failure probability is computationally expensive to evaluate, the underlying idea is to exploit only the posterior mean of the failure probability to design two critical components for Bayesian active learning, i.e., the stopping criterion and the learning function. On this basis, three sets of stopping criteria and learning functions are proposed, resulting in the three methods PBALC1, PBALC2 and PBALC3. Furthermore, the analytically intractable integrals involved in the stopping criteria are properly addressed from a numerical point of view. Five numerical examples are studied to demonstrate the performance of the three proposed methods. It is found empirically that the proposed methods can assess very small failure probabilities and significantly outperform several existing methods in terms of accuracy and efficiency.
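The central quantity the PBALC methods exploit, the posterior mean of the failure probability under a Gaussian process surrogate, is an integral of the normal CDF over the input distribution. It can be sketched with plain Monte Carlo as below; the paper's own estimators for very small failure probabilities are necessarily more elaborate than this, and the names are illustrative:

```python
import math
import random

def posterior_mean_pf(posterior, sampler, n=10000, seed=0):
    """Monte Carlo estimate of the posterior mean of the failure
    probability under a GP surrogate of the performance function g,
    where failure is the event g(X) < 0:

        E[P_f] = E_X[ Phi(-m(X) / s(X)) ]

    posterior(x) returns the surrogate's (mean, std) at x;
    sampler(rng) draws one input X from its distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sampler(rng)
        m, s = posterior(x)
        if s == 0.0:
            total += 1.0 if m < 0 else 0.0
        else:
            total += 0.5 * math.erfc(m / (s * math.sqrt(2)))  # Phi(-m/s)
        # each term is the pointwise probability of failure at x
    return total / n
```

Because only the posterior mean of P_f appears, each term needs just the surrogate's pointwise mean and standard deviation, which is what makes it so much cheaper than the posterior variance of P_f.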
A Bayesian approach to constrained single- and multi-objective optimization
This article addresses the problem of derivative-free (single- or
multi-objective) optimization subject to multiple inequality constraints. Both
the objective and constraint functions are assumed to be smooth, non-linear and
expensive to evaluate. As a consequence, the number of evaluations that can be
used to carry out the optimization is very limited, as in complex industrial
design optimization problems. The method we propose to overcome this difficulty
has its roots in both the Bayesian and the multi-objective optimization
literatures. More specifically, an extended domination rule is used to handle
objectives and constraints in a unified way, and a corresponding expected
hyper-volume improvement sampling criterion is proposed. This new criterion is
naturally adapted to the search of a feasible point when none is available, and
reduces to existing Bayesian sampling criteria---the classical Expected
Improvement (EI) criterion and some of its constrained/multi-objective
extensions---as soon as at least one feasible point is available. The
calculation and optimization of the criterion are performed using Sequential
Monte Carlo techniques. In particular, an algorithm similar to the subset
simulation method, which is well known in the field of structural reliability,
is used to estimate the criterion. The method, which we call BMOO (for Bayesian
Multi-Objective Optimization), is compared to state-of-the-art algorithms for
single- and multi-objective constrained optimization.
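The classical Expected Improvement criterion that the BMOO criterion reduces to (once at least one feasible point is available) has a well-known closed form under a Gaussian posterior. A minimal sketch for minimization, independent of BMOO's extended domination rule:

```python
import math

def expected_improvement(mean, std, best):
    """Closed-form Expected Improvement for minimization at a point
    where the posterior for the objective is N(mean, std^2) and
    `best` is the smallest feasible value observed so far:

        EI = (best - mean) * Phi(z) + std * phi(z),  z = (best - mean)/std
    """
    if std == 0.0:
        return max(best - mean, 0.0)
    z = (best - mean) / std
    Phi = 0.5 * math.erfc(-z / math.sqrt(2))           # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # normal pdf
    return (best - mean) * Phi + std * phi
```

The two terms trade off exploitation (a mean already below the incumbent) against exploration (a large posterior std), which is why EI-type criteria suit the severely evaluation-limited setting the abstract describes.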
Adaptive design of experiment via normalizing flows for failure probability estimation
Failure probability estimation is a crucial task in engineering. In
this work we consider this problem in the situation where the underlying
computer models are extremely expensive, which often arises in practice,
and in this setting, reducing the number of calls to the computer model is of
essential importance. We formulate the problem of estimating the failure
probability with expensive computer models as a sequential experimental design
for the limit state (i.e., the failure boundary) and propose a series of
efficient adaptive design criteria to solve the design of experiments (DOE).
In particular, the
proposed method employs a deep neural network (DNN) as the surrogate of the
limit state function, efficiently reducing the number of calls to the expensive
computer experiment. A map from the Gaussian distribution to the posterior approximation
of the limit state is learned by the normalizing flows for the ease of
experimental design. Three normalizing-flows-based design criteria are proposed
in this work for deciding the design locations, based on different assumptions
about the generalization error. The accuracy and performance of the
proposed method are demonstrated by both theory and practical examples.
Comment: failure probability, normalizing flows, adaptive design of experiment. arXiv admin note: text overlap with arXiv:1509.0461
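A normalizing flow is a composition of learnable invertible maps that pushes a simple base density onto a complicated target while keeping the density tractable through the change-of-variables formula. The single affine layer below is the simplest possible instance, purely illustrative and not the architecture of the paper, which composes many such layers to reach the posterior approximation of the limit state:

```python
import math
import random

def affine_flow_sample(rng, a, b):
    """One-layer 'flow': the invertible affine map z -> a*z + b pushes
    a standard Gaussian to N(b, a^2).  Returns a sample together with
    its exact log-density, tracked through the change of variables:

        log p_X(x) = log p_Z(z) - log|det J|,  J = dx/dz = a
    """
    z = rng.gauss(0.0, 1.0)
    x = a * z + b
    logp = (-0.5 * z * z - 0.5 * math.log(2 * math.pi)  # log N(z; 0, 1)
            - math.log(abs(a)))                          # - log|det J|
    return x, logp
```

Since the map is invertible and its Jacobian is cheap, both sampling and density evaluation stay exact, which is the property that makes flow-based posterior approximations convenient for the experimental design loop described above.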