
    Sequential Bayesian optimal experimental design for structural reliability analysis

    Structural reliability analysis is concerned with estimating the probability of a critical event taking place, described by $P(g(\textbf{X}) \leq 0)$ for some $n$-dimensional random variable $\textbf{X}$ and some real-valued function $g$. In many applications the function $g$ is practically unknown, as function evaluation involves time-consuming numerical simulation or some other form of experiment that is expensive to perform. The problem we address in this paper is how to optimally design experiments, in a Bayesian decision-theoretic fashion, when the goal is to estimate the probability $P(g(\textbf{X}) \leq 0)$ using a minimal amount of resources. As opposed to existing methods proposed for this purpose, we consider a general structural reliability model given in hierarchical form. We therefore introduce a general formulation of the experimental design problem, in which we distinguish between the uncertainty related to the random variable $\textbf{X}$ and any additional epistemic uncertainty that we want to reduce through experimentation. The effectiveness of a design strategy is evaluated through a measure of residual uncertainty, and efficient approximation of this quantity is crucial if we want to apply algorithms that search for an optimal strategy. The method we propose is based on importance sampling combined with the unscented transform for epistemic uncertainty propagation. We implement this for the myopic (one-step look-ahead) alternative and demonstrate its effectiveness through a series of numerical experiments. Comment: 27 pages, 13 figures.
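    As a concrete illustration of the estimation target, the sketch below computes $P(g(\textbf{X}) \leq 0)$ by importance sampling for a toy limit-state function. The function `g_demo` and the hand-shifted proposal density are hypothetical stand-ins, and the sketch does not include the paper's hierarchical model or its unscented-transform treatment of epistemic uncertainty.

```python
# Minimal sketch: importance-sampling estimate of P(g(X) <= 0) for a toy
# limit-state function.  g_demo and the hand-shifted proposal are
# illustrative choices, not the paper's construction.
import numpy as np
from scipy import stats

def g_demo(x):
    """Toy limit-state function: failure occurs when g(x) <= 0."""
    return 3.0 - x.sum(axis=1)

dim, n = 2, 10_000
nominal = stats.multivariate_normal(mean=np.zeros(dim))       # density of X
proposal = stats.multivariate_normal(mean=np.full(dim, 1.5))  # shifted toward failure

x = proposal.rvs(size=n, random_state=0)
weights = nominal.pdf(x) / proposal.pdf(x)   # importance weights
failed = (g_demo(x) <= 0).astype(float)      # failure indicator

p_fail = np.mean(failed * weights)
print(f"Importance-sampling estimate of P(g(X) <= 0): {p_fail:.3e}")
```

    Because the proposal concentrates samples near the failure region, the weighted estimator reaches a usable accuracy with far fewer samples than crude Monte Carlo would need for a small failure probability.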

    Active learning surrogate models for the conception of systems with multiple failure modes

    Due to performance and certification criteria, complex mechanical systems have to satisfy several constraints, which can be associated with a series of performance functions. Different software packages are generally used to evaluate such functions, and their computational costs can vary widely. In conception or reliability analysis, we are thus interested in identifying the boundaries of the domain where all these constraints are satisfied, at the minimal total computational cost. To this end, the present work proposes an iterative method to maximize the knowledge about these limits while minimizing the required number of evaluations of each performance function. This method is based, first, on Gaussian process surrogate models defined on nested sub-spaces and, second, on an original selection criterion that takes into account the computational cost associated with each performance function. After presenting the theoretical basis of this approach, the paper compares its efficiency to alternative methods on an example.
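    A minimal sketch of the kind of GP-based active-learning loop the abstract describes, here with a U-style selection criterion (as in AK-MCS) on a single performance function. The toy function `g_toy`, the kernel, and the candidate pool are illustrative assumptions, not the paper's nested-sub-space construction or its cost-aware criterion.

```python
# Minimal sketch: active learning of a limit-state boundary with a GP
# surrogate; g_toy, the kernel, and the budget are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g_toy(x):
    """Hypothetical performance function; constraint satisfied when g > 0."""
    return x[:, 0] ** 2 + x[:, 1] - 1.5

rng = np.random.default_rng(1)
candidates = rng.uniform(-2, 2, size=(2000, 2))  # Monte Carlo candidate pool

# Small initial design of experiments.
X_train = rng.uniform(-2, 2, size=(8, 2))
y_train = g_toy(X_train)

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X_train, y_train)
    mu, sigma = gp.predict(candidates, return_std=True)
    # U criterion: a small |mu|/sigma means the sign of g (and hence the
    # constraint boundary) is most uncertain at that candidate.
    u = np.abs(mu) / np.maximum(sigma, 1e-12)
    best = int(np.argmin(u))
    X_train = np.vstack([X_train, candidates[best]])
    y_train = np.append(y_train, g_toy(candidates[best:best + 1]))

print("candidates near the learned boundary:",
      int(np.sum(np.abs(gp.predict(candidates)) < 0.1)))
```

    Each iteration refits the surrogate and spends the next evaluation where the sign of the performance function is most ambiguous, so the design concentrates around the constraint boundary rather than over the whole input space.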

    Active learning surrogate models for the conception of systems with multiple failure modes

    The conception (or risk assessment) of complex mechanical systems has to take into account a series of constraints. Such constraints can be due to certification criteria, performance objectives, cost limitations, and so on. In this context, the role of simulation has kept increasing over the last decades, as it should be able to predict whether a given configuration of the system is likely to fulfill these constraints without having to build and test it experimentally. In many cases, the computation of these constraints relies on a series of computer codes whose physics can vary a lot. For instance, in the car industry, the conception of a new vehicle can be subject to constraints on its size and weight, which are rather easy to compute, but also on its emergency stopping distance, its crash resistance, or its aerodynamic resistance, which can be much more difficult to evaluate. Thus, several software packages (structural dynamics, multibody modeling, and fluid dynamics codes, for instance) are generally needed to verify all the constraints. Numerical methods are needed to identify the limits of such “conception domains”, that is to say, domains in which all the constraints are satisfied [1]. As the computational costs associated with the constraints can be very different, the question then arises of how to allocate the evaluations of each code so as to minimize the uncertainty about these limits for a given computational budget. To this end, this paper presents an innovative approach that predicts the limits of the considered conception domain and also quantifies the uncertainty associated with this prediction. Based on adaptive Gaussian process regression, this method iteratively finds the new code evaluations that will maximize the knowledge about the searched limits at the minimal computational cost. First, the scientific basis of the approach is presented; its efficiency is then illustrated on an analytical example.
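    The cost-aware ingredient can be sketched as a single selection step: among several performance functions with fitted surrogates, pick the code and point whose expected gain in boundary knowledge per unit evaluation cost is largest. The gain proxy used here (sign-misclassification uncertainty, `sigma / |mu|`) and the function names are hypothetical stand-ins, not the paper's actual selection criterion.

```python
# Hypothetical sketch of a cost-aware selection step across several
# performance functions with very different evaluation costs.
import numpy as np

def select_next(gps, candidates, costs):
    """Pick the (code index, candidate point) pair that maximizes boundary
    uncertainty reduction per unit evaluation cost.

    gps   : list of fitted GP surrogates, one per performance function
    costs : list of per-evaluation costs, one per performance function
    """
    best_code, best_point, best_score = None, None, -np.inf
    for k, (gp, cost) in enumerate(zip(gps, costs)):
        mu, sigma = gp.predict(candidates, return_std=True)
        # Proxy for learning gain: uncertainty about the sign of g_k,
        # scored per unit cost of running code k.
        gain = sigma / (np.abs(mu) + 1e-12)
        i = int(np.argmax(gain))
        if gain[i] / cost > best_score:
            best_code, best_point, best_score = k, candidates[i], gain[i] / cost
    return best_code, best_point
```

    In a full loop, only the selected code is evaluated at the selected point and only that surrogate is refitted, so cheap codes naturally get queried more often than expensive ones for the same reduction in uncertainty about the conception domain.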