
    mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location

    This paper develops mfEGRA, a multifidelity active learning method that uses data-driven, adaptively refined surrogates to locate the failure boundary in reliability analysis. The work addresses the prohibitive cost of reliability analysis via Monte Carlo sampling of expensive-to-evaluate high-fidelity models by exploiting cheaper-to-evaluate approximations of the high-fidelity model. The method builds on Efficient Global Reliability Analysis (EGRA), a surrogate-based method that uses adaptive sampling to refine Gaussian process surrogates for failure boundary location with a single-fidelity model. Our method introduces a two-stage adaptive sampling criterion that uses a multifidelity Gaussian process surrogate to leverage multiple information sources of different fidelities. The criterion combines the expected feasibility criterion from EGRA with a one-step lookahead information gain to refine the surrogate around the failure boundary. The computational savings from mfEGRA depend on the discrepancy between the models and on their evaluation cost relative to the high-fidelity model. We show that accurate estimation of reliability using mfEGRA leads to computational savings of around 46% for an analytic multimodal test problem and 24% for a three-dimensional acoustic horn problem, compared to single-fidelity EGRA. We also show the effect of using a priori drawn Monte Carlo samples in the implementation for the acoustic horn problem, where mfEGRA leads to computational savings of 45% for the three-dimensional case and 48% for a rarer-event four-dimensional case compared to single-fidelity EGRA.
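    The single-fidelity expected feasibility function (EFF) that mfEGRA inherits from EGRA has a closed form for a Gaussian process prediction. The sketch below implements only that EFF; the multifidelity weighting by one-step lookahead information gain described above is not shown, and the GP prediction call in the usage note is a placeholder for whatever surrogate library is used.

```python
import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, z=0.0):
    """Expected feasibility function (EFF) used by EGRA to target the
    failure boundary g(x) = z. `mu` and `sigma` are the Gaussian process
    posterior mean and standard deviation at candidate points. Large EFF
    values flag points whose prediction is both close to the threshold
    and uncertain."""
    eps = 2.0 * sigma                       # exploration band around the threshold
    t = (z - mu) / sigma
    tm = (z - eps - mu) / sigma
    tp = (z + eps - mu) / sigma
    return ((mu - z) * (2.0 * norm.cdf(t) - norm.cdf(tm) - norm.cdf(tp))
            - sigma * (2.0 * norm.pdf(t) - norm.pdf(tm) - norm.pdf(tp))
            + eps * (norm.cdf(tp) - norm.cdf(tm)))

# Usage sketch: pick the next evaluation point from a pool of candidates.
# mu, sigma = gp.predict(candidates, return_std=True)   # any GP library
# x_next = candidates[np.argmax(expected_feasibility(mu, sigma))]
```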

    Robust optimisation of computationally expensive models using adaptive multi-fidelity emulation

    Computationally expensive models are increasingly employed in the design process of engineering products and systems. Robust design in particular aims to obtain designs that exhibit near-optimal performance and low variability under uncertainty. Surrogate models are often employed to imitate the behaviour of expensive computational models, and are trained from a reduced number of samples of the expensive model. A crucial component of a surrogate's performance is the quality of its training set: problems occur when sampling fails to place points in an area of interest, or when the computational budget allows only a very limited number of runs of the expensive model. This paper employs a Gaussian process emulation approach to perform efficient single-loop robust optimisation of expensive models. The emulator is enhanced to propagate input uncertainty to the emulator output, allowing single-loop robust optimisation. Further, the emulator is trained with multi-fidelity data obtained via adaptive sampling to maximise the quality of the training set for the given computational budget. An illustrative example is presented to highlight how the method works, before it is applied to two industrial case studies.
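    As a rough illustration of the single-loop idea, the hypothetical sketch below trains a Gaussian process emulator on a toy model and evaluates a mean-plus-k-sigma robust objective by sampling the uncertain inputs through the emulator. The toy model, Gaussian input noise, and weighting k are illustrative assumptions, and the Monte Carlo propagation here merely stands in for the analytical propagation of input uncertainty that the paper describes.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical expensive model, a stand-in for an engineering simulation.
def expensive_model(x):
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 0] ** 2

# Train the emulator from a small design of experiments of the expensive model.
X_train = np.random.uniform(-2.0, 2.0, size=(15, 1))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, expensive_model(X_train))

def robust_objective(x_design, input_std=0.1, k=2.0, n_mc=500):
    """Mean + k*std of the emulated response under Gaussian uncertainty
    around the nominal design; the cheap emulator replaces the expensive
    model, so the robust measure sits inside a single optimisation loop."""
    x_mc = x_design + np.random.normal(0.0, input_std, size=(n_mc, 1))
    y_mc = gp.predict(x_mc)
    return y_mc.mean() + k * y_mc.std()

# A robust design minimises this objective, e.g. with scipy.optimize.minimize.
print(robust_objective(np.array([0.5])))
```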

    UQ and AI: data fusion, inverse identification, and multiscale uncertainty propagation in aerospace components

    A key requirement for engineering designs is that they offer good performance across a range of uncertain conditions while exhibiting an admissibly low probability of failure. In order to design components that offer good performance across a range of uncertain conditions, it is necessary to account for the effect of the uncertainties associated with a candidate design. Uncertainty Quantification (UQ) methods are statistical methods that may be used to quantify the effect of the uncertainties inherent in a system on its performance. This thesis expands the envelope of UQ methods for the design of aerospace components, supporting the integration of UQ methods in product development by addressing four industrial challenges. Firstly, a method for propagating uncertainty through computational models in a hierarchy of scales is described that is based on probabilistic equivalence and Non-Intrusive Polynomial Chaos (NIPC). This problem is relevant to the design of aerospace components because the computational models used to evaluate candidate designs are typically multiscale. This method was then extended to develop a formulation for inverse identification, where the probability distributions for the material properties of a coupon are deduced from measurements of its response. We demonstrate how probabilistic equivalence and the Maximum Entropy Principle (MEP) may be used to combine simulation data with scarce experimental data, with the intention of making this stage of product design less expensive and time-consuming. The third contribution of this thesis is to develop two novel meta-modelling strategies to promote wider exploration of the design space during the conceptual design phase. Design Space Exploration (DSE) in this phase is crucial because decisions made at the early, conceptual stages of an aircraft design can restrict the range of alternative designs available at later stages, despite limited quantitative knowledge of the interaction between requirements being available at this stage. A histogram interpolation algorithm is presented that allows the designer to interactively explore the design space with a model-free formulation, while a meta-model based on Knowledge-Based Neural Networks (KBaNNs) is proposed in which the outputs of a high-level, inexpensive computer code are informed by the outputs of a neural network, thereby addressing the criticism that neural networks are purely data-driven and operate as black boxes. The final challenge addressed by this thesis is how to iteratively improve a meta-model by expanding the dataset used to train it; given the reliance of UQ methods on meta-models, this is an important challenge. This thesis proposes an adaptive learning algorithm for Support Vector Machine (SVM) metamodels, which are used to approximate an unknown function. In particular, we apply the adaptive learning algorithm to test cases in reliability analysis.
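    The final, adaptive-SVM contribution lends itself to a compact illustration. The sketch below is a generic boundary-targeting enrichment loop, not necessarily the thesis's specific algorithm: an SVM classifier separates safe from failed samples of a hypothetical limit state, and each iteration adds the candidate point closest to the current decision boundary before retraining. The limit-state function, domain, and the failure-probability estimate under a uniform input distribution are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical limit-state function: failure when g(x) < 0 (illustrative assumption).
def limit_state(x):
    return x[:, 0] ** 2 + x[:, 1] ** 2 - 2.0

rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(-3.0, 3.0, size=(20, 2)),
               [[0.0, 0.0], [2.5, 2.5]]])          # seed one failed and one safe point
y = (limit_state(X) < 0).astype(int)               # 1 = failure, 0 = safe
candidates = rng.uniform(-3.0, 3.0, size=(5000, 2))  # pool of unlabelled candidates

svm = SVC(kernel="rbf", C=100.0)
for _ in range(30):                                # adaptive enrichment loop
    svm.fit(X, y)
    # Add the candidate nearest the current decision boundary, i.e. where
    # the metamodel is least certain whether the point is safe or failed.
    scores = np.abs(svm.decision_function(candidates))
    x_new = candidates[np.argmin(scores)][None, :]
    X = np.vstack([X, x_new])
    y = np.append(y, int(limit_state(x_new)[0] < 0))

# Crude failure-probability estimate from the refined metamodel,
# assuming a uniform input distribution over the sampled domain.
pf = (svm.predict(candidates) == 1).mean()
print(pf)
```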