Metamodel-based importance sampling for structural reliability analysis
Structural reliability methods aim at computing the probability of failure of
systems with respect to some prescribed performance functions. In modern
engineering such functions usually resort to running an expensive-to-evaluate
computational model (e.g. a finite element model). In this respect, simulation
methods, which may require a large number of model runs, cannot be used
directly. Surrogate
models such as quadratic response surfaces, polynomial chaos expansions or
kriging (which are built from a limited number of runs of the original model)
are then introduced as a substitute of the original model to cope with the
computational cost. In practice, however, it is almost impossible to quantify
the error made by this substitution. In this paper we propose to use a kriging
surrogate of the performance function as a means to build a quasi-optimal
importance sampling density. The probability of failure is eventually obtained
as the product of an augmented probability computed by substituting the
meta-model for the original performance function and a correction term which
ensures that there is no bias in the estimation even if the meta-model is not
fully accurate. The approach is applied to analytical and finite element
reliability problems and proves efficient up to 100 random variables.
Comment: 20 pages, 7 figures, 2 tables. Preprint submitted to Probabilistic
Engineering Mechanics
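The core idea of the abstract above — sample where a cheap surrogate predicts failure, but keep the estimate unbiased by evaluating the true model and reweighting — can be illustrated with a minimal numpy sketch (the limit-state functions, distributions, and numbers below are toy assumptions, not the paper's actual kriging-based density):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive limit-state function: failure when g(x) <= 0.
# With X ~ N(0, 1), the exact failure probability is 1 - Phi(4) ≈ 3.17e-5.
def g_true(x):
    return 4.0 - x

# Cheap surrogate of g, deliberately biased to mimic meta-model error.
def g_surr(x):
    return 3.9 - x

# The surrogate locates the failure region near x >= 3.9, so the
# importance-sampling density is centred there.
mu_is, sigma_is = 3.9, 1.0
n = 200_000
x = rng.normal(mu_is, sigma_is, n)

def log_phi(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

# Likelihood ratio f(x)/pi(x): nominal N(0, 1) over proposal N(3.9, 1).
w = np.exp(log_phi(x, 0.0, 1.0) - log_phi(x, mu_is, sigma_is))

# The indicator is evaluated on the TRUE model, so surrogate error only
# affects variance, not bias -- the "correction term" idea of the abstract.
p_f = np.mean((g_true(x) <= 0) * w)
print(p_f)  # close to 3.17e-5
```

The surrogate only decides *where* to sample; a poor surrogate degrades efficiency, but the likelihood-ratio weighting keeps the estimator unbiased, which is the property the abstract emphasizes.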
Metamodel-based importance sampling for the simulation of rare events
In the field of structural reliability, the Monte-Carlo estimator is
considered as the reference probability estimator. However, it is still
intractable for real engineering cases since it requires a high number of runs
of the model. In order to reduce the number of computer experiments, many other
approaches, known as reliability methods, have been proposed. One such approach
consists in replacing the original experiment with a surrogate which is much
faster to evaluate. Nevertheless, it is often difficult (or even impossible) to
quantify the error made by this substitution. In this paper an alternative
approach is developed. It takes advantage of the kriging meta-modeling and
importance sampling techniques. The proposed alternative estimator is finally
applied to a finite element based structural reliability analysis.
Comment: 8 pages, 3 figures, 1 table. Preprint submitted to ICASP11,
Mini-symposia entitled "Meta-models/surrogate models for uncertainty
propagation, sensitivity and reliability analysis"
Meta-models for structural reliability and uncertainty quantification
A meta-model (or a surrogate model) is the modern name for what was
traditionally called a response surface. It is intended to mimic the behaviour
of a computational model M (e.g. a finite element model in mechanics) while
being inexpensive to evaluate, in contrast to the original model which may take
hours or even days of computer processing time. In this paper various types of
meta-models that have been used in the last decade in the context of structural
reliability are reviewed. More specifically, classical polynomial response
surfaces, polynomial chaos expansions and kriging are addressed. It is shown
how the need for error estimates and adaptivity in their construction has
brought this type of approach to a high level of efficiency. A new technique
that solves the problem of the potential biasedness in the estimation of a
probability of failure through the use of meta-models is finally presented.
Comment: Keynote lecture, Fifth Asian-Pacific Symposium on Structural
Reliability and its Applications (5th APSSRA), May 2012, Singapore
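As a toy illustration of the point about built-in error estimates, here is a numpy-only kriging (Gaussian-process) sketch with an assumed 1-D model, a squared-exponential kernel, and unit process variance — illustrative assumptions, not code from the lecture:

```python
import numpy as np

def model(x):                # stand-in for an expensive model M
    return np.sin(3 * x)

# Kriging is built from a small design of experiments, i.e. few runs of M.
X = np.linspace(0, 2, 6)
y = model(X)

def kern(a, b, ell=0.4):     # squared-exponential correlation
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

K = kern(X, X) + 1e-10 * np.eye(len(X))   # small nugget for conditioning
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Prediction comes with a built-in error estimate: the kriging variance.
xs = np.linspace(0, 2, 101)
Ks = kern(xs, X)
mu = Ks @ alpha                            # kriging mean
v = np.linalg.solve(L, Ks.T)
var = 1.0 - np.sum(v ** 2, axis=0)         # kriging variance

# var is ~0 at the design points and grows between them; adaptive schemes
# add the next model run where this error estimate is largest.
x_next = xs[np.argmax(var)]
```

This local error estimate is what polynomial response surfaces lack and what makes kriging a natural fit for the adaptive constructions the paper reviews.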
Reliability approach in spacecraft structures
This paper presents an application of the probabilistic approach with reliability assessment on a spacecraft structure. The adopted strategy uses meta-modeling with first- and second-order polynomial functions. This method aims at minimizing computational time while giving relevant results. The first part focuses on the computational tools employed in the strategy development. The second part presents a spacecraft application. The purpose is to highlight the benefits of the probabilistic approach compared with the current deterministic one. From examples of reliability assessment, we show some advantages which could be found in industrial applications.
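A second-order polynomial meta-model of the kind used in this strategy can be sketched with ordinary least squares (the response function, design size, and evaluation point below are illustrative stand-ins, not the spacecraft model):

```python
import numpy as np

rng = np.random.default_rng(3)

def response(x1, x2):   # stand-in for the expensive structural response
    return 1.0 + 0.5 * x1 - 0.3 * x2 + 0.2 * x1 * x2 + 0.1 * x1 ** 2

# Design of experiments over the (normalized) input space.
X = rng.uniform(-1, 1, size=(30, 2))
y = response(X[:, 0], X[:, 1])

# Second-order polynomial basis: [1, x1, x2, x1^2, x2^2, x1*x2].
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 ** 2, x2 ** 2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# The fitted surface replaces the expensive model in reliability loops.
x_new = np.array([[0.2, -0.4]])
y_hat = basis(x_new) @ coef     # ≈ response(0.2, -0.4) = 1.208
```

Once the coefficients are fitted from a handful of model runs, every subsequent evaluation in the reliability analysis is a cheap matrix product, which is the computational saving the abstract targets.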
Explicit Mapping of Acoustic Regimes For Wind Instruments
This paper proposes a methodology to map the various acoustic regimes of wind
instruments. The maps can be generated in a multi-dimensional space consisting
of design, control parameters, and initial conditions. The boundaries of the
maps are obtained explicitly in terms of the parameters using a support vector
machine (SVM) classifier as well as a dedicated adaptive sampling scheme. The
approach is demonstrated on a simplified clarinet model for which several maps
are generated based on different criteria. Examples of computation of the
probability of occurrence of a specific acoustic regime are also provided. In
addition, the approach is demonstrated on a design optimization example for
optimal intonation.
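A hedged sketch of the SVM mapping idea, using scikit-learn's SVC on a made-up two-parameter space where one regime occupies a disc (the clarinet model, labels, and numbers are all stand-ins):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy design/control space; "regime 1" occupies a disc, standing in for
# the region of parameter space where a given acoustic regime occurs.
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# The classifier gives an explicit, cheap-to-evaluate regime boundary.
clf = SVC(kernel="rbf", C=10.0).fit(X, y)

# Map: evaluate the classifier on a grid of the two parameters.
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
regime = clf.predict(grid).reshape(gx.shape)

# Probability of occurrence of regime 1 under uniform parameters is
# approximated by the fraction of the map it covers (exact area: ~0.39).
p_regime1 = regime.mean()
```

The paper's dedicated adaptive sampling scheme, which concentrates training points near the regime boundary, is omitted here; a plain random design is used instead.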
mfEGRA: Multifidelity Efficient Global Reliability Analysis through Active Learning for Failure Boundary Location
This paper develops mfEGRA, a multifidelity active learning method using
data-driven adaptively refined surrogates for failure boundary location in
reliability analysis. This work addresses the issue of prohibitive cost of
reliability analysis using Monte Carlo sampling for expensive-to-evaluate
high-fidelity models by using cheaper-to-evaluate approximations of the
high-fidelity model. The method builds on the Efficient Global Reliability
Analysis (EGRA) method, which is a surrogate-based method that uses adaptive
sampling for refining Gaussian process surrogates for failure boundary location
using a single-fidelity model. Our method introduces a two-stage adaptive
sampling criterion that uses a multifidelity Gaussian process surrogate to
leverage multiple information sources with different fidelities. The method
combines expected feasibility criterion from EGRA with one-step lookahead
information gain to refine the surrogate around the failure boundary. The
computational savings from mfEGRA depend on the discrepancy between the
different models, and the relative cost of evaluating the different models as
compared to the high-fidelity model. We show that accurate estimation of
reliability using mfEGRA leads to computational savings of 46% for an
analytic multimodal test problem and 24% for a three-dimensional acoustic horn
problem, when compared to single-fidelity EGRA. We also show the effect of
using a priori drawn Monte Carlo samples in the implementation for the acoustic
horn problem, where mfEGRA leads to computational savings of 45% for the
three-dimensional case and 48% for a rarer event four-dimensional case as
compared to single-fidelity EGRA.
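The expected feasibility criterion that mfEGRA inherits from EGRA can be written down explicitly (the Bichon et al. formula for a kriging prediction with mean mu and standard deviation sigma at threshold z, with the usual eps = 2*sigma). The sketch below is single-fidelity only; mfEGRA's two-stage criterion, which additionally weighs one-step lookahead information gain and evaluation cost across fidelities, is omitted:

```python
from math import erf, exp, pi, sqrt

def Phi(t):   # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def phi(t):   # standard normal PDF
    return exp(-0.5 * t * t) / sqrt(2.0 * pi)

def eff(mu, sigma, z=0.0):
    """Expected feasibility function (Bichon et al.): how close a kriging
    prediction N(mu, sigma^2) is expected to lie to the failure threshold
    z. EGRA runs the model next where this criterion is maximal."""
    eps = 2.0 * sigma
    t = (z - mu) / sigma
    tm = (z - eps - mu) / sigma
    tp = (z + eps - mu) / sigma
    return ((mu - z) * (2.0 * Phi(t) - Phi(tm) - Phi(tp))
            - sigma * (2.0 * phi(t) - phi(tm) - phi(tp))
            + eps * (Phi(tp) - Phi(tm)))

# The criterion peaks when the mean sits on the threshold with large
# uncertainty: those are the most informative points near the boundary.
print(eff(0.0, 1.0), eff(3.0, 1.0))
```

A point whose predicted mean sits on the failure boundary with high variance scores highest, which is how the adaptive sampling concentrates model runs where the failure boundary is least resolved.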