Polynomial-Chaos-based Kriging
Computer simulation has become the standard tool in many engineering fields
for designing and optimizing systems, as well as for assessing their
reliability. To cope with demanding analyses such as optimization and
reliability assessment, surrogate models (a.k.a. meta-models) have been increasingly
investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging
are two popular non-intrusive meta-modelling techniques. PCE surrogates the
computational model with a series of orthonormal polynomials in the input
variables, where the polynomials are chosen consistently with the probability
distributions of those input variables. On the other hand, Kriging assumes that
the computer model behaves as a realization of a Gaussian random process whose
parameters are estimated from the available computer runs, i.e. input vectors
and response values. These two techniques have been developed more or less in
parallel so far with little interaction between the researchers in the two
fields. In this paper, PC-Kriging is derived as a new non-intrusive
meta-modeling approach combining PCE and Kriging. A sparse set of orthonormal
polynomials (PCE) approximates the global behavior of the computational model
whereas Kriging manages the local variability of the model output. An adaptive
algorithm similar to the least angle regression algorithm determines the
optimal sparse set of polynomials. PC-Kriging is validated on various benchmark
analytical functions which are easy to sample for reference results. From the
numerical investigations it is concluded that PC-Kriging performs better than,
or at least as well as, the two distinct meta-modeling techniques. A larger gain
in accuracy is obtained when the experimental design has a limited size, which
is an asset when dealing with demanding computational models.
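The combination described above — a polynomial trend capturing global behavior plus a Gaussian-process model of the residual — can be sketched in a few lines. This is a deliberately simplified stand-in, not the authors' implementation: one-dimensional input, a fixed Hermite degree and kernel parameter, and the trend and Gaussian process fitted sequentially rather than jointly as in true PC-Kriging; all names (`fit_pc_kriging`, `theta`, the nugget value) are illustrative choices.

```python
import numpy as np

def hermite_basis(x, degree):
    """Probabilists' Hermite polynomials He_0..He_degree at points x (1-D).

    This orthonormal-up-to-scaling family matches a standard normal input,
    in line with PCE's coherency between polynomials and input distribution.
    """
    H = [np.ones_like(x), x]
    for n in range(1, degree):
        # Recurrence: He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)
        H.append(x * H[n] - n * H[n - 1])
    return np.column_stack(H[: degree + 1])

def fit_pc_kriging(X, y, degree=3, theta=1.0, nugget=1e-8):
    # Global part: least-squares fit of low-degree Hermite polynomials (the "PCE" trend).
    Phi = hermite_basis(X, degree)
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    resid = y - Phi @ beta
    # Local part: Kriging with a Gaussian kernel on the trend residuals.
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-theta * d2) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, resid)
    return beta, alpha

def predict(x_new, X, beta, alpha, degree=3, theta=1.0):
    # Prediction = polynomial trend + kernel-weighted residual correction.
    trend = hermite_basis(x_new, degree) @ beta
    k = np.exp(-theta * (x_new[:, None] - X[None, :]) ** 2)
    return trend + k @ alpha
```

With a small nugget, the predictor interpolates the available computer runs while the Hermite trend extrapolates the global shape, which is the division of labor the abstract describes.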
Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)
Debris flows are among the most hazardous phenomena in mountain areas. To cope
with debris flow hazard, it is common to delineate the risk-prone areas through
routing models. The most important input to debris flow routing models is the
topographic data, usually in the form of Digital Elevation Models (DEMs). The quality
of DEMs depends on the accuracy, density, and spatial distribution of the sampled
points; on the characteristics of the surface; and on the applied gridding methodology.
Therefore, the choice of the interpolation method affects the realistic representation
of the channel and fan morphology, and thus potentially the debris flow routing
modeling outcomes. In this paper, we initially investigate the performance of common
interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor,
Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging)
in building DEMs with the complex topography of a debris flow channel located
in the Venetian Dolomites (North-eastern Italian Alps), by using small footprint full-
waveform Light Detection And Ranging (LiDAR) data. The investigation is carried
out through a combined statistical analysis of vertical accuracy, algorithm
robustness, and spatial clustering of vertical errors, together with a multi-criteria shape reliability
assessment. After that, we examine the influence of the tested interpolation algorithms
on the performance of a Geographic Information System (GIS)-based cell model for
simulating stony debris flow routing. In detail, we investigate both the correlation
between the DEM height uncertainty resulting from the gridding procedure and
that of the corresponding simulated erosion/deposition depths, and the effect of the
interpolation algorithms on the simulated areas, erosion and deposition volumes, solid-liquid
discharges, and channel morphology after the event. The comparison among the tested
interpolation methods highlights that the ANUDEM and ordinary kriging algorithms
are not suitable for building DEMs with complex topography. Conversely, the linear
triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy
and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on
debris flow routing modeling reveals that the choice of the interpolation algorithm does
not significantly affect the model outcomes.
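As a toy illustration of how such a gridding comparison can be set up, the sketch below interpolates synthetic scattered elevation points with three of SciPy's standard methods and scores each by hold-out vertical RMSE. The surface, point counts, and hold-out split are invented for illustration; the paper's actual comparison uses LiDAR data and a wider set of algorithms (ANUDEM, ordinary kriging, radial basis functions) that are not available in `scipy.interpolate.griddata`.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic "ground points" over a smooth channel-like surface (stand-in for LiDAR data).
pts = rng.uniform(0.0, 100.0, size=(400, 2))
z = 5.0 * np.sin(pts[:, 0] / 15.0) + 0.05 * pts[:, 1]

# Hold out a subset of points to measure the vertical accuracy of each gridding method.
train, test = pts[:320], pts[320:]
z_train, z_test = z[:320], z[320:]

def vertical_rmse(method):
    z_hat = griddata(train, z_train, test, method=method)
    ok = ~np.isnan(z_hat)  # points outside the convex hull are returned as NaN
    return float(np.sqrt(np.mean((z_hat[ok] - z_test[ok]) ** 2)))

for method in ("nearest", "linear", "cubic"):
    print(method, round(vertical_rmse(method), 3))
```

On a smooth surface like this one, piecewise-linear triangulation typically beats nearest neighbor by a wide margin, which mirrors the kind of ranking the statistical analysis in the paper formalizes.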
Quantile-based optimization under uncertainties using adaptive Kriging surrogate models
Uncertainties are inherent to real-world systems. Taking them into account is
crucial in industrial design problems and this might be achieved through
reliability-based design optimization (RBDO) techniques. In this paper, we
propose a quantile-based approach to solve RBDO problems. We first transform
the safety constraints usually formulated as admissible probabilities of
failure into constraints on quantiles of the performance criteria. In this
formulation, the quantile level controls the degree of conservatism of the
design. Starting with the premise that industrial applications often involve
high-fidelity and time-consuming computational models, the proposed approach
makes use of Kriging surrogate models (a.k.a. Gaussian process modeling).
Thanks to the Kriging variance (a measure of the local accuracy of the
surrogate), we derive a procedure with two stages of enrichment of the design
of computer experiments (DoE) used to construct the surrogate model. The first
stage globally reduces the Kriging epistemic uncertainty and adds points in the
vicinity of the limit-state surfaces describing the system performance to be
attained. The second stage locally checks, and if necessary, improves the
accuracy of the quantiles estimated along the optimization iterations.
Applications to three analytical examples and to the optimal design of a car
body subsystem (minimal mass under mechanical safety constraints) show the
accuracy and the remarkable efficiency brought by the proposed procedure.
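The reformulation of a failure-probability constraint as a quantile constraint is easy to illustrate. The sketch below uses an invented analytical stand-in for the (surrogate of the) performance function — in the actual method a Kriging predictor would take its place — and estimates the quantile by plain Monte Carlo; the function, distribution, and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def performance(d, x):
    # Hypothetical performance criterion: the system fails when this exceeds 0.
    # d is a design variable, x a random environmental variable.
    return x - d

def quantile_constraint(d, level=0.95, n_mc=100_000):
    """Estimate the `level`-quantile of the performance under input uncertainty.

    Replacing P[g > 0] <= p_f by Q_level(g) <= 0 with level = 1 - p_f gives an
    equivalent constraint; raising `level` makes the design more conservative.
    """
    x = rng.normal(0.0, 1.0, n_mc)  # aleatory uncertainty on x
    return float(np.quantile(performance(d, x), level))
```

For this stand-in, the design `d = 2.0` satisfies the 95%-quantile constraint (since the 0.95 quantile of a standard normal is about 1.645 < 2), while `d = 1.0` violates it — the quantile's sign directly encodes feasibility.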
Reliability-based design optimization of shells with uncertain geometry using adaptive Kriging metamodels
Optimal design under uncertainty has gained much attention in the past ten
years due to the ever-increasing need for manufacturers to build robust systems
at the lowest cost. Reliability-based design optimization (RBDO) allows the
analyst to minimize some cost function while ensuring minimal performance levels,
cast as admissible failure probabilities for a set of performance functions. In
order to address real-world engineering problems in which the performance is
assessed through computational models (e.g., finite element models in
structural mechanics) metamodeling techniques have been developed in the past
decade. This paper introduces adaptive Kriging surrogate models to solve the
RBDO problem. The latter is cast in an augmented space that "sums up" the range
of the design space and the aleatory uncertainty in the design parameters and
the environmental conditions. The surrogate model is used (i) for evaluating
robust estimates of the failure probabilities (and for enhancing the
computational experimental design by adaptive sampling) in order to achieve the
requested accuracy and (ii) for applying a gradient-based optimization
algorithm to get optimal values of the design parameters. The approach is
applied to the optimal design of ring-stiffened cylindrical shells used in
submarine engineering under uncertain geometric imperfections. For this
application the performance of the structure is related to buckling which is
addressed here by means of a finite element solution based on the asymptotic
numerical method.
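Use (i) above — evaluating failure probabilities on the surrogate rather than on the finite element model — reduces, in its simplest form, to Monte Carlo sampling of the surrogate. A minimal sketch with an invented analytical stand-in for the Kriging predictor (the threshold, input distribution, and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def g_surrogate(x):
    # Stand-in for the Kriging predictor of the performance function;
    # in the paper this would approximate a finite element buckling analysis.
    return 3.0 - np.abs(x)

def failure_probability(n_mc=200_000):
    # Sampling the cheap surrogate makes a large Monte Carlo sample affordable,
    # which would be out of reach with the finite element model itself.
    x = rng.normal(0.0, 1.0, n_mc)        # aleatory uncertainty in the input
    return float(np.mean(g_surrogate(x) <= 0.0))  # failure event: g <= 0
```

For this stand-in the exact value is P(|X| >= 3) ≈ 2.7e-3 for X ~ N(0,1), so the Monte Carlo estimate can be checked directly; the adaptive-sampling step in the paper then refines the surrogate precisely where such estimates are least trustworthy.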
Gaussian process surrogates for failure detection: a Bayesian experimental design approach
An important task of uncertainty quantification is to identify the
probability of undesired events, in particular system failures, caused by
various sources of uncertainties. In this work we consider the construction of
Gaussian process surrogates for failure detection and failure probability
estimation. In particular, we consider the situation that the underlying
computer models are extremely expensive, and in this setting, determining the
sampling points in the state space is of essential importance. We formulate the
problem as an optimal experimental design for Bayesian inferences of the limit
state (i.e., the failure boundary) and propose an efficient numerical scheme to
solve the resulting optimization problem. In particular, the proposed
limit-state inference method is capable of determining multiple sampling points
at a time, and thus it is well suited for problems where multiple computer
simulations can be performed in parallel. The accuracy and performance of the
proposed method are demonstrated on both academic and practical examples.
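A minimal sketch of the sequential idea (not the authors' batch criterion): fit a Gaussian process to the available runs and add the candidate whose sign with respect to the limit state g = 0 is most uncertain, scored here with the classical U-criterion |μ|/σ. The kernel, its parameters, and the function names are illustrative assumptions; the paper instead derives the design from a Bayesian optimal-experimental-design formulation and can return several points at once.

```python
import numpy as np

def gp_posterior(X, y, Xs, theta=2.0, nugget=1e-10):
    """Zero-mean GP posterior mean/std with a Gaussian kernel (minimal sketch)."""
    k = lambda a, b: np.exp(-theta * (a[:, None] - b[None, :]) ** 2)
    K = k(X, X) + nugget * np.eye(len(X))
    Ks = k(Xs, X)
    alpha = np.linalg.solve(K, y)
    mu = Ks @ alpha
    # Diagonal of the posterior covariance: k(x,x) - k_s K^{-1} k_s^T.
    var = 1.0 - np.einsum("ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)
    return mu, np.sqrt(np.maximum(var, 0.0))

def next_point(X, y, candidates):
    """Pick the candidate most likely misclassified w.r.t. the limit state g = 0.

    Small U = |mu|/sigma means the predicted sign of g is uncertain there,
    so a run at that point is most informative about the failure boundary.
    """
    mu, sd = gp_posterior(X, y, candidates)
    U = np.abs(mu) / np.maximum(sd, 1e-12)
    return candidates[np.argmin(U)]
```

Run on a toy limit state g(x) = x - 0.5 with three existing evaluations, the selected point falls near the predicted zero crossing of the GP mean, i.e. near the failure boundary, rather than in regions where the sign of g is already settled.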
Sequential design of computer experiments for the estimation of a probability of failure
This paper deals with the problem of estimating the volume of the excursion
set of a function above a given threshold, under a probability measure on the
input space that is assumed to be known. In
the industrial world, this corresponds to the problem of estimating a
probability of failure of a system. When only an expensive-to-simulate model of
the system is available, the budget for simulations is usually severely limited
and therefore classical Monte Carlo methods ought to be avoided. One of the
main contributions of this article is to derive SUR (stepwise uncertainty
reduction) strategies from a Bayesian-theoretic formulation of the problem of
estimating a probability of failure. These sequential strategies use a Gaussian
process model of the function and aim at performing its evaluations as efficiently as
possible to infer the value of the probability of failure. We compare these
strategies to other strategies that are also based on a Gaussian process model for
estimating a probability of failure.