Variance-based reliability sensitivity with dependent inputs using failure samples
Reliability sensitivity analysis is concerned with measuring the influence of
a system's uncertain input parameters on its probability of failure.
Statistically dependent inputs present a challenge in both computing and
interpreting these sensitivity indices; such dependencies require distinguishing
between variable interactions produced by the probabilistic model describing
the system inputs and the computational model describing the system itself. To
accomplish such a separation of effects in the context of reliability
sensitivity analysis, we extend an idea originally proposed by Mara and
Tarantola (2012) for model outputs unrelated to rare events. We compute the
independent (influence via computational model) and full (influence via both
computational and probabilistic model) contributions of all inputs to the
variance of the indicator function of the rare event. The full set of variance-based sensitivity indices of the rare event indicator is computed from a single set of failure samples by applying different hierarchically structured isoprobabilistic transformations that map the samples from the original space of dependent inputs to standard-normal space. The failure samples are obtained as the byproduct of a single run of a sample-based rare event estimation method; that is, no additional evaluations of the computational model are required. We demonstrate the approach on a test function and two engineering problems.
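As a generic illustration of variance-based sensitivity of a rare-event indicator (not the failure-sample reuse scheme of the abstract above), the following sketch estimates first-order Sobol' indices of the indicator function with a standard pick-freeze Monte Carlo estimator. The linear limit state, independent standard-normal inputs, and sample sizes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # hypothetical linear limit state; the failure event is g(x) < 0
    return 2.0 - x.sum(axis=1)

def indicator(x):
    # indicator function of the failure event
    return (limit_state(x) < 0.0).astype(float)

def sobol_first_order(i, d=2, n=200_000):
    """Pick-freeze estimate of the first-order Sobol' index of input i
    for the failure indicator, assuming independent standard-normal inputs."""
    a = rng.standard_normal((n, d))
    b = rng.standard_normal((n, d))
    b[:, i] = a[:, i]                      # freeze coordinate i across both sets
    ya, yb = indicator(a), indicator(b)
    pf = 0.5 * (ya.mean() + yb.mean())     # failure probability estimate
    var = pf * (1.0 - pf)                  # variance of the Bernoulli indicator
    return (np.mean(ya * yb) - pf ** 2) / var

S = [sobol_first_order(i) for i in range(2)]
```

Because the output is a Bernoulli variable, its total variance is p_f(1 - p_f); for rarer events this brute-force estimator becomes increasingly noisy, which is one motivation for reusing failure samples instead.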
Global sensitivity analysis in high dimensions with partial least squares-driven PCEs
We develop an efficient method for the computation of variance-based sensitivity indices using a recently introduced latent-variable-based polynomial chaos expansion, which is particularly suitable for high-dimensional problems. By transforming the surrogate from its latent-variable basis back to the basis of the original input variables, we derive analytical expressions for these sensitivities that depend only on the model coefficients. Thus, once the surrogate model is built, the variance-based sensitivities can be computed at negligible computational cost, as no additional sampling is required. The accuracy of the method is demonstrated with a numerical experiment on an elastic truss. This project was supported by the German Research Foundation (DFG) through Grant STR 1140/6-1 under SPP 1886.
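The abstract's key point, that variance-based sensitivities follow analytically from the expansion coefficients, rests on the orthonormality of the polynomial chaos basis: the output variance is the sum of squared coefficients, and partial sums over selected multi-indices give the Sobol' indices. A minimal sketch with hypothetical coefficients and multi-indices (the PLS-driven latent-variable construction itself is not reproduced here):

```python
import numpy as np

# Hypothetical sparse PCE in 3 inputs: each row of `alphas` is a
# polynomial multi-index; `coeffs` are the orthonormal-basis coefficients.
alphas = np.array([
    [0, 0, 0],   # constant term (does not contribute to variance)
    [1, 0, 0],
    [0, 2, 0],
    [1, 1, 0],   # interaction term between inputs 1 and 2
    [0, 0, 1],
])
coeffs = np.array([1.0, 0.8, 0.5, 0.3, 0.2])

nonconst = alphas.any(axis=1)
# orthonormality: the model variance is the sum of squared coefficients
total_var = np.sum(coeffs[nonconst] ** 2)

def first_order(i):
    # terms involving input i and no other input
    mask = (alphas[:, i] > 0) & (np.delete(alphas, i, axis=1).sum(axis=1) == 0)
    return np.sum(coeffs[mask] ** 2) / total_var

def total_index(i):
    # all terms involving input i, including interactions
    mask = alphas[:, i] > 0
    return np.sum(coeffs[mask] ** 2) / total_var

S = [first_order(i) for i in range(3)]
ST = [total_index(i) for i in range(3)]
```

No sampling is involved: once the coefficients are available, the indices are exact sums of squares, which is what makes the post-processing cost negligible.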
Sequential active learning of low-dimensional model representations for reliability analysis
To date, the analysis of high-dimensional, computationally expensive
engineering models remains a difficult challenge in risk and reliability
engineering. We use a combination of dimensionality reduction and surrogate
modelling termed partial least squares-driven polynomial chaos expansion
(PLS-PCE) to render such problems feasible. Standalone surrogate models
typically perform poorly in reliability analysis. Therefore, in previous
work, we used PLS-PCEs to reconstruct the intermediate densities of a
sequential importance sampling approach to reliability analysis. Here, we
extend this approach with an active learning procedure that allows for improved
error control at each importance sampling level. To this end, we formulate an
estimate of the combined estimation error of both the subspace identified in
the dimension reduction step and the surrogate model constructed therein. With
this estimate, the design of experiments can be adapted so as to optimally
learn both the subspace representation and the surrogate model.
The approach is gradient-free and thus directly applicable to black-box
models. We demonstrate the performance of this approach with a series
of low- (2 dimensions) to high- (869 dimensions) dimensional example problems
featuring several well-known challenges for reliability methods besides high
dimensionality and expensive computational models: strongly nonlinear
limit-state functions, multiple relevant failure regions, and small
probabilities of failure.
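A minimal sketch of the dimension-reduction idea behind the method above: the first partial least squares direction aligns with the input-output covariance, and a low-order polynomial in the resulting latent variable serves as the surrogate. The test model, sample sizes, and single-component PLS below are illustrative assumptions; the paper's PLS-PCE construction and its active learning loop are considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(42)

d, n = 20, 5000
v = rng.standard_normal(d)
v /= np.linalg.norm(v)                 # hidden "important" direction

X = rng.standard_normal((n, d))
t_true = X @ v
y = t_true + 0.2 * t_true ** 2         # hypothetical model varying only along v

# First PLS direction: normalized covariance between inputs and centered output
yc = y - y.mean()
w = X.T @ yc
w /= np.linalg.norm(w)

# Fit a quadratic surrogate in the latent variable t = X w
t = X @ w
coef = np.polyfit(t, y, 2)

# Validate on fresh samples
Xv = rng.standard_normal((1000, d))
yv = (Xv @ v) + 0.2 * (Xv @ v) ** 2
pred = np.polyval(coef, Xv @ w)
r2 = 1.0 - np.mean((yv - pred) ** 2) / np.var(yv)
align = abs(w @ v)                     # alignment with the true direction
```

Since the response is genuinely one-dimensional here, a single latent direction plus a low-order polynomial captures it; this is the kind of structure the PLS-PCE approach exploits to make high-dimensional reliability problems tractable.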
Certified Dimension Reduction for Bayesian Updating with the Cross-Entropy Method
In inverse problems, the parameters of a model are estimated based on
observations of the model response. The Bayesian approach is powerful for
solving such problems; one formulates a prior distribution for the parameter
state that is updated with the observations to compute the posterior parameter
distribution. Solving for the posterior distribution can be challenging when,
e.g., prior and posterior significantly differ from one another and/or the
parameter space is high-dimensional. We use a sequence of importance sampling
measures that arise from tempering the likelihood to tackle inverse problems
exhibiting a significant distance between prior and posterior. Each importance
sampling measure is identified by cross-entropy minimization as proposed in the
context of Bayesian inverse problems in Engel et al. (2021). To efficiently
address problems with high-dimensional parameter spaces we set up the
minimization procedure in a low-dimensional subspace of the original parameter
space. The principal idea is to analyse the spectrum of the second-moment
matrix of the gradient of the log-likelihood function to identify a suitable
subspace. Following Zahm et al. (2021), an upper bound on the
Kullback-Leibler-divergence between full-dimensional and subspace posterior is
provided, which can be utilized to determine the effective dimension of the
inverse problem corresponding to a prescribed approximation error bound. We
suggest heuristic criteria for optimally selecting the number of model and
model gradient evaluations in each iteration of the importance sampling
sequence. We investigate the performance of this approach using examples from
engineering mechanics set in various parameter space dimensions. Comment: 31 pages, 12 figures.
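The spectral subspace-selection idea can be sketched as follows: estimate the second-moment matrix of the log-likelihood gradient by Monte Carlo, eigendecompose it, and retain just enough directions to push the discarded eigenvalue mass below a tolerance. The toy quadratic log-likelihood and the tolerance below are assumptions for illustration; the certified Kullback-Leibler bound of Zahm et al. (2021) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 50
# Orthonormal basis of a hypothetical 2D "informed" subspace (assumption).
A = np.linalg.qr(rng.standard_normal((d, 2)))[0]

def grad_log_likelihood(x):
    # gradient of a toy quadratic log-likelihood -0.5*(3*z1^2 + z2^2)
    # that varies only along the columns of A
    z = A.T @ x
    return -A @ (np.array([3.0, 1.0]) * z)

# Monte Carlo estimate of H = E[grad grad^T] under a standard-normal prior
n = 2000
G = np.stack([grad_log_likelihood(x) for x in rng.standard_normal((n, d))])
H = G.T @ G / n

eigval = np.linalg.eigvalsh(H)[::-1]          # eigenvalues, descending

# Effective dimension: smallest r whose discarded spectral mass falls
# below a tolerance (a stand-in for the prescribed KL error bound).
tol = 1e-6
tail = eigval.sum() - np.cumsum(eigval)       # tail[k] = mass beyond first k+1
r = int(np.argmax(tail < tol)) + 1
```

Because every gradient sample lies in the span of A, the spectrum collapses after two eigenvalues and the procedure recovers the effective dimension r = 2; in a real inverse problem the spectrum decays gradually and the tolerance trades subspace size against posterior approximation error.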
Electronic defects in Cu(In,Ga)Se2: Towards a comprehensive model
The electronic defects in any semiconductor play a decisive role in the usability of this material in an optoelectronic device. Electronic defects determine the doping level as well as the recombination centers of a solar cell absorber. Cu(In,Ga)Se2 is used in thin-film solar cells with high and stable efficiencies. The electronic defects in this class of materials have been studied experimentally by photoluminescence, admittance, and photocurrent spectroscopies for many decades now. The literature results are summarized and compared to new results from photoluminescence of deep defects. These observations are related to other experimental methods that investigate the physicochemical structure of defects. To finally assign the electronic defect signatures to actual physicochemical defects, a comparison with theoretical predictions is necessary. In recent years, the accuracy of these calculations has greatly improved through the use of hybrid functionals. A comprehensive model of the electronic defects in Cu(In,Ga)Se2 is proposed based on experiments and theory. The consequences for solar cell efficiency are discussed.