Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 2: Application to TRACE
Inverse Uncertainty Quantification (UQ) is the process of quantifying the
uncertainties in random input parameters while achieving consistency between
code simulations and physical observations. In this paper, we performed inverse
UQ using an improved modular Bayesian approach based on Gaussian Process (GP)
for TRACE physical model parameters using the BWR Full-size Fine-Mesh Bundle
Tests (BFBT) benchmark steady-state void fraction data. The model discrepancy
is described with a GP emulator. Numerical tests have demonstrated that such
treatment of model discrepancy can avoid over-fitting. Furthermore, we
constructed a fast-running and accurate GP emulator to replace the full TRACE model
during Markov Chain Monte Carlo (MCMC) sampling. The computational cost was
demonstrated to be reduced by several orders of magnitude.
A sequential approach was also developed for efficient test source allocation
(TSA) for inverse UQ and validation. This sequential TSA methodology first
selects experimental tests for validation that fully cover the test domain,
avoiding extrapolation of the model discrepancy term when it is evaluated at
the input settings of the tests used for inverse UQ. Then it selects tests
that tend to reside in
the unfilled zones of the test domain for inverse UQ, so that one can extract
the most information for posterior probability distributions of calibration
parameters using only a relatively small number of tests. This research
addresses the "lack of input uncertainty information" issue for TRACE physical
input parameters, which was usually ignored or described using expert opinion
or user self-assessment in previous work. The resulting posterior probability
distributions of TRACE parameters can be used in future uncertainty,
sensitivity and validation studies of the TRACE code for nuclear reactor system
design and safety analysis.
Conformally Mapped Polynomial Chaos Expansions for Maxwell's Source Problem with Random Input Data
Generalized Polynomial Chaos (gPC) expansions are well established for
forward uncertainty propagation in many application areas. Although the
associated computational effort may be reduced in comparison to Monte Carlo
techniques, for instance, further convergence acceleration may be important to
tackle problems with high parametric sensitivities. In this work, we propose
the use of conformal maps to construct a transformed gPC basis, in order to
enhance the convergence order. The proposed basis still features orthogonality
properties and hence facilitates the computation of many statistical
properties such as sensitivities and moments. The corresponding surrogate
models are computed by pseudo-spectral projection using mapped quadrature
rules, which leads to an improved cost-accuracy ratio. We apply the methodology
to Maxwell's source problem with random input data. In particular, numerical
results for a parametric finite element model of an optical grating coupler are
given.
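As an unmapped baseline for the pseudo-spectral projection described above, here is a minimal 1-D Legendre gPC sketch; the model `f`, the truncation order, and the uniform input density are illustrative assumptions, and the paper's conformal map would additionally transform the quadrature nodes:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Pseudo-spectral projection for a 1-D Legendre gPC expansion.
# Toy model; a conformally mapped basis would transform the nodes first.
f = lambda xi: np.exp(xi)             # parametric model output (illustrative)
order = 8
nodes, weights = leggauss(order + 1)  # Gauss-Legendre rule on [-1, 1]

# c_k = <f, P_k> / <P_k, P_k>, evaluated by quadrature
coeffs = np.empty(order + 1)
for k in range(order + 1):
    Pk = legval(nodes, [0] * k + [1])  # Legendre polynomial P_k at the nodes
    coeffs[k] = (weights * f(nodes) * Pk).sum() / (weights * Pk**2).sum()

# Orthogonality gives the moments directly (uniform input on [-1, 1]):
mean = coeffs[0]
var = sum(coeffs[k]**2 / (2 * k + 1) for k in range(1, order + 1))
```

This is the sense in which an orthogonal basis "facilitates the computation of many statistical properties": mean and variance drop out of the coefficients with no further sampling.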
Multifidelity Uncertainty Quantification of a Commercial Supersonic Transport
The objective of this work was to develop a multifidelity uncertainty quantification approach for efficient analysis of a commercial supersonic transport. An approach based on non-intrusive polynomial chaos was formulated in which a low-fidelity model could be corrected by any number of high-fidelity models. The formulation and methodology also allow for the addition of uncertainty sources not present in the lower-fidelity models. To demonstrate the applicability of the multifidelity polynomial chaos approach, two model problems were explored. The first was a supersonic airfoil with three levels of modeling fidelity, each capturing an additional level of physics. The second was a commercial supersonic transport; this model had three levels of fidelity that included two different modeling approaches and the addition of physics between the fidelity levels. Both problems illustrate the applicability and significant computational savings of the multifidelity polynomial chaos method.
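One common way to realize the low-fidelity-plus-correction idea is an additive discrepancy expansion: expand the cheap model at high order, expand the smoother discrepancy at low order from a few expensive runs, and add the coefficients. The sketch below assumes that specific variant with toy models, not the paper's aircraft analyses:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Additive-correction multifidelity sketch. Both models are toy stand-ins;
# "high" adds a physics term missing from "low".
low = lambda xi: np.sin(np.pi * xi)                # cheap model
high = lambda xi: np.sin(np.pi * xi) + 0.1 * xi    # extra physics term

def pc_coeffs(f, order):
    # 1-D Legendre pseudo-spectral projection with order+1 Gauss points
    x, w = leggauss(order + 1)
    return np.array([
        (w * f(x) * legval(x, [0] * k + [1])).sum()
        / (w * legval(x, [0] * k + [1]) ** 2).sum()
        for k in range(order + 1)
    ])

c_low = pc_coeffs(low, 10)                         # many cheap runs
c_corr = pc_coeffs(lambda x: high(x) - low(x), 2)  # only 3 expensive runs
c_mf = c_low.copy()
c_mf[: c_corr.size] += c_corr                      # corrected expansion

surrogate = lambda xi: legval(xi, c_mf)
```

The savings come from the discrepancy being cheaper to resolve than the high-fidelity response itself: here three expensive evaluations suffice to capture the added physics.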
Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario
A variety of methods is available to quantify uncertainties arising within
the modeling of flow and transport in carbon dioxide storage, but there is a
lack of thorough comparisons. Usually, raw data from such storage sites can
hardly be described by theoretical statistical distributions since only very
limited data is available. Hence, exact information on distribution shapes for
all uncertain parameters is very rare in realistic applications. We discuss and
compare four different methods tested for data-driven uncertainty
quantification based on a benchmark scenario of carbon dioxide storage. In the
benchmark, for which we provide data and code, carbon dioxide is injected into
a saline aquifer modeled by the nonlinear capillarity-free fractional flow
formulation for two incompressible fluid phases, namely carbon dioxide and
brine. To cover different aspects of uncertainty quantification, we incorporate
various sources of uncertainty such as uncertainty of boundary conditions, of
conceptual model definitions and of material properties. We consider recent
versions of the following non-intrusive and intrusive uncertainty
quantification methods: arbitrary polynomial chaos, spatially adaptive sparse
grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The
performance of each approach is demonstrated by assessing the expectation
value and standard deviation of the carbon dioxide saturation against a
reference statistic based on Monte Carlo sampling. We compare the convergence
of all methods, reporting accuracy with respect to the number of model runs
and resolution. Finally, we offer suggestions about the methods' advantages
and disadvantages that can guide the modeler in uncertainty quantification
for carbon dioxide storage and beyond.
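Among the compared methods, the data-driven ingredient of arbitrary polynomial chaos can be illustrated compactly: build polynomials orthonormal with respect to the empirical distribution of raw samples, with no assumed distribution shape. The lognormal sample data and truncation order below are illustrative assumptions:

```python
import numpy as np

# "Arbitrary" (data-driven) polynomial chaos sketch: orthonormalize
# monomials against the empirical measure of raw samples via modified
# Gram-Schmidt. The lognormal data are a stand-in for limited site data.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=2000)

def apc_basis(samples, order):
    """Rows are the basis polynomials evaluated at `samples`, orthonormal
    under the empirical inner product <a, b> = mean(a * b)."""
    n = samples.size
    V = np.vander(samples, order + 1, increasing=True)  # 1, x, x^2, ...
    rows = []
    for k in range(order + 1):
        v = V[:, k].astype(float)
        for b in rows:
            v = v - (v @ b) / n * b   # remove projection onto earlier basis
        rows.append(v / np.sqrt((v @ v) / n))
    return np.array(rows)

P = apc_basis(data, 3)
gram = P @ P.T / data.size            # should be the identity matrix
```

Expanding a model response in this basis then yields its mean and variance directly from the coefficients, exactly as with classical gPC built on known distributions.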
Bayesian quantification of thermodynamic uncertainties in dense gas flows
A Bayesian inference methodology is developed for calibrating complex equations of state used in numerical fluid flow solvers. Precisely, the input parameters of three equations of state commonly used for modeling the thermodynamic behavior of so-called dense gas flows (i.e., flows of gases characterized by high molecular weights and complex molecules, working in thermodynamic conditions close to the liquid-vapor saturation curve) are calibrated by means of Bayesian inference from reference aerodynamic data for a dense gas flow over a wing section. Flow thermodynamic conditions are such that the gas thermodynamic behavior strongly deviates from that of a perfect gas. With the aim of assessing the proposed methodology, synthetic calibration data (specifically, wall pressure data) are generated by running the numerical solver with a more complex and accurate thermodynamic model. The statistical model used to build the likelihood function includes a model-form inadequacy term, accounting for the gap between the model output associated with the best-fit parameters and the true phenomenon. Results show that, for all of the relatively simple models under investigation, calibration leads to informative posterior probability distributions of the input parameters and significantly improves the predictive distribution. Nevertheless, the calibrated parameters strongly differ from their expected physical values. The relationship between this behavior and model-form inadequacy is discussed. (Grant ANR-11-MONU-008-00)
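The likelihood structure just described, with observation noise inflated by a model-form inadequacy variance, can be sketched with a random-walk Metropolis sampler on a toy linear model; none of the models or numbers below come from the paper:

```python
import numpy as np

# Random-walk Metropolis sketch of calibration with a model-form
# inadequacy term: the likelihood covariance adds an inadequacy variance
# to the observation noise. Model, data, and prior are toy stand-ins.
rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 1.0, 20)
model = lambda theta, x: theta * x               # simplistic forward model
y_obs = model(2.0, x_obs) + rng.normal(0.0, 0.05, x_obs.size)

sigma_obs, sigma_inad = 0.05, 0.02               # noise + inadequacy scales

def log_post(theta):
    if not 0.0 < theta < 10.0:                   # flat prior on (0, 10)
        return -np.inf
    r = y_obs - model(theta, x_obs)
    s2 = sigma_obs**2 + sigma_inad**2            # inflated diagonal covariance
    return -0.5 * np.sum(r**2) / s2

theta = 1.0
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept step
        theta, lp = prop, lp_prop
    chain.append(theta)

posterior_mean = np.mean(chain[1000:])           # discard burn-in
```

Broadening the likelihood this way is what lets calibration remain informative even when the simple model cannot reproduce the reference data exactly, which is also why the calibrated parameters can drift away from their physical values.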