34 research outputs found
Bayesian inference and non-linear extensions of the CIRCE method for quantifying the uncertainty of closure relationships integrated into thermal-hydraulic system codes
Uncertainty Quantification of closure relationships integrated into
thermal-hydraulic system codes is a critical prerequisite in applying the
Best-Estimate Plus Uncertainty (BEPU) methodology for nuclear safety and
licensing processes. The purpose of the CIRCE method is to estimate the
(log)-Gaussian probability distribution of a multiplicative factor applied to a
reference closure relationship in order to assess its uncertainty. Even though
this method has been implemented with success in numerous physical scenarios,
it can still suffer from substantial limitations such as the linearity
assumption and the difficulty of properly taking into account the inherent
statistical uncertainty. In this paper, we extend the CIRCE method in two directions. On the one hand, we adopt a Bayesian setting, placing prior
probability distributions on the parameters of the (log)-Gaussian distribution.
The posterior distribution of the parameters is then computed with respect to
an experimental database by means of Markov Chain Monte Carlo (MCMC)
algorithms. On the other hand, we tackle the more general setting where the
simulations do not vary linearly with the multiplicative factor(s). MCMC
algorithms then become time-prohibitive when the thermal-hydraulic simulations
exceed a few minutes. This handicap is overcome by using Gaussian process (GP)
emulators which can yield both reliable and fast predictions of the
simulations. The GP-based MCMC algorithms will be applied to quantify the
uncertainty of two condensation closure relationships at a safety injection
with respect to a database of experimental tests. The thermal-hydraulic
simulations will be run with the CATHARE 2 computer code.
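As an aside, the hierarchical structure described above (a latent multiplicative factor per experiment, log-Gaussian hyperparameters, and a cheap GP surrogate in place of the code) can be prototyped in a few lines. The sketch below is a simplified illustration under assumptions of ours (synthetic data, GP mean predictions only, plain random-walk Metropolis), not the authors' implementation:

    import numpy as np
    from scipy import stats
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    # Cheap stand-in for the expensive thermal-hydraulic code: one scalar
    # response per experiment, nonlinear in the multiplicative factor lambda.
    n_exp = 5
    def simulator(i, lam):
        return (1.0 + 0.3 * i) * np.sqrt(lam) + 0.1 * np.sin(3.0 * lam)

    # Fit one GP emulator per experiment on a handful of code runs.
    lam_train = np.linspace(0.2, 3.0, 8).reshape(-1, 1)
    gps = []
    for i in range(n_exp):
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(lam_train, simulator(i, lam_train.ravel()))
        gps.append(gp)

    # Synthetic experimental data generated with a "true" log-Gaussian factor.
    mu_true, sig_true, s_obs = 0.2, 0.3, 0.05
    lam_true = np.exp(rng.normal(mu_true, sig_true, n_exp))
    y_exp = np.array([simulator(i, lam_true[i]) for i in range(n_exp)])
    y_exp += rng.normal(0.0, s_obs, n_exp)

    # Joint log-posterior of (mu, log sigma, z) with z_i = log(lambda_i).
    def log_post(mu, log_sig, z):
        sig = np.exp(log_sig)
        lam = np.exp(z)
        pred = np.array([gps[i].predict([[lam[i]]])[0] for i in range(n_exp)])
        loglik = stats.norm.logpdf(y_exp, pred, s_obs).sum()         # data | lambda
        logprior_z = stats.norm.logpdf(z, mu, sig).sum()             # lambda | mu, sigma
        logprior_hyp = stats.norm.logpdf(mu, 0, 1) + stats.norm.logpdf(log_sig, -1, 1)
        return loglik + logprior_z + logprior_hyp

    # Plain random-walk Metropolis over all unknowns (GP mean only, for brevity).
    mu, log_sig, z = 0.0, -1.0, np.zeros(n_exp)
    lp, chain = log_post(mu, log_sig, z), []
    for _ in range(5000):
        prop = (mu + 0.1 * rng.normal(),
                log_sig + 0.1 * rng.normal(),
                z + 0.05 * rng.normal(size=n_exp))
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            mu, log_sig, z = prop
            lp = lp_prop
        chain.append((mu, np.exp(log_sig)))

    print("posterior mean of (mu, sigma):", np.mean(chain[1000:], axis=0))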
Adaptive numerical designs for the calibration of computer codes
Making good predictions of a physical system using a computer code requires
the inputs to be carefully specified. Some of these inputs, called control variables, have to reproduce physical conditions, whereas other inputs, called
parameters, are specific to the computer code and most often uncertain. The
goal of statistical calibration is to estimate these parameters with
the help of a statistical model which links the code outputs with the field
measurements. In a Bayesian setting, the posterior distribution of these
parameters is usually sampled using MCMC methods. However, these methods become impractical when the code runs are highly time-consuming. A way to circumvent
this issue consists of replacing the computer code with a Gaussian process
emulator, then sampling a cheap-to-evaluate posterior distribution based on it.
In doing so, calibration becomes subject to an error which strongly depends on the
numerical design of experiments used to fit the emulator. We aim at reducing
this error by building a proper sequential design by means of the Expected
Improvement criterion. Numerical illustrations in several dimensions assess the
efficiency of such sequential strategies.
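For readers unfamiliar with the Expected Improvement criterion mentioned above, the following sketch shows the standard closed form EI(x) = (y_best - m(x)) Phi(z) + s(x) phi(z), with z = (y_best - m(x)) / s(x), applied to a toy calibration misfit. The objective, names and settings are illustrative assumptions, not the paper's algorithm:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)

    def expected_improvement(gp, X_cand, y_best):
        # EI(x) = (y_best - m(x)) * Phi(z) + s(x) * phi(z), z = (y_best - m(x)) / s(x)
        m, s = gp.predict(X_cand, return_std=True)
        s = np.maximum(s, 1e-12)             # guard against a zero predictive std
        z = (y_best - m) / s
        return (y_best - m) * norm.cdf(z) + s * norm.pdf(z)

    # Toy calibration misfit between a cheap stand-in "code" and a fake measurement.
    def misfit(theta):
        return (np.sin(3.0 * theta) + theta ** 2 - 0.7) ** 2

    X = rng.uniform(-1.0, 1.0, (5, 1))       # small initial design on the parameter
    y = misfit(X).ravel()

    for _ in range(15):                      # sequential enrichment of the design
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        X_cand = np.linspace(-1.0, 1.0, 400).reshape(-1, 1)
        ei = expected_improvement(gp, X_cand, y.min())
        x_new = X_cand[np.argmax(ei)]        # candidate with the largest expected gain
        X = np.vstack([X, [x_new]])
        y = np.append(y, misfit(x_new))

    print("best parameter found:", X[np.argmin(y)].item(), "with misfit", y.min())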
Numerical studies of space filling designs: optimization of Latin Hypercube Samples and subprojection properties
Quantitative assessment of the uncertainties tainting the results of computer simulations is nowadays a major topic of interest in both the industrial and scientific communities. One of the key issues in such studies is to get information about the output when the numerical simulations are expensive to run. This paper considers the problem of exploring the whole space of variation of the computer model input variables in the context of a large-dimensional exploration space. Various properties of space-filling designs are justified: interpoint distance, discrepancy, and minimum spanning tree criteria. A specific class of design, the optimized Latin Hypercube Sample, is considered. Several optimization algorithms from the literature are studied in terms of convergence speed, robustness to subprojection, and space-filling properties of the resulting design. Some recommendations for building such designs are given. Finally, another contribution of this paper is an in-depth analysis of the space-filling properties of the designs' 2D subprojections.
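To make the optimized Latin Hypercube idea concrete, a minimal sketch of one of the simplest schemes follows: column-wise swaps of levels, accepted only when they improve the maximin interpoint-distance criterion, so the Latin Hypercube structure is preserved at every step. This is an illustration under our own assumptions, not one of the algorithms compared in the paper:

    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(42)

    def lhs(n, d):
        # Centered Latin Hypercube with n points in [0, 1]^d.
        grid = (np.arange(n) + 0.5) / n
        return np.column_stack([rng.permutation(grid) for _ in range(d)])

    def maximin(design):
        return pdist(design).min()           # smallest interpoint distance (to be maximized)

    def optimize_lhs(n=20, d=5, n_iter=5000):
        design = lhs(n, d)
        best = maximin(design)
        for _ in range(n_iter):
            j = rng.integers(d)                            # pick one column
            i1, i2 = rng.choice(n, size=2, replace=False)
            design[[i1, i2], j] = design[[i2, i1], j]      # swap two levels: still an LHS
            score = maximin(design)
            if score >= best:
                best = score                               # keep the improving swap
            else:
                design[[i1, i2], j] = design[[i2, i1], j]  # otherwise undo it
        return design, best

    design, crit = optimize_lhs()
    print("maximin distance after optimization:", round(crit, 4))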
A generalization of the CIRCE method for quantifying input model uncertainty in presence of several groups of experiments
The semi-empirical nature of best-estimate models closing the balance
equations of thermal-hydraulic (TH) system codes is well-known as a significant
source of uncertainty for the accuracy of output predictions. This uncertainty,
called model uncertainty, is usually represented by multiplicative
(log-)Gaussian variables whose estimation requires solving an inverse problem
based on a set of adequately chosen real experiments. One method from the TH
field, called CIRCE, addresses this problem. In this paper, we present a generalization of this method to several groups of experiments, each having its own properties,
including different ranges for input conditions and different geometries. An
individual (log-)Gaussian distribution is therefore estimated for each group in
order to investigate whether the model uncertainty is homogeneous between the
groups, or should depend on the group. To this end, a multi-group CIRCE is
proposed in which a variance parameter is estimated for each group jointly with a
mean parameter common to all the groups to preserve the uniqueness of the
best-estimate model. The ECME algorithm for Maximum Likelihood Estimation is
adapted to the latter context, then applied to relevant demonstration cases.
Finally, it is tested on a practical case to assess the uncertainty of the critical mass flow, assuming two groups due to the difference in geometry between the
experimental setups.
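The shared-mean, group-specific-variance structure at the heart of the multi-group approach can be illustrated with a deliberately stripped-down model (no sensitivity coefficients, no observation noise) fitted by coordinate-ascent maximum likelihood. This is only a conceptual sketch, not the ECME algorithm used in the paper, and the data are synthetic assumptions:

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "residuals" for two groups of experiments (e.g. two geometries):
    # same underlying bias, different spread of the model uncertainty.
    groups = [rng.normal(0.1, 0.2, 40),
              rng.normal(0.1, 0.6, 25)]

    mu = 0.0
    sig2 = np.array([1.0, 1.0])
    for _ in range(200):                     # alternate the two closed-form updates
        # Mean update: precision-weighted average over all groups (shared mu).
        num = sum(y.sum() / s2 for y, s2 in zip(groups, sig2))
        den = sum(len(y) / s2 for y, s2 in zip(groups, sig2))
        mu = num / den
        # Variance update: per-group mean squared deviation around the shared mean.
        sig2 = np.array([np.mean((y - mu) ** 2) for y in groups])

    print("shared mean:", round(mu, 3))
    print("group standard deviations:", np.round(np.sqrt(sig2), 3))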
Statistical contributions to code calibration and validation
Code validation aims at assessing the uncertainty affecting the predictions of a physical system by using both the outputs of a computer code that attempts to reproduce it and the available field measurements. On the one hand, the code may not be a perfect representation of reality. On the other hand, some code parameters can be uncertain and need to be estimated: this issue is referred to as code calibration. After providing a unified view of the main procedures of code calibration and validation, we propose several contributions for solving issues arising with computer codes that are both costly and considered as black-box functions. First, we develop a Bayesian testing procedure to detect whether or not a discrepancy function, called code discrepancy, has to be taken into account between the code outputs and the physical system. Second, we present new algorithms for building sequential designs of experiments in order to reduce the error occurring in the calibration process based on a Gaussian process emulator. Lastly, a validation procedure of a thermal code used to predict the energy consumption of a building over a period of time is conducted as the preliminary step of a decision problem in which an energy supplier has to commit to an overall energy consumption forecast for its customers. Based on Bayesian decision theory, optimal plug-in estimators are computed.
Contributions statistiques au calage et à la validation des codes de calcul
Code validation aims at assessing the uncertainty affecting the predictions of a physical system by using both the outputs of a computer code that attempts to reproduce it and the available field measurements. On the one hand, the code may not be a perfect representation of reality. On the other hand, some code parameters can be uncertain and need to be estimated: this issue is referred to as code calibration. After providing a unified view of the main procedures of code calibration and validation, we propose several contributions for solving issues arising with computer codes that are both costly and considered as black-box functions. First, we develop a Bayesian testing procedure to detect whether or not a discrepancy function, called code discrepancy, has to be taken into account between the code outputs and the physical system. Second, we present new algorithms for building sequential designs of experiments in order to reduce the error occurring in the calibration process based on a Gaussian process emulator. Lastly, a validation procedure of a thermal code used to predict the energy consumption of a building over a period of time is conducted as the preliminary step of a decision problem in which an energy supplier has to commit to an overall energy consumption forecast for its customers. Based on Bayesian decision theory, optimal plug-in estimators are computed.
Adaptive use of replicated Latin Hypercube Designs for computing Sobol’ sensitivity indices
As recently pointed out in the field of Global Sensitivity Analysis (GSA) of computer simulations, the use of replicated Latin Hypercube Designs (rLHDs) is a cost-saving alternative to regular Monte Carlo sampling for estimating first-order Sobol' indices. Indeed, two rLHDs are sufficient to compute the whole set of those indices regardless of the number of input variables. This relies on a permutation trick which, however, only works within the class of estimators called Oracle 2. In the present paper, we show that rLHDs are still beneficial to another class of estimators, called Oracle 1, which often outperforms Oracle 2 for estimating small and moderate indices. Even though, unlike Oracle 2, the computation cost of Oracle 1 depends on the input dimension, the permutation trick can be applied to construct an averaged (triple) Oracle 1 estimator whose accuracy is demonstrated on a numerical example. Thus, we promote an adaptive rLHD-based Sobol' sensitivity analysis in which the first stage is to compute the whole set of first-order indices with Oracle 2. If needed, the accuracy of small and moderate indices can then be re-evaluated with the averaged Oracle 1 estimators. This strategy, which is cost-saving while guaranteeing the accuracy of the estimates, is applied to a computer model from the nuclear field.
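The permutation trick underlying replicated Latin Hypercube Designs can be sketched compactly: two designs sharing the same one-dimensional levels are re-matched column by column, and a correlation-type estimator yields every first-order Sobol' index from only 2n code runs. The example below is an illustration on a toy model under our own assumptions, not the paper's Oracle 1 or Oracle 2 code:

    import numpy as np

    rng = np.random.default_rng(7)

    def model(X):                            # toy model standing in for the computer code
        return X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

    n, d = 2048, 3
    levels = (np.arange(n) + 0.5) / n        # common one-dimensional levels for both designs
    A = np.column_stack([rng.permutation(levels) for _ in range(d)])
    B = np.column_stack([rng.permutation(levels) for _ in range(d)])   # the replicate

    yA, yB = model(A), model(B)
    var = yA.var()
    for j in range(d):
        # Re-match the rows of B to those of A on column j (same levels, different
        # order), so each matched pair of runs shares only input j.
        order_A, order_B = np.argsort(A[:, j]), np.argsort(B[:, j])
        yB_matched = np.empty(n)
        yB_matched[order_A] = yB[order_B]
        # Correlation-type first-order estimator: all d indices from only 2n runs.
        S_j = (np.mean(yA * yB_matched) - yA.mean() * yB.mean()) / var
        print(f"first-order index of X{j + 1}: {S_j:.3f}")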
Sensitivity Analysis on the Critical Mass Flowrate Based on Sobol’ Indices Through Replicated LHS
In order to quantify uncertainties of constitutive relationships in thermal-hydraulic system codes, the setup of an experimental database based on Separated Effect Tests (SET) is a critical step. Basically, series of experiments are carried out over various experimental input conditions to study some thermal-hydraulic Quantities of Interest (QoIs). Those quantities usually depend on several constitutive relationships whose individual impact should be assessed by performing a sensitivity analysis. In this paper, a sensitivity analysis based on first-order Sobol' indices through replicated Latin Hypercube Sampling (rLHS) is proposed to identify the most influential constitutive relationships. Using rLHS strongly reduces the number of simulations needed to compute the whole set of first-order Sobol' indices. A comparison between SET series is performed. This method is applied to a database composed of two series of SETs in which the critical mass flowrate simulated with the thermal-hydraulic system code CATHARE2 is the QoI. Generally, the relevance of the constitutive relationships of the code for the QoI depends on the experimental input conditions and the geometry of the mock-up. With this study, we can shed light on such dependences and establish a database based on the impact of the physical models. Finally, the purposes of this analysis are to determine which constitutive relationships can be judged as significant and to rank the experiments of the two SETs according to the effect of these constitutive relationships on the mass flowrate.
Validation of the system code CATHARE3 on critical flow experiments in the framework of the OECD-NEA ATRIUM project
When applying the Best Estimate Plus Uncertainty (BEPU) methodology for the safety analyses of nuclear reactors, one of the major issues is to quantify the input uncertainties associated with the physical models in thermal-hydraulic codes. A good practice guideline for Inverse Uncertainty Quantification (IUQ) was therefore developed during the OECD-NEA SAPIUM (Development of a Systematic APproach for Input Uncertainty quantification of the physical Models in thermal-hydraulic codes) project in 2020. A first application of this guideline is now carried out within the OECD-NEA ATRIUM (Application Tests for Realization of Inverse Uncertainty quantification and validation Methodologies in thermal-hydraulics) project, which was launched in 2022. The goal is to perform practical IUQ benchmark exercises to evaluate the applicability of the SAPIUM best practices and suggest possible improvements. In this article, we describe part of the work performed at CEA on the first benchmark exercise on critical flow. In particular, we focus on the validation of the system code CATHARE3 against the available experimental data and on the associated sensitivity analyses performed to better understand the simulation results and prepare the IUQ process. The 324 choked flow experiments come from three different facilities: Sozzi-Sutherland, Super Moby-Dick and Marviken-CFT. The simulations are in very good agreement with the experimental data (maximum discrepancy of 23.3% on the critical flowrate). Based on the sensitivity analyses, two main influential parameters are identified: the wall-to-liquid friction and the flashing models in CATHARE3. The flashing is dominant for relatively short nozzles (L/D ≤ 18). For longer nozzles, the wall-to-liquid friction becomes more and more influential.