10 research outputs found

    Analysis of Reactor Simulations Using Surrogate Models.

    The relatively recent abundance of computing resources has driven computational scientists to build more complex and approximation-free computer models of physical phenomena. Often, multiple high-fidelity computer codes are coupled together in the hope of improving the predictive power of simulations with respect to experimental data. To improve the predictive capacity of computer codes, experimental data should be folded back into the parameters processed by the codes through optimization and calibration algorithms. However, applying such algorithms may be prohibitive, since they generally require thousands of evaluations of computationally expensive, coupled, multiphysics codes. Surrogate models for expensive computer codes have shown promise toward making optimization and calibration feasible. In this thesis, non-intrusive surrogate-building techniques are investigated for their applicability to nuclear engineering applications. Specifically, Kriging and the coupling of the anchored-ANOVA decomposition with collocation are used as surrogate-building approaches. Initially, these approaches are applied and naively tested on simple reactor applications with analytic solutions. Ultimately, Kriging is applied to construct a surrogate for analyzing fission gas release during the Risø AN3 power ramp experiment using the fuel performance modeling code Bison. To this end, Kriging is extended from building surrogates for scalar quantities to entire time series using principal component analysis. A surrogate model is built for the fission gas kinetics time series, and the true values of relevant parameters are inferred by folding experimental data with the surrogate. Sensitivity analysis is also performed on the fission gas release parameters to gain insight into the underlying physics.
    PhD, Nuclear Engineering and Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/111485/1/yankovai_1.pd
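The PCA-based extension of Kriging from scalar outputs to time series described above can be sketched as follows. Everything here is illustrative: the toy decay model stands in for Bison, and the design size, kernel length-scale, and number of retained components are assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, t):
    # Hypothetical time-dependent "code output" standing in for Bison.
    return theta[0] * np.exp(-theta[1] * t)

t = np.linspace(0.0, 5.0, 50)
thetas = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(30, 2))   # training designs
Y = np.array([model(th, t) for th in thetas])                # (30, 50) snapshots

# 1) PCA of the time-series outputs via SVD.
mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
k = 3                                    # retained principal components (assumed)
scores = U[:, :k] * s[:k]                # per-run PC scores, shape (30, k)

# 2) One Kriging (GP) surrogate per PC score, squared-exponential kernel.
def gp_fit_predict(X, y, Xnew, length=0.5, nugget=1e-6):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * length**2)) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    d2s = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2s / (2 * length**2)) @ alpha

theta_new = np.array([[1.2, 0.4]])
pred_scores = np.column_stack(
    [gp_fit_predict(thetas, scores[:, j], theta_new) for j in range(k)])

# 3) Reconstruct the full time series from the predicted scores.
y_pred = mean + pred_scores @ Vt[:k]
err = np.abs(y_pred[0] - model(theta_new[0], t)).max()
```

The key design choice is that the GPs are fit in the low-dimensional score space, so only `k` surrogates are needed regardless of how many time points the series has.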

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 2: Application to TRACE

    Inverse Uncertainty Quantification (UQ) is the process of quantifying the uncertainties in random input parameters while achieving consistency between code simulations and physical observations. In this paper, we performed inverse UQ with an improved modular Bayesian approach based on Gaussian Process (GP) emulation for TRACE physical model parameters, using the steady-state void fraction data of the BWR Full-size Fine-Mesh Bundle Tests (BFBT) benchmark. The model discrepancy is described with a GP emulator, and numerical tests demonstrated that this treatment of model discrepancy can avoid over-fitting. Furthermore, we constructed a fast-running and accurate GP emulator to replace the full TRACE model during Markov Chain Monte Carlo (MCMC) sampling, reducing the computational cost by several orders of magnitude. A sequential approach was also developed for efficient test source allocation (TSA) for inverse UQ and validation. This sequential TSA methodology first selects experimental tests for validation that have full coverage of the test domain, so that the model discrepancy term need not be extrapolated when evaluated at the input settings of the tests used for inverse UQ. It then selects tests that tend to reside in the unfilled zones of the test domain for inverse UQ, so that the most information can be extracted for the posterior probability distributions of the calibration parameters using only a relatively small number of tests. This research addresses the "lack of input uncertainty information" issue for TRACE physical input parameters, which was usually ignored or described using expert opinion or user self-assessment in previous work. The resulting posterior probability distributions of the TRACE parameters can be used in future uncertainty, sensitivity, and validation studies of the TRACE code for nuclear reactor system design and safety analysis.
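The emulator-inside-MCMC pattern described above can be illustrated in a few lines. This is a hedged sketch, not the paper's actual setup: the one-parameter toy model stands in for TRACE, a simple interpolator stands in for the GP mean, and the prior, noise level, and proposal width are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(theta):
    # Stand-in for a costly system-code run (assumption, not TRACE).
    return theta**2 + theta

# Build a cheap emulator once from a handful of "code runs".
train_x = np.linspace(0.0, 3.0, 15)
train_y = expensive_model(train_x)
emulator = lambda th: np.interp(th, train_x, train_y)  # stand-in for a GP mean

# Synthetic observation of the quantity of interest.
theta_true, sigma = 1.3, 0.05
obs = expensive_model(theta_true) + rng.normal(0.0, sigma)

def log_post(th):
    if not 0.0 <= th <= 3.0:               # uniform prior on [0, 3] (assumed)
        return -np.inf
    return -0.5 * ((obs - emulator(th)) / sigma) ** 2

# Random-walk Metropolis: every likelihood call hits the emulator, never the
# expensive model, which is where the orders-of-magnitude speed-up comes from.
samples, th = [], 1.5
lp = log_post(th)
for _ in range(20000):
    prop = th + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        th, lp = prop, lp_prop
    samples.append(th)
post = np.array(samples[5000:])            # discard burn-in
```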

    Metamodel-based inverse uncertainty quantification of nuclear reactor simulators under the Bayesian framework

    Mathematical modeling and computer simulations have long been central technical topics in practically all branches of science and technology. Tremendous progress has been achieved in revealing quantitative connections between numerical predictions and real-world observations. However, because computer models are reduced representations of the real phenomena, there are always discrepancies between ideally designed in silico systems and real-world manufactured ones. As a consequence, uncertainties must be quantified along with the simulation outputs to facilitate optimal design and decision making and to ensure robustness, performance, and safety margins. Forward uncertainty propagation requires knowledge of the statistical information for computer model random inputs, for example the mean, variance, Probability Density Functions (PDFs), and upper and lower bounds. Historically, ``expert judgment'' or ``user self-evaluation'' has been used to specify the uncertainty information associated with random input parameters. Such ad hoc characterization is unscientific and lacks mathematical rigor. In this thesis, we address this ``lack of uncertainty information'' issue with inverse Uncertainty Quantification (UQ). Inverse UQ is the process of seeking statistical descriptions of the random input parameters that are consistent with available high-quality experimental data. We formulate the inverse UQ process under the Bayesian framework using the ``model updating equation''. Markov Chain Monte Carlo (MCMC) sampling is applied to explore the posterior distributions and generate samples from which we can extract statistical information for the uncertain input parameters. To greatly alleviate the computational burden during MCMC sampling, we used systematically and rigorously developed metamodels based on stochastic spectral techniques and Gaussian Process (also known as Kriging) emulators.
    We demonstrated the developed methodology on three problems with different levels of sophistication: (1) the Point Reactor Kinetics Equation (PRKE) coupled with a lumped-parameter thermal-hydraulics feedback model, based on synthetic experimental data; (2) the physical model parameters of the best-estimate system thermal-hydraulics code TRACE, based on the OECD/NRC BWR Full-size Fine-Mesh Bundle Tests (BFBT) benchmark steady-state void fraction data; and (3) the fuel performance code BISON Fission Gas Release (FGR) model, based on Risø-AN3 on-line time-dependent FGR measurement data. Metamodels constructed with generalized Polynomial Chaos Expansion (PCE), Sparse Grid Stochastic Collocation (SGSC), and GP were applied respectively to these three problems to replace the full models during MCMC sampling. We proposed an improved modular Bayesian approach that avoids extrapolating the model discrepancy learnt from the inverse UQ domain to the validation/prediction domain. The improved approach is organized so that the posteriors achieved with data in the inverse UQ domain are informed by data in the validation domain; therefore, over-fitting can be avoided while extrapolation is not required. A sequential approach was also developed for test source allocation (TSA) for inverse UQ and validation. This sequential TSA methodology first selects tests for validation that have full coverage of the test domain, to avoid extrapolating the model discrepancy term when it is evaluated at the input settings of the tests used for inverse UQ. It then selects tests that tend to reside in the unfilled zones of the test domain for inverse UQ, so that inverse UQ can extract the most information for the posteriors of the calibration parameters using only a relatively small number of tests. The inverse UQ process successfully quantified the uncertainties associated with the input parameters that are consistent with the experimental observations.
    The quantified uncertainties are necessary for future uncertainty and sensitivity studies of nuclear reactor simulators in system design and safety analysis. We applied and extended several advanced metamodeling approaches in nuclear engineering practice to greatly reduce the computational cost. The current research bridges the gap between models and data by solving the ``lack of uncertainty information'' issue, as well as providing guidance for improving nuclear reactor simulators through the validation process.
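A minimal sketch of the sequential test source allocation idea described above: validation tests are first chosen greedily for maximin coverage of the test domain, and inverse-UQ tests are then drawn from the zones the validation set left unfilled. The candidate pool, the Euclidean distance criterion, and the set sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
pool = rng.uniform(size=(40, 2))           # candidate test input settings (assumed)

def greedy_maximin(pool, chosen, n_pick):
    """Greedily add n_pick indices maximizing the min distance to the chosen set."""
    chosen = list(chosen)
    remaining = [i for i in range(len(pool)) if i not in chosen]
    for _ in range(n_pick):
        best = max(remaining,
                   key=lambda i: min(np.linalg.norm(pool[i] - pool[j])
                                     for j in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Stage 1: validation tests with full (maximin) coverage of the test domain,
# seeded from one corner-most candidate.
val = greedy_maximin(pool, [int(np.argmin(pool.sum(axis=1)))], 5)

# Stage 2: inverse-UQ tests drawn from the zones the validation set left
# unfilled (farthest from everything already selected).
inv = [i for i in greedy_maximin(pool, val, 5) if i not in val]
```

Because the validation set already spans the domain, the discrepancy term learnt there never needs to be extrapolated when evaluated at the inverse-UQ settings, which is the point of the ordering.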

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 1: Theory

    In nuclear reactor system design and safety analysis, the Best Estimate plus Uncertainty (BEPU) methodology requires that computer model output uncertainties be quantified in order to prove that the investigated design stays within acceptance criteria. "Expert opinion" and "user self-evaluation" have been widely used to specify computer model input uncertainties in previous uncertainty, sensitivity, and validation studies. Inverse Uncertainty Quantification (UQ) is the process of inversely quantifying input uncertainties based on experimental data, replacing such ad hoc specifications of the input uncertainty information with more precise ones. In this paper, we used Bayesian analysis to establish the inverse UQ formulation, with systematically and rigorously derived metamodels constructed by Gaussian Process (GP). Because of incomplete or inaccurate underlying physics, as well as numerical approximation errors, computer models always have discrepancy/bias in representing reality, which can cause over-fitting if neglected in the inverse UQ process. The model discrepancy term is accounted for in our formulation through the "model updating equation". We provided a detailed introduction to and comparison of the full and modular Bayesian approaches for inverse UQ, and pointed out their limitations when extrapolated to the validation/prediction domain. Finally, we proposed an improved modular Bayesian approach that avoids extrapolating the model discrepancy learnt from the inverse UQ domain to the validation/prediction domain.
    Comment: 27 pages, 10 figures, article
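The "model updating equation" referred to above is, in its standard Kennedy-O'Hagan form,

```latex
% Model updating equation (standard Kennedy--O'Hagan form):
y^{E}(\mathbf{x}) \;=\; y^{M}(\mathbf{x}, \boldsymbol{\theta}^{*})
\;+\; \delta(\mathbf{x}) \;+\; \epsilon,
\qquad \epsilon \sim \mathcal{N}\!\left(0, \sigma^{2}_{\mathrm{exp}}\right)
```

where $y^{E}(\mathbf{x})$ is the experimental observation at design conditions $\mathbf{x}$, $y^{M}(\mathbf{x}, \boldsymbol{\theta}^{*})$ is the computer model evaluated at the best (unknown) calibration parameters $\boldsymbol{\theta}^{*}$, $\delta(\mathbf{x})$ is the model discrepancy term (represented in this work by a GP), and $\epsilon$ is the measurement error. Neglecting $\delta(\mathbf{x})$ forces the calibration parameters to absorb the model bias, which is the over-fitting the paper warns against.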

    Computational methods for system optimization under uncertainty

    In this paper, Subproblems A, B, and C of the NASA Langley Uncertainty Quantification (UQ) Challenge on Optimization Under Uncertainty are addressed. Subproblem A deals with the model calibration and (aleatory and epistemic) uncertainty quantification of a subsystem, where a characterization of the parameters of the subsystem is sought from a limited number (100) of observations. Bayesian inversion is proposed to address this task. Subproblem B requires the identification and ranking of those (epistemic) parameters that are most effective in improving the predictive ability of the computational model of the subsystem (and that thus deserve a refinement of their uncertainty model). Two approaches are compared: the first is based on a sensitivity analysis within a factor-prioritization setting, whereas the second employs the Energy Score (ES) as a multivariate generalization of the Continuous Ranked Probability Score (CRPS). Since the output of the subsystem is a function of time, both subproblems are addressed in the space defined by the orthonormal bases resulting from a Singular Value Decomposition (SVD) of the subsystem observations: in other words, a multivariate dynamic problem in the real domain is translated into a multivariate static problem in the SVD space. Finally, Subproblem C requires identifying the (epistemic) reliability (resp., failure probability) bounds of a given system design point. The issue is addressed by an efficient combination of: (i) Monte Carlo Simulation (MCS), to propagate the aleatory uncertainty described by probability distributions; and (ii) Genetic Algorithms (GAs), to solve the optimization problems related to the propagation of epistemic uncertainty by interval analysis.
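The SVD translation from a dynamic to a static problem mentioned above can be sketched as follows; the synthetic time-series observations and the 99.9% energy cutoff are illustrative assumptions, not the challenge data.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
# 100 synthetic time-series "observations" (one per row); the waveform is an
# illustrative assumption.
obs = np.array([rng.uniform(0.8, 1.2)
                * np.sin(2 * np.pi * (t + rng.normal(0.0, 0.02)))
                for _ in range(100)])

# Orthonormal SVD bases of the observation matrix.
U, s, Vt = np.linalg.svd(obs, full_matrices=False)

# Keep enough bases to capture 99.9% of the energy (cutoff is an assumption).
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1

# The dynamic problem becomes a static one in the k coefficients per curve.
coeffs = obs @ Vt[:k].T                    # shape (100, k)
recon = coeffs @ Vt[:k]                    # map back to the time domain
rel_err = np.linalg.norm(recon - obs) / np.linalg.norm(obs)
```

Calibration and sensitivity analysis then operate on the `k` static coefficients instead of the 200 correlated time points.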

    Bayesian Uncertainty Quantification of Physical Models in Thermal-Hydraulics System Codes

    Nuclear thermal-hydraulics (TH) system codes use several parametrized physical or empirical models to describe complex two-phase flow phenomena. The reliability of their predictions is therefore primarily affected by the uncertainty associated with the parameters of these models. Because these model parameters often cannot be measured, nor have inherent physical meanings, their uncertainties are mostly based on expert judgment. The present doctoral research aims to quantify the uncertainty of physical model parameters implemented in a TH system code based on experimental data. Specifically, this thesis develops a methodology that uses experimental data to inform these uncertainties in a more objective manner. The methodology is based on a probabilistic framework and consists of three steps adapted from recent developments in applied statistics: global sensitivity analysis (GSA), metamodeling, and Bayesian calibration. The methodology is applied to reflood experiments from the FEBA separate effect test facility (SETF), which are modeled with the TH system code TRACE. Reflood is chosen as a phenomenon relevant to the safety analysis of light water reactors (LWRs), and three typical time-dependent outputs are investigated: clad temperature, pressure drops, and liquid carryover. First, GSA allows screening out input parameters that have a low impact on the reflood transient. Functional data analysis (FDA) is then used to reduce the dimensionality of the time-dependent code outputs while preserving their interpretability. The resulting quantities can be used once more with GSA to investigate, quantitatively, the effect of the input parameters on the overall time-dependent outputs. Second, a Gaussian process (GP) metamodel is developed and validated as a surrogate for the TRACE model. The average prediction error of the metamodel is sufficiently low to predict all considered outputs, and its computational cost is less than 5 [s], compared with 6 to 15 [min] per TRACE run.
    Third and finally, the a posteriori model parameter uncertainties are quantified by calibration on a selected test from the FEBA experiments. Several posterior probability density functions (PDFs), corresponding to different calibration schemes (with and without a model bias term, and for different types of output), are formulated and directly sampled using a Markov Chain Monte Carlo (MCMC) ensemble sampler and the GP metamodel. The posterior samples are then propagated through a set of FEBA experiments to check the validity of the posterior model parameter values and uncertainties. The calibration is performed on different types of output to inform model parameters that would otherwise have remained non-identifiable. The calibration scheme with a model bias term is able to constrain the prior uncertainties of the model parameters while keeping the nominal TRACE parameter values within the posterior uncertainty interval. This is in contrast with the results of the calibration without a model bias term, in which the posterior uncertainties are concentrated on either side of the prior range and at times do not include the nominal TRACE parameter values. The methodology was shown to successfully inform the uncertainty of the model parameters involved in a reflood transient. In the future, the methodology can be applied to model parameters involved in other TH phenomena using data from SETFs, contributing to the goal of quantifying uncertainties in the safety assessment of LWRs.
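The GSA screening step can be illustrated with first-order Sobol' indices computed by the pick-freeze (Saltelli) estimator on a toy model. The abstract does not specify which sensitivity measure was used, so the function, the estimator choice, and the sample size here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Toy stand-in for the reflood model: parameter 0 dominates the output,
    # parameter 2 is negligible and would be screened out.
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.1 * x[:, 2]

n, d = 20000, 3
A = rng.uniform(size=(n, d))               # two independent sample matrices
B = rng.uniform(size=(n, d))
fA = model(A)
var = fA.var()

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # re-sample only parameter i
    # Saltelli (2010) pick-freeze estimator of the first-order Sobol' index
    S[i] = np.mean(model(B) * (model(ABi) - fA)) / var
```

Parameters whose index falls below a chosen threshold can be fixed at nominal values, shrinking the input space before metamodeling and calibration.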

    K-State graduate catalog, 1997-1999

    Course catalogs were published under the following titles: Catalogue of the officers and students of the Kansas State Agricultural College, with a brief history of the institution, 1st (1863/4); Annual catalogue of the officers and students of the Kansas State Agricultural College for, 2nd (1864/5)-4th (1868/9); Catalogue of the officers and students of the Kansas State Agricultural College for the year, 1869-1871/2; Hand-book of the Kansas State Agricultural College, Manhattan, Kansas, 1873/4; Biennial catalogue of the Kansas State Agricultural College, Manhattan, Kansas, calendar years, 1875/77; Catalogue of the State Agricultural College of Kansas, 1877/80-1896/97; Annual catalogue of the officers, students and graduates of the Kansas State Agricultural College, Manhattan, 35th (1897/98)-46th (1908/09); Catalogue, 47th (1909/10)-67th (1929/30); Complete catalogue number, 68th (1930/31)-81st (1943/1944); Catalogue, 1945/1946-1948/1949?; General catalogue, 1949/1950?-1958/1960; General catalog, 1960/1962-1990/1992. Course catalogs then split into undergraduate and graduate catalogs respectively: K-State undergraduate catalog, 1992/1994- ; K-State graduate catalog, 1993/1995- .
    Citation: Kansas State University. (1997). K-State graduate catalog, 1997-1999. Manhattan, KS: Kansas State University.
    Call number: LD2668.A11711 K78