
    Uncertainty Quantification and Representativity Analysis of the LWR-PROTEUS Phase II Experimental Campaign

    The LWR-PROTEUS Phase II experimental program was conducted at the Proteus research reactor at the Paul Scherrer Institute (PSI) in the early 2000s. One of its purposes was to gain more insight into the reactivity changes caused by fuel burnup and to build confidence in the ability of modern codes to predict these changes. The project presented here reexamines the experimental campaign using SHARK-X, a set of Perl-based tools developed at PSI and built around the lattice physics code CASMO-5, which is used to perform sensitivity analysis (SA), uncertainty quantification (UQ), and representativity analysis (RA). This report discusses how SHARK-X was used to quantify the effect of input uncertainties when modeling the LWR-PROTEUS Phase II experiments and to evaluate the representativity of the experiments with respect to a spent fuel pool of the Gösgen nuclear power plant (KKG).
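
    The abstract does not spell out how the representativity factor is computed; the sketch below assumes the standard formulation, in which the factor is the nuclear-data-induced correlation between the experimental and application responses, built from sensitivity vectors and a nuclear data covariance matrix. All variable names and numerical values are illustrative and are not taken from the SHARK-X tools.

```python
import numpy as np

def representativity(s_exp, s_app, cov):
    """Representativity factor between an experiment and an application:
    the correlation of their responses induced by nuclear data uncertainties.

    s_exp, s_app : 1-D arrays of relative sensitivity coefficients
    cov          : relative covariance matrix of the nuclear data parameters
    """
    num = s_exp @ cov @ s_app
    den = np.sqrt((s_exp @ cov @ s_exp) * (s_app @ cov @ s_app))
    return num / den

# Illustrative three-parameter example (values are made up)
cov = np.diag([0.02, 0.05, 0.01]) ** 2
s_exp = np.array([0.30, -0.10, 0.60])   # e.g. a Proteus reactivity response
s_app = np.array([0.25, -0.20, 0.50])   # e.g. a spent-fuel-pool response
print(f"representativity r = {representativity(s_exp, s_app, cov):.3f}")
```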

    Convergence Analysis and Criterion for Data Assimilation with Sensitivities from Monte Carlo Neutron Transport Codes

    Sensitivity coefficients calculated with Monte Carlo neutron transport codes are subject to statistical fluctuations, which propagate to any parameter computed from them. The convergence study presented here describes the effects that statistically uncertain sensitivities have on first-order perturbation theory, uncertainty quantification, and data assimilation. The results show that for data assimilation, the posterior nuclear data were remarkably insensitive to fluctuations in the sensitivity mean values and to the sensitivity uncertainties. Posterior calculated values computed with first-order perturbation theory showed a stronger dependence on the convergence of the sensitivity mean values and a small uncertainty arising from the sensitivities' own uncertainties. A convergence criterion is proposed for stopping simulations once the sensitivity means are sufficiently converged and their uncertainties are sufficiently small. Employing this criterion economizes computational resources by preventing an excess of particle histories from being run once convergence is achieved. The criterion's advantage is that it circumvents the need to set up the full data assimilation procedure, yet remains applicable to data assimilation results.
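
    As an illustration of the first-order uncertainty quantification discussed above, the sketch below propagates a nuclear data covariance through the usual first-order "sandwich" rule and shows how statistical noise on the sensitivity coefficients perturbs the propagated uncertainty. The covariance, sensitivities, and noise levels are invented for the example and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative relative covariance matrix of a few nuclear data parameters
cov = np.diag([0.03, 0.01, 0.05]) ** 2

# "Converged" sensitivity coefficients of an integral parameter (made up)
s_true = np.array([0.4, -0.2, 0.7])

def sandwich_std(s, cov):
    # First-order propagated relative standard deviation: sqrt(S M S^T)
    return np.sqrt(s @ cov @ s)

print(f"converged sensitivities : {sandwich_std(s_true, cov):.5f}")

# Statistical fluctuations of Monte Carlo sensitivities feed directly
# into the first-order estimate of the propagated uncertainty
for sigma_s in (0.01, 0.05, 0.10):
    s_noisy = s_true + rng.normal(0.0, sigma_s, size=s_true.size)
    print(f"noisy, sigma = {sigma_s:.2f}    : {sandwich_std(s_noisy, cov):.5f}")
```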

    Verification of a Reactor Physics Calculation Scheme for the CROCUS Reactor

    CROCUS is a zero-power (100 W) reactor of the Laboratory for Reactor Physics and Systems Behavior (LRS) at the Swiss Federal Institute of Technology in Lausanne (EPFL), used for teaching and research purposes. Its modeling has so far relied on diffusion theory and point kinetics for the neutronic analysis and on simplified thermal-hydraulics models for accident analysis. Recently, an effort has started within the LRS to improve its modeling capabilities, the long-term goal being to update the CROCUS Safety Analysis Report (SAR) for improved operational flexibility. The present work focuses on the static neutronics analysis of CROCUS through the development and preliminary verification of a 3D nodal simulator (PARCS) model of the reactor, a methodology typically used in industry for the modeling of Light Water Reactors (LWR). The set of homogenized macroscopic cross-sections needed by the core simulator, referred to in this work as the nuclear data library, is generated with a Monte Carlo based code (Serpent). The quantities of interest for the verification of the model are keff and the control rod worths. An innovative homogenization approach is considered for generating the nuclear data library because of the irregular radial geometry of the CROCUS reactor. The reference solution is provided by another Monte Carlo code, MCNP5. The uncertainty due to nuclear data in the keff prediction of Serpent is also investigated; it amounts to about 500 pcm, which covers the deviation from unity of the keff predicted by MCNP5 and Serpent for a critical CROCUS configuration. PARCS keff predictions are within 400 pcm of the Serpent results.
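
    The keff comparisons above are expressed in pcm. A minimal sketch of the conversion, assuming the usual definition of reactivity and of a control rod worth as the reactivity difference between rod-withdrawn and rod-inserted configurations, is given below; the numerical values are illustrative only.

```python
def reactivity_pcm(k_eff):
    """Reactivity of a configuration, expressed in pcm."""
    return (1.0 - 1.0 / k_eff) * 1.0e5

def rod_worth_pcm(k_withdrawn, k_inserted):
    """Control rod worth as the reactivity difference between the
    rod-withdrawn and rod-inserted configurations."""
    return reactivity_pcm(k_withdrawn) - reactivity_pcm(k_inserted)

# Illustrative values only (not the reported CROCUS results)
print(f"worth = {rod_worth_pcm(1.00000, 0.99800):.0f} pcm")
```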

    Development and Application of Data Assimilation Methods in Reactor Physics

    Simulations of nuclear reactor physics can differ significantly from experimental evidence, even when the most accurate models are used. An important part of this bias with respect to experiment is caused by nuclear data. The nuclear data carry inherent uncertainties due to the way they are evaluated, which then propagate to nuclear reactor simulations. This creates a bias and an uncertainty in predicted reactor parameters such as keff or the composition of spent fuel. This thesis focuses on data assimilation techniques to mitigate these effects of nuclear data. Data assimilation takes integral experiments and assimilates them in a Bayesian way to improve simulations; it can also be used to find trends and areas needing improvement in evaluated nuclear data. The research focuses on advancing the data assimilation theory and knowledge used in reactor physics, especially on techniques that require stochastic sampling of the nuclear data. Furthermore, the research takes advantage of the rich experimental data available from the Proteus research reactor at the Paul Scherrer Institute. The thesis showed, for the first time, that two methods based on stochastic sampling (called MOCABA and BMC) gave results equivalent to each other and to the traditional method called GLLS. This was corroborated with two independent studies that used different experiments, neutron transport codes, nuclear data, and processing codes. The first study used the JEZEBEL-Pu239 benchmark, the Serpent2 neutron transport code, and NUSS. The second study used reactivity experiments from the LWR-PROTEUS Phase II campaign, CASMO-5 for neutron transport, and SHARK-X. While using Serpent2, several questions arose pertaining to the stochastic uncertainty of its sensitivity coefficients. To address these, a new method called eXtended GLLS, or xGLLS, was proposed and tested in the thesis. xGLLS showed that the uncertainties associated with the sensitivity coefficients have a negligible effect on the data assimilation as long as the calculated integral parameters themselves are converged. The final study focused on adjusting the fission yields and covariances produced by the GEF code with post-irradiation examination experiments from Proteus. The adjustment improved the accuracy of predicted nuclide concentrations in spent fuel and improved the agreement between the GEF fission yields and those of ENDF/B-VIII.0 and JEFF-3.3.
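
    The GLLS method mentioned above is a linear Bayesian update of the nuclear data using integral experiments. The sketch below assumes its textbook form, with a sensitivity matrix S, a prior nuclear data covariance M, and an experimental covariance V; it is a generic illustration rather than the thesis implementation, and the input values are invented.

```python
import numpy as np

def glls_update(alpha, M, S, C, E, V):
    """Generalized Linear Least Squares (GLLS) adjustment.

    alpha : prior nuclear data parameters, shape (n,)
    M     : prior nuclear data covariance, shape (n, n)
    S     : sensitivities of the integral responses, shape (m, n)
    C     : responses calculated with the prior data, shape (m,)
    E     : measured responses, shape (m,)
    V     : experimental (and method) covariance of the responses, shape (m, m)
    """
    K = S @ M @ S.T + V                     # covariance of the C - E discrepancy
    gain = M @ S.T @ np.linalg.inv(K)       # Kalman-like gain
    alpha_post = alpha + gain @ (E - C)     # adjusted nuclear data
    M_post = M - gain @ S @ M               # reduced posterior covariance
    return alpha_post, M_post

# Tiny illustrative problem: two nuclear data parameters, one experiment
alpha = np.array([1.0, 1.0])
M = np.diag([0.04, 0.01])
S = np.array([[0.5, -0.3]])
C = np.array([1.010])      # calculated response, e.g. keff
E = np.array([1.000])      # measured response
V = np.array([[1.0e-6]])
alpha_post, M_post = glls_update(alpha, M, S, C, E, V)
print(alpha_post)
```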

    Full Core modeling techniques for research reactors with irregular geometries using Serpent and PARCS applied to the CROCUS reactor

    This paper summarizes the results of modeling methodologies developed for the zero-power (100 W) teaching and research reactor CROCUS located in the Laboratory for Reactor Physics and Systems Behavior (LRS) at the Swiss Federal Institute of Technology in Lausanne (EPFL). The study gives evidence that the Monte Carlo code Serpent can be used effectively as a lattice physics tool for small reactors. The CROCUS core has an irregular geometry with two fuel zones of different lattice pitches. This and the reactor's small size necessitate the use of nonstandard cross-section homogenization techniques when modeling the full core with a 3D nodal diffusion code (e.g. PARCS). The primary goal of this work is the development of these techniques for steady-state neutronics and future transient neutronics analyses of not only CROCUS but research reactors in general. In addition, the modeling methods can provide useful insight for analyzing small modular reactor concepts based on light water technology. Static computational models of CROCUS with the codes Serpent and MCNP5 are presented, and methodologies are analyzed for using Serpent and SerpentXS to prepare macroscopic homogenized group cross-sections for a pin-by-pin model of CROCUS with PARCS. The most accurate homogenization scheme led to a difference in keff of 385 pcm between the Serpent and PARCS models, while the MCNP5 and Serpent models differed in keff by 13 pcm (within the statistical error of each simulation). Comparisons of the axial power profiles between the Serpent model as a reference and a set of PARCS models using different homogenization techniques showed a consistent root-mean-square deviation of about 8%, indicating that the differences are not due to the homogenization technique but rather arise from the definition of the diffusion coefficients produced by Serpent. A comparison of the radial power profiles between the best PARCS model and the full-core Serpent model showed the largest relative differences in power prediction at the core periphery, which is believed to be the product of the geometry simplifications made, the diffusion coefficients produced by Serpent, and the two-group energy structure used. The worth of a single control rod reproduced in PARCS showed a difference of −33 pcm from its 169 pcm worth simulated in Serpent.
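
    Whatever nonstandard scheme is adopted, nodal group constants ultimately come from flux-volume weighting of region-wise cross sections; the sketch below shows that weighting for a two-region, two-group example. Array shapes and numerical values are invented for illustration and do not reproduce the paper's scheme.

```python
import numpy as np

def homogenize(sigma, flux, volume):
    """Flux-volume weighted homogenization of region-wise macroscopic
    cross sections into a single nodal value per energy group.

    sigma  : (n_regions, n_groups) macroscopic cross sections [1/cm]
    flux   : (n_regions, n_groups) region-averaged scalar fluxes
    volume : (n_regions,) region volumes [cm^3]
    """
    weights = flux * volume[:, None]                 # phi_i * V_i per group
    return (sigma * weights).sum(axis=0) / weights.sum(axis=0)

# Two regions, two energy groups (illustrative values)
sigma  = np.array([[0.010, 0.080],
                   [0.012, 0.095]])
flux   = np.array([[1.00, 0.40],
                   [0.80, 0.60]])
volume = np.array([2.0, 1.0])
print(homogenize(sigma, flux, volume))
```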

    Neutronics modeling of the CROCUS reactor with SERPENT and PARCS codes

    This paper reports the methodology used for the neutronic modelling of the CROCUS reactor and discusses the challenges encountered during the process. Full-core steady-state neutronics solutions were computed with the PARCS code. The Serpent Monte Carlo code was used for few-group constant generation, and the full-core Serpent model of the reactor was also used as the reference for comparison against the PARCS results. The comparison between the Serpent and PARCS solutions was successful, achieving a good level of agreement for the eigenvalue (418 pcm difference) and the control rod reactivity worth (1 pcm difference). In terms of radial neutron flux profiles, differences in the inner fuel region were within 5% and 1% for the thermal and fast fluxes, respectively. However, in the outer fuel lattice region, differences were considerably higher due to the mismatch between PARCS nodes and the heterogeneous fuel pins. PARCS post-processing for intranodal reconstruction also proved to be an effective way to observe heterogeneities within nodes, which cannot otherwise be captured by the PARCS solution. Some of the modelling challenges were overcome with the use of transport-corrected diffusion coefficients and the implementation of albedo boundary conditions. A parametric analysis highlighted the importance of the transport correction of the diffusion coefficients for producing good eigenvalues in reactor cores with large neutron leakage.
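
    A minimal sketch of the transport-corrected diffusion coefficient mentioned above is given below, assuming the common prescription D = 1/(3·Σ_tr) with Σ_tr = Σ_t − Σ_s1 (the P1, μ̄-weighted scattering moment); the exact correction applied in the paper is not specified in the abstract, and the numbers are illustrative.

```python
def transport_corrected_diffusion(sigma_total, sigma_s1):
    """Diffusion coefficient from the transport cross section:
    D = 1 / (3 * Sigma_tr), with Sigma_tr = Sigma_t - Sigma_s1,
    where Sigma_s1 is the P1 (mu-bar weighted) scattering moment."""
    sigma_tr = sigma_total - sigma_s1
    return 1.0 / (3.0 * sigma_tr)

# Illustrative fast-group values in 1/cm
print(f"D = {transport_corrected_diffusion(0.25, 0.05):.3f} cm")
```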

    Artificial Neural Networks as Surrogate Models for Uncertainty Quantification and Data Assimilation in 2-D/3-D Fuel Performance Studies

    This paper presents a preliminary investigation of the use of data-driven surrogates for fuel performance codes. The objective is to develop fast-running models that can be used in the framework of uncertainty quantification and data assimilation studies. In particular, data assimilation techniques based on Monte Carlo sampling often require running several thousand or even tens of thousands of calculations. In these cases, the computational requirements can quickly become prohibitive, notably for 2-D and 3-D codes. The paper analyses the capability of artificial neural networks to model the steady-state thermal-mechanics of the nuclear fuel, assuming given released fission gases, swelling, densification, and creep. An optimized and trained neural network is then employed on a data assimilation case based on the end of the first ramp of the IFPE Instrumented Fuel Assembly 432.
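
    The paper's network architecture and the underlying fuel performance code are not specified in the abstract; the sketch below is a generic illustration of the surrogate idea using scikit-learn, with an invented analytic function standing in for the thermal-mechanical solver and made-up input distributions for the Monte Carlo sampling step.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Invented stand-in for a fuel performance code: inputs might be, e.g.,
# linear heat rate [kW/m], gap size [mm], fuel conductivity [W/m/K];
# output a fuel centerline temperature [C].
def toy_fuel_model(x):
    q, gap, k = x.T
    return 600.0 + 35.0 * q + 900.0 * gap + 500.0 / k

# Training set sampled over the input domain
X_train = rng.uniform([10.0, 0.05, 2.0], [40.0, 0.20, 6.0], size=(500, 3))
y_train = toy_fuel_model(X_train)

# Small fully connected network used as a fast-running surrogate
# (in a real study, inputs and outputs would typically be standardized first)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0)
surrogate.fit(X_train, y_train)

# Monte Carlo propagation of uncertain inputs through the surrogate,
# instead of thousands of expensive fuel performance calculations
X_mc = rng.normal([25.0, 0.12, 4.0], [2.0, 0.01, 0.3], size=(10_000, 3))
samples = surrogate.predict(X_mc)
print(f"surrogate prediction: mean = {samples.mean():.1f}, std = {samples.std():.1f}")
```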