Establishment and Application of Diamond Detector Analysis System
In this study, a diamond detector in a mixed neutron-photon field of the CROCUS research reactor at École Polytechnique Fédérale de Lausanne (EPFL) is modeled. Simulations are carried out to analyze the pulses from the diamond detector in more detail, leading to a novel finding. Through a code-to-code comparison, the Monte Carlo codes SERPENT v2.1.29 and GEANT4 v10.04.p02 are selected for the CROCUS whole-core calculation and the detailed physics modeling in the diamond crystal, respectively. The neutron and prompt gamma-ray contributions to the detector are modeled by a two-step procedure (SERPENT2/GEANT4), and the delayed gamma-ray contribution is simulated by a three-step procedure (SERPENT2/STREAM-SNF/GEANT4). The simulations show that the ratio of the gamma to neutron fluxes in the diamond detector is approximately 91.4%, and that of the delayed to prompt gamma fluxes is approximately 47.2%.
By using the flux spectra calculated at the location of the detector, the physics of particle interactions with the diamond crystal is investigated. The contributions of neutrons and gamma rays to the diamond detector signal amount to approximately 27% and 73%, respectively. The energies and positions of the particles contributing to the detector signal, as tallied in GEANT4, are used to reconstruct numerical pulses and to build a scatter plot in which the pulses are arranged by energy and by calculated pulse width, defined as the width at 0% of the maximum amplitude. The proton-recoil plot shows two bands, one from protons reaching the anode and the other from protons reaching the cathode, indicating that the protons do not have sufficient energy to traverse the diamond crystal and interact with the anode and cathode with equal probability. This tendency also appears as a high-energy tail in the pulse energy spectrum, i.e., the distribution of the number of pulses with energy. Neutron scattering collisions, by contrast, are distributed homogeneously in the crystal. Hence, a structure with a higher count in the ballistic center region (BCR) is observed, most likely because pulses originating in the BCR have larger amplitudes; energy depositions in the BCR therefore yield better-resolved pulses.
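For illustration, the pulse-width metric referenced above (the width at 0% of the maximum amplitude, i.e., the base width of a numerical pulse) can be computed as in the minimal Python sketch below; the sampling interval, threshold fraction, and example pulse shape are assumptions for illustration only, not quantities taken from the paper.

```python
import numpy as np

def pulse_width(samples, dt, threshold_fraction=0.0):
    """Width of a digitized pulse, measured where the amplitude exceeds
    threshold_fraction * max(samples); 0.0 gives the base width."""
    peak = samples.max()
    above = samples > threshold_fraction * peak
    if not above.any():
        return 0.0
    first = np.argmax(above)
    last = len(above) - np.argmax(above[::-1]) - 1
    return (last - first) * dt

# Hypothetical usage: a triangular pulse sampled every 1 ns
t = np.arange(0.0, 100e-9, 1e-9)
pulse = np.maximum(0.0, 1.0 - np.abs(t - 50e-9) / 20e-9)
print(pulse_width(pulse, dt=1e-9))  # roughly the 40 ns base width
```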
Finally, the modeling performance is assessed by comparing the calculated results with the experimental data. In the pulse energy spectrum, the curve produced by the simulations matches that produced by the measurements. The slope of the curves between 1 MeV and 2 MeV is produced mainly by gamma interactions, while the high-energy tail is produced by neutron interactions, in particular proton recoil. The lithium-converter reactions in the diamond detector account for 14.31% and 15.13% beyond 1.34 MeV for the measurement and the simulation, respectively, showing consistency.
Uncertainty quantification of PWR spent fuel due to nuclear data and modeling parameters
Uncertainties are calculated for pressurized water reactor (PWR) spent nuclear fuel (SNF) characteristics. The deterministic code STREAM is currently used as an SNF analysis tool to obtain the isotopic inventory, radioactivity, decay heat, and neutron and gamma source strengths. The SNF analysis capability of STREAM was recently validated; however, the uncertainty analysis had yet to be conducted. To estimate the uncertainty due to nuclear data, STREAM is used to perturb the nuclear cross section (XS) and resonance integral (RI) libraries produced by NJOY99. The perturbation of the XS and RI involves stochastic sampling of the ENDF/B-VII.1 covariance data. To estimate the uncertainty due to modeling parameters (fuel design and irradiation history), surrogate models are built based on polynomial chaos expansion (PCE), and variance-based sensitivity indices (i.e., Sobol' indices) are employed to perform a global sensitivity analysis (GSA). The calculation results indicate that the uncertainty of the SNF characteristics due to modeling parameters is also very important and can contribute significantly relative to the uncertainty due to nuclear data. In addition, the surrogate model offers a computationally efficient approach, with significantly reduced computation time, to accurately evaluate the uncertainties of the SNF integral characteristics. (c) 2020 Korean Nuclear Society, Published by Elsevier Korea LLC. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
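The global sensitivity analysis mentioned above rests on variance-based Sobol' indices. The sketch below shows a first-order Sobol' index estimator (the pick-freeze scheme) on a toy two-parameter model; the model, input distributions, and sample size are illustrative assumptions and not the STREAM surrogate from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy stand-in for a surrogate of an SNF response (e.g. decay heat):
    # strongly sensitive to x[:, 0], weakly to x[:, 1].
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

def first_order_sobol(model, n=100_000, d=2):
    """Pick-freeze estimator of the first-order Sobol' indices S_i."""
    a = rng.uniform(size=(n, d))
    b = rng.uniform(size=(n, d))
    ya = model(a)
    var_y = ya.var()
    s = np.empty(d)
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]  # freeze the i-th input at the values of sample A
        s[i] = np.cov(ya, model(ab))[0, 1] / var_y
    return s

print(first_order_sobol(model))  # x0 dominates the output variance
```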
Accuracy Improvement of Boron Meter Adopting New Fitting Function and Multi-detector
This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, boron meters are widely used to continuously monitor the boron concentration in the reactor coolant. In practice, however, the boron meters are difficult to rely on because their measurement uncertainty is high, and there has therefore been strong demand for improving their accuracy. In this work, a boron meter evaluation model was developed, and two approaches to improving the accuracy were considered: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error decreased from 3.30 ppm to 0.73 ppm. For the multi-detector study, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain the average and standard deviation of the boron concentration errors. Finally, using the new fitting formulation and the multi-detector together, the average error decreased from 5.95 ppm to 1.83 ppm and its standard deviation decreased from 0.64 ppm to 0.26 ppm, a substantial improvement in boron meter accuracy.
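The repeated-trial noise study described above can be illustrated with a minimal sketch: a concentration estimate is recomputed many times on count signals perturbed by counting noise, and the mean and standard deviation of the resulting errors are reported. The calibration curve, noise model, and true concentration below are hypothetical placeholders, not the fitting function developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def count_rate(ppm, a=1.0e4, b=2.0e-3):
    # Hypothetical calibration curve: the count rate falls as boron
    # (a neutron absorber) increases; NOT the paper's fitting function.
    return a / (1.0 + b * ppm)

def estimate_ppm(rate, a=1.0e4, b=2.0e-3):
    # Invert the calibration curve to recover a concentration estimate.
    return (a / rate - 1.0) / b

true_ppm = 800.0
errors = []
for _ in range(1000):                                 # repeated noisy trials
    noisy_rate = rng.poisson(count_rate(true_ppm))    # counting noise
    errors.append(estimate_ppm(noisy_rate) - true_ppm)

errors = np.array(errors)
print(f"mean error = {errors.mean():.2f} ppm, std = {errors.std():.2f} ppm")
```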
Uncertainty quantification in decay heat calculation of spent nuclear fuel by STREAM/RAST-K
This paper addresses the uncertainty quantification and sensitivity analysis of a depleted light-water fuel assembly of the Turkey Point-3 benchmark. The uncertainty of the fuel assembly decay heat and isotopic densities is quantified with respect to three groups of parameters: nuclear data, assembly design, and reactor core operation. The uncertainty propagation is conducted using a two-step analysis code system comprising the lattice code STREAM, the nodal code RAST-K, and the spent nuclear fuel module SNF, through random sampling of microscopic cross-sections, fuel rod sizes, number densities, reactor core total power, and temperature distributions. Overall, the statistical analysis of the calculated samples demonstrates that the decay heat uncertainty decreases with the cooling time. The nuclear data and assembly design parameters are shown to be the largest contributors to the decay heat uncertainty, whereas the reactor core power and inlet coolant temperature have a minor effect. The majority of the decay heat uncertainty is driven by a small number of isotopes such as 241Am, 137Ba, 244Cm, 238Pu, and 90Y. (c) 2021 Korean Nuclear Society, Published by Elsevier Korea LLC. All rights reserved. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
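The random-sampling propagation described above can be illustrated with a minimal sketch: a handful of isotopes contribute decay heat P = sum_i lambda_i N_i Q_i, their number densities are perturbed by sampled relative factors, and the spread of the resulting decay heat gives the propagated uncertainty. The isotope data, nominal densities, and perturbation sizes below are rough illustrative values, not the STREAM/RAST-K inputs of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative per-isotope data: decay constant lambda [1/s] and recoverable
# energy per decay Q [J]; order-of-magnitude placeholders only.
isotopes = {
    "Am-241": (5.1e-11, 9.0e-13),
    "Cm-244": (1.2e-9,  9.4e-13),
    "Pu-238": (2.5e-10, 9.0e-13),
    "Cs-137": (7.3e-10, 1.9e-13),
}
n0 = {iso: 1.0e20 for iso in isotopes}       # nominal number densities [1/cm3]
rel_sigma = {iso: 0.02 for iso in isotopes}  # assumed 2% 1-sigma uncertainty

def decay_heat(n):
    return sum(lam * n[iso] * q for iso, (lam, q) in isotopes.items())

samples = []
for _ in range(500):                          # random sampling of densities
    n = {iso: n0[iso] * rng.normal(1.0, rel_sigma[iso]) for iso in isotopes}
    samples.append(decay_heat(n))

samples = np.array(samples)
print(f"relative decay-heat uncertainty = {100 * samples.std() / samples.mean():.2f} %")
```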
Sensitivity analysis of source intensity and time bin size for the Rossi-alpha method in a numerical reactor model
Until now, various studies focusing on subcriticality measurements have been conducted using the Rossi-alpha method. However, no guidelines have been provided for the source intensity and time bin size of the Rossi-alpha analysis. In this study, sensitivity analyses were performed to determine an optimized source intensity and time bin size for a numerical reactor model. Using the MCNP6 Monte Carlo code, fission count signals in the fuel region were tallied 50 times with different random seeds in real-time mode, and these signals were applied to the Rossi-alpha method as detector signals. The estimated keff values were compared with the reference keff values generated by the MCNP6 criticality calculation. The sensitivity test for the neutron source intensity shows that the Rossi-alpha method has errors of 200 pcm or less when the source intensity is greater than 10^5 s^-1, and that the standard deviations of the 50 analyses converge above this intensity. Likewise, the sensitivity test for the time bin size shows that the Rossi-alpha method has errors of less than 200 pcm when the time bin size is smaller than 10^-5 s, and that the standard deviations of the 50 analyses converge below this bin size. For the numerical reactor model in this study, the threshold time bin size was set to 10^-5 s; in general, however, it depends on the neutron generation time of the problem. Thus, it is recommended to set the neutron source intensity to 10^5 s^-1 or higher and the time bin size to the neutron generation time or less when the Rossi-alpha method is applied to subcriticality measurements in nuclear reactor cores.
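The time bin size studied above enters the Rossi-alpha analysis when the time differences between detection pairs are histogrammed and fitted with A + B*exp(-alpha*t). A minimal sketch of that step on synthetic timestamps follows; the timestamp generator, window length, and bin size are assumptions for illustration, not the MCNP6 tallies of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Synthetic detection timestamps [s]: uncorrelated background plus short
# correlated bursts, a crude stand-in for fission-chain correlations.
background = np.sort(rng.uniform(0.0, 10.0, size=200_000))
parents = rng.uniform(0.0, 10.0, size=20_000)
chains = parents[:, None] + rng.exponential(2.0e-5, size=(parents.size, 3))
timestamps = np.sort(np.concatenate([background, chains.ravel()]))

# Rossi-alpha: histogram forward time differences within a fixed window.
window, bin_size = 2.0e-4, 1.0e-5          # analysis window and time bin [s]
diffs = []
for i, t0 in enumerate(timestamps):
    j = i + 1
    while j < timestamps.size and timestamps[j] - t0 < window:
        diffs.append(timestamps[j] - t0)
        j += 1
counts, edges = np.histogram(diffs, bins=np.arange(0.0, window, bin_size))
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit A + B*exp(-alpha*t); alpha is the prompt decay constant of the model.
model = lambda t, a, b, alpha: a + b * np.exp(-alpha * t)
popt, _ = curve_fit(model, centers, counts, p0=[counts[-1], counts[0], 5.0e4])
print(f"fitted alpha = {popt[2]:.3e} 1/s")
```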
Feasibility study of noise analysis methods on virtual thermal reactor subcriticality monitoring
This paper presents the analysis results of the Rossi-alpha, cross-correlation, Feynman-alpha, and Feynman difference methods applied to the subcriticality monitoring of nuclear reactors. A thermal-spectrum Godiva model has been designed for the analysis of the four methods; its geometry consists of a spherical core containing the isotopes H-1, U-235, and U-238, surrounded by an H2O reflector. The Monte Carlo code McCARD is used in real-time mode to generate virtual detector signals for assessing the feasibility of the four methods. The analysis results indicate that the four methods can be used with high accuracy for the continuous monitoring of subcriticality. In addition, to analyze the impact of random noise contamination on the accuracy of the noise analysis, the McCARD-generated signals are contaminated with arbitrary noise; even with the contaminated detector signals, the four methods predict the subcriticality with reasonable accuracy. Nonetheless, in order to reduce the adverse impact of the random noise, eight detector signals, rather than a single signal, are generated from the core, one from each of eight equal parts of the core. The preliminary analysis with multiple virtual detector signals indicates that using many detectors is a promising way to improve the accuracy of the criticality prediction, and further study will be performed in this regard.
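Of the four methods above, the Feynman-alpha statistic is the simplest to state: counts are gathered in gates of width T and the excess variance-to-mean ratio Y(T) = Var(C)/Mean(C) - 1 is examined as a function of T. A minimal sketch on synthetic timestamps follows; the signal generator and gate widths are illustrative assumptions, not the McCARD-generated detector signals of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic detector timestamps [s] with correlated multiplets, so that the
# variance-to-mean ratio exceeds the Poisson value of 1.
singles = np.sort(rng.uniform(0.0, 50.0, size=300_000))
parents = rng.uniform(0.0, 50.0, size=30_000)
bursts = parents[:, None] + rng.exponential(1.0e-4, size=(parents.size, 2))
timestamps = np.sort(np.concatenate([singles, bursts.ravel()]))

def feynman_y(timestamps, gate_width, t_total=50.0):
    """Excess variance-to-mean ratio Y(T) = Var(C)/Mean(C) - 1 for gate width T."""
    edges = np.arange(0.0, t_total, gate_width)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts.var() / counts.mean() - 1.0

for gate in (1.0e-4, 5.0e-4, 1.0e-3, 5.0e-3):
    print(f"T = {gate:.0e} s  ->  Y = {feynman_y(timestamps, gate):.3f}")
```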