
    Data Assimilation using Non-invasive Monte Carlo Sensitivity Analysis of Reactor Kinetics Parameters

    Accurately predicting the criticality of an experiment before interacting with the experimental components is very important for criticality safety. Radiation transport software can be utilized to calculate the effective neutron multiplication factor of a nuclear system. Because of the integral nature of the effective neutron multiplication factor, the calculated value contains various sources of nuclear-data-induced uncertainty. The sensitivity analysis and data assimilation technique presented in this paper demonstrates one possible method of identifying and reducing the nuclear-data-induced uncertainty in the effective neutron multiplication factor. The results presented in this work show that it is possible to use relative sensitivity coefficients of the prompt neutron decay constant and the effective delayed neutron fraction with respect to 239Pu nuclear data to reduce nuclear-data-induced uncertainties in the effective neutron multiplication factor. This work has been utilized by members of the Los Alamos National Laboratory project EUCLID (Experiments Underpinned by Computational Learning for Improvements in Nuclear Data) for optimally designing a new experiment, which will be used to reduce compensating errors in 239Pu nuclear data.
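The adjustment strategy described above can be illustrated with a generalized linear least squares (GLLS) update, a standard data assimilation technique in this field. The sketch below is purely illustrative, assuming hypothetical sensitivity vectors, covariances, and responses (beta_eff and the prompt neutron decay constant); none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical prior relative covariance of four nuclear-data parameters.
M = np.diag([0.02, 0.015, 0.01, 0.03]) ** 2

# Hypothetical relative sensitivity coefficients of k_eff to those parameters.
s_keff = np.array([0.8, -0.3, 0.5, 0.2])

# Hypothetical sensitivities of the two measured responses
# (effective delayed neutron fraction and prompt neutron decay constant).
S = np.array([[0.6, 0.4, -0.2, 0.1],
              [-0.3, 0.7, 0.5, -0.4]])

# Hypothetical experimental (relative) covariance of the two responses.
V = np.diag([0.01, 0.012]) ** 2

# Prior k_eff uncertainty from the first-order "sandwich rule": sigma^2 = s^T M s.
sigma_prior = np.sqrt(s_keff @ M @ s_keff)

# GLLS posterior nuclear-data covariance:
#   M' = M - M S^T (S M S^T + V)^-1 S M
K = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)
M_post = M - K @ S @ M

# Posterior k_eff uncertainty: never larger than the prior, since M' <= M.
sigma_post = np.sqrt(s_keff @ M_post @ s_keff)
print(sigma_prior, sigma_post)
```

Measuring responses whose sensitivities overlap those of k_eff constrains the shared nuclear-data parameters, which is what shrinks the propagated k_eff uncertainty in this toy setup.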

    Uncovering Where Compensating Errors Could Hide in ENDF/B-VIII.0

    Unconstrained physics spaces between two or more nuclear data observables in a library occur when their values can be simultaneously adjusted without violating the uncertainties in either differential information or simulations of relevant integral experiments. Differential data are often too imprecise to fully bound all nuclear data observables of interest for application simulations. Integral data are simulated with combinations of nuclear data, so an error in one observable may be hidden by a counterbalancing error in another. In this manner, compensating errors may lurk within nuclear data libraries, and these errors have the potential to undermine the predictive power of neutron transport simulations, particularly in situations where there is no conclusive validation experiment that resembles the application of interest. The EUCLID project (Experiments Underpinned by Computational Learning for Improvements in Nuclear Data) developed a preliminary workflow to identify these unconstrained physics spaces by bringing together results from a large collection of integral experiments with their simulated counterparts, as well as differential information that has a one-to-one correspondence to nuclear data. This wealth of information is processed by machine learning tools for subsequent refinement by human experts. Here, we show how the EUCLID workflow is executed by applying it first to 239Pu and then to 9Be nuclear data in ENDF/B-VIII.0.

    EUCLID: A New Approach to Constrain Nuclear Data via Optimized Validation Experiments using Machine Learning

    Compensating errors between several nuclear data observables in a library can adversely impact application simulations. The EUCLID project (Experiments Underpinned by Computational Learning for Improvements in Nuclear Data) set out to first identify where compensating errors could be hiding in our libraries, and then design validation experiments optimized to reduce compensating errors for a chosen set of nuclear data. Adjustment of nuclear data will be performed to assess whether the new experimental data—spanning measurements from multiple responses—successfully reduced compensating errors. The specific target nuclear data for EUCLID are 239Pu fission, inelastic scattering, elastic scattering, capture, nu-bar, and the prompt fission neutron spectrum (PFNS). A new experiment has been designed, which will be performed at the National Criticality Experiments Research Center (NCERC).