
    FENDL: A library for fusion research and applications

    The Fusion Evaluated Nuclear Data Library (FENDL) is a comprehensive and validated collection of nuclear cross section data coordinated by the International Atomic Energy Agency (IAEA) Nuclear Data Section (NDS). FENDL assembles the best nuclear data for fusion applications selected from available nuclear data libraries and has been under development for decades. FENDL contains sub-libraries for incident neutron, proton, and deuteron cross sections, including general-purpose and activation files used for particle transport and nuclide inventory calculations. We describe the history of the library, the selection of evaluations for the various sub-libraries (neutron, proton, deuteron) with a focus on transport and reactor dosimetry applications, the processing of the nuclear data for application codes, and the development of the TENDL-2017 library, which is the currently recommended activation library for FENDL. We briefly describe the IAEA IRDFF library as the recommended library for fusion dosimetry applications. We also present work on validation of the neutron sub-library using a variety of fusion-relevant computational and experimental benchmarks. A variety of cross section libraries are used for the validation work, including FENDL-2.1, FENDL-3.1d, FENDL-3.2, ENDF/B-VIII.0, and JEFF-3.2, with emphasis on the FENDL libraries. The results of the experimental validation showed that the performance of FENDL-3.2b is at least as good as, and in most cases better than, that of FENDL-2.1. Future work will consider improved evaluations developed by the International Nuclear Data Evaluation Network (INDEN). Additional work will be needed to investigate differences in gas production in structural materials. Covariance matrices need to be updated to support the development of fusion technology. Additional validation work for high-energy neutrons, protons and deuterons, and for the activation library will be needed. (Comment: 81 pages, 114 figures.)
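
    As an illustration of the data-processing step described above, here is a minimal sketch (not FENDL or NDS code) of how an application code evaluates a pointwise cross section from a processed (energy, cross section) table; the grid and values are hypothetical, not taken from any FENDL evaluation.

```python
# Minimal sketch: evaluating a pointwise cross section from a processed
# (energy, sigma) table, the way application codes consume processed
# libraries. The grid and values below are illustrative, not FENDL data.
import numpy as np

def sigma_at(energy_eV, grid, xs):
    """Linear-linear interpolation on a pointwise cross-section table."""
    return np.interp(energy_eV, grid, xs)

# Hypothetical table for a single reaction channel (eV, barns)
grid = np.array([1.0e3, 1.0e5, 1.0e6, 1.4e7, 2.0e7])
xs = np.array([4.2, 3.1, 2.4, 1.8, 1.6])

print(sigma_at(14.1e6, grid, xs))  # at the 14.1 MeV D-T neutron energy
```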

    Sensitivity of fusion neutronics to the pre-processing of nuclear data

    Nuclear data are the foundation of simulation and design in the nuclear industry. The successful commercialisation of thermonuclear fusion will rest on a set of highly accurate simulations used in design, optimisation, and safety analyses. This work focuses on the often overlooked pre-processing stage of nuclear data. The effect of legacy methods in a fusion context is a concern within the community, but it had never been quantified. The sensitivity of fusion neutronics to pre-processing was determined using a set of codes and methods developed as part of this thesis. Legacy pre-processing methods introduced differences of up to 20% between the processed and unprocessed distributions. Simple Monte Carlo radiation transport simulations exhibited sensitivity within energy distributions for small models (< 5 mean free paths). Alternative data formats did not improve simulation results sufficiently to justify their implementation. Complex, fusion-specific models showed a general insensitivity to the pre-processing when run to current levels of statistical precision. The recommendation is to process all future data libraries into the cumulative tabulated probability format. Improved methods are not required at this stage because the core data libraries are incomplete and sometimes inaccurate. Only after the libraries have improved will pre-processing become significant.
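
    A minimal sketch of the cumulative tabulated probability format recommended above, assuming a hypothetical tabulated secondary-energy distribution: the PDF is integrated once into a CDF during pre-processing, and the transport code then samples it by inverse transform.

```python
# Minimal sketch of the cumulative tabulated probability format: a tabulated
# secondary-energy PDF is pre-processed into a CDF once, then sampled by
# inverse transform during transport. Grid and values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tabulated secondary-energy distribution (eV, prob. density)
e_out = np.array([0.0, 2.0e6, 5.0e6, 1.0e7, 1.4e7])
pdf = np.array([0.0, 0.3, 0.5, 0.15, 0.05])

# Pre-processing step: integrate to a normalised CDF (trapezoidal rule)
cdf = np.concatenate(
    [[0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(e_out))]
)
cdf /= cdf[-1]

# Transport step: inverse-transform sampling against the tabulated CDF
samples = np.interp(rng.random(100_000), cdf, e_out)
print(samples.mean())  # mean sampled secondary energy, eV
```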

    Zero power reactors in support of current and future nuclear power systems

    Zero-power reactors stand as indispensable tools for shaping the future of the nuclear industry. By addressing safety concerns, advancing reactor technology, mitigating proliferation risks, fostering education, and promoting economic viability, these reactors hold the key to unlocking the full potential of nuclear energy in a sustainable and responsible manner. As the world seeks cleaner and more efficient energy solutions, the importance of zero-power reactors in charting the course for the nuclear industry's future cannot be overstated. The paper presents a short history of the various zero-power/zero-energy experimental facilities constructed and used worldwide. Many of their names seem to be lost to history and archives, which means that the experimental data produced in those facilities is lost as well. However, re-introducing these names can spark an interest in "digging up" and revisiting experiments of the past, which can help in the design of experiments and new systems in the future. It is clear that a new experimental facility should be built. The next frontier in zero-power reactor design envisions versatility: a future concept that addresses diverse energy needs while contributing to a sustainable and responsible nuclear energy landscape. This was demonstrated in the framework of the Zero-power Experimental PHYsics Reactor design proposed by the French Atomic Energy Commission.

    URANOS v1.0 – the Ultra Rapid Adaptable Neutron-Only Simulation for Environmental Research

    The understanding of neutron transport gained through Monte Carlo simulations has led to major advancements towards the precise interpretation of measurements. URANOS (Ultra Rapid Neutron-Only Simulation) is a free software package which has been developed over the last few years in cooperation between particle physics and the environmental sciences, specifically for the purposes of cosmic-ray neutron sensing (CRNS). Its versatile user interface and an input/output scheme tailored to CRNS applications offer hydrologists straightforward access to model individual scenarios and to directly perform advanced neutron transport calculations. The geometry is modeled layer-wise: in each layer a voxel geometry is extruded from a two-dimensional map of pixels representing predefined materials, allowing objects to be constructed from pixel graphics without a three-dimensional editor. URANOS furthermore features predefined cosmic-ray neutron spectra and detector configurations, and allows important site characteristics of study areas to be replicated, from a small pond to the catchment scale. The simulation thereby gives precise answers to questions such as: From which locations do neutrons originate? How do they propagate to the sensor? What is the neutrons' response to certain environmental changes? In recent years, URANOS has been successfully employed in a number of studies, for example to calculate the cosmic-ray neutron footprint and signals in complex geometries such as mobile applications on roads, urban environments, and snow patterns.
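
    A minimal sketch of the layer-wise voxel construction described above, with hypothetical material codes and layer thicknesses (URANOS itself reads the 2-D maps from pixel images):

```python
# Minimal sketch of layer-wise geometry: each layer is a 2-D pixel map of
# material codes, extruded in z into a voxel volume. Material codes and
# layer thicknesses here are hypothetical, not URANOS's own conventions.
import numpy as np

AIR, SOIL, WATER = 0, 1, 2

# Two hypothetical 4x4 pixel maps (in practice loaded from images)
ground = np.full((4, 4), SOIL)
ground[1:3, 1:3] = WATER              # a small pond in the ground layer
atmosphere = np.full((4, 4), AIR)

# Extrude each map over its layer thickness (in voxel units)
layers = [(ground, 2), (atmosphere, 5)]
volume = np.concatenate(
    [np.repeat(m[None, :, :], t, axis=0) for m, t in layers]
)

print(volume.shape)  # (7, 4, 4): z-stacked voxel geometry
```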

    Correlated uncertainty arithmetic with application to fusion neutronics

    This thesis advances the idea of automatic and rigorous uncertainty propagation for computational science. The aim is to replace the deterministic arithmetic and logical operations composing a function or a computer program with their uncertain equivalents. In this thesis, uncertain computer variables are labelled uncertain numbers, which may be probability distributions, intervals, probability boxes, or possibility distributions. The individual models of uncertainty are surveyed in the context of imprecise probability theory, and their arithmetics are described and developed, with new results presented for each. The presented arithmetic framework allows random variables to be imprecisely characterised or partially defined; it is a common situation that input random variables are unknown or that only certain characteristics of the inputs are known. The thesis describes how uncertain numbers can be rigorously represented by a finite numerical discretisation. Further, it shows how arithmetic operations are computed by numerical convolution, accounting for both the error from the inputs' discretisation and the error from the numerical integration, yielding guaranteed bounds on computed uncertain numbers. One of the central topics of this thesis is stochastic dependency. Considering complex dependencies amongst uncertain numbers is necessary, as dependency plays a key role in operations: an arithmetic operation between two uncertain numbers is a function not only of the input numbers but also of how they are correlated, which is often more important than the marginal information. In the presented arithmetic, dependencies between uncertain numbers may also be partially defined or missing entirely. A major contribution of this thesis is a set of methods to propagate dependence information through functions alongside marginal information. The long-term goal is to solve probabilistic problems with partial knowledge about marginal distributions and dependencies using algorithms which were written deterministically. The developed arithmetic frameworks can be used individually, or may be combined into a larger uncertainty computing framework. We present an application of the developed method to a radiation transport algorithm for nuclear fusion neutronics problems.
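
    To make the dependence point concrete, here is a minimal toy sketch of one kind of uncertain number, intervals; the class is illustrative, not the thesis's implementation. A dependence-blind subtraction rule gives x - x = [-1, 1] for x = [1, 2], while tracking the perfect dependence between the operands gives the exact answer [0, 0].

```python
# Toy interval arithmetic illustrating why dependence matters: the naive
# rule for x - x ignores that both operands are the same variable, so the
# result is wider than necessary. Illustrative only, not the thesis code.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Rigorous when dependence is unknown; overly wide when the
        # operands are perfectly dependent (e.g., the same variable).
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(1.0, 2.0)
print(x - x)               # Interval(lo=-1.0, hi=1.0): dependence-blind
print(Interval(0.0, 0.0))  # exact result when the dependence x == x is tracked
```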

    Fuel processing simulation tool for liquid-fueled nuclear reactors

    Nuclear reactors with liquid fuel offer multiple advantages over their solid-fueled siblings: improved inherent safety, fuel utilization, and thermal efficiency, online reprocessing, and the potential for nuclear fuel cycle closure. To advance this promising reactor design, researchers need a simulation tool for fuel depletion calculations that takes into account online reprocessing and refueling. This work presents a flexible, open-source tool, SaltProc, for simulating fuel depletion in a generic nuclear reactor with liquid, circulating fuel. SaltProc allows the user to define realistically constrained extraction efficiencies for fission products based on physical models of the fuel processing components appearing in various MSR (Molten Salt Reactor) systems. Developed in Python using an object-oriented programming paradigm, SaltProc can model complex, multi-zone, multi-fluid MSR operation and is sufficiently general to represent myriad reactor systems. Moreover, SaltProc can maintain reactor criticality by adjusting the geometry of the core. Finally, the tool can analyze power variations in the context of depletion. This thesis also demonstrates and validates SaltProc for two prospective reactor designs: the Molten Salt Breeder Reactor (MSBR) and the Transatomic Power (TAP) MSR. A 60-year full-power MSBR depletion calculation with ideal fission product extraction (e.g., 100% of the target poison removed) has been validated against the simulation results of Betzler et al., obtained with ChemTriton at ORNL. The average 232Th feed rate obtained in the current work is 2.40 kg/d, which is consistent with the ORNL result (2.45 kg/d). This simulation showed that online fission product extraction and online refueling with 232Th allowed the MSBR to operate at full power for 60 years due to exceptionally low parasitic neutron absorption. This work also presents fuel depletion simulations with SaltProc for the TAP MSR to demonstrate the tool's capability to model liquid-fueled reactors with a movable/adjustable moderator. The dissertation likewise validated depletion calculations for a realistic multi-component model of the fuel salt reprocessing system with assumed ideal extraction efficiency against the full-core TAP depletion analysis by Betzler et al. at ORNL. The average SaltProc-calculated 5%-enriched uranium feed rate is 460.8 kg/y, which agrees well with the reference (480 kg/y). The dissertation further illuminated the impact of xenon extraction efficiency on long-term fuel cycle performance for a realistic reprocessing system model of the TAP concept with non-ideal removal efficiency. For limited gas removal efficiency, the fuel salt composition is strongly influenced by neutron spectrum hardening due to the presence of neutron poisons (135Xe) in the core. Thus, more effective noble gas extraction significantly reduced neutron loss to parasitic absorption, which led to better fuel utilization and an extended core lifetime. Additionally, this work investigated MSR load-following capability through short-term depletion analysis with power level variation P in [0, 100%]. Online gas removal significantly improved the load-following capability of the MSBR by reducing xenon poisoning from -1457 pcm to -189 pcm. The TAP MSR demonstrated a negligible xenon poisoning effect even without online gas removal because its neutron energy spectrum is relatively fast throughout its lifetime.
    This work also analyzed the variation of safety parameters (temperature and void coefficients of reactivity, total control rod worth, kinetic parameters) during operation, using the fuel composition evolution obtained with SaltProc. On a lifetime-long timescale, the safety parameters worsened during operation for both considered MSRs due to a significant spectral shift. On a short-term timescale, the safety parameters during MSBR load-following worsened slightly right after a power drop because the 135Xe concentration peak caused substantial neutron spectrum hardening. However, during the next few hours, the gas removal system removed almost all 135Xe from the fuel, which led to a significant improvement in all safety parameters. Overall, the reduced amount of neutron poisons (e.g., 135Xe) due to online gas extraction improved the safety case for both MSR designs. Finally, a simple uncertainty propagation via Monte Carlo depletion calculations in this work showed that the nuclear-data-related error (0.5-8%, depending on the nuclide) is two orders of magnitude greater than the stochastic error (<0.07%).
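
    A minimal sketch of the online-removal idea at the core of such a tool, with hypothetical nuclides, efficiencies, and a stub in place of the transport-coupled depletion solver (not SaltProc's actual API):

```python
# Minimal sketch of online fuel processing: between depletion solves, each
# processed nuclide is scaled by (1 - removal efficiency). The nuclides,
# efficiencies, and deplete() stub below are hypothetical.
import numpy as np

nuclides = ["Xe135", "Kr85", "U235"]
n = np.array([1.0e15, 5.0e14, 1.0e21])         # atom counts, illustrative
removal_eff = {"Xe135": 0.95, "Kr85": 0.95}    # noble-gas sparging, assumed

def deplete(n, dt):
    # Stand-in for a transport-coupled depletion solve
    # (e.g., via Serpent or OpenMC in a real workflow)
    return n

def process_step(n, dt):
    n = deplete(n, dt)
    for i, nuc in enumerate(nuclides):
        n[i] *= 1.0 - removal_eff.get(nuc, 0.0)  # online extraction
    return n

print(process_step(n, dt=3.0 * 24 * 3600))  # one 3-day depletion step
```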

    ESARDA 37th Annual Meeting Proceedings

    The 37th ESARDA Symposium on Safeguards and Nuclear Non-Proliferation was held in Manchester, United Kingdom, from 19-21 May 2015, and was preceded by meetings of the ESARDA Working Groups on 18 May 2015. The event was once again an opportunity for research organisations, safeguards authorities, and nuclear plant operators to exchange information on new aspects of international safeguards and non-proliferation, as well as recent developments in nuclear safeguards and non-proliferation related research activities and their implications for the safeguards community. The proceedings contain the 118 papers submitted by the deadline.

    Use of shielding integral benchmark archive and database for nuclear data validation

    Shielding benchmarks have been used extensively for the validation and improvement of nuclear data for many years. Recent evaluations, however, mostly rely on validation using critical benchmarks, which can introduce biases and compensation effects. A new Working Party on International Nuclear Data Evaluation Co-operation Subgroup 47 (WPEC SG47), entitled "Use of Shielding Integral Benchmark Archive and Database for Nuclear Data Validation", was started in June 2019 with the objective of promoting the further development and use of the Shielding Integral Benchmark Archive and Database (SINBAD), and thus of diversifying nuclear data validation practice by including other types of integral measurements, such as shielding benchmarks, more extensively in the validation and evaluation procedure. The use of shielding benchmarks is expected to provide a wider-scope test of the performance of evaluated nuclear data and would ultimately contribute to the production of more general-purpose cross-section evaluations.
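
    For illustration, a minimal sketch of the C/E (calculated-over-experimental) comparison that underlies this kind of integral benchmark validation; the responses and uncertainties are invented, not SINBAD entries.

```python
# Minimal C/E sketch: compare calculated (C) against experimental (E)
# detector responses with relative experimental uncertainty. All numbers
# below are illustrative, not taken from SINBAD.
import numpy as np

calc = np.array([1.02e-4, 3.4e-5, 8.1e-6])   # simulated responses
expt = np.array([1.00e-4, 3.6e-5, 7.5e-6])   # measured responses
u_rel = np.array([0.03, 0.05, 0.08])         # relative experimental unc.

ce = calc / expt
print(ce)                        # C/E per detector position
print(np.abs(ce - 1.0) / u_rel)  # deviation in units of experimental sigma
```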

    Probing the early Milky Way with stellar spectroscopy

    Stars preserve the fossil records of the kinematical and chemical evolution of the individual building blocks of the Milky Way. In its efforts to excavate this information, the astronomical community has recently seen the advent of massive astrometric and spectroscopic observing campaigns dedicated to gathering extensive data for millions of stars. The exploration of these vast datasets is at the heart of the present thesis. First, I introduce ATHOS, a data-driven tool that employs spectral flux ratios to determine the fundamental stellar parameters effective temperature, surface gravity, and metallicity, upon which all higher-order parameters like detailed chemical abundances critically rely. ATHOS' robustness and widespread applicability are showcased not only in a comparison to large-scale spectroscopic surveys and their dedicated pipelines, but also by demonstrating that it can compete with highly specialized parameterization methods tailored to high-quality data in studies with low target numbers. An in-depth study of the latter kind is outlined in the second part of this thesis, where I present a chemical abundance investigation of the metal-poor Galactic halo star HD 20. Using spectra and photometric time series of utmost quality in combination with modern asteroseismic and spectroscopic analysis techniques, I deduce a comprehensive, highly accurate, and precise chemical pattern that proves HD 20 worthy of being added to the short list of metal-poor benchmark stars, both for nuclear astrophysics and in terms of stellar parameters. The decomposition of the chemical pattern shows an imprint from s-process nucleosynthesis on top of an enhancement of r-process elements that is in itself rarely encountered. In the absence of a companion that could act as polluter, this is a striking finding that points towards fast and efficient mixing in the early interstellar medium prior to HD 20's formation. In the third and last part, spectroscopic data from the SDSS/SEGUE surveys are combined with astrometry from the Gaia mission to form a sample of several hundred thousand chemodynamically characterized halo stars, which is scrutinized to establish links between globular clusters and the general halo field star population. Based on the identified sample of probable cluster escapees, which includes both first-generation and second-generation (former) cluster stars, I provide important observational constraints on the overall cluster contribution to the buildup of the Galactic halo. A highly interesting, yet tentative, finding is that for those populations of stars that were lost early on, the first-generation fraction appears higher compared to groups that are currently being stripped or are still bound to clusters. This observation could indicate either a dominant contribution from since-dissolved low-mass clusters or that early cluster mass loss preferentially affected first-generation stars.
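
    A minimal sketch of the flux-ratio idea behind ATHOS, with hypothetical wavelength windows and a purely illustrative linear parameter relation (ATHOS' actual relations are trained on a reference sample):

```python
# Minimal flux-ratio sketch: the ratio of median fluxes in two narrow
# wavelength windows correlates with a stellar parameter. Window limits
# and the linear coefficients below are hypothetical, not ATHOS relations.
import numpy as np

def flux_ratio(wave, flux, band_a, band_b):
    in_a = (wave >= band_a[0]) & (wave <= band_a[1])
    in_b = (wave >= band_b[0]) & (wave <= band_b[1])
    return np.median(flux[in_a]) / np.median(flux[in_b])

# Hypothetical normalised spectrum with an H-beta-like absorption dip
wave = np.linspace(4800.0, 5000.0, 2000)  # Angstroms
flux = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 4861.3) / 2.0) ** 2)

fr = flux_ratio(wave, flux, (4859.0, 4864.0), (4900.0, 4910.0))
teff = 3000.0 + 4000.0 * fr  # illustrative linear relation only
print(fr, teff)
```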

    Particle Physics Reference Library

    This second open access volume of the handbook series deals with detectors, large experimental facilities, and data handling, both for accelerator and non-accelerator based experiments. It also covers applications in medicine and the life sciences. A joint CERN-Springer initiative, the "Particle Physics Reference Library" provides revised and updated contributions based on previously published material in the well-known Landolt-Börnstein series on particle physics, accelerators and detectors (volumes 21A, B1, B2, C), which took stock of the field approximately one decade ago. Central to this new initiative is publication under full open access.