
    Burnable Poison Design for the International Reactor, Innovative and Secure (IRIS)

    The purpose of this research was to create computer models to expedite the core design of the International Reactor, Innovative and Secure (IRIS), specifically so that it may employ burnable absorbers to achieve a longer cycle length and enhanced safety while minimizing the use of soluble boron. The IRIS is a next-generation, integral pressurized water reactor (PWR) being designed by an international consortium led by Westinghouse Electric. Two series of comparison benchmarks, defined by Westinghouse, were completed to validate computer models of representative pin cell, assembly, and whole-core geometries. The models were created using the collision probability code HELIOS and a conversion utility to pass cross sections to NESTLE, a nodal diffusion code. Gadolinium and erbium were chosen as the two best-qualified elements to be employed as burnable absorbers. Research was performed to create burnable absorber configurations for assemblies that minimize reactivity swing over their expected lifetimes. These optimal assembly designs were then loaded into a simple full-reactor geometry to emulate a two-batch core, and the critical soluble boron letdown curves were calculated. While both the gadolinium and erbium cores met the requirements for maximum soluble boron levels, neither configuration satisfied all thermal-hydraulic safety margins. Future work will address the optimization of core loadings so that these safety margins are met. This work will contribute to establishing an attractive, safe, and economic core design for the IRIS long cycle.
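    As a minimal illustration of the boron letdown and reactivity-swing quantities mentioned above, the sketch below converts a hypothetical k-eff history into a critical soluble boron curve under an assumed constant boron worth. None of the numbers or the boron-worth value come from the IRIS models described in this work.

```python
# Illustrative sketch only: turns a hypothetical k-eff vs. burnup history into a
# critical soluble-boron letdown curve, assuming a constant core-average boron worth.
import numpy as np

burnup_GWd_t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0])            # cycle burnup points (assumed)
keff_no_boron = np.array([1.080, 1.065, 1.050, 1.030, 1.010, 0.998])   # hypothetical, absorbers present
boron_worth_pcm_per_ppm = -8.0                                         # assumed boron worth

# Excess reactivity (pcm) that soluble boron must hold down at each burnup step.
rho_excess_pcm = (1.0 - 1.0 / keff_no_boron) * 1.0e5

# Critical boron concentration: boron is only needed while excess reactivity is positive.
critical_boron_ppm = np.clip(rho_excess_pcm / -boron_worth_pcm_per_ppm, 0.0, None)

# Reactivity swing over the cycle, the quantity the absorber designs try to minimize.
swing_pcm = rho_excess_pcm.max() - rho_excess_pcm.min()

for bu, ppm in zip(burnup_GWd_t, critical_boron_ppm):
    print(f"{bu:5.1f} GWd/t : {ppm:6.1f} ppm boron")
print(f"reactivity swing over cycle: {swing_pcm:.0f} pcm")
```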

    Advanced Monte Carlo Methods for Thermal Radiation Transport.

    During the past 35 years, the Implicit Monte Carlo (IMC) method proposed by Fleck and Cummings has been the standard Monte Carlo approach to solving the thermal radiative transfer (TRT) equations. However, the IMC equations are known to have accuracy limitations that can produce unphysical solutions. In this thesis, we explicitly provide the IMC equations with a Monte Carlo interpretation by including particle weight as one of their arguments. We also develop and test a stability theory for the 1-D, gray IMC equations applied to a nonlinear problem. We demonstrate that the worst case occurs for 0-D problems, and we extend the results to a stability algorithm that may be used for general linearizations of the TRT equations. We derive gray Quasidiffusion equations that may be deterministically solved in conjunction with IMC to obtain an inexpensive, accurate estimate of the temperature at the end of the time step. We then define an average temperature T* to evaluate the temperature-dependent problem data in IMC, and we demonstrate that using T* is more accurate than using the (traditional) beginning-of-time-step temperature. We also propose an accuracy enhancement to the IMC equations: the use of a time-dependent "Fleck factor". This Fleck factor can be considered an automatic tuning of the traditionally defined user parameter α, which generally provides more accurate solutions at an increased cost relative to traditional IMC. We also introduce a global weight window that is proportional to the forward scalar intensity calculated by the Quasidiffusion method. This weight window improves the efficiency of the IMC calculation while conserving energy. All of the proposed enhancements are tested in 1-D gray and frequency-dependent problems. These enhancements do not unconditionally eliminate the unphysical behavior that can be seen in IMC calculations; however, for fixed spatial and temporal grids, they suppress it and clearly work to make the solution more accurate. Overall, the work presented represents first steps along several paths that can be taken to improve Monte Carlo simulations of TRT problems.
    Ph.D., Nuclear Engineering and Radiological Sciences and Scientific Computing, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60735/1/wollaber_1.pd
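    For context on the quantities named in the abstract, the sketch below evaluates the standard gray Fleck factor from Fleck and Cummings; it does not reproduce the thesis's time-dependent variant. All material data values are illustrative assumptions.

```python
# Minimal sketch of the standard gray Fleck factor: f = 1 / (1 + alpha * beta * c * dt * sigma_P),
# with beta = 4 a T^3 / c_v. Shows how the choice of evaluation temperature (beginning-of-step
# vs. an estimated average T*) and the user parameter alpha enter the effective opacity split.
A_RAD = 7.5657e-15   # radiation constant a [erg cm^-3 K^-4]
C_LIGHT = 2.9979e10  # speed of light [cm/s]

def fleck_factor(T, dt, sigma_planck, c_v, alpha=1.0):
    """Standard gray Fleck factor evaluated at temperature T."""
    beta = 4.0 * A_RAD * T**3 / c_v
    return 1.0 / (1.0 + alpha * beta * C_LIGHT * dt * sigma_planck)

# Hypothetical material data: evaluating at the beginning-of-step temperature versus an
# estimated average temperature T* gives different effective-scattering fractions.
T_begin, T_star = 1.0e6, 1.2e6          # K (illustrative values only)
dt, sigma_p, c_v = 1.0e-10, 5.0, 1.0e7  # s, 1/cm, erg cm^-3 K^-1 (illustrative values only)

print("f(T_begin) =", fleck_factor(T_begin, dt, sigma_p, c_v))
print("f(T*)      =", fleck_factor(T_star, dt, sigma_p, c_v))
```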

    Developing a Series of AI Challenges for the United States Department of the Air Force

    Through a series of federal initiatives and orders, the U.S. Government has been making a concerted effort to ensure American leadership in AI. These broad strategy documents have influenced organizations such as the United States Department of the Air Force (DAF). The DAF-MIT AI Accelerator is an initiative between the DAF and MIT to bridge the gap between AI researchers and DAF mission requirements. Several projects supported by the DAF-MIT AI Accelerator are developing public challenge problems that address numerous federal AI research priorities. These challenges target priorities by making large, AI-ready datasets publicly available, incentivizing open-source solutions, and creating a demand signal for dual-use technologies that can stimulate further research. In this article, we describe these public challenges being developed and how their application contributes to scientific advances.

    Design Tools for Eliminating Borated Water in IRIS


    A dataset to facilitate automated workflow analysis.

    Datasets that provide a ground truth to quantify the efficacy of automated algorithms are rare because of the time-consuming and expensive, although highly valuable, task of manually annotating observations. Such datasets exist for niche problems in developed fields such as Natural Language Processing (NLP) and Business Process Mining (BPM); however, it is difficult to find a suitable dataset for use cases that span multiple fields, such as the one described in this study. The lack of established ground-truth maps between cyberspace and the human-interpretable, persona-driven tasks that occur therein is one of the principal barriers preventing reliable, automated situation awareness of dynamically evolving events and the consequences of loss due to cybersecurity breaches. Automated workflow analysis, the machine-learning-assisted identification of templates of repeated tasks, is the likely missing link between semantic descriptions of mission goals and observable events in cyberspace. We summarize our efforts to establish a ground truth for an email dataset pertaining to the operation of an open-source software project. The ground truth defines semantic labels for each email and the arrangement of emails within a sequence that describes actions observed in the dataset. Identified sequences are then used to define template workflows that describe the possible tasks undertaken for a project and their business process model. We present the overall purpose of the dataset, the methodology for establishing a ground truth, and lessons learned from the effort. Finally, we report on the proposed use of the dataset for the workflow discovery problem and its effect on system accuracy.
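    A hypothetical sketch of the kind of ground-truth structure the abstract describes is shown below: each email carries a semantic label, labeled emails are grouped into ordered sequences, and recurring label sequences define template workflows. The class names, field names, and example labels are illustrative and are not taken from the released dataset.

```python
# Illustrative data structures for labeled emails, task sequences, and mined workflow templates.
from dataclasses import dataclass
from collections import Counter

@dataclass
class LabeledEmail:
    message_id: str
    semantic_label: str          # e.g. "bug-report", "patch-review", "patch-merged" (hypothetical)

@dataclass
class TaskSequence:
    emails: list                 # ordered LabeledEmail objects forming one observed task

    def signature(self):
        return tuple(e.semantic_label for e in self.emails)

def mine_templates(sequences, min_support=2):
    """Treat any label sequence observed at least `min_support` times as a template workflow."""
    counts = Counter(seq.signature() for seq in sequences)
    return {sig: n for sig, n in counts.items() if n >= min_support}

seqs = [
    TaskSequence([LabeledEmail("m1", "bug-report"), LabeledEmail("m2", "patch-review"),
                  LabeledEmail("m3", "patch-merged")]),
    TaskSequence([LabeledEmail("m4", "bug-report"), LabeledEmail("m5", "patch-review"),
                  LabeledEmail("m6", "patch-merged")]),
    TaskSequence([LabeledEmail("m7", "release-vote"), LabeledEmail("m8", "release-announce")]),
]
print(mine_templates(seqs))    # {('bug-report', 'patch-review', 'patch-merged'): 2}
```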

    Enabling high-fidelity neutron transport simulations on petascale architectures

    The UNIC code is being developed as part of the DOE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. UNIC is an unstructured, deterministic neutron transport code that allows a highly detailed description of a nuclear reactor core in our numerical simulations. The goal of our simulation efforts is to reduce the uncertainties and biases in reactor design calculations by progressively replacing existing multi-level averaging (homogenization) techniques with more direct solution methods based on first principles. Since the neutron transport equation is seven-dimensional (three in space, two in angle, one in energy, and one in time), these simulations are among the most memory- and computationally intensive in all of computational science. To model the complex geometry of a reactor core, billions of spatial elements, hundreds of angles, and thousands of energy groups are necessary, which leads to problem sizes with petascale degrees of freedom. Therefore, these calculations exhaust memory resources on current and even next-generation architectures. In this paper, we present UNIC simulation results for two important representative problems in reactor design/analysis: PHENIX and ZPR. In each case, UNIC shows excellent weak scalability on up to 163,840 cores of BlueGene/P (Argonne) and 131,072 cores of XT5 (ORNL). While our current per-processor performance is not ideal, we demonstrate a clear ability to effectively utilize petascale architectures.
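    The sketch below works through the problem-size arithmetic and the weak-scaling metric the abstract alludes to. All counts and timings are illustrative placeholders, not UNIC results.

```python
# Back-of-the-envelope sketch: degrees of freedom for a discretized transport problem,
# and a simple weak-scaling efficiency (problem size grows with core count, so ideal
# runtime stays constant).

def transport_dof(n_elements, n_angles, n_groups):
    """Angular-flux unknowns for a space/angle/energy discretization."""
    return n_elements * n_angles * n_groups

# "Billions of elements, hundreds of angles, thousands of groups" quickly reaches ~1e14 unknowns.
print(f"{transport_dof(2_000_000_000, 200, 1_000):.2e} degrees of freedom")

def weak_scaling_efficiency(t_ref, t_n):
    """Ratio of reference runtime to runtime at the larger core count; 1.0 is ideal."""
    return t_ref / t_n

# Hypothetical timings for a scan from a reference partition to the full machine.
print(f"efficiency: {weak_scaling_efficiency(t_ref=120.0, t_n=131.0):.2f}")
```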

    Extensible Machine Learning for Encrypted Network Traffic Application Labeling via Uncertainty Quantification

    With the increasing prevalence of encrypted network traffic, cyber security analysts have been turning to machine learning (ML) techniques to elucidate the traffic on their networks. However, ML models can become stale as known traffic features shift between networks and as new traffic emerges that is outside the distribution of the training set. To reliably adapt in this dynamic environment, ML models must additionally provide contextualized uncertainty quantification for their predictions, which has received little attention in the cyber security domain. Uncertainty quantification is necessary both to signal when the model is uncertain about which class to choose in its label assignment and when the traffic is not likely to belong to any pre-trained class. We present a new, public dataset of network traffic that includes labeled, Virtual Private Network (VPN)-encrypted network traffic generated by 10 applications and corresponding to 5 application categories. We also present an ML framework that is designed to train rapidly with modest data requirements and to provide both calibrated predictive probabilities and an interpretable "out-of-distribution" (OOD) score to flag novel traffic samples. We describe how to compute a calibrated OOD score from p-values of the so-called relative Mahalanobis distance. We demonstrate that our framework achieves an F1 score of 0.98 on our dataset and that it can extend to an enterprise network by testing the model (1) on data from similar applications, (2) on dissimilar application traffic from an existing category, and (3) on application traffic from a new category. The model correctly flags uncertain traffic and, upon retraining, accurately incorporates the new data. We additionally demonstrate good performance (F1 score of 0.97) when packet sizes are made uniform, as occurs for certain encryption protocols.
    Comment: Paper is 13 pages and has 9 figures. For the associated dataset, see https://www.ll.mit.edu/r-d/datasets/vpnnonvpn-network-application-traffic-dataset-vna
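    The sketch below shows a generic relative-Mahalanobis-distance OOD score with an empirical p-value, assuming per-class Gaussians with a shared covariance and a single "background" Gaussian fit to all training features. It follows the commonly used formulation and is not the paper's exact pipeline or code.

```python
# Generic relative Mahalanobis distance for OOD scoring (illustrative, not the paper's code).
import numpy as np

def fit_gaussians(X, y):
    """Per-class means with a pooled (shared) covariance, plus a class-agnostic background Gaussian."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    shared_cov = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1) for c in classes)
    shared_cov /= (len(X) - len(classes))
    background = (X.mean(axis=0), np.cov(X, rowvar=False))
    return means, shared_cov, background

def mahalanobis(x, mean, cov):
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

def relative_mahalanobis(x, means, shared_cov, background):
    """Closest class distance minus background distance; larger values look more out-of-distribution."""
    md_class = min(mahalanobis(x, m, shared_cov) for m in means.values())
    md_bg = mahalanobis(x, background[0], background[1])
    return md_class - md_bg

def ood_p_value(score, train_scores):
    """Empirical p-value: fraction of in-distribution training scores at least as extreme."""
    return float(np.mean(np.asarray(train_scores) >= score))
```

    In this style of pipeline, a small p-value for a new flow would indicate traffic unlike anything in the training set and could be used as the flag for novel applications; the threshold choice is an application-specific assumption.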