4 research outputs found

    Monte Carlo domain decomposition for robust nuclear reactor analysis

    Monte Carlo (MC) neutral particle transport codes are considered the gold standard for nuclear simulations, but they cannot be robustly applied to high-fidelity nuclear reactor analysis without accommodating several terabytes of materials and tally data. While this is not a large amount of aggregate data for a typical high-performance computer, MC methods are only embarrassingly parallel when the key data structures are replicated for each processing element, an approach that is likely infeasible on future machines. The present work explores the use of spatial domain decomposition to make full-scale nuclear reactor simulations tractable with Monte Carlo methods, presenting a simple implementation in a production-scale code. Good performance is achieved for mesh tallies of up to 2.39 TB distributed across 512 compute nodes while running a full-core reactor benchmark on the Mira Blue Gene/Q supercomputer at Argonne National Laboratory. In addition, the effects of load imbalances are explored with an updated performance model that is empirically validated against observed timing results. Several load-balancing techniques are also implemented to demonstrate that imbalances can be largely mitigated, including a new and efficient way to distribute extra compute resources across finer domain meshes.
    Funding: United States. Dept. of Energy. Center for Exascale Simulation of Advanced Reactor
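    To illustrate why domain decomposition makes such tallies tractable, the sketch below simply divides an aggregate mesh-tally footprint across compute nodes. The function name, the even-split assumption, and the hot-spot factor are illustrative; only the 2.39 TB and 512-node figures come from the abstract.

```python
# Rough estimate of per-node tally memory under spatial domain decomposition.
# Only the aggregate size (2.39 TB) and node count (512) are from the paper;
# the even-split assumption and imbalance factor are illustrative.

def per_node_tally_gb(aggregate_tb, n_nodes, imbalance_factor=1.0):
    """Aggregate tally size (TB) divided across nodes, scaled by an optional
    factor for uneven spatial distribution of tally regions."""
    aggregate_gb = aggregate_tb * 1024.0
    return aggregate_gb / n_nodes * imbalance_factor

if __name__ == "__main__":
    # Perfectly even split: ~4.8 GB per node, well within typical node memory.
    print(f"{per_node_tally_gb(2.39, 512):.1f} GB per node (even split)")
    # A hypothetical 2x hot spot still fits comfortably on one node.
    print(f"{per_node_tally_gb(2.39, 512, imbalance_factor=2.0):.1f} GB per node (2x hot spot)")
```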

    Progress and Status of the OpenMC Monte Carlo Code

    The present work describes the latest advances and progress in the development of the OpenMC Monte Carlo code, an open-source code originating from the Massachusetts Institute of Technology. First, an overview of the development workflow of OpenMC is given. Various enhancements to the code, such as real-time XML input validation, state points, plotting, OpenMP threading, and coarse mesh finite difference acceleration, are then described.
    Funding: United States. Department of Energy. Naval Reactors Division (Rickover Fellowship Program in Nuclear Engineering); United States. Department of Energy (Consortium for Advanced Simulation of Light Water Reactors, Contract DE-AC05-00OR22725); United States. Department of Energy. Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357)
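    The "real-time XML input validation" mentioned above is about catching malformed input before an expensive transport run begins. The stdlib-only sketch below shows that idea only; the element names (particles, batches, inactive) follow OpenMC's settings.xml conventions, but the check itself is a minimal illustration, not the code's actual schema-based validation layer.

```python
# Illustrative pre-run check of an OpenMC-style settings.xml file.
# OpenMC's own validation is richer; this only demonstrates the idea of
# rejecting bad input before particle transport starts.
import sys
import xml.etree.ElementTree as ET

REQUIRED = ("particles", "batches", "inactive")  # typical settings.xml elements

def validate_settings(path):
    """Return a list of problems found in the settings file (empty if OK)."""
    try:
        root = ET.parse(path).getroot()
    except (ET.ParseError, OSError) as exc:
        return [f"could not read settings: {exc}"]
    errors = []
    for tag in REQUIRED:
        node = root.find(tag)
        if node is None:
            errors.append(f"missing <{tag}> element")
        elif not node.text or not node.text.strip().isdigit():
            errors.append(f"<{tag}> must hold a positive integer")
    return errors

if __name__ == "__main__":
    problems = validate_settings(sys.argv[1] if len(sys.argv) > 1 else "settings.xml")
    print("settings OK" if not problems else "\n".join(problems))
```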

    Domain decomposition for Monte Carlo particle transport simulations of nuclear reactors

    No full text
    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 151-158).
    Monte Carlo (MC) neutral particle transport methods have long been considered the gold standard for nuclear simulations, but high computational cost has limited their use significantly. However, as we move towards higher-fidelity nuclear reactor analyses, the method has become competitive with traditional deterministic transport algorithms for the same level of accuracy, especially considering the inherent parallelism of the method and the ever-increasing concurrency of modern high-performance computers. Yet before such analysis can be practical, several algorithmic challenges must be addressed, particularly with regard to the memory requirements of the method. In this thesis, a robust domain decomposition algorithm is proposed to alleviate this, along with models and analysis to support its use for full-scale reactor analysis. Algorithms were implemented in the full-physics Monte Carlo code OpenMC and tested for a highly detailed PWR benchmark: BEAVRS. The proposed domain decomposition implementation incorporates efficient algorithms for scalable inter-domain particle communication in a manner that is reproducible with any pseudo-random number seed. Algorithms are also proposed to scalably manage material and tally data with on-the-fly allocation during simulation, along with numerous optimizations required for scalability as the domain mesh is refined and divided among thousands of compute processes. The algorithms were tested on two supercomputers, namely the Mira Blue Gene/Q and the Titan XK7, demonstrating good performance with realistic tallies and materials requiring over a terabyte of aggregate memory. Performance models were also developed to more accurately predict the network and load imbalance penalties that arise from communicating particles between distributed compute nodes tracking different spatial domains. These were evaluated using machine properties and tallied particle movement characteristics, and empirically validated with observed timing results from the new implementation. Network penalties were shown to be almost negligible with per-process particle counts as low as 1000, and load imbalance penalties higher than a factor of four were not observed or predicted for finer domain meshes relevant to reactor analysis. Load balancing strategies were also explored, and intra-domain replication was shown to be very effective at improving parallel efficiencies without adding significant complexity to the algorithm or burden to the user. Performance of the strategy was quantified with a performance model and shown to agree well with observed timings. Imbalances were shown to be almost completely removed for the finest domain meshes. Finally, full-core studies were carried out to demonstrate the efficacy of domain-decomposed Monte Carlo in tackling the full scope of the problem. A detailed mesh required for a robust depletion treatment was used, and good performance was demonstrated for depletion tallies with 206 nuclides. The largest runs scored six reaction rates for each nuclide in 51M regions for a total aggregate memory requirement of 1.4 TB, and particle tracking rates were consistent with those observed for smaller non-domain-decomposed runs with equivalent tally complexity.
    These types of runs were previously not achievable with traditional Monte Carlo methods, and can be accomplished with domain decomposition with between 1.4x and 1.75x overhead with simple load balancing.
    by Nicholas Edward Horelik. Ph. D.
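    The load-balancing result above relies on replicating processes within heavily loaded domains (intra-domain replication). The sketch below shows one plausible proportional allocation of spare processes to domains; the function name, the largest-remainder tie-breaking, and the example workloads are assumptions for illustration, not the thesis's exact algorithm.

```python
# Sketch: distribute extra MPI processes across spatial domains in proportion
# to each domain's particle workload (intra-domain replication).
# Illustrative largest-remainder allocation, not the thesis algorithm.

def allocate_replicas(workloads, total_procs):
    """Give every domain one process, then hand out the remainder in
    proportion to workload (e.g. particles tracked per domain)."""
    n = len(workloads)
    if total_procs < n:
        raise ValueError("need at least one process per domain")
    extra = total_procs - n
    total_work = sum(workloads)
    shares = [w / total_work * extra for w in workloads]
    counts = [1 + int(s) for s in shares]
    # Hand leftover processes to domains with the largest fractional share.
    leftover = total_procs - sum(counts)
    order = sorted(range(n), key=lambda i: shares[i] - int(shares[i]), reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

if __name__ == "__main__":
    # A hot central domain receives most of the spare processes: [2, 6, 4, 4]
    print(allocate_replicas([100, 400, 250, 250], total_procs=16))
```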

    Expanding and optimizing fuel management and data analysis capabilities of MCODE-FM in support of Massachusetts Institute of Technology research reactor (MITR-II) LEU conversion

    No full text
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 88-90).
    Studies are underway in support of the MIT research reactor (MITR-II) conversion from highly enriched uranium (HEU) to low enriched uranium (LEU), as required by recent non-proliferation policy. With the same core configuration and similar assembly type, high-density monolithic U-Mo fuel will replace the current HEU fuel with comparable performance. Part of the required analysis for relicensing includes detailed fuel management and burnup studies with the new LEU fuel, to be carried out with a recently developed fuel management tool called MCODE-FM. This code package is a Python wrapper enabling automatic fuel shuffling between successive runs of MIT's MCODE, which couples MCNP with ORIGEN for full-core neutronics and depletion. In this work, the capabilities of MCODE have been expanded, and the effects of depletion mesh parameters have been explored. Several features have been added to the fuel management tool to encompass the full range of fuel management options needed for detailed analysis, including assembly flipping, rotation, and temporary storage above the core. In addition, an option to easily manage experiments and custom dummy elements has been added, and a parallel version of MCODE for MCODE-FM that better handles finer discretizations of full-core runs has been developed. These changes have been made in the main wrapper utility as well as the graphical user interface (GUI). In addition to the new MCODE-FM capabilities, a suite of automatic data analysis utilities was developed to consistently parse results. These include utilities to extract or calculate isotope data, fission powers, blade heights, peaking factors, and 3D VTK files for visualization at any time step. The suite has been developed as a series of Python scripts, accessible also through the MCODE-FM GUI. Finally, the effects of the spatial discretization parameters for the depletion mesh have been explored, and mesh choice recommendations have been made for different types of studies. In summary, coarser meshes in the radial and lateral dimensions have been found to yield conservative power peaking results, whereas a finer mesh is needed in the axial dimension. Thus, for iterative fuel management studies a fast-running depletion mesh of 8 axial regions, 3 radial regions, and 1 lateral region can be used. However, for safety studies and benchmarking that only need to run once or twice, 16 axial regions, 15 or 18 radial regions (HEU or LEU, respectively), and 4 lateral regions should be used.
    by Nicholas E. Horelik. S.M.
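    To put the mesh recommendations in concrete terms, the short script below counts the depletion regions implied by the coarse and fine discretizations quoted above. The "per depleted unit" framing (e.g. per fuel plate or element) and the helper name are assumptions for illustration; only the axial/radial/lateral counts come from the abstract.

```python
# Compare the number of depletion regions implied by the coarse mesh
# recommended for iterative fuel management versus the fine meshes for
# safety studies and benchmarking.

def n_regions(axial, radial, lateral):
    """Depletion regions from an axial x radial x lateral discretization."""
    return axial * radial * lateral

if __name__ == "__main__":
    meshes = {
        "fuel management (coarse)": (8, 3, 1),    # 24 regions
        "safety study, HEU (fine)": (16, 15, 4),  # 960 regions
        "safety study, LEU (fine)": (16, 18, 4),  # 1152 regions
    }
    for label, dims in meshes.items():
        print(f"{label:26s}: {n_regions(*dims):5d} regions per depleted unit")
```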