
    Monte Carlo domain decomposition for robust nuclear reactor analysis

    Monte Carlo (MC) neutral particle transport codes are considered the gold standard for nuclear simulations, but they cannot be robustly applied to high-fidelity nuclear reactor analysis without accommodating several terabytes of materials and tally data. While this is not a large amount of aggregate data for a typical high-performance computer, MC methods are only embarrassingly parallel when the key data structures are replicated for each processing element, an approach that is likely infeasible on future machines. The present work explores the use of spatial domain decomposition to make full-scale nuclear reactor simulations tractable with Monte Carlo methods, presenting a simple implementation in a production-scale code. Good performance is achieved for mesh tallies of up to 2.39 TB distributed across 512 compute nodes while running a full-core reactor benchmark on the Mira Blue Gene/Q supercomputer at Argonne National Laboratory. In addition, the effects of load imbalances are explored with an updated performance model that is empirically validated against observed timing results. Several load balancing techniques are also implemented to demonstrate that imbalances can be largely mitigated, including a new and efficient way to distribute extra compute resources across finer domain meshes.
    United States. Dept. of Energy. Center for Exascale Simulation of Advanced Reactor
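The idea of handing extra compute resources to the busiest domains can be sketched as a simple apportionment problem. The function name and the largest-remainder scheme below are illustrative assumptions of ours, not the paper's algorithm:

```python
# Hedged sketch: apportion a pool of extra ranks over spatial domains in
# proportion to their particle workload (largest-remainder method).
# This is our illustration, not the paper's load balancing scheme.

def assign_extra_ranks(loads, extra):
    """Return the number of extra ranks granted to each domain."""
    total = sum(loads)
    shares = [load * extra / total for load in loads]   # ideal fractional shares
    alloc = [int(s) for s in shares]                    # integer floor of each share
    # Hand leftover ranks to the domains with the largest fractional remainders.
    leftovers = extra - sum(alloc)
    order = sorted(range(len(loads)),
                   key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in order[:leftovers]:
        alloc[i] += 1
    return alloc
```

A domain tracking 60% of the particles would then receive roughly 60% of the spare ranks, which is the intuition behind balancing finer domain meshes.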

    Monte Carlo and Depletion Reactor Analysis for High-Performance Computing Applications

    This dissertation discusses the research and development of a coupled neutron transport/isotopic depletion capability for use in high-performance computing applications. Accurate neutronics modeling and simulation for "real" reactor problems has been a long-sought-after goal in the computational community. A complementary "stretch" goal to this is the ability to perform full-core depletion analysis and spent fuel isotopic characterization. This dissertation thus presents the research and development of a coupled Monte Carlo transport/isotopic depletion implementation within the Exnihilo framework, geared toward high-performance computing architectures to enable neutronics analysis for full-core reactor problems. An in-depth case study of the current state of Monte Carlo neutron transport with respect to source sampling, source convergence, uncertainty underprediction, and biases associated with localized tallies in Monte Carlo eigenvalue calculations was performed using MCNP and KENO. This analysis is utilized in the design and development of the statistical algorithms for Exnihilo's Monte Carlo framework, Shift. To this end, a methodology has been developed to perform tally statistics in domain-decomposed environments. This methodology has been shown to produce accurate tally uncertainty estimates in domain-decomposed environments without a significant increase in memory requirements, processor-to-processor communication, or computational biases. With the addition of parallel, domain-decomposed tally uncertainty estimation processes, a depletion package was developed for the Exnihilo code suite to utilize the depletion capabilities of the Oak Ridge Isotope GENeration (ORIGEN) code. This interface was designed to be transport agnostic, meaning that it can be used by any of the reactor analysis packages within Exnihilo, such as Denovo or Shift.
Extensive validation and testing of the ORIGEN interface and its coupling with the Shift Monte Carlo transport code is performed within this dissertation, and results are presented for the calculated eigenvalues, material powers, and nuclide concentrations for the depleted materials. These results are then compared to ORIGEN and TRITON depletion calculations, and analysis shows that the Exnihilo transport-depletion capability is in good agreement with these codes.
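The general idea of estimating a tally's uncertainty from batches of histories can be illustrated generically. This is a textbook batch-means sketch, not Exnihilo's actual domain-decomposed methodology:

```python
# Generic batch-statistics sketch (our illustration, not Exnihilo's algorithm):
# a tally's uncertainty is estimated from the spread of its per-batch means.

def batch_stats(batch_means):
    """Return (mean, standard error) of a tally from per-batch means."""
    n = len(batch_means)
    mean = sum(batch_means) / n
    # Unbiased sample variance of the batch means.
    var = sum((x - mean) ** 2 for x in batch_means) / (n - 1)
    std_err = (var / n) ** 0.5
    return mean, std_err
```

The challenge the dissertation addresses is producing such estimates correctly when a single particle history scores on several domains, without extra communication; the sketch above only shows the single-domain baseline.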

    Data decomposition of Monte Carlo particle transport simulations via tally servers

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
    United States. Dept. of Energy. Naval Reactors Division (Rickover Fellowship Program in Nuclear Engineering)
    United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357)
    United States. Dept. of Energy (Consortium for Advanced Simulation of Light Water Reactors. Contract DE-AC05-00OR22725)
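The tracker/server split described above can be sketched in a few lines. The rank partition and the cyclic bin-to-server mapping below are illustrative assumptions, not OpenMC's actual scheme:

```python
# Hypothetical sketch of the tracker / tally-server decomposition.
# Function names and the modulo mapping are our assumptions, not OpenMC's API.

def split_ranks(n_ranks, n_servers):
    """Disjointly partition ranks: the last n_servers ranks serve tallies,
    the rest track particles."""
    servers = list(range(n_ranks - n_servers, n_ranks))
    trackers = list(range(n_ranks - n_servers))
    return trackers, servers

def server_for_bin(tally_bin, servers):
    """Map a global tally bin to its owning server (cyclic distribution)."""
    return servers[tally_bin % len(servers)]

def accumulate(events, servers):
    """Stand-in for the message flow: trackers emit (bin, score) events and
    each server accumulates only the bins it owns."""
    totals = {s: {} for s in servers}
    for tally_bin, score in events:
        owner = server_for_bin(tally_bin, servers)
        totals[owner][tally_bin] = totals[owner].get(tally_bin, 0.0) + score
    return totals
```

In the real algorithm the events are MPI messages sent continuously during tracking, so no single node ever holds the full tally array; that is the memory saving the abstract refers to.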

    On the use of tally servers in Monte Carlo simulations of light-water reactors

    An algorithm for decomposing tally data in Monte Carlo simulations using servers has recently been proposed and analyzed. In the present work, we make a number of refinements to a theoretical performance model of the tally server algorithm to better predict the performance of a realistic reactor simulation using Monte Carlo. The impact of subdividing fuel into annular segments on parameters of the performance model is evaluated and shown to result in a predicted overhead of less than 20% for a PWR benchmark on the Mira Blue Gene/Q supercomputer. Additionally, a parameter space study is performed comparing tally server implementations using blocking and non-blocking communication. Non-blocking communication is shown to reduce the communication overhead relative to blocking communication, in some cases resulting in negative overhead.
    United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357)

    Progress and Status of the OpenMC Monte Carlo Code

    The present work describes the latest advances and progress in the development of the OpenMC Monte Carlo code, an open-source code originating from the Massachusetts Institute of Technology. First, an overview of the development workflow of OpenMC is given. Various enhancements to the code such as real-time XML input validation, state points, plotting, OpenMP threading, and coarse mesh finite difference acceleration are described.
    United States. Department of Energy. Naval Reactors Division (Rickover Fellowship Program in Nuclear Engineering)
    United States. Department of Energy (Consortium for Advanced Simulation of Light Water Reactors. Contract DE-AC05-00OR22725)
    United States. Department of Energy. Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357)

    Parallel algorithms for Monte Carlo particle transport simulation on exascale computing architectures

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2013. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 191-199).
    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations.
The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.
by Paul Kollath Romano. Ph.D.
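The nearest-neighbor character of the fission bank algorithm comes from assigning sites to ranks by a global prefix ordering: when the bank is roughly uniform, sites only ever shift between adjacent ranks, which is the property behind the O(√N) expected cost. A minimal sketch of that bookkeeping follows; it is our illustration, not OpenMC's implementation:

```python
# Illustrative sketch of nearest-neighbor fission-bank balancing.
# Each rank holds a contiguous slice of the global fission bank; after a
# generation, ranks exchange only the sites that spill past their target slice.

def balanced_assignment(counts):
    """Target (start, end) global index range for each rank, evenly split."""
    total = sum(counts)
    p = len(counts)
    return [(r * total // p, (r + 1) * total // p) for r in range(p)]

def transfers(counts):
    """List of (sender, receiver, n_sites) moves needed to rebalance.
    With near-uniform counts, every move is to an adjacent rank."""
    targets = balanced_assignment(counts)
    start = 0
    moves = []
    for r, c in enumerate(counts):
        have_start, have_end = start, start + c
        want_start, want_end = targets[r]
        if have_start < want_start:        # leading sites belong to the left neighbor
            moves.append((r, r - 1, want_start - have_start))
        if have_end > want_end:            # trailing sites belong to the right neighbor
            moves.append((r, r + 1, have_end - want_end))
        start += c
    return moves
```

Traditional algorithms instead gather and rebroadcast the whole bank, which is where the O(N) cost the thesis mentions comes from.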

    A high-fidelity multiphysics system for neutronic, thermalhydraulic and fuel-performance analysis of Light Water Reactors

    The behavior of the core in a light water reactor (LWR) is dominated by neutronic, thermal-hydraulic, and thermomechanical phenomena, and complex feedback mechanisms link these physical domains. One current trend in reactor physics is therefore the implementation of multiphysics methods that capture these interactions in order to provide a consistent description of the core. Another important area of work is the development of high-fidelity codes that increase the modeling resolution and eliminate the strong simplifications used in spatially homogenized simulations. Multiphysics and high-fidelity methods depend on the availability of high-performance computers, which limits the feasibility and scope of this kind of simulation. The goal of this work is the development of a multiphysics simulation system capable of performing coupled neutronic, thermal-hydraulic, and thermomechanical analyses of LWR cores with a high-fidelity methodology. To achieve this, the Monte Carlo particle transport method is used to simulate the neutronic behavior without resorting to major physical approximations. For full-core depletion calculations, a domain-based data decomposition of the particle tracking is proposed and implemented. The combination of the Monte Carlo method with subchannel-level thermal-hydraulics and a complete fuel-performance analysis of all fuel rods yields an extremely detailed representation of the core, whose computational demands reach the limits of current high-performance computers. On the software side, an innovative object-oriented coupling approach is used to increase the modularity, flexibility, and maintainability of the program.
The accuracy of this coupled three-code system is assessed with experimental data from two operating power plants, a pre-Konvoi PWR and the Temelín II VVER-1000 reactor. For both cases, the results of the full-core depletion calculations are validated against measurements of the critical boron concentration and the fuel-rod neutron flux. These simulations demonstrate the state-of-the-art modeling capabilities of the developed tool and show the feasibility of this methodology for industrial applications.

    SimpleMOC - A performance abstraction for 3D MOC

    The method of characteristics (MOC) is a popular method for efficiently solving two-dimensional reactor problems. Extensions to three dimensions have been attempted with mixed success, bringing into question the feasibility of efficient full-core three-dimensional (3D) analysis. Although the 3D problem presents many computational difficulties, some simplifications can be made that allow for more efficient computation. In this investigation, we present SimpleMOC, a “mini-app” which mimics the computational performance of a full 3D MOC solver without involving the full physics, allowing for a more straightforward analysis of the computational challenges. A variety of simplifications are implemented that are intended to increase computational feasibility, including the formation of axially-quadratic neutron sources. With the addition of the quadratic approximation to the neutron source, 3D MOC is cast as a CPU-intensive method with the potential for remarkable scalability on next-generation computing architectures.
    United States. Dept. of Energy. Office of Nuclear Energy (Nuclear Energy University Programs Fellowship)
    United States. Dept. of Energy. Center for Exascale Simulation of Advanced Reactor
    United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357)
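An axially-quadratic source can be pictured as a parabola fitted through source values at the bottom, middle, and top of an axial region. The normalization and function name below are our own generic sketch, not SimpleMOC's implementation:

```python
# Generic sketch of an axially-quadratic source (not SimpleMOC's code):
# represent q(z) = a + b*z + c*z^2 on z in [-1, 1], fitted exactly through
# source samples at the bottom (z=-1), middle (z=0), and top (z=+1).

def quadratic_source(q_bot, q_mid, q_top):
    """Return coefficients (a, b, c) of the interpolating parabola."""
    a = q_mid
    b = (q_top - q_bot) / 2.0
    c = (q_top + q_bot) / 2.0 - q_mid
    return a, b, c
```

Carrying three coefficients per region lets the solver use coarse axial zones while still resolving axial source shape, which is one of the simplifications that keeps the 3D method computationally feasible.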

    A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full-core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next-generation HPC architectures.
    Keywords: Method of Characteristics; Neutron transport; Reactor simulation; High performance computing
    United States. Department of Energy (Contract DE-AC02-06CH11357)
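In MOC, the wide vectorizable inner loop the abstract refers to is the per-segment flux attenuation carried out over all energy groups at once. The NumPy sketch below illustrates the flat-source form of that loop; the function name and single-segment scope are our assumptions, not the authors' proxy-app code:

```python
import numpy as np

# Illustrative sketch of the vectorizable MOC inner loop: attenuate the
# angular flux across one track segment for all G energy groups at once.
# Flat-source form; names are ours, not the proxy applications' API.

def sweep_segment(psi_in, sigma_t, q, length):
    """psi_in, sigma_t, q: length-G arrays (per-group flux, total cross
    section, isotropic source). Returns the outgoing angular flux."""
    tau = sigma_t * length            # optical thickness per group
    atten = np.exp(-tau)
    # psi_out = psi_in * e^-tau + (q / sigma_t) * (1 - e^-tau)
    return psi_in * atten + (q / sigma_t) * (1.0 - atten)
```

Because every group is independent, this loop maps directly onto SIMD lanes, and independent tracks can be dispatched to threads as tasks, which is the combination the study evaluates across architectures.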