
    Data decomposition of Monte Carlo particle transport simulations via tally servers

    An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.

    Funding: United States. Dept. of Energy, Naval Reactors Division, Rickover Fellowship Program in Nuclear Engineering; United States. Dept. of Energy, Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357); United States. Dept. of Energy, Consortium for Advanced Simulation of Light Water Reactors (Contract DE-AC05-00OR22725).
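    The tracker/server split described above can be sketched with an in-process queue standing in for network messages. This is a minimal illustrative sketch, not OpenMC's actual implementation; the function names and the toy tally layout are invented for the example:

```python
import queue

def run_tally_server(inbox, num_bins):
    """Accumulate incoming (bin_index, score) messages until a stop sentinel."""
    tally = [0.0] * num_bins
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: tracking has finished
            return tally
        bin_index, score = msg
        tally[bin_index] += score

def run_tracker(outbox, events):
    """Stand-in for particle tracking: ship each scored event to the server."""
    for event in events:
        outbox.put(event)
    outbox.put(None)             # tell the server we are done

# Toy run: one tracker, one server, four tally bins.
inbox = queue.Queue()
run_tracker(inbox, [(0, 1.0), (2, 0.5), (0, 0.25), (3, 2.0)])
result = run_tally_server(inbox, num_bins=4)
print(result)  # [1.25, 0.0, 0.5, 2.0]
```

    In the real algorithm the outbox would be a network send to the server rank owning the relevant tally bins, and many trackers would feed many servers concurrently.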

    On the use of tally servers in Monte Carlo simulations of light-water reactors

    An algorithm for decomposing tally data in Monte Carlo simulations using servers has recently been proposed and analyzed. In the present work, we make a number of refinements to a theoretical performance model of the tally server algorithm to better predict the performance of a realistic reactor simulation using Monte Carlo. The impact of subdividing fuel into annular segments on parameters of the performance model is evaluated and shown to result in a predicted overhead of less than 20% for a PWR benchmark on the Mira Blue Gene/Q supercomputer. Additionally, a parameter space study is performed comparing tally server implementations using blocking and non-blocking communication. Non-blocking communication is shown to reduce the communication overhead relative to blocking communication, in some cases resulting in negative overhead.

    Funding: United States. Dept. of Energy, Office of Advanced Scientific Computing Research (Contract DE-AC02-06CH11357).
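    The kind of overhead prediction described above can be illustrated with a generic latency-bandwidth cost model. The formula below is a textbook-style estimate, not the refined model from the paper, and every number is purely illustrative rather than a measured Mira parameter:

```python
def tally_overhead(particles, events_per_particle, bytes_per_event,
                   latency_s, bandwidth_bps, track_time_per_particle_s):
    """Overhead = tally communication time / particle tracking time,
    assuming each scoring event is one blocking message to a server."""
    messages = particles * events_per_particle
    comm_time = messages * (latency_s + bytes_per_event / bandwidth_bps)
    track_time = particles * track_time_per_particle_s
    return comm_time / track_time

# Illustrative parameters only (not values from the paper):
ratio = tally_overhead(particles=10_000, events_per_particle=10,
                       bytes_per_event=8_000, latency_s=2e-6,
                       bandwidth_bps=1.8e9, track_time_per_particle_s=1e-3)
print(f"predicted tally overhead: {ratio:.1%}")  # about 6.4%
```

    Non-blocking communication shrinks the effective comm_time term by overlapping it with tracking, which is how the overhead in the study can approach or even drop below zero relative to the blocking baseline.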

    Monte Carlo domain decomposition for robust nuclear reactor analysis

    Monte Carlo (MC) neutral particle transport codes are considered the gold-standard for nuclear simulations, but they cannot be robustly applied to high-fidelity nuclear reactor analysis without accommodating several terabytes of materials and tally data. While this is not a large amount of aggregate data for a typical high performance computer, MC methods are only embarrassingly parallel when the key data structures are replicated for each processing element, an approach which is likely infeasible on future machines. The present work explores the use of spatial domain decomposition to make full-scale nuclear reactor simulations tractable with Monte Carlo methods, presenting a simple implementation in a production-scale code. Good performance is achieved for mesh-tallies of up to 2.39 TB distributed across 512 compute nodes while running a full-core reactor benchmark on the Mira Blue Gene/Q supercomputer at the Argonne National Laboratory. In addition, the effects of load imbalances are explored with an updated performance model that is empirically validated against observed timing results. Several load balancing techniques are also implemented to demonstrate that imbalances can be largely mitigated, including a new and efficient way to distribute extra compute resources across finer domain meshes.

    Funding: United States. Dept. of Energy, Center for Exascale Simulation of Advanced Reactors.
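    One simple way to distribute extra compute resources across unevenly loaded domains is a greedy assignment that repeatedly reinforces the domain with the worst load per rank. This is an illustrative stand-in that assumes each domain's relative load is known in advance; it is not necessarily the scheme implemented in the paper:

```python
def assign_extra_ranks(domain_loads, extra_ranks):
    """Give each domain one base rank, then hand out extra ranks greedily
    to whichever domain currently has the highest load per rank."""
    ranks = [1] * len(domain_loads)
    for _ in range(extra_ranks):
        # Index of the domain with the worst load-per-rank ratio
        worst = max(range(len(domain_loads)),
                    key=lambda i: domain_loads[i] / ranks[i])
        ranks[worst] += 1
    return ranks

# Relative particle work per domain; domain 0 is heavily loaded.
loads = [100.0, 40.0, 10.0, 10.0]
print(assign_extra_ranks(loads, extra_ranks=4))  # [4, 2, 1, 1]
```

    After assignment, the maximum load per rank drops from 100 to 25, illustrating how a modest pool of spare ranks can flatten an imbalanced decomposition.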

    Parallel algorithms for Monte Carlo particle transport simulation on exascale computing architectures

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 191-199).

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, a number of algorithmic shortcomings would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing the network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups particle histories into batches that reside entirely on a single processor for tally purposes; in doing so it avoids all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than from insufficient network bandwidth or high latency. The model predictions were verified with measured data from simulations in OpenMC on a full-core benchmark problem. Finally, a novel algorithm for decomposing large tally data was proposed, analyzed, and implemented and tested in OpenMC. The algorithm relies on disjoint sets of compute processes and tally servers. The analysis showed that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead. Tests were performed on Intrepid and Titan and demonstrated that the algorithm did indeed perform well over a wide range of parameters.

    by Paul Kollath Romano, Ph.D.
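    The nearest-neighbor fission bank idea can be illustrated with a prefix-sum balance in which surplus sites only ever move between adjacent ranks, so no rank communicates beyond its immediate neighbors. This is a simplified sketch of the general idea, not the thesis's algorithm verbatim:

```python
def neighbor_exchanges(site_counts):
    """Balance fission-bank sites between adjacent ranks only.

    Each rank gets a near-equal target; surplus flows to the next rank and
    deficits pull from it, so every transfer is nearest-neighbor. Returns
    the net transfer t[i] from rank i to rank i+1 (negative = leftward).
    """
    n_ranks = len(site_counts)
    total = sum(site_counts)
    # Ideal number of sites each rank should own after balancing
    targets = [total // n_ranks + (1 if r < total % n_ranks else 0)
               for r in range(n_ranks)]
    transfers = []
    carry = 0
    for have, want in zip(site_counts[:-1], targets[:-1]):
        carry += have - want     # running surplus pushed to the right
        transfers.append(carry)
    return transfers

# 4 ranks with an uneven bank: rank 0 is overloaded.
print(neighbor_exchanges([10, 4, 4, 2]))  # [5, 4, 3]
```

    Applying these transfers leaves every rank with exactly 5 sites, and because each message crosses only one rank boundary, the expected communication volume grows far more slowly than a global gather-and-scatter of the whole bank.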

    Calculation of kinetic parameters βeff and Λ with modified open source Monte Carlo code OpenMC(TD)

    This work presents the methodology used to expand the capabilities of the Monte Carlo code OpenMC for the calculation of reactor kinetic parameters: the effective delayed neutron fraction βeff and the neutron generation time Λ. The modified code, OpenMC(Time-Dependent) or OpenMC(TD), was then used to calculate the effective delayed neutron fraction using the prompt method, while the neutron generation time was estimated using the pulsed method, fitting Λ to the decay of the neutron population. OpenMC(TD) is intended to serve as an alternative for the estimation of kinetic parameters when licensed codes are not available. The results obtained are compared to experimental data and MCNP-calculated values for 18 benchmark configurations.

    Authors: Romero Barrientos, J. (Comision Chilena de Energia Nuclear; Universidad de Chile, Chile); Marquez Damian, Jose Ignacio (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Patagonia Norte, Argentina; European Spallation Source, Sweden); Molina, F. (Comision Chilena de Energia Nuclear; Universidad Andrés Bello, Chile); Zambra, M. (Comision Chilena de Energia Nuclear; Universidad Diego Portales, Chile); Aguilera, P. (Comision Chilena de Energia Nuclear; Universidad de Chile, Chile); López Usquiano, F. (Comision Chilena de Energia Nuclear; Universidad de Chile, Chile); Parra, B. (Instituto de Física Corpuscular, Spain); Ruiz, A. (Comision Chilena de Energia Nuclear; Universidad de Chile, Chile).
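    The prompt method mentioned above rests on a standard relation: running one eigenvalue calculation with delayed neutrons included (k) and one with only prompt neutrons (k_p) gives βeff = 1 - k_p/k. A minimal sketch with purely illustrative eigenvalues (not results from the paper's benchmarks):

```python
def beta_eff_prompt(k_total, k_prompt):
    """Prompt method: beta_eff = 1 - k_prompt / k_total, where k_prompt is
    the eigenvalue computed with delayed neutron production switched off."""
    return 1.0 - k_prompt / k_total

# Illustrative eigenvalues only:
beta = beta_eff_prompt(k_total=1.00000, k_prompt=0.99250)
print(f"beta_eff = {beta:.5f} ({beta * 1e5:.0f} pcm)")
```

    Statistical uncertainties on the two eigenvalues propagate directly into βeff, which is why benchmark comparisons such as those in the paper typically quote both the MC and experimental uncertainties.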

    A high-fidelity multiphysics system for neutronic, thermal-hydraulic and fuel-performance analysis of Light Water Reactors

    The behavior of the core of a light-water reactor (LWR) is dominated by neutronic, thermal-hydraulic, and thermomechanical phenomena, linked by complex feedback mechanisms. One current trend in reactor physics is therefore the implementation of multiphysics methods that capture these interactions in order to provide a consistent description of the core. Another important line of work is the development of high-fidelity codes that increase the modeling resolution and eliminate the strong simplifications used in spatially homogenized simulations. Multiphysics and high-fidelity methods depend on the availability of high-performance computers, which limits the feasibility and scope of this kind of simulation. The goal of this work is the development of a multiphysics simulation system capable of performing coupled neutronic, thermal-hydraulic, and thermomechanical analyses of LWR cores with a high-fidelity methodology. To achieve this, the Monte Carlo particle transport method is used to simulate the neutronic behavior without resorting to major physical approximations. For full-core depletion calculations, a domain-based decomposition of the particle-tracking data is proposed and implemented. The combination of the Monte Carlo method with subchannel-level thermal hydraulics and a complete fuel-performance analysis of all fuel rods yields an extremely detailed representation of the core, whose computational demands reach the limits of current high-performance computers. On the software side, an innovative object-oriented coupling approach is used to increase the modularity, flexibility, and maintainability of the program.
    The accuracy of this coupled three-code system is assessed against experimental data from two operating power plants, a pre-Konvoi PWR and the Temelín II VVER-1000 reactor. For both cases, the full-core depletion results are validated against measurements of the critical boron concentration and of the fuel-rod neutron flux. These simulations showcase the state-of-the-art modeling capabilities of the developed tool and demonstrate the feasibility of this methodology for industrial applications.
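    A coupled neutronics/thermal-hydraulics/fuel-performance calculation of this kind is typically organized as a fixed-point (Picard) iteration over the single-physics solvers. The sketch below reduces each code to a scalar toy model with a negative Doppler-style feedback; all callables and numbers are invented for illustration and do not represent the actual coupled system:

```python
def picard_couple(neutronics, thermal_hydraulics, fuel_performance,
                  power0, tol=1e-6, max_iters=50):
    """Fixed-point (Picard) iteration over three single-physics solvers.

    Each argument is a callable standing in for a full code: neutronics
    maps fuel/coolant temperatures to a power level, the other two map
    power back to temperatures. Convergence is judged on the power level.
    """
    power = power0
    for _ in range(max_iters):
        t_coolant = thermal_hydraulics(power)
        t_fuel = fuel_performance(power)
        new_power = neutronics(t_fuel, t_coolant)
        if abs(new_power - power) < tol:
            return new_power
        power = new_power
    raise RuntimeError("coupling did not converge")

# Toy scalar physics with a negative fuel-temperature (Doppler) feedback;
# this toy neutronics model ignores the coolant temperature.
th = lambda p: 300.0 + 0.1 * p                     # coolant temperature
fp = lambda p: 600.0 + 0.5 * p                     # fuel temperature
neut = lambda tf, tc: 1000.0 - 0.2 * (tf - 600.0)  # power vs. Doppler
print(round(picard_couple(neut, th, fp, power0=1000.0), 3))  # 909.091
```

    Because the feedback here is a contraction (each iteration shrinks the error by a factor of ten), the loop converges in a handful of passes; real coupled systems often need relaxation or acceleration to achieve the same behavior.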

    Development of a coupling approach for multi-physics analyses of fusion reactors

    An integrated multi-physics coupling system has been developed for fusion reactor systems analyses. The system combines an advanced Monte Carlo (MC) modeling approach, which converts complex CAD models into MC models with hybrid constructive-solid-geometry and unstructured-mesh representations, with a high-fidelity coupling approach for mapping data from the MC code to thermal-hydraulics and structural-mechanics codes. Verification calculations showed the system to be reliable, robust, and efficient.
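    Mapping a field from a Monte Carlo mesh to a thermal-hydraulics or structural-mechanics mesh can be illustrated with the simplest possible transfer, nearest-point sampling. The system described above uses a far more sophisticated high-fidelity mapping; this sketch only shows the shape of the problem, and all names and values are invented:

```python
def map_field(source_points, source_values, target_points):
    """Nearest-point transfer of a scalar field between two point clouds,
    the crudest stand-in for a mesh-to-mesh data mapping."""
    def nearest(p):
        # Index of the source point closest (in squared distance) to p
        return min(range(len(source_points)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(source_points[i], p)))
    return [source_values[nearest(p)] for p in target_points]

# Toy 2-D source mesh carrying a heating field, sampled onto two targets.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]
tgt = [(0.9, 0.1), (0.1, 0.9)]
print(map_field(src, vals, tgt))  # [20.0, 30.0]
```

    Production mappings additionally enforce conservation (e.g. of total deposited power) when interpolating between non-matching meshes, which nearest-point sampling does not guarantee.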

    Computational Methods in Science and Engineering : Proceedings of the Workshop SimLabs@KIT, November 29 - 30, 2010, Karlsruhe, Germany

    This proceedings volume compiles contributions covering applications from a range of research fields, from capacity up to capability computing. Besides classical computing aspects such as parallelization, the focus of these proceedings is on multi-scale approaches and on methods for tackling algorithmic and data complexity. Practical aspects of using the HPC infrastructure and the tools and software available at the SCC are also presented.