
    Long-Term Simulations of Beam-Beam Dynamics on GPUs

    Future machines such as electron-ion colliders (JLEIC), linac-ring machines (eRHIC), or the LHeC are particularly sensitive to beam-beam effects, which are the limiting factor for long-term stability and high luminosity reach. The complexity of the non-linear dynamics makes such simulations, which require millions of turns, challenging to perform. Until recently, most methods used linear approximations and/or tracking for a limited number of turns. We have developed a framework that exploits a massively parallel Graphics Processing Unit (GPU) architecture to track particles for millions of turns in a symplectic way up to an arbitrary order, colliding the beams at each turn. The code is called GHOST, for GPU-accelerated High-Order Symplectic Tracking. As of now, no other code in existence can accurately model the single-particle non-linear dynamics and the beam-beam effect at the same time for the number of turns required to verify the long-term stability of a collider. Our approach relies on matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction.
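The Bassetti-Erskine approximation gives the transverse field of an elliptical Gaussian beam in terms of the complex error function; in the round-beam limit (sigma_x = sigma_y = sigma) it reduces to a simple closed form. A minimal sketch of that round-beam limit, with the overall charge and permittivity factors normalized away (the function name and normalization are illustrative, not taken from GHOST):

```python
import math

def round_beam_kick(x, y, sigma):
    """Normalized transverse field (Ex, Ey) of a round Gaussian beam.

    Round-beam limit of the Bassetti-Erskine field; the overall factor
    of charge / (2*pi*eps0) is dropped for illustration.
    """
    r2 = x * x + y * y
    if r2 == 0.0:
        return 0.0, 0.0  # field vanishes on axis by symmetry
    f = (1.0 - math.exp(-r2 / (2.0 * sigma * sigma))) / r2
    return f * x, f * y
```

The kick points radially outward for like charges and falls off as 1/r far from the beam core, recovering the field of a line charge.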

    Multi-GPU Accelerated High-Fidelity Simulations of Beam-Beam Effects in Particle Colliders

    Numerical simulations of beam-beam effects in particle colliders are crucial for understanding and designing future machines such as electron-ion colliders (JLEIC), linac-ring machines (eRHIC), or the LHeC. These simulations model the non-linear collision dynamics of two counter-rotating beams for millions of turns. In particular, at each turn, the algorithm simulates the collision of two beams propagating at different speeds, each with a different number of bunches. This leads to non-pairwise collisions between beams with different numbers of bunches, which increases the computational load in proportion to the number of bunches. Simulating these collisions for millions of turns on traditional CPUs is challenging due to the complexity of modeling the non-linear beam dynamics and the need to simulate every bunch collision in a reasonable amount of time. In this thesis, we present a high-performance, scalable implementation for simulating beam-beam effects in electron-ion colliders on a cluster of NVIDIA GPUs. The parallel implementation is optimized to minimize communication overhead, and its performance scales near-linearly with the number of GPUs. Further, the new code enables tracking and collision of the beams for millions of turns, making previously inaccessible long-term simulations tractable. As of now, no other code in existence can accurately model the single-particle non-linear dynamics and the beam-beam effects at the same time for the number of turns required to verify the long-term stability of a collider.
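One common way to keep communication overhead low in a multi-device beam-beam code is to exchange only per-bunch statistical moments rather than raw particle coordinates: each device computes partial sums over its slice of the particles, and the global moments come from a small reduction. A stdlib-only sketch of that pattern (the partitioning and helper names are illustrative, not taken from the thesis):

```python
def partial_sums(xs):
    """Per-device partial sums: (count, sum, sum of squares)."""
    return len(xs), sum(xs), sum(x * x for x in xs)

def combined_moments(parts):
    """Combine per-device partial sums into a global mean and variance."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    s2 = sum(p[2] for p in parts)
    mean = s / n
    var = s2 / n - mean * mean
    return mean, var

# Split one bunch's coordinates across three "devices" and reduce only
# the moments; no particle data crosses the device boundary.
coords = [0.1, -0.2, 0.05, 0.3, -0.15, 0.0]
chunks = [coords[0:2], coords[2:4], coords[4:6]]
mean, var = combined_moments([partial_sums(c) for c in chunks])
```

The reduction payload is three numbers per device regardless of how many macro-particles each device owns, which is one reason such schemes scale near-linearly.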

    CompF2: Theoretical Calculations and Simulation Topical Group Report

    Full text link
    This report summarizes the work of the Computational Frontier topical group on theoretical calculations and simulation for Snowmass 2021. We discuss the challenges, potential solutions, and needs facing six diverse but related topical areas that span the subject of theoretical calculations and simulation in high energy physics (HEP): cosmic calculations, particle accelerator modeling, detector simulation, event generators, perturbative calculations, and lattice QCD (quantum chromodynamics). The challenges arise from the next generations of HEP experiments, which will include more complex instruments, provide larger data volumes, and perform more precise measurements. Calculations and simulations will need to keep up with these increased requirements. The other aspect of the challenge is the evolution of the computing landscape away from general-purpose computing on CPUs and toward special-purpose accelerators and coprocessors such as GPUs and FPGAs. These newer devices can provide substantial improvements for certain categories of algorithms, at the expense of more specialized programming and memory and data access patterns.
    Comment: Report of the Computational Frontier Topical Group on Theoretical Calculations and Simulation for Snowmass 2021.

    High-Fidelity Simulations of Long-Term Beam-Beam Dynamics on GPUs

    Future machines such as the Electron Ion Collider (MEIC), linac-ring machines (eRHIC), or the LHeC are particularly sensitive to beam-beam effects, which are the limiting factor for long-term stability and high luminosity reach. The complexity of the non-linear dynamics makes such simulations, which typically require millions of turns, challenging to perform. Until recently, most methods have involved linear approximations and/or tracking for a limited number of turns. We have developed a framework that exploits a massively parallel Graphics Processing Unit (GPU) architecture to track particles for millions of turns in a symplectic way up to an arbitrary order. The code is called GHOST, for GPU-accelerated High-Order Symplectic Tracking. As of now, no other code in existence can accurately model the single-particle non-linear dynamics and the beam-beam effect at the same time for the number of turns necessary to verify the long-term stability of a collider. Our approach relies on matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction.

    Electron-Ion Collider Performance Studies With Beam Synchronization via Gear-Change

    Beam synchronization of the future electron-ion collider (EIC) is studied by introducing different bunch numbers in the two colliding beams. This allows non-pairwise collisions between the bunches of the two beams and is known as gear-change, whereby one bunch of the first beam collides with every bunch of the second beam, one at a time. Here we report on how the beam dynamics of the Jefferson Lab Electron Ion Collider concept is affected by the gear change. For this study, we use the new GPU-based code GHOST. It features symplectic one-turn maps for particle tracking and the Bassetti-Erskine approach for beam-beam interactions.
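The non-pairwise pairing induced by the gear change can be written down directly: if the beams carry n1 and n2 bunches, the s-th collision pairs bunch s mod n1 of one beam with bunch s mod n2 of the other, so when n1 and n2 are coprime every bunch eventually meets every bunch of the opposing beam. A small sketch of this bookkeeping (the function name is illustrative, not from GHOST):

```python
from math import gcd

def gear_change_pairs(n1, n2):
    """Enumerate one full gear-change cycle of bunch pairings.

    Collision s pairs bunch (s mod n1) of beam 1 with bunch
    (s mod n2) of beam 2; the pattern repeats after lcm(n1, n2).
    """
    period = n1 * n2 // gcd(n1, n2)
    return [(s % n1, s % n2) for s in range(period)]

# With coprime bunch counts (3 and 4), bunch 0 of beam 1 meets
# every bunch of beam 2 over one 12-collision cycle.
pairs = gear_change_pairs(3, 4)
partners_of_0 = {b2 for b1, b2 in pairs if b1 == 0}
```

This also makes the computational-load argument concrete: the collision schedule only closes after lcm(n1, n2) collisions, so each bunch must be collided against many distinct partners rather than a single fixed one.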

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) the ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
    Comment: 77 pages, 13 figures; draft report, subject to further revision.

    Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, many scientific and engineering applications are unstructured. Achieving performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms that exhibit data-dependent control flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to exhibit significant branch and memory divergence, load imbalance between processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control flow during a single step of the application, independent of the other steps, under the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step, and that this structure can be predicted in the current step by observing the computation structure of previous steps.
    In this dissertation, we present novel machine-learning-based optimization techniques to address the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation, though the techniques should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation and anticipate future memory access patterns. Access pattern forecasts can then be used to make optimization decisions during application execution, improving the performance of the application at a future time step based on observations from earlier time steps. On heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all processing units to deliver good aggregate performance. We used these optimization techniques and this anticipation strategy to design a cache-aware, memory-efficient parallel algorithm that addresses the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
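The anticipation idea can be illustrated with the simplest possible forecaster: because the access structure evolves smoothly between time steps, a linear extrapolation from the last two steps predicts the next step's pattern exactly for near-uniform particle motion. A toy sketch (the forecasting rule and names here are illustrative, far simpler than the supervised-learning models described in the dissertation):

```python
def forecast_cells(prev, curr):
    """Linearly extrapolate each particle's next cell index from its
    cell indices at the previous two time steps."""
    return [2 * c - p for p, c in zip(prev, curr)]

# Particles drifting at constant speed through a 1-D cell grid: the
# forecast of step 3's access pattern, made at step 2, matches the
# actual pattern, so data can be reordered for locality before the
# step executes.
step1 = [0, 4, 8, 2]
step2 = [1, 6, 9, 2]
predicted_step3 = forecast_cells(step1, step2)
actual_step3 = [2, 8, 10, 2]
```

In a real implementation the forecast would drive decisions such as particle reordering, work distribution across devices, or prefetching, with a learned model replacing the fixed extrapolation rule.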

    The Future of High Energy Physics Software and Computing

    Software and Computing (S&C) are essential to all High Energy Physics (HEP) experiments and many theoretical studies. The size and complexity of S&C are now commensurate with those of experimental instruments, playing a critical role in experimental design, data acquisition/instrumental control, reconstruction, and analysis. Furthermore, S&C often plays a leading role in driving the precision of theoretical calculations and simulations. Within this central role in HEP, S&C has been immensely successful over the last decade. This report looks forward to the next decade and beyond, in the context of the 2021 Particle Physics Community Planning Exercise ("Snowmass") organized by the Division of Particles and Fields (DPF) of the American Physical Society.
    Comment: Computational Frontier Report Contribution to Snowmass 2021; 41 pages, 1 figure. v2: missing ref and added missing topical group conveners. v3: fixed typo.