2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation
We report on improvements made over the past two decades to our adaptive
treecode N-body method (HOT). A mathematical and computational approach to the
cosmological N-body problem is described, with performance and scalability
measured up to 256k ($2^{18}$) processors. We present error analysis and
scientific application results from a series of more than ten 69 billion
($4096^3$) particle cosmological simulations, accounting for $4 \times 10^{20}$
floating point operations. These results include the first simulations using
the new constraints on the standard model of cosmology from the Planck
satellite. Our simulations set a new standard for accuracy and scientific
throughput, while meeting or exceeding the computational efficiency of the
latest generation of hybrid TreePM N-body methods.
Comment: 12 pages, 8 figures, 77 references; To appear in Proceedings of SC '13
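The hashed oct-tree at the heart of HOT assigns each particle a key obtained by interleaving the bits of its coordinates, so tree cells can be stored in a hash table instead of being linked by pointers, and sorting by key groups spatially nearby particles for parallel decomposition. The sketch below is illustrative only and is not the 2HOT code: it shows a minimal Morton-key construction in Python, with the bit depth and unit-box normalization chosen arbitrarily for the example.

```python
def morton_key(x, y, z, bits=10):
    """Interleave the bits of normalized coordinates x, y, z in [0, 1)
    into a single integer key (Morton / Z-order). Illustrative sketch;
    production treecodes such as 2HOT use carefully tuned key layouts."""
    # Quantize each coordinate onto an integer grid of 2**bits cells.
    xi = int(x * (1 << bits))
    yi = int(y * (1 << bits))
    zi = int(z * (1 << bits))
    key = 0
    for b in range(bits):
        key |= ((xi >> b) & 1) << (3 * b)
        key |= ((yi >> b) & 1) << (3 * b + 1)
        key |= ((zi >> b) & 1) << (3 * b + 2)
    return key

# Particles close in space receive nearby keys, so a key sort places them
# in the same octree cells, which is what enables hash-based tree storage.
particles = [(0.12, 0.80, 0.33), (0.13, 0.81, 0.34), (0.90, 0.05, 0.50)]
for p in sorted(particles, key=lambda p: morton_key(*p)):
    print(morton_key(*p), p)
```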
White Paper from Workshop on Large-scale Parallel Numerical Computing Technology (LSPANC 2020): HPC and Computer Arithmetic toward Minimal-Precision Computing
In numerical computations, the precision of floating-point operations is a key
factor in determining both performance (speed and energy efficiency) and
reliability (accuracy and reproducibility). However, precision pulls these two
goals in opposite directions: higher precision improves reliability but costs
speed and energy. The ultimate concept for maximizing both at once is therefore
minimal-precision computing through precision-tuning, which selects the optimal
precision for each operation and each datum. Several studies have already
addressed precision-tuning (e.g. Precimonious and Verrou), but their scope is
limited to the tuning step alone.
Hence, in 2019 we started the Minimal-Precision Computing project to propose a
broader concept: a minimal-precision computing system with precision-tuning
that involves both the hardware and software stacks. Specifically, our
system combines (1) a precision-tuning method based on Discrete Stochastic
Arithmetic (DSA), (2) arbitrary-precision arithmetic libraries, (3) fast and
accurate numerical libraries, and (4) Field-Programmable Gate Array (FPGA) with
High-Level Synthesis (HLS).
In this white paper, we provide an overview of technologies related to
minimal- and mixed-precision computing, outline the future direction of the
project, and discuss current challenges together with our project members and
guest speakers at the LSPANC 2020 workshop:
https://www.r-ccs.riken.jp/labs/lpnctrt/lspanc2020jan/
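To illustrate the basic idea of precision-tuning, the toy sketch below picks the cheapest floating-point type whose result stays within a tolerance of a double-precision reference. It is a minimal stand-in, not the project's system: real tuners such as Precimonious search per-variable precisions in whole programs, and DSA-based tuning estimates rounding error stochastically rather than against a reference run; the `rtol` value and test data here are arbitrary.

```python
import numpy as np

def tune_precision(op, args, rtol=1e-6):
    """Return the narrowest dtype whose result of op(*args) stays within
    rtol of a float64 reference, together with that result. Toy example
    of the precision-tuning concept, not a production tuner."""
    reference = op(*[np.asarray(a, dtype=np.float64) for a in args])
    for dtype in (np.float16, np.float32, np.float64):
        result = op(*[np.asarray(a, dtype=dtype) for a in args])
        if np.allclose(result, reference, rtol=rtol):
            return dtype, result
    return np.float64, reference

# Example: a well-behaved sum can run in reduced precision.
data = (np.linspace(0.0, 1.0, 1000),)
dtype, _ = tune_precision(np.sum, data)
print("sum tuned to:", dtype)  # typically float32 at this tolerance
```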
GPU fast multipole method with lambda-dynamics features
A significant and computationally most demanding part of molecular dynamics simulations is the calculation of long-range electrostatic interactions. Such interactions can be evaluated directly by the naïve pairwise summation algorithm, which is a ubiquitous showcase example for the compute power of graphics processing units (GPUs). However, pairwise summation has O(N^2) computational complexity for N interacting particles, so an approximation method with better scaling is required. Today, the prevalent approximation method in the field is particle mesh Ewald (PME), which exploits fast Fourier transforms (FFTs) to approximate the solution efficiently. However, because the underlying FFTs require all-to-all communication between ranks, PME runs into a communication bottleneck. This communication overhead is negligible only at moderate parallelization; at the high parallelization needed for high-performance applications, using PME becomes unprofitable. Another PME drawback is its inability to perform constant-pH simulations efficiently. In such simulations, the protonation states of a protein are allowed to change dynamically during the simulation, which requires a separate evaluation of the energies for each protonation state. PME cannot compute this efficiently, since the algorithm requires a repeated FFT for each state, leading to a linear overhead in the number of states.
As a fast approximation of pairwise Coulomb interactions that does not suffer from these PME drawbacks, the Fast Multipole Method (FMM) has been implemented and fully parallelized with CUDA. To ensure optimal FMM performance for diverse MD systems, multiple parallelization strategies have been developed. The algorithm has been incorporated into GROMACS to allow for out-of-the-box electrostatic calculations and tested to determine the optimal FMM parameter set for MD simulations. The single-GPU FMM implementation, tested in GROMACS 2019, achieves about a third of the performance of the highly optimized CUDA PME when simulating systems with uniform particle distributions. However, the FMM is expected to outperform PME at high parallelization, because its global communication overhead is minimal compared to that of PME.
Further, the FMM has been extended to provide the energies of an arbitrary number of titratable sites, as needed in the constant-pH method. The extension is not yet fully optimized, but first results show the strength of the FMM for constant-pH simulations. For a relatively large system with half a million particles and more than a hundred titratable sites, a straightforward approach to computing the alternative energies would require repeating the simulation for each state of the sites; the FMM calculates all energy terms only a factor of 1.5 slower than a single simulation step. Further improvements of the GPU implementation are expected to yield even more speedup over the current implementation.
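To make the scaling argument concrete, the sketch below shows the naïve O(N^2) pairwise Coulomb summation that both PME and the FMM approximate. It is a minimal Python/NumPy illustration, not the CUDA implementation described above; the charges, coordinates, and units are arbitrary.

```python
import numpy as np

def coulomb_energy(positions, charges):
    """Direct pairwise Coulomb energy, E = sum_{i<j} q_i q_j / r_ij,
    in arbitrary units. Costs O(N^2): every pair is visited once,
    which is exactly the scaling PME and the FMM are designed to avoid."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        # Vectorize the inner loop over all partners j > i.
        d = positions[i + 1:] - positions[i]
        r = np.sqrt((d * d).sum(axis=1))
        energy += charges[i] * (charges[i + 1:] / r).sum()
    return energy

rng = np.random.default_rng(0)
pos = rng.random((1000, 3))
q = rng.choice([-1.0, 1.0], size=1000)
print(coulomb_energy(pos, q))

# A constant-pH step would repeat this with modified charges for each
# protonation state; done naively, the cost grows linearly with the
# number of states, which is the overhead the FMM extension avoids.
```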