Performance analysis of direct N-body algorithms for astrophysical simulations on distributed systems
We discuss the performance of direct summation codes used in the simulation
of astrophysical stellar systems on highly distributed architectures. These
codes compute the gravitational interaction among stars in an exact way and
have an O(N^2) scaling with the number of particles. They can be applied to a
variety of astrophysical problems, like the evolution of star clusters, the
dynamics of black holes, the formation of planetary systems, and cosmological
simulations. The simulation of realistic star clusters with sufficiently high
accuracy cannot be performed on a single workstation but may be possible on
parallel computers or grids. We have implemented two parallel schemes for a
direct N-body code and we study their performance on general purpose parallel
computers and large computational grids. We present the results of timing
analyses conducted on the different architectures and compare them with the
predictions from theoretical models. We conclude that the simulation of star
clusters with up to a million particles will be possible on large distributed
computers in the next decade. Simulating entire galaxies, however, will
additionally require new hybrid methods to speed up the calculation.
Comment: 22 pages, 8 figures, accepted for publication in Parallel Computing
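The O(N^2) pairwise summation at the heart of such direct codes is compact; the
sketch below is a minimal serial version in C (the struct layout, the softening
length EPS, and G = 1 units are illustrative assumptions, not details taken
from the codes benchmarked in the paper):

```c
#include <math.h>

typedef struct {
    double pos[3];   /* position                  */
    double acc[3];   /* accumulated acceleration  */
    double mass;
} Body;

#define EPS 1e-4     /* Plummer softening length (illustrative value) */

/* Direct summation: every particle interacts with every other particle,
 * giving the exact O(N^2) gravitational force described in the abstract.
 * Newton's third law halves the work via the j > i inner loop. */
void compute_accelerations(Body *b, int n)
{
    for (int i = 0; i < n; i++)
        b[i].acc[0] = b[i].acc[1] = b[i].acc[2] = 0.0;

    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            double d[3], r2 = EPS * EPS;
            for (int k = 0; k < 3; k++) {
                d[k] = b[j].pos[k] - b[i].pos[k];
                r2 += d[k] * d[k];
            }
            double inv_r3 = 1.0 / (r2 * sqrt(r2));   /* 1/r^3, G = 1 */
            for (int k = 0; k < 3; k++) {
                b[i].acc[k] += b[j].mass * d[k] * inv_r3;
                b[j].acc[k] -= b[i].mass * d[k] * inv_r3;
            }
        }
    }
}
```

A common way to parallelise this double loop is to split the outer i-loop
across processors and circulate particle data among them, trading O(N^2/P)
arithmetic per node against communication; that balance between compute and
communication is exactly what the timing models in such studies predict.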
Towards a Mini-App for Smoothed Particle Hydrodynamics at Exascale
The smoothed particle hydrodynamics (SPH) technique is a purely Lagrangian
method, used in numerical simulations of fluids in astrophysics and
computational fluid dynamics, among many other fields. SPH simulations with
detailed physics represent computationally demanding calculations. The
parallelization of SPH codes is not trivial due to the absence of a structured
grid. Additionally, the performance of the SPH codes can be, in general,
adversely impacted by several factors, such as multiple time-stepping,
long-range interactions, and/or boundary conditions. This work presents
insights into the current performance and functionalities of three SPH codes:
SPHYNX, ChaNGa, and SPH-flow. These codes are the starting point of an
interdisciplinary co-design project, SPH-EXA, for the development of an
Exascale-ready SPH mini-app. To gain such insights, a rotating square patch
test was implemented as a common test simulation for the three SPH codes and
analyzed on two modern HPC systems. Furthermore, to examine the differences
specific to the codes stemming from the astrophysics community (SPHYNX and
ChaNGa), an
additional test case, the Evrard collapse, has also been carried out. This work
extrapolates the common basic SPH features in the three codes for the purpose
of consolidating them into a pure-SPH, Exascale-ready, optimized, mini-app.
Moreover, the outcome of this work serves as direct feedback to the parent
codes, helping to improve their performance and overall scalability.
Comment: 18 pages, 4 figures, 5 tables; 2018 IEEE International Conference on
Cluster Computing proceedings for WRAp1
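The "common basic SPH features" such a mini-app consolidates centre on
kernel-weighted sums over neighbours. Below is a minimal sketch in C of the
SPH density estimate rho_i = sum_j m_j W(|r_i - r_j|, h) with a cubic-spline
kernel; this is a generic textbook formulation, not code from SPHYNX, ChaNGa,
or SPH-flow, and the brute-force O(N^2) neighbour loop stands in for the real
neighbour search a production code would use:

```c
#include <math.h>

static const double PI = 3.14159265358979323846;

/* 3D cubic-spline (M4) kernel with compact support 2h. */
static double W(double r, double h)
{
    double q = r / h, sigma = 1.0 / (PI * h * h * h);
    if (q < 1.0) return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
    if (q < 2.0) { double t = 2.0 - q; return sigma * 0.25 * t * t * t; }
    return 0.0;
}

/* SPH density estimate. The unstructured neighbour search replaced here
 * by a full pairwise loop is precisely what makes SPH parallelisation
 * non-trivial: there is no regular grid to partition. */
void sph_density(const double (*pos)[3], const double *mass,
                 double *rho, int n, double h)
{
    for (int i = 0; i < n; i++) {
        rho[i] = 0.0;
        for (int j = 0; j < n; j++) {
            double dx = pos[i][0] - pos[j][0];
            double dy = pos[i][1] - pos[j][1];
            double dz = pos[i][2] - pos[j][2];
            rho[i] += mass[j] * W(sqrt(dx*dx + dy*dy + dz*dz), h);
        }
    }
}
```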
QuEST and High Performance Simulation of Quantum Computers
We introduce QuEST, the Quantum Exact Simulation Toolkit, and compare it to
ProjectQ, qHipster and a recent distributed implementation of Quantum++. QuEST
is the first open source, OpenMP and MPI hybridised, GPU accelerated simulator
of universal quantum circuits. Embodied as a C library, it is designed so that
a user's code can be deployed seamlessly to any platform from a laptop to a
supercomputer. QuEST is capable of simulating generic quantum circuits of
general single-qubit gates and multi-qubit controlled gates, on pure and mixed
states, represented as state vectors and density matrices, and in the
presence of decoherence. Using the ARCUS Phase-B and ARCHER supercomputers, we
benchmark QuEST's simulation of random circuits of up to 38 qubits, distributed
over up to 2048 compute nodes, each with up to 24 cores. We directly compare
QuEST's performance to ProjectQ's on single machines, and discuss the
differences in distribution strategies of QuEST, qHipster and Quantum++. QuEST
shows excellent scaling, both strong and weak, on multicore and distributed
architectures.
Comment: 8 pages, 8 figures; fixed typos; updated QuEST URL and fixed typo in
Fig. 4 caption where ProjectQ and QuEST were swapped in the speedup subplot
explanation; added explanation of simulation algorithm; updated bibliography;
stressed technical novelty of QuEST; mentioned new density matrix support
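The core operation in a full state-vector simulator of this kind is applying a
2x2 unitary to pairs of amplitudes whose indices differ only in the target
qubit's bit. The sketch below, in C, illustrates that general simulation
algorithm only; it is not QuEST's internals or API. In a distributed run,
amplitude pairs straddle nodes whenever the target bit falls in the rank part
of the index, forcing inter-node exchange, and it is in handling that case
that the strategies of QuEST, qHipster and Quantum++ differ:

```c
#include <complex.h>
#include <stddef.h>

/* Apply a general single-qubit gate U = [[u00,u01],[u10,u11]] to qubit
 * `target` of an n-qubit state vector of 2^n amplitudes. Each pair
 * (i, i + 2^target) differs only in the target bit and is mixed
 * independently of all other pairs. */
void apply_gate(double complex *amp, int nqubits, int target,
                double complex u00, double complex u01,
                double complex u10, double complex u11)
{
    size_t stride = (size_t)1 << target;
    size_t dim    = (size_t)1 << nqubits;
    for (size_t base = 0; base < dim; base += 2 * stride) {
        for (size_t off = 0; off < stride; off++) {
            size_t i0 = base + off, i1 = i0 + stride;
            double complex a0 = amp[i0], a1 = amp[i1];
            amp[i0] = u00 * a0 + u01 * a1;
            amp[i1] = u10 * a0 + u11 * a1;
        }
    }
}
```

At 16 bytes per double-precision complex amplitude, a 38-qubit state vector
occupies 2^38 x 16 B = 4 TiB, which is why benchmarks at that scale must be
spread over thousands of compute nodes.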