Using AVX2 Instruction Set to Increase Performance of High Performance Computing Code
In this paper we discuss the new Intel instruction set extension, Intel Advanced Vector Extensions 2 (AVX2), and what it brings to high performance computing (HPC). To illustrate, new systems supporting AVX2 are evaluated to demonstrate how to exploit AVX2 effectively for HPC codes, and to expose situations in which AVX2 might not be the most effective way to increase performance.
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
In current computer architectures, data movement (from die to network) is by
far the most energy consuming part of an algorithm (10pJ/word on-die to
10,000pJ/word on the network). To increase memory locality at the hardware
level and reduce energy consumption related to data movement, future exascale
computers will tend to use more and more cores per compute node ("fat nodes"),
which will run at reduced clock speeds to allow for efficient cooling. To
compensate for frequency decrease, machine vendors are making use of long SIMD
instruction registers that are able to process multiple data with one
arithmetic operator in one clock cycle. SIMD register length is expected to
double every four years. As a consequence, Particle-In-Cell (PIC) codes will
have to achieve good vectorization to fully take advantage of these upcoming
architectures. In this paper, we present a new algorithm that allows for
efficient and portable SIMD vectorization of current/charge deposition routines
that are, along with the field gathering routines, among the most time
consuming parts of the PIC algorithm. Our new algorithm uses a particular data
structure that takes into account memory alignment constraints and avoids
gather/scatter instructions that can significantly affect vectorization
performance on current CPUs. The new algorithm was successfully implemented in
the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2,
256-bit wide data registers). Results show a factor of to
speed-up in double precision for particle shape factors of order to . The
new algorithm can be applied as is on future KNL (Knights Landing)
architectures, which will include the AVX-512 instruction set with 512-bit
register lengths (8 doubles/16 singles).

Comment: 36 pages, 5 figures
GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP
Full detector simulation was among the largest CPU consumers in all CERN
experiment software stacks for the first two runs of the Large Hadron Collider
(LHC). In the early 2010's, the projections were that simulation demands would
scale linearly with luminosity increase, compensated only partially by an
increase of computing resources. The extension of fast simulation approaches to
more use cases, covering a larger fraction of the simulation budget, is only
part of the solution due to intrinsic precision limitations. The remainder
corresponds to speeding-up the simulation software by several factors, which is
out of reach using simple optimizations on the current code base. In this
context, the GeantV R&D project was launched, aiming to redesign the legacy
particle transport codes in order to make them benefit from fine-grained
parallelism features such as vectorization, but also from increased code and
data locality. This paper presents in detail the results and achievements of
this R&D, as well as the conclusions and lessons learnt from the beta
prototype.

Comment: 34 pages, 26 figures, 24 tables