Matrix-free GPU implementation of a preconditioned conjugate gradient solver for anisotropic elliptic PDEs
Many problems in geophysical and atmospheric modelling require the fast
solution of elliptic partial differential equations (PDEs) in "flat" three-dimensional
geometries. In particular, an anisotropic elliptic PDE for the
pressure correction has to be solved at every time step in the dynamical core
of many numerical weather prediction (NWP) models, and equations of a very similar
structure arise in global ocean models, subsurface flow simulations, and gas and
oil reservoir modelling. The elliptic solve is often the bottleneck of the
forecast, and an algorithmically optimal method has to be used and implemented
efficiently. Graphics Processing Units have been shown to be highly efficient
for a wide range of applications in scientific computing, and recently
iterative solvers have been parallelised on these architectures. We describe
the GPU implementation and optimisation of a Preconditioned Conjugate Gradient
(PCG) algorithm for the solution of a three-dimensional anisotropic elliptic
PDE for the pressure correction in NWP. Our implementation exploits the strong
vertical anisotropy of the elliptic operator in the construction of a suitable
preconditioner. As the algorithm is memory bound, performance can be improved
significantly by reducing the amount of global memory access. We achieve this
by using a matrix-free implementation which does not require explicit storage
of the matrix and instead recalculates the local stencil. Global memory access
can also be reduced by rewriting the algorithm using loop fusion and we show
that this further reduces the runtime on the GPU. We demonstrate the
performance of our matrix-free GPU code by comparing it to a sequential CPU
implementation and to a matrix-explicit GPU code which uses existing libraries.
The absolute performance of the algorithm for different problem sizes is
quantified in terms of floating point throughput and global memory bandwidth.

Comment: 18 pages, 7 figures
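The abstract's central idea, applying the operator by recomputing the local stencil instead of reading a stored matrix, can be illustrated with a minimal sketch. This is a plain-Python CPU illustration, not the paper's CUDA implementation: the one-dimensional [-1, 2, -1] stencil, the problem size, and the unpreconditioned CG loop are all illustrative assumptions.

```python
# Matrix-free conjugate gradient sketch (illustrative; not the paper's CUDA code).
# The operator A is never stored: apply_stencil() recomputes the local
# [-1, 2, -1] stencil on the fly, which is what saves global memory traffic
# in a GPU version of this scheme.
import numpy as np

def apply_stencil(x):
    """Apply the tridiagonal stencil [-1, 2, -1] with zero Dirichlet boundaries."""
    y = 2.0 * x
    y[1:] -= x[:-1]   # subtract left neighbour
    y[:-1] -= x[1:]   # subtract right neighbour
    return y

def cg(b, tol=1e-10, max_iter=500):
    """Unpreconditioned CG; every matvec goes through the matrix-free operator."""
    x = np.zeros_like(b)
    r = b - apply_stencil(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_stencil(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(50)
x = cg(b)
print(np.max(np.abs(apply_stencil(x) - b)) < 1e-8)  # residual check passes
```

The paper's further optimisations (the vertical-line preconditioner and loop fusion of the vector updates) would sit on top of this same structure; each fused kernel reads and writes the vectors once instead of once per operation.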
Toward large-scale Hybrid Monte Carlo simulations of the Hubbard model on graphics processing units
The performance of the Hybrid Monte Carlo algorithm is determined by the
speed of sparse matrix-vector multiplication within the context of
preconditioned conjugate gradient iteration. We study these operations as
implemented for the fermion matrix of the Hubbard model in d+1 space-time
dimensions, and report a performance comparison between a 2.66 GHz Intel Xeon
E5430 CPU and an NVIDIA Tesla C1060 GPU using double-precision arithmetic. We
find speedup factors ranging from 30 to 350 for d = 1, and in excess of 40 for
d = 3. We argue that such speedups are of considerable impact for large-scale
simulation studies of quantum many-body systems.

Comment: 8 pages, 5 figures
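The operation this abstract benchmarks, sparse matrix-vector multiplication inside a CG iteration, is typically expressed in compressed sparse row (CSR) form. The following is a small illustrative sketch in plain Python; the matrix values and layout are made up for the example and do not correspond to the Hubbard fermion matrix.

```python
# CSR sparse matrix-vector product: the kernel that dominates the cost of
# preconditioned CG iterations in HMC (illustrative example only).
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for A stored in CSR format.

    data    -- nonzero values, row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] delimits row i's nonzeros
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 example: A = [[4, 0, 1], [0, 3, 0], [1, 0, 2]]
data = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, indices, indptr, x))  # [7. 6. 7.]
```

On a GPU the row loop is parallelised across threads, which is why the memory layout of `data` and `indices` (and hence coalesced access) governs the achievable speedup reported in the abstract.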