A bibliography on parallel and vector numerical algorithms
This is a bibliography of numerical methods for parallel and vector computation. It also includes references on machine architecture, programming languages, and other topics of interest to scientific computing. Conference proceedings and anthologies published in book form are also listed.
A Massively Parallel Algorithm for the Approximate Calculation of Inverse p-th Roots of Large Sparse Matrices
We present the submatrix method, a highly parallelizable method for the
approximate calculation of inverse p-th roots of large sparse symmetric
matrices which are required in different scientific applications. We follow the
idea of Approximate Computing, allowing imprecision in the final result in
order to be able to utilize the sparsity of the input matrix and to allow
massively parallel execution. For an n x n matrix, the proposed algorithm
allows the calculations to be distributed over n nodes with little
communication overhead. The approximate result matrix exhibits the same
sparsity pattern as the input matrix, allowing for efficient reuse of allocated
data structures.
We evaluate the algorithm with respect to the error that it introduces into
calculated results, as well as its performance and scalability. We demonstrate
that the error is relatively limited for well-conditioned matrices and that
results are still valuable for error-resilient applications like
preconditioning even for ill-conditioned matrices. We discuss the execution
time and scaling of the algorithm on a theoretical level and present a
distributed implementation of the algorithm using MPI and OpenMP. We
demonstrate the scalability of this implementation by running it on a
high-performance compute cluster comprising 1024 CPU cores, showing a speedup
of 665x compared to single-threaded execution.
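The column-wise structure described in the abstract can be sketched serially in a few lines. The sketch below is our illustration, not the paper's code: it assumes the dense inverse p-th root of each principal submatrix is computed via a symmetric eigendecomposition, and the function names are hypothetical. Each column is an independent task, which is what permits distributing the work over n nodes.

```python
import numpy as np
from scipy.sparse import csc_matrix

def inv_pth_root_dense(A, p):
    """Dense inverse p-th root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w ** (-1.0 / p)) @ V.T    # V diag(w^{-1/p}) V^T

def submatrix_inv_pth_root(A, p):
    """Approximate A^(-1/p) for sparse SPD A: column i is taken from the
    dense inverse p-th root of the principal submatrix induced by the
    nonzero pattern of column i. Columns are independent tasks."""
    A = csc_matrix(A)
    n = A.shape[0]
    result = np.zeros((n, n))       # result inherits the sparsity pattern of A
    for i in range(n):              # embarrassingly parallel over columns
        idx = A[:, i].nonzero()[0]          # support of column i
        sub = A[idx, :][:, idx].toarray()   # dense principal submatrix
        pos = int(np.flatnonzero(idx == i)[0])
        result[idx, i] = inv_pth_root_dense(sub, p)[:, pos]
    return result
```

For a block-diagonal input the submatrices coincide with the blocks, so the result is exact; for general sparse matrices the result is the approximation the abstract describes.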
Lanczos eigensolution method for high-performance computers
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem, with 17,000 degrees of freedom, was obtained on the Cray Y-MP using an average of 3.63 processors.
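The recurrence underlying such a solver can be sketched in a few lines. This is a minimal illustration of the plain Lanczos iteration, not the paper's implementation: a production eigensolver of the kind described works with a shift-inverted operator (the matrix-vector product below is replaced by a factorization plus forward/backward solves) and adds reorthogonalization.

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos iteration for symmetric A: builds a tridiagonal T
    whose eigenvalues approximate extremal eigenvalues of A. Illustrative
    sketch only; assumes no breakdown (beta stays nonzero)."""
    n = len(v0)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V = np.zeros((n, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]                    # the matrix-vector multiply step
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]            # three-term recurrence
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, V
```

The steps the abstract singles out for optimization map directly onto this loop: the `A @ v` product (or its shift-inverted replacement) dominates the cost, which is why band/sparse storage and vectorization pay off.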
Parallel eigensolvers in plane-wave Density Functional Theory
We consider the problem of parallelizing electronic structure computations in
plane-wave Density Functional Theory. Because of the limited scalability of
Fourier transforms, parallelism has to be found at the eigensolver level. We
show how a recently proposed algorithm based on Chebyshev polynomials can scale
into the tens of thousands of processors, outperforming block conjugate
gradient algorithms for large computations.
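The kernel such Chebyshev-based eigensolvers are built on can be sketched briefly. The three-term recurrence below damps eigencomponents of a symmetric operator inside an interval [a, b] while amplifying those below it, and it needs only matrix-block products, which is why it scales better than orthogonalization-heavy conjugate gradient methods. The interval and degree choices are illustrative, not taken from the paper.

```python
import numpy as np

def chebyshev_filter(A, X, degree, a, b):
    """Apply a degree-m Chebyshev polynomial of A (mapped so [a, b] goes
    to [-1, 1]) to the block X. Components with eigenvalues in [a, b]
    stay bounded by 1 in magnitude; those below a grow rapidly."""
    e = (b - a) / 2.0              # half-width of the damped interval
    c = (b + a) / 2.0              # center of the damped interval
    Y = (A @ X - c * X) / e        # T_1 of the mapped operator
    for _ in range(2, degree + 1):
        # three-term recurrence: T_k = 2 t T_{k-1} - T_{k-2}
        X, Y = Y, 2.0 * (A @ Y - c * Y) / e - X
    return Y
```

Only `A @ X` products appear, so the filter inherits whatever parallel distribution the operator application has.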
Chebyshev polynomial filtered subspace iteration in the Discontinuous Galerkin method for large-scale electronic structure calculations
The Discontinuous Galerkin (DG) electronic structure method employs an
adaptive local basis (ALB) set to solve the Kohn-Sham equations of density
functional theory (DFT) in a discontinuous Galerkin framework. The adaptive
local basis is generated on-the-fly to capture the local material physics, and
can systematically attain chemical accuracy with only a few tens of degrees of
freedom per atom. A central issue for large-scale calculations, however, is the
computation of the electron density (and subsequently, ground state properties)
from the discretized Hamiltonian in an efficient and scalable manner. We show
in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can
be used to address this issue and push the envelope in large-scale materials
simulations in a discontinuous Galerkin framework. We describe how the subspace
filtering steps can be performed in an efficient and scalable manner using a
two-dimensional parallelization scheme, thanks to the orthogonality of the DG
basis set and block-sparse structure of the DG Hamiltonian matrix. The
on-the-fly nature of the ALBs requires additional care in carrying out the
subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI
approach in calculations of large-scale two-dimensional graphene sheets and
bulk three-dimensional lithium-ion electrolyte systems. Employing 55,296
computational cores, the time per self-consistent field iteration for a sample
of the bulk 3D electrolyte containing 8,586 atoms is 90 seconds, and the time
for a graphene sheet containing 11,520 atoms is 75 seconds.
Comment: Submitted to The Journal of Chemical Physics
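One CheFSI cycle (filter, orthonormalize, Rayleigh-Ritz) can be sketched serially as below. This is our illustration under stated assumptions, not the paper's DG implementation: the damped interval here runs from the largest current Ritz value to a cheap 1-norm bound on the spectrum, which is a common textbook choice.

```python
import numpy as np

def cheb_filter(A, X, deg, a, b):
    # three-term Chebyshev recurrence damping eigencomponents of A in [a, b]
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (A @ X - c * X) / e
    for _ in range(2, deg + 1):
        X, Y = Y, 2.0 * (A @ Y - c * Y) / e - X
    return Y

def chefsi_step(A, X, deg):
    """One CheFSI cycle for symmetric A and an orthonormal block X:
    filter the subspace, re-orthonormalize, then solve the small
    Rayleigh-Ritz problem. Repeating converges to the lowest eigenpairs."""
    ritz = np.linalg.eigvalsh(X.T @ A @ X)  # current Ritz values
    upper = np.linalg.norm(A, 1)            # cheap bound on the spectrum top
    Q, _ = np.linalg.qr(cheb_filter(A, X, deg, ritz[-1], upper))
    w, S = np.linalg.eigh(Q.T @ A @ Q)      # small Rayleigh-Ritz problem
    return Q @ S, w
```

In the DG setting the abstract describes, the `A @ X` products become block-sparse Hamiltonian-times-block operations, which is what the two-dimensional parallelization scheme distributes.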
A GPU-based hyperbolic SVD algorithm
A one-sided Jacobi hyperbolic singular value decomposition (HSVD) algorithm,
using a massively parallel graphics processing unit (GPU), is developed. The
algorithm also serves as the final stage of solving a symmetric indefinite
eigenvalue problem. Numerical testing demonstrates the gains in speed and
accuracy over sequential and MPI-parallelized variants of similar Jacobi-type
HSVD algorithms. Finally, possibilities of hybrid CPU-GPU parallelism are
discussed.
Comment: Accepted for publication in BIT Numerical Mathematics
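The sweep structure such one-sided Jacobi algorithms share can be sketched for the ordinary (trigonometric) SVD, which the hyperbolic variant extends by using hyperbolic rotations for column pairs of opposite signature. The code below is a minimal serial baseline of our own, not the paper's GPU algorithm.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: rotate column pairs of U until all pairs are
    mutually orthogonal; column norms are then the singular values.
    Assumes A has full column rank."""
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                apq = U[:, p] @ U[:, q]
                off = max(off, abs(apq) / np.sqrt(app * aqq))
                if abs(apq) < tol * np.sqrt(app * aqq):
                    continue
                # plane rotation annihilating the (p, q) inner product
                tau = (aqq - app) / (2.0 * apq)
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = t * c
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot
                V[:, [p, q]] = V[:, [p, q]] @ rot
        if off < tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V
```

Each sweep touches independent column pairs, which is the parallelism a GPU implementation exploits by processing many non-conflicting pairs per step.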