High-Performance Solvers for Dense Hermitian Eigenproblems
We introduce a new collection of solvers - subsequently called EleMRRR - for
large-scale dense Hermitian eigenproblems. EleMRRR solves various types of
problems: generalized, standard, and tridiagonal eigenproblems. Among these,
the last is of particular importance, as it is both a solver in its own right
and the computational kernel for the first two; we present a fast and
scalable tridiagonal solver based on the Algorithm of Multiple Relatively
Robust Representations - referred to as PMRRR. Like the other EleMRRR solvers,
PMRRR is part of the freely available Elemental library, and is designed to
fully support both message-passing (MPI) and multithreading parallelism (SMP).
As a result, the solvers can equally be used in pure MPI or in hybrid MPI-SMP
fashion. We conducted a thorough performance study of EleMRRR and ScaLAPACK's
solvers on two supercomputers. Such a study, performed with up to 8,192 cores,
provides precise guidelines to assemble the fastest solver within the ScaLAPACK
framework; it also indicates that EleMRRR outperforms even the fastest solvers
built from ScaLAPACK's components.
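PMRRR's own calling interface is not shown in the abstract; as a point of reference, the sequential MRRR kernel that PMRRR parallelizes is exposed in LAPACK as dstemr, which is reachable from SciPy. A minimal sketch (the test matrix is illustrative only):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Symmetric tridiagonal test matrix: 1-D second-difference stencil
n = 1000
d = 2.0 * np.ones(n)       # diagonal
e = -1.0 * np.ones(n - 1)  # off-diagonal

# lapack_driver='stemr' selects LAPACK's dstemr, the sequential MRRR
# implementation; PMRRR parallelizes this algorithm over MPI/SMP
w, v = eigh_tridiagonal(d, e, lapack_driver='stemr')
print(w[:5])  # smallest eigenvalues
```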
Lanczos eigensolution method for high-performance computers
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time for the transport problem, with 17,000 degrees of freedom, was 23 seconds on the Cray Y-MP using an average of 3.63 processors.
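For orientation, the core of any such solver is the Lanczos three-term recurrence. The sketch below shows it in plain NumPy, with an ordinary matrix-vector product standing in for the factorization-based shift-invert operator (the forward/backward solves) that the paper optimizes; reorthogonalization, which production codes add, is omitted:

```python
import numpy as np

def lanczos(matvec, v0, m):
    """Run m steps of the Lanczos recurrence for a symmetric operator.

    matvec : callable applying the operator (the paper instead applies a
             shift-inverted matrix via forward/backward solves)
    Returns the tridiagonal coefficients (alpha, beta), whose eigenvalues
    approximate those of the operator.
    """
    v_prev = np.zeros_like(v0)
    v = v0 / np.linalg.norm(v0)
    alpha, beta = np.zeros(m), np.zeros(m)
    for j in range(m):
        w = matvec(v)                    # the dominant cost per step
        if j > 0:
            w -= beta[j - 1] * v_prev
        alpha[j] = w @ v
        w -= alpha[j] * v
        beta[j] = np.linalg.norm(w)
        if beta[j] == 0.0:               # invariant subspace found
            alpha, beta = alpha[:j + 1], beta[:j + 1]
            break
        v_prev, v = v, w / beta[j]
    return alpha, beta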
Parallel eigensolvers in plane-wave Density Functional Theory
We consider the problem of parallelizing electronic structure computations in
plane-wave Density Functional Theory. Because of the limited scalability of
Fourier transforms, parallelism has to be found at the eigensolver level. We
show how a recently proposed algorithm based on Chebyshev polynomials can scale
to tens of thousands of processors, outperforming block conjugate
gradient algorithms for large computations.
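The algorithm rests on Chebyshev polynomial filtering of the Hamiltonian. A minimal serial sketch of the filter recurrence follows; the bounds a and b on the unwanted part of the spectrum are assumed given, and the distributed SpMV/FFT machinery is omitted:

```python
import numpy as np

def chebyshev_filter(apply_H, X, m, a, b):
    """Apply a degree-m Chebyshev filter p_m(H) to the block X,
    damping eigencomponents whose eigenvalues lie in [a, b]."""
    e = (b - a) / 2.0  # half-width of the damped interval
    c = (b + a) / 2.0  # center of the damped interval
    Y = (apply_H(X) - c * X) / e           # degree-1 term
    for _ in range(2, m + 1):              # three-term recurrence
        Y_new = 2.0 * (apply_H(Y) - c * Y) / e - X
        X, Y = Y, Y_new
    return Y
```

Each filter sweep costs m applications of H to the block, work that parallelizes far better than the communication-heavy orthogonalizations dominating conjugate-gradient-style solvers, which is the property that lets the method scale past the FFT's limits.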
Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc
We describe our recently released software package Block Locally Optimal
Preconditioned Eigenvalue Xolvers (BLOPEX). BLOPEX is available as
a stand-alone serial library, as an external package to PETSc (``Portable,
Extensible Toolkit for Scientific Computation'', a general purpose suite of
tools for the scalable solution of partial differential equations and related
problems developed by Argonne National Laboratory), and is also built into {\it
hypre} (``High Performance Preconditioners'', scalable linear solvers package
developed by Lawrence Livermore National Laboratory). The present BLOPEX
release includes only one solver: the Locally Optimal Block Preconditioned
Conjugate Gradient (LOBPCG) method for symmetric eigenvalue problems. {\it
hypre} provides users with advanced high-quality parallel preconditioners for
linear systems, in particular, with domain decomposition and multigrid
preconditioners. With BLOPEX, the same preconditioners can now be efficiently
used for symmetric eigenvalue problems. PETSc facilitates the integration of
independently developed application modules with strict attention to component
interoperability, and makes BLOPEX extremely easy to compile and use with
preconditioners that are available via PETSc. We present the LOBPCG algorithm
in BLOPEX for {\it hypre} and PETSc. We demonstrate numerically the scalability
of BLOPEX by testing it on a number of distributed- and shared-memory parallel
systems, including a Beowulf system, a Sun Fire 880, an AMD dual-core Opteron
workstation, and an IBM BlueGene/L supercomputer, using PETSc domain decomposition
and {\it hypre} multigrid preconditioning. We test BLOPEX on a model problem,
the standard 7-point finite-difference approximation of the 3-D Laplacian,
across a range of problem sizes.
Comment: Submitted to SIAM Journal on Scientific Computing
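SciPy ships an independent LOBPCG implementation, which makes it easy to reproduce the flavor of this experiment in a few lines. The sketch below builds the same 7-point 3-D Laplacian and uses an incomplete-LU solve as a stand-in preconditioner, since {\it hypre}'s multigrid and PETSc's domain decomposition are not reachable from SciPy; grid size and block width are illustrative:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, spilu, LinearOperator

def laplacian_3d(n):
    """Standard 7-point finite-difference 3-D Laplacian on an n^3 grid."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(sp.kron(T, I), I)
            + sp.kron(sp.kron(I, T), I)
            + sp.kron(sp.kron(I, I), T)).tocsc()

A = laplacian_3d(20)
rng = np.random.default_rng(0)
X = rng.standard_normal((A.shape[0], 4))   # block of 4 starting vectors

# Incomplete-LU solve as the preconditioner; BLOPEX would plug in
# hypre's multigrid or a PETSc preconditioner here instead
ilu = spilu(A)
M = LinearOperator(A.shape, matvec=ilu.solve)

w, V = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print(w)  # four smallest eigenvalues of the discrete Laplacian
```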