Minimizing Communication for Eigenproblems and the Singular Value Decomposition
Algorithms have two costs: arithmetic and communication. The latter
represents the cost of moving data, either between levels of a memory
hierarchy, or between processors over a network. Communication often dominates
arithmetic and represents a rapidly increasing proportion of the total cost, so
we seek algorithms that minimize communication. In \cite{BDHS10} lower bounds
were presented on the amount of communication required for essentially all
O(n^3)-like algorithms for linear algebra, including eigenvalue problems and
the SVD. Conventional algorithms, including those currently implemented in
(Sca)LAPACK, perform asymptotically more communication than these lower bounds
require. In this paper we present parallel and sequential eigenvalue algorithms
(for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms
that do attain these lower bounds, and analyze their convergence and
communication costs.
Comment: 43 pages, 11 figures
Communication-optimal Parallel and Sequential Cholesky Decomposition
Numerical algorithms have two kinds of costs: arithmetic and communication,
by which we mean either moving data between levels of a memory hierarchy (in
the sequential case) or over a network connecting processors (in the parallel
case). Communication costs often dominate arithmetic costs, so it is of
interest to design algorithms minimizing communication. In this paper we first
extend known lower bounds on the communication cost (both for bandwidth and for
latency) of conventional (O(n^3)) matrix multiplication to Cholesky
factorization, which is used for solving dense symmetric positive definite
linear systems. Second, we compare the costs of various Cholesky decomposition
implementations to these lower bounds and identify the algorithms and data
structures that attain them. In the sequential case, we consider both the
two-level and hierarchical memory models. Combined with prior results in [13,
14, 15], this gives a set of communication-optimal algorithms for O(n^3)
implementations of the three basic factorizations of dense linear algebra: LU
with pivoting, QR and Cholesky. But it goes beyond this prior work on
sequential LU by optimizing communication for any number of levels of memory
hierarchy.Comment: 29 pages, 2 tables, 6 figure
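The blocking that underlies communication-optimal Cholesky can be illustrated with a toy tiled factorization. This is a hypothetical sketch, not the paper's algorithm: it only shows how working on b-by-b tiles groups the arithmetic so that an implementation could keep a block resident in fast memory; the function name and block size are illustrative.

```python
import numpy as np

def blocked_cholesky(A, b=2):
    """Right-looking blocked Cholesky sketch: returns L with A = L @ L.T
    for symmetric positive definite A. Operating tile-by-tile is what
    lets a real implementation reuse data in fast memory; this toy
    version only illustrates the blocking, not the data movement."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, b):
        e = min(k + b, n)
        # Factor the diagonal block.
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
        if e < n:
            # Triangular solve for the panel below the diagonal block.
            A[e:, k:e] = np.linalg.solve(A[k:e, k:e], A[e:, k:e].T).T
            # Symmetric rank-b update of the trailing matrix.
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
    return np.tril(A)
```

Choosing b so that three b-by-b tiles fit in fast memory is the standard way such blocked factorizations approach the bandwidth lower bound in the two-level model.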
Lanczos eigensolution method for high-performance computers
The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was obtained on the Cray Y-MP using an average of 3.63 processors.
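The Lanczos iteration itself can be sketched in a few lines. This is a generic textbook version without the paper's vectorization, storage schemes, or reorthogonalization strategy; it just shows where the matrix-vector multiply, the step singled out above for optimization, sits in each iteration.

```python
import numpy as np

def lanczos(A, v0, m):
    """Plain symmetric Lanczos sketch (no reorthogonalization).
    Builds an m-step orthonormal Krylov basis V and a tridiagonal T
    with V.T @ A @ V ~= T; the extreme eigenvalues of T approximate
    those of A. The dominant cost per step is the product A @ v."""
    n = len(v0)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]                 # matrix-vector multiply
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T
```

Production eigensolvers of the kind described above add shift-invert (hence the factorization and forward/backward solves) and reorthogonalization; those are omitted here.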
Fast computation of spectral projectors of banded matrices
We consider the approximate computation of spectral projectors for symmetric
banded matrices. While this problem has received considerable attention,
especially in the context of linear scaling electronic structure methods, the
presence of small relative spectral gaps challenges existing methods based on
approximate sparsity. In this work, we show how a data-sparse approximation
based on hierarchical matrices can be used to overcome this problem. We prove a
priori bounds on the approximation error and propose a fast algorithm based
on the QDWH algorithm, along the lines of works by Nakatsukasa et al. Numerical
experiments demonstrate that the performance of our algorithm is robust with
respect to the spectral gap. A preliminary Matlab implementation becomes faster
than eig already for matrix sizes of a few thousand.
Comment: 27 pages, 10 figures
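For reference, the object being approximated can be computed densely via the matrix sign function. This sketch is not the paper's hierarchical QDWH method: it evaluates the sign function by a full eigendecomposition, which is exactly the O(n^3) dense step that QDWH-based, data-sparse approaches avoid.

```python
import numpy as np

def spectral_projector(A, mu):
    """Dense reference spectral projector of a symmetric matrix A onto
    the invariant subspace of eigenvalues below the shift mu (assumed
    to lie in a spectral gap): P = (I - sign(A - mu*I)) / 2.
    Here sign() is evaluated by diagonalization; QDWH computes the
    same sign function without diagonalizing."""
    w, Q = np.linalg.eigh(A)
    sgn = np.where(w < mu, -1.0, 1.0)
    S = (Q * sgn) @ Q.T          # matrix sign function of A - mu*I
    return (np.eye(A.shape[0]) - S) / 2
```

A valid projector must satisfy P^2 = P, P = P^T, and commute with A; those identities make convenient correctness checks for any faster approximation.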
Solving rank structured Sylvester and Lyapunov equations
We consider the problem of efficiently solving Sylvester and Lyapunov
equations of medium and large scale, in case of rank-structured data, i.e.,
when the coefficient matrices and the right-hand side have low-rank
off-diagonal blocks. This comprises problems with banded data, recently studied
by Haber and Verhaegen in "Sparse solution of the Lyapunov equation for
large-scale interconnected systems", Automatica, 2016, and by Palitta and
Simoncini in "Numerical methods for large-scale Lyapunov equations with
symmetric banded data", SISC, 2018, which often arise in the discretization of
elliptic PDEs.
We show that, under suitable assumptions, the quasiseparable structure is
guaranteed to be numerically present in the solution, and explicit novel
estimates of the numerical rank of the off-diagonal blocks are provided.
Efficient solution schemes that rely on the technology of hierarchical
matrices are described, and several numerical experiments confirm the
applicability and efficiency of the approaches. We develop a MATLAB toolbox
that allows easy replication of the experiments and a ready-to-use interface
for the solvers. The performance of the different approaches is compared, and
we show that the new methods described are efficient on several classes of
relevant problems.
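The low-rank phenomenon the abstract describes is easy to observe numerically. The following sketch uses SciPy's dense Bartels-Stewart solver, not the paper's hierarchical-matrix methods; the sizes and the tridiagonal test matrix are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Banded (tridiagonal) coefficients and a rank-1 right-hand side: a
# setting in which the solution is provably of low numerical rank.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian
u = np.ones((n, 1))
C = u @ u.T                                            # rank-1 RHS
X = solve_sylvester(A, A, C)                           # solves AX + XA = C

# The singular values of X decay rapidly, so X is numerically low-rank,
# which is what makes quasiseparable / hierarchical solvers effective.
s = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-10 * s[0]))
```

The observed numerical rank is a small fraction of n, consistent with the kind of explicit off-diagonal-rank estimates the paper provides.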
PT-Scotch: A tool for efficient parallel graph ordering
The parallel ordering of large graphs is a difficult problem, because on the
one hand minimum degree algorithms do not parallelize well, and on the other
hand the obtainment of high quality orderings with the nested dissection
algorithm requires efficient graph bipartitioning heuristics, the best
sequential implementations of which are also hard to parallelize. This paper
presents a set of algorithms, implemented in the PT-Scotch software package,
which allows one to order large graphs in parallel, yielding orderings the
quality of which is only slightly worse than that of state-of-the-art
sequential algorithms. Our implementation uses the classical nested dissection
approach but relies on several novel features to solve the parallel graph
bipartitioning problem. Thanks to these improvements, PT-Scotch produces
consistently better orderings than ParMeTiS on large numbers of processors.
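The nested dissection idea is simplest to see on a path graph, where the vertex separator is just the middle vertex. This toy sketch is only a didactic illustration: PT-Scotch applies the same recursion to general graphs, where finding a good small separator requires the heuristic bipartitioners discussed above.

```python
def nested_dissection_path(lo, hi):
    """Nested dissection ordering for a path graph on vertices
    lo..hi-1: take the middle vertex as a (size-1) separator, order
    the two halves recursively, and number the separator last, so
    that elimination of one half cannot create fill in the other."""
    if hi - lo <= 0:
        return []
    mid = (lo + hi) // 2
    return (nested_dissection_path(lo, mid)
            + nested_dissection_path(mid + 1, hi)
            + [mid])
```

Numbering separators last is what bounds fill-in during sparse factorization; the quality of a nested dissection ordering on a general graph hinges entirely on how small and balanced the recursive separators are.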
An a posteriori verification method for generalized real-symmetric eigenvalue problems in large-scale electronic state calculations
An a posteriori verification method is proposed for the generalized
real-symmetric eigenvalue problem and is applied to densely clustered
eigenvalue problems in large-scale electronic state calculations. The proposed
method is realized by a two-stage process in which the approximate solution is
computed by existing numerical libraries and is then verified in a moderate
computational time. The procedure returns intervals containing one exact
eigenvalue in each interval. Test calculations were carried out for organic
device materials, and the verification method confirms that all exact
eigenvalues are well separated in the obtained intervals. This verification
method will be integrated into EigenKernel (https://github.com/eigenkernel/),
which is middleware for various parallel solvers for the generalized eigenvalue
problem. Such an a posteriori verification method will be important in future
computational science.
Comment: 15 pages, 7 figures
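The basic mechanism behind such enclosures is the classical residual bound for symmetric matrices. The sketch below is not the paper's method, which treats the generalized problem and controls rounding errors with interval arithmetic; it shows only the underlying real-arithmetic bound that guarantees an exact eigenvalue inside each returned interval.

```python
import numpy as np

def residual_enclosure(A, v):
    """A posteriori residual bound for symmetric A: with
    theta = v'Av / v'v and r = ||Av - theta*v|| / ||v||, the interval
    [theta - r, theta + r] is guaranteed (in exact arithmetic) to
    contain an exact eigenvalue of A."""
    v = v / np.linalg.norm(v)
    theta = v @ (A @ v)
    r = np.linalg.norm(A @ v - theta * v)
    return theta - r, theta + r
```

Applied to each approximate eigenpair from a numerical library, this yields one interval per computed eigenvalue; verifying that the intervals are disjoint is what lets one conclude the eigenvalues are well separated.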
A GPU based real-time software correlation system for the Murchison Widefield Array prototype
Modern graphics processing units (GPUs) are inexpensive commodity hardware
that offer Tflop/s theoretical computing capacity. GPUs are well suited to many
compute-intensive tasks including digital signal processing.
We describe the implementation and performance of a GPU-based digital
correlator for radio astronomy. The correlator is implemented using the NVIDIA
CUDA development environment. We evaluate three design options on two
generations of NVIDIA hardware. The different designs utilize the internal
registers, shared memory and multiprocessors in different ways. We find that
optimal performance is achieved with the design that minimizes global memory
reads on recent generations of hardware.
The GPU-based correlator outperforms a single-threaded CPU equivalent by a
factor of 60 for a 32 antenna array, and runs on commodity PC hardware. The
extra compute capability provided by the GPU maximises the correlation
capability of a PC while retaining the fast development time associated with
using standard hardware, networking and programming languages. In this way, a
GPU-based correlation system represents a middle ground in design space between
high performance, custom built hardware and pure CPU-based software
correlation.
The correlator was deployed at the Murchison Widefield Array 32 antenna
prototype system where it ran in real-time for extended periods. We briefly
describe the data capture, streaming and correlation system for the prototype
array.
Comment: 11 pages, to appear in PAS
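The arithmetic core of an FX correlator is compact enough to sketch on a CPU with NumPy. This is a hypothetical minimal version, not the paper's CUDA implementation: channel counts and the einsum-based accumulation are illustrative, and it is precisely this cross-multiply-accumulate that the GPU designs organize to minimize global-memory reads.

```python
import numpy as np

def fx_correlate(voltages, nchan):
    """Minimal FX software correlator sketch: channelize each antenna's
    voltage stream with short FFTs, then accumulate the cross products
    V[i, j, f] = <S_i(f) * conj(S_j(f))> over time, for all antenna
    pairs i, j and frequency channels f."""
    nant, nsamp = voltages.shape
    nspec = nsamp // nchan
    # Reshape to (nant, nspec, nchan): one short spectrum per time block.
    S = np.fft.fft(
        voltages[:, :nspec * nchan].reshape(nant, nspec, nchan), axis=2)
    # Cross-multiply and average over time for every antenna pair.
    return np.einsum('itf,jtf->ijf', S, S.conj()) / nspec
```

The output is Hermitian in the antenna indices (V[i,j] = conj(V[j,i])), and the autocorrelations are real and non-negative, which makes a quick sanity check for any optimized implementation.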
Minimizing Communication in Linear Algebra
In 1981 Hong and Kung proved a lower bound on the amount of communication
needed to perform dense matrix multiplication using the conventional
algorithm, where the input matrices were too large to fit in the small, fast
memory. In 2004 Irony, Toledo and Tiskin gave a new proof of this result and
extended it to the parallel case. In both cases the lower bound may be
expressed as Omega(#arithmetic operations / sqrt(M)), where M is the size
of the fast memory (or local memory in the parallel case). Here we generalize
these results to a much wider variety of algorithms, including LU
factorization, Cholesky factorization, LDL^T factorization, QR factorization,
algorithms for eigenvalues and singular values, i.e., essentially all direct
methods of linear algebra. The proof works for dense or sparse matrices, and
for sequential or parallel algorithms. In addition to lower bounds on the
amount of data moved (bandwidth) we get lower bounds on the number of messages
required to move it (latency). We illustrate how to extend our lower bound
technique to compositions of linear algebra operations (like computing powers
of a matrix), to decide whether it is enough to call a sequence of simpler
optimal algorithms (like matrix multiplication) to minimize communication, or
if we can do better. We give examples of both. We also show how to extend our
lower bounds to certain graph theoretic problems.
We point out recently designed algorithms for dense LU, Cholesky, QR,
eigenvalue and the SVD problems that attain these lower bounds; implementations
of LU and QR show large speedups over conventional linear algebra algorithms in
standard libraries like LAPACK and ScaLAPACK. Many open problems remain.
Comment: 27 pages, 2 tables
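The bandwidth lower bound quoted above turns into concrete numbers with a line of arithmetic. A small worked example, with illustrative problem and cache sizes:

```python
import math

def matmul_bandwidth_lower_bound(n, M):
    """Hong-Kung / Irony-Toledo-Tiskin style bandwidth lower bound for
    conventional O(n^3) matrix multiplication: the number of words
    moved between a fast memory of size M and slow memory is
    Omega(#arithmetic operations / sqrt(M)) = Omega(n^3 / sqrt(M)).
    Constant factors are omitted, as in the asymptotic statement."""
    flops = 2 * n ** 3          # n^3 multiplies plus n^3 adds
    return flops / math.sqrt(M)

# Example: n = 4096 with a fast memory of 2^20 words. A naive triple
# loop moves Theta(n^3) words, a factor of sqrt(M) ~ 1024 more than
# this lower bound allows; blocked algorithms close that gap.
words = matmul_bandwidth_lower_bound(4096, 2 ** 20)
```

The same counting argument, generalized as described in the abstract, is what yields the matching bounds for LU, Cholesky, QR, and the eigenvalue and SVD algorithms.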