Analysis of A Splitting Approach for the Parallel Solution of Linear Systems on GPU Cards
We discuss an approach for solving sparse or dense banded linear systems on a
Graphics Processing Unit (GPU) card. The matrix is possibly nonsymmetric and
moderately large. The split and parallelize (SaP) approach partitions the
matrix A into diagonal sub-matrices A_i, i = 1, ..., P, which are then
processed in parallel. The resulting solver, SaP::GPU, is compared in terms of
efficiency with the sparse direct solvers PARDISO, SuperLU, and MUMPS, and
against Intel's MKL. SaP::GPU is publicly available and distributed as open
source under a permissive BSD-3 license.
Comment: 38 pages
An efficient multi-core implementation of a novel HSS-structured multifrontal solver using randomized sampling
We present a sparse linear system solver that is based on a multifrontal
variant of Gaussian elimination, and exploits low-rank approximation of the
resulting dense frontal matrices. We use hierarchically semiseparable (HSS)
matrices, which have low-rank off-diagonal blocks, to approximate the frontal
matrices. For HSS matrix construction, a randomized sampling algorithm is used
together with interpolative decompositions. The combination of the randomized
compression with a fast ULV HSS factorization leads to a solver with lower
computational complexity than the standard multifrontal method for many
applications, resulting in speedups up to 7 fold for problems in our test
suite. The implementation targets many-core systems by using task parallelism
with dynamic runtime scheduling. Numerical experiments show performance
improvements over state-of-the-art sparse direct solvers. The implementation
achieves high performance and good scalability on a range of modern shared
memory parallel systems, including the Intel Xeon Phi (MIC). The code is part
of a software package called STRUMPACK -- STRUctured Matrices PACKage, which
also has a distributed memory component for dense rank-structured matrices.
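The compression kernel this abstract describes — sampling a dense frontal block through random matrix products and compressing the result — can be illustrated with a generic randomized range finder. This is a simplified stand-in, not STRUMPACK's interpolative decomposition; the matrix sizes, rank, and oversampling parameter below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a dense block with low numerical rank, standing in for an
# off-diagonal block of an HSS-structured frontal matrix.
n, k = 200, 8
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

# Randomized range finder: probe the column space with random products.
p = 5                                   # oversampling
Y = A @ rng.standard_normal((n, k + p)) # sample the range of A
Q, _ = np.linalg.qr(Y)                  # orthonormal basis for the sampled range
B = Q.T @ A                             # compressed (k+p) x n representation
A_approx = Q @ B                        # low-rank reconstruction

err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

Because the block has exact rank k and the sample oversamples it, the relative error is at machine precision; in the HSS setting the same sampling is applied to every off-diagonal block.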
Iterative solutions to the steady state density matrix for optomechanical systems
We present a sparse matrix permutation from graph theory that gives stable
incomplete Lower-Upper (LU) preconditioners necessary for iterative solutions
to the steady state density matrix for quantum optomechanical systems. This
reordering is efficient, adding little overhead to the computation, and results
in a marked reduction in both memory and runtime requirements compared to other
solution methods, with performance gains increasing with system size. Both the
memory and runtime requirements can be tuned via the preconditioner accuracy
and solution tolerance. This reordering optimizes the condition number of the
approximate inverse, and is the only method found to be stable at large Hilbert
space dimensions. This allows for steady-state solutions to otherwise
intractable quantum optomechanical systems.
Comment: 10 pages, 5 figures
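The pipeline this abstract describes — a graph-based reordering followed by an incomplete LU preconditioner for an iterative solve — can be sketched with SciPy. Reverse Cuthill–McKee stands in for the paper's particular permutation, and a random diagonally dominant matrix stands in for a Liouvillian; sizes and tolerances are arbitrary assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Hypothetical sparse nonsymmetric system standing in for a Liouvillian.
n = 400
rng = np.random.default_rng(1)
A = sp.random(n, n, density=0.02, random_state=1, format='csc') \
    + 10 * sp.eye(n, format='csc')
b = rng.standard_normal(n)

# Graph-based permutation to reduce bandwidth/fill before the incomplete LU.
perm = reverse_cuthill_mckee(sp.csr_matrix(A), symmetric_mode=False)
Ap = A[perm][:, perm].tocsc()
bp = b[perm]

# Incomplete LU preconditioner; drop_tol trades accuracy against memory.
ilu = spla.spilu(Ap, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# Preconditioned Krylov iteration.
x, info = spla.gmres(Ap, bp, M=M)
```

Tightening `drop_tol` makes the preconditioner closer to an exact LU (more memory, fewer iterations), which is exactly the memory/runtime trade-off the abstract mentions.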
Highly parallel sparse Cholesky factorization
Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
Using a multifrontal sparse solver in a high performance, finite element code
We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum-Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix, and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
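The fill-reducing effect of a minimum-degree reordering, which is what buys the operation-count reduction described above, can be observed directly through SciPy's SuperLU interface. This is a sketch: the 2-D grid Laplacian test matrix is our own choice, and SuperLU's 'MMD_AT_PLUS_A' option stands in for the code's Multiple-Minimum-Degree heuristic:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2-D grid Laplacian: a standard test matrix with arbitrary sparsity structure.
k = 30
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
A = sp.kronsum(T, T).tocsc()            # k^2 x k^2 five-point stencil

# Factor with the natural ordering and with a minimum-degree column ordering.
lu_nat = spla.splu(A, permc_spec='NATURAL')
lu_mmd = spla.splu(A, permc_spec='MMD_AT_PLUS_A')

# Fewer nonzeros in the factors means fewer factorization operations.
fill_nat = lu_nat.L.nnz + lu_nat.U.nnz
fill_mmd = lu_mmd.L.nnz + lu_mmd.U.nnz
```

On this grid problem the minimum-degree ordering produces markedly sparser factors than the natural ordering, which is the source of the speedup the abstract reports.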
A domain decomposing parallel sparse linear system solver
The solution of large sparse linear systems is often the most time-consuming
part of many science and engineering applications. Computational fluid
dynamics, circuit simulation, power network analysis, and material science are
just a few examples of the application areas in which large sparse linear
systems need to be solved effectively. In this paper we introduce a new
parallel hybrid sparse linear system solver for distributed memory
architectures that contains both direct and iterative components. We show that
by using our solver one can alleviate the drawbacks of direct and iterative
solvers, achieving better scalability than with direct solvers and more
robustness than with classical preconditioned iterative solvers. Comparisons to
well-known direct and iterative solvers on a parallel architecture are
provided.
Comment: To appear in Journal of Computational and Applied Mathematics
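A hybrid solver of the kind this abstract describes — direct factorizations of diagonal sub-blocks serving as the preconditioner for a Krylov iteration — can be sketched as block-Jacobi-preconditioned GMRES. This is a minimal single-process illustration, not the paper's distributed algorithm; the matrix, block count, and sizes are assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical nonsymmetric banded system, split into P diagonal blocks.
n, P = 300, 3
A = sp.diags([-1.0, 4.0, -1.2], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Direct component: factor each diagonal sub-block exactly.
blk = n // P
lus = [spla.splu(A[i * blk:(i + 1) * blk, i * blk:(i + 1) * blk].tocsc())
       for i in range(P)]

def apply_prec(r):
    # Block-Jacobi preconditioner: exact solves with the factored blocks.
    z = np.empty_like(r)
    for i in range(P):
        z[i * blk:(i + 1) * blk] = lus[i].solve(r[i * blk:(i + 1) * blk])
    return z

M = spla.LinearOperator((n, n), matvec=apply_prec)

# Iterative component: preconditioned Krylov iteration over the whole system.
x, info = spla.gmres(A, b, M=M)
```

The direct solves keep the iteration robust while the outer Krylov loop avoids factoring (and communicating) the couplings between blocks, which is the scalability/robustness trade-off the abstract highlights.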