A hierarchically blocked Jacobi SVD algorithm for single and multiple graphics processing units
We present a hierarchically blocked one-sided Jacobi algorithm for the
singular value decomposition (SVD), targeting both single and multiple graphics
processing units (GPUs). The blocking structure reflects the levels of the
GPU's memory hierarchy. The algorithm may outperform MAGMA's dgesvd, while
retaining high relative accuracy. To this end, we developed a family of
parallel pivot strategies on the GPU's shared address space, which are also
applicable to inter-GPU communication. Unlike common hybrid approaches, in a
single-GPU setting our algorithm needs a CPU for control purposes only, while utilizing the GPU's
resources to the fullest extent permitted by the hardware. When required by the
problem size, the algorithm, in principle, scales to an arbitrary number of GPU
nodes. The scalability is demonstrated by more than twofold speedup for
sufficiently large matrices on a Tesla S2050 system with four GPUs vs. a single
Fermi card. Comment: Accepted for publication in SIAM Journal on Scientific Computing.
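The one-sided Jacobi SVD underlying such GPU algorithms can be sketched in a few lines of NumPy. This is a serial, unblocked illustration of the basic method only, not the paper's hierarchically blocked GPU variant; it assumes a full-column-rank input:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD sketch: orthogonalize the columns of A by
    plane rotations; the singular values are the final column norms.
    Assumes A has full column rank."""
    U = np.array(A, dtype=float)
    m, n = U.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0  # largest relative off-diagonal of U^T U seen this sweep
        for i in range(n - 1):
            for j in range(i + 1, n):
                a = U[:, i] @ U[:, i]
                b = U[:, j] @ U[:, j]
                c = U[:, i] @ U[:, j]
                off = max(off, abs(c) / np.sqrt(a * b))
                if abs(c) > tol * np.sqrt(a * b):
                    # Jacobi rotation annihilating the (i, j) entry of U^T U.
                    zeta = (b - a) / (2.0 * c)
                    t = 1.0 / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                    if zeta < 0.0:
                        t = -t
                    cs = 1.0 / np.sqrt(1.0 + t * t)
                    sn = cs * t
                    G = np.array([[cs, sn], [-sn, cs]])
                    U[:, [i, j]] = U[:, [i, j]] @ G
                    V[:, [i, j]] = V[:, [i, j]] @ G
        if off <= tol:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V
```

Because each rotation touches only two columns, the independent pairs within a sweep can be processed concurrently, which is the starting point for the parallel pivot strategies discussed in the abstract.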
A GPU-based hyperbolic SVD algorithm
A one-sided Jacobi hyperbolic singular value decomposition (HSVD) algorithm,
using a massively parallel graphics processing unit (GPU), is developed. The
algorithm also serves as the final stage of solving a symmetric indefinite
eigenvalue problem. Numerical testing demonstrates the gains in speed and
accuracy over sequential and MPI-parallelized variants of similar Jacobi-type
HSVD algorithms. Finally, possibilities of hybrid CPU--GPU parallelism are
discussed. Comment: Accepted for publication in BIT Numerical Mathematics.
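The elementary building block of such hyperbolic Jacobi methods is the 2x2 hyperbolic rotation, which is J-orthogonal (it preserves the indefinite inner product induced by a signature matrix J) while annihilating an off-diagonal element. A minimal NumPy illustration of that single step, not the GPU algorithm itself:

```python
import numpy as np

# Signature matrix of the indefinite inner product <x, y> = x^T J y.
J = np.diag([1.0, -1.0])

def hyperbolic_rotation(A):
    """Hyperbolic rotation H (satisfying H^T J H = J) that annihilates the
    off-diagonal entry of a symmetric 2x2 matrix A via H^T A H.
    Requires |2*A[0,1]| < |A[0,0] + A[1,1]| for arctanh to be defined."""
    a, b, c = A[0, 0], A[1, 1], A[0, 1]
    theta = 0.5 * np.arctanh(-2.0 * c / (a + b))
    ch, sh = np.cosh(theta), np.sinh(theta)
    return np.array([[ch, sh], [sh, ch]])

A = np.array([[3.0, 1.0], [1.0, 1.0]])
H = hyperbolic_rotation(A)
# H.T @ J @ H equals J, and H.T @ A @ H is diagonal.
```

Unlike a trigonometric rotation, H is not orthogonal; it preserves J instead, which is what makes it suitable for the symmetric indefinite eigenvalue problem mentioned above.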
Novel Modifications of Parallel Jacobi Algorithms
We describe two main classes of one-sided trigonometric and hyperbolic
Jacobi-type algorithms for computing eigenvalues and eigenvectors of Hermitian
matrices. These types of algorithms exhibit significant advantages over many
other eigenvalue algorithms. If the matrices permit, both types of algorithms
compute the eigenvalues and eigenvectors with high relative accuracy.
We present novel parallelization techniques for both trigonometric and
hyperbolic classes of algorithms, as well as some new ideas on how pivoting in
each cycle of the algorithm can improve the speed of the parallel one-sided
algorithms. These parallelization approaches are applicable to both
distributed-memory and shared-memory machines.
The numerical testing performed indicates that the hyperbolic algorithms may
be superior to the trigonometric ones, although, in theory, the latter seem
more natural. Comment: Accepted for publication in Numerical Algorithms.
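A standard way to construct parallel pivot orderings of the kind discussed above is the round-robin (circle) scheme: the n(n-1)/2 index pairs are partitioned into n-1 rounds of n/2 disjoint pairs, so all rotations within a round can run concurrently. A generic sketch of that classic scheme, not the paper's novel strategies:

```python
def round_robin_pairs(n):
    """Classic round-robin pivot ordering for even n: returns n-1 rounds,
    each a list of n//2 disjoint index pairs; every unordered pair (i, j)
    appears exactly once across all rounds."""
    idx = list(range(n))
    rounds = []
    for _ in range(n - 1):
        # Pair the ends of the current circular arrangement.
        rounds.append([(idx[i], idx[n - 1 - i]) for i in range(n // 2)])
        # Keep idx[0] fixed and rotate the remaining indices by one.
        idx = [idx[0]] + [idx[-1]] + idx[1:-1]
    return rounds
```

Since the pairs within a round share no index, the corresponding column rotations commute and can be assigned to separate processors, which is the essence of the parallel orderings used by one-sided Jacobi-type methods.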
Three-Level Parallel J-Jacobi Algorithms for Hermitian Matrices
The paper describes several efficient parallel implementations of the
one-sided hyperbolic Jacobi-type algorithm for computing eigenvalues and
eigenvectors of Hermitian matrices. By appropriate blocking of the algorithms,
an almost ideal load balance among all available processors/cores is
obtained. A similar blocking technique can be used to exploit the local cache
memory of each processor to further speed up the process. Due to the diversity
of modern computer architectures, each of the algorithms described here may be
the method of choice for a particular hardware configuration and a given matrix size. All
proposed block algorithms compute the eigenvalues with relative accuracy
similar to the original non-blocked Jacobi algorithm. Comment: Submitted for publication.
Approximate matrix and tensor diagonalization by unitary transformations: convergence of Jacobi-type algorithms
We propose a gradient-based Jacobi algorithm for a class of maximization
problems on the unitary group, with a focus on approximate diagonalization of
complex matrices and tensors by unitary transformations. We provide weak
convergence results, and prove local linear convergence of this algorithm. The
convergence results also apply to the case of real-valued tensors.
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
Convergence of the Eberlein diagonalization method under the generalized serial pivot strategies
The Eberlein method is a Jacobi-type process for solving the eigenvalue
problem of an arbitrary matrix. In each iteration two transformations are
applied on the underlying matrix, a plane rotation and a non-unitary elementary
transformation. The paper studies the method under the broad class of
generalized serial pivot strategies. We prove the global convergence of the
Eberlein method under the generalized serial pivot strategies with permutations
and present several numerical examples. Comment: 16 pages, 3 figures.