A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
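As background for the Jacobi-like methods discussed throughout this listing, the classical cyclic Jacobi iteration for a real symmetric matrix (the well-understood special case that norm-reducing methods such as Eberlein's generalize to arbitrary complex matrices) can be sketched as follows. This is a minimal serial illustration; the function name and parameters are ours, not from the paper:

```python
import numpy as np

def jacobi_eigh(A, sweeps=10, tol=1e-12):
    """Cyclic two-sided Jacobi iteration for a real symmetric matrix.

    Each rotation annihilates one off-diagonal pair; sweeps repeat until
    the off-diagonal norm is negligible.  Returns (w, V) with
    A @ V ~= V @ diag(w).
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        # Frobenius norm of the off-diagonal part: the convergence measure.
        off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
        if off < tol:
            break
        for p in range(n - 1):          # row-cyclic pivot order
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # Rotation angle chosen so that (J.T @ A @ J)[p, q] == 0.
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V
```

For symmetric matrices every rotation is orthogonal and the off-diagonal norm decreases monotonically; for general complex matrices this no longer holds, which is why Eberlein-type methods interleave a non-unitary norm-reducing transformation with each rotation.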
A Jacobi-based algorithm for computing symmetric eigenvalues and eigenvectors in a two-dimensional mesh
The paper proposes an algorithm for computing symmetric eigenvalues and eigenvectors that uses a one-sided Jacobi approach and is targeted to a multicomputer in which nodes can be arranged as a two-dimensional mesh with an arbitrary number of rows and columns. The algorithm is analysed through simple analytical models of execution time, which show that an adequate choice of the mesh configuration (number of rows and columns) can improve performance significantly with respect to a one-dimensional configuration, which is the most frequently considered scenario in current proposals. This improvement is especially noticeable in large systems.
A hierarchically blocked Jacobi SVD algorithm for single and multiple graphics processing units
We present a hierarchically blocked one-sided Jacobi algorithm for the singular value decomposition (SVD), targeting both single and multiple graphics processing units (GPUs). The blocking structure reflects the levels of the GPU's memory hierarchy. The algorithm may outperform MAGMA's dgesvd, while retaining high relative accuracy. To this end, we developed a family of parallel pivot strategies on the GPU's shared address space, but applicable also to inter-GPU communication. Unlike common hybrid approaches, our algorithm in a single-GPU setting needs a CPU for control purposes only, while utilizing the GPU's resources to the fullest extent permitted by the hardware. When required by the problem size, the algorithm, in principle, scales to an arbitrary number of GPU nodes. The scalability is demonstrated by more than twofold speedup for sufficiently large matrices on a Tesla S2050 system with four GPUs vs. a single Fermi card. Comment: Accepted for publication in SIAM Journal on Scientific Computing
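The underlying one-sided (Hestenes) Jacobi SVD that this paper blocks hierarchically and parallelizes for GPUs can be sketched serially as follows. Blocking, the parallel pivot strategies, and all GPU specifics are omitted, and every name here is illustrative rather than taken from the paper:

```python
import numpy as np

def one_sided_jacobi_svd(A, sweeps=30, tol=1e-14):
    """One-sided (Hestenes) Jacobi SVD of an m x n matrix, m >= n.

    Plane rotations are applied from the right to orthogonalize pairs of
    columns of a working copy of A; on convergence the singular values
    are the column norms.  Returns (U, s, Vt) with A ~= U @ diag(s) @ Vt.
    Assumes full column rank (so column norms are nonzero).
    """
    U = np.array(A, dtype=float)
    m, n = U.shape
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                if abs(gamma) > tol * np.sqrt(alpha * beta):
                    converged = False
                    # Rotation angle that makes columns p and q orthogonal.
                    theta = 0.5 * np.arctan2(2 * gamma, alpha - beta)
                    c, s = np.cos(theta), np.sin(theta)
                    Up, Uq = U[:, p].copy(), U[:, q].copy()
                    U[:, p] = c * Up + s * Uq
                    U[:, q] = -s * Up + c * Uq
                    Vp, Vq = V[:, p].copy(), V[:, q].copy()
                    V[:, p] = c * Vp + s * Vq
                    V[:, q] = -s * Vp + c * Vq
        if converged:
            break
    s = np.linalg.norm(U, axis=0)   # singular values = column norms
    U = U / s                       # normalize columns to get left vectors
    return U, s, V.T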
Convergence of the Eberlein diagonalization method under the generalized serial pivot strategies
The Eberlein method is a Jacobi-type process for solving the eigenvalue problem of an arbitrary matrix. In each iteration two transformations are applied to the underlying matrix: a plane rotation and a non-unitary elementary transformation. The paper studies the method under the broad class of generalized serial pivot strategies. We prove the global convergence of the Eberlein method under the generalized serial pivot strategies with permutations and present several numerical examples. Comment: 16 pages, 3 figures
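For concreteness, a serial pivot strategy fixes the order in which all off-diagonal pivot pairs (p, q) are visited within one sweep; the generalized serial class studied in the paper enlarges this family with permutations. The two classical serial orderings can be enumerated as below (a minimal illustration of the basic notion, not the paper's generalized class):

```python
def row_cyclic_sweep(n):
    """Row-cyclic serial ordering: (0,1), (0,2), ..., (0,n-1), (1,2), ..."""
    return [(p, q) for p in range(n - 1) for q in range(p + 1, n)]

def column_cyclic_sweep(n):
    """Column-cyclic serial ordering: (0,1), (0,2), (1,2), (0,3), ..."""
    return [(p, q) for q in range(1, n) for p in range(q)]
```

Both orderings visit each of the n(n-1)/2 off-diagonal pairs exactly once per sweep; convergence proofs such as the one in this paper must hold for every admissible ordering in the class, not just these two.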
Schwarz Iterative Methods: Infinite Space Splittings
We prove the convergence of greedy and randomized versions of Schwarz iterative methods for solving linear elliptic variational problems based on infinite space splittings of a Hilbert space. For the greedy case, we show a squared-error decay rate for elements of an approximation space related to the underlying splitting. For the randomized case, we show an expected squared-error decay rate on a class depending on the probability distribution. Comment: Revised version, accepted in Constr. Approx.
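The simplest instance of such a space splitting is the decomposition of R^n into its coordinate directions, for which the randomized Schwarz iteration on a symmetric positive definite system reduces to randomized Gauss-Seidel: each step picks a one-dimensional subspace at random and solves the residual equation exactly on it. A minimal sketch under that assumption (names and parameters are illustrative):

```python
import numpy as np

def randomized_schwarz(A, b, iters=2000, seed=0):
    """Randomized Schwarz iteration for SPD A x = b with the coordinate
    splitting of R^n (equivalently, randomized Gauss-Seidel).

    Each step samples a coordinate uniformly and performs the exact
    one-dimensional subspace correction for that coordinate.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)          # random 1-D subspace
        r_i = b[i] - A[i] @ x        # i-th residual component
        x[i] += r_i / A[i, i]        # exact solve on that subspace
    return x
```

The greedy variant analyzed in the paper would instead select, at each step, the subspace with the largest residual contribution; the infinite-splitting setting replaces the finite coordinate index by an infinite family of subspaces of a Hilbert space.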