186 research outputs found

    Convergence of the Eberlein diagonalization method under the generalized serial pivot strategies

    The Eberlein method is a Jacobi-type process for solving the eigenvalue problem of an arbitrary matrix. In each iteration, two transformations are applied to the underlying matrix: a plane rotation and a non-unitary elementary transformation. The paper studies the method under the broad class of generalized serial pivot strategies. We prove the global convergence of the Eberlein method under the generalized serial pivot strategies with permutations and present several numerical examples. (Comment: 16 pages, 3 figures)
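
    As a rough illustration of the serial (row-cyclic) pivot strategy discussed above, the sketch below runs the classical cyclic Jacobi rotation on a real symmetric matrix. It covers only the rotation half of the picture: the Eberlein method would additionally apply a non-unitary elementary transformation at each pivot position, which is omitted here.

```python
import numpy as np

def cyclic_jacobi(A, sweeps=10, tol=1e-12):
    """Row-cyclic (serial) Jacobi diagonalization of a real symmetric matrix.

    Illustrates only the serial pivot ordering; the Eberlein method would
    also apply a non-unitary shear at each pivot position.
    """
    A = np.array(A, dtype=float, copy=True)
    n = A.shape[0]
    for _ in range(sweeps):
        if np.linalg.norm(A - np.diag(np.diag(A))) < tol:
            break
        # serial (row-cyclic) pivot ordering: (0,1), (0,2), ..., (n-2,n-1)
        for i in range(n - 1):
            for j in range(i + 1, n):
                if A[i, j] == 0.0:
                    continue
                # rotation angle that annihilates A[i, j]
                theta = 0.5 * np.arctan2(2.0 * A[i, j], A[j, j] - A[i, i])
                c, s = np.cos(theta), np.sin(theta)
                R = np.eye(n)
                R[i, i] = R[j, j] = c
                R[i, j], R[j, i] = s, -s
                A = R.T @ A @ R
    return A  # nearly diagonal; its diagonal approximates the eigenvalues

H = np.array([[4.0, 1.0, 0.5], [1.0, 3.0, 0.2], [0.5, 0.2, 1.0]])
print(np.sort(np.diag(cyclic_jacobi(H))))   # should match the line below
print(np.sort(np.linalg.eigvalsh(H)))
```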

    Convergence of scaled iterates by the Jacobi method

    A quadratic convergence bound for scaled iterates by the serial Jacobi method for Hermitian positive definite matrices is derived. By scaled iterates we mean the matrices $[\operatorname{diag}(H^{(k)})]^{-1/2} H^{(k)} [\operatorname{diag}(H^{(k)})]^{-1/2}$, where $H^{(k)}$, $k \ge 0$, are matrices generated by the method. The bound is obtained in the general case of multiple eigenvalues. It depends on the minimum relative separation of the eigenvalues.
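
    The scaled iterates referred to here are straightforward to form; below is a minimal sketch of the scaling only (not the convergence bound itself), assuming a Hermitian positive definite input.

```python
import numpy as np

def scaled_iterate(H):
    """Return [diag(H)]^(-1/2) H [diag(H)]^(-1/2), i.e. H rescaled so that
    its diagonal becomes the identity (H is Hermitian positive definite)."""
    d = np.sqrt(np.real(np.diag(H)))   # diagonal entries are positive
    return H / np.outer(d, d)

H = np.array([[4.0, 1.0], [1.0, 9.0]])
print(scaled_iterate(H))               # unit diagonal, off-diagonal entries < 1 in modulus
```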

    Design and analysis of numerical algorithms for the solution of linear systems on parallel and distributed architectures

    The increasing availability of parallel computers is having a very significant impact on all aspects of scientific computation, including algorithm research and software development in numerical linear algebra. In particular, the solution of linear systems, which lies at the heart of most calculations in scientific computing, is an important computation found in many engineering and scientific applications. In this thesis, well-known parallel algorithms for the solution of linear systems are compared with implicit parallel algorithms, the Quadrant Interlocking (QI) class of algorithms for solving linear systems. These implicit algorithms are (2x2) block algorithms expressed in explicit point-form notation. [Continues.]

    Jacobi-like algorithms for the indefinite generalized Hermitian eigenvalue problem

    We discuss structure-preserving Jacobi-like algorithms for the solution of the indefinite generalized Hermitian eigenvalue problem. First, we consider a method based on the solution of Hermitian 4-by-4 subproblems, which generalizes the Jacobi-like method of Bunse-Gerstner/Faßbender for Hamiltonian matrices. Furthermore, we discuss structure-preserving Jacobi-like methods based on the solution of non-Hermitian 2-by-2 subproblems. For these methods a local convergence proof is given. Numerical test results comparing the proposed methods are presented.
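
    For concreteness, the target problem is the Hermitian pencil H x = λ S x with S indefinite. The snippet below just sets up a tiny instance and checks it with a general-purpose (non-structure-preserving) solver, assuming SciPy is available; it is not one of the Jacobi-like methods of the paper.

```python
import numpy as np
from scipy.linalg import eig

# A tiny indefinite generalized Hermitian eigenvalue problem H x = lambda S x.
# S is Hermitian but indefinite, so eigh (which needs a definite S) does not
# apply; the general QZ-based solver serves as a reference here.
H = np.array([[3.0, 1.0], [1.0, 1.0]])
S = np.diag([1.0, -1.0])              # indefinite signature matrix

w, V = eig(H, S)
print(np.sort(w.real))                # eigenvalues of the pencil (H, S)
for lam, v in zip(w, V.T):            # residual check: H v ~ lambda S v
    print(np.linalg.norm(H @ v - lam * (S @ v)))
```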

    Reducing Communication in the Solution of Linear Systems

    There is a growing performance gap between computation and communication on modern computers, making it crucial to develop algorithms with lower latency and bandwidth requirements. Because systems of linear equations are important for numerous scientific and engineering applications, I have studied several approaches for reducing communication in those problems. First, I develop optimizations to dense LU with partial pivoting, which downstream applications can adopt with little to no effort. Second, I consider two techniques to completely replace pivoting in dense LU, which can provide significantly higher speedups, albeit without the same numerical guarantees as partial pivoting. One technique uses randomized preprocessing, while the other is a novel combination of block factorization and additive perturbation. Finally, I investigate using mixed precision in GMRES for solving sparse systems, which reduces the volume of data movement and thus the pressure on memory bandwidth.
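
    A common way to realize the mixed-precision idea mentioned above is to apply the operator in single precision while keeping the Krylov vectors and right-hand side in double. The sketch below illustrates that pattern with SciPy's GMRES; it is an illustration of the general idea, not the implementation studied in the thesis.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres

# Sparse test system in double precision.
n = 1000
A = sp.random(n, n, density=0.01, format="csr", random_state=0) + 10.0 * sp.eye(n)
b = np.ones(n)

# Keep a single-precision copy of A; the matrix-vector products (the dominant
# source of data movement) then touch half as many bytes.
A32 = A.astype(np.float32)

def matvec(x):
    # low-precision product, promoted back to double for the Krylov recurrence
    return (A32 @ x.astype(np.float32)).astype(np.float64)

A_mixed = LinearOperator((n, n), matvec=matvec, dtype=np.float64)

x, info = gmres(A_mixed, b)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```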

    A Kogbetliantz-type algorithm for the hyperbolic SVD

    In this paper a two-sided, parallel Kogbetliantz-type algorithm for the hyperbolic singular value decomposition (HSVD) of real and complex square matrices is developed, under the single assumption that the input matrix, of order $n$, admits such a decomposition into the product of a unitary matrix, a non-negative diagonal matrix, and a $J$-unitary matrix, where $J$ is a given diagonal matrix of positive and negative signs. When $J=\pm I$, the proposed algorithm computes the ordinary SVD. The paper's most important contribution, a derivation of formulas for the HSVD of $2\times 2$ matrices, is presented first, followed by the details of their implementation in floating-point arithmetic. Next, the effects of the hyperbolic transformations on the columns of the iteration matrix are discussed. These effects then guide a redesign of the dynamic pivot ordering, already a well-established pivot strategy for the ordinary Kogbetliantz algorithm, for the general $n\times n$ HSVD. A heuristic but sound convergence criterion is then proposed, which contributes to the high accuracy demonstrated in the numerical tests. The $J$-Kogbetliantz algorithm as presented here is intrinsically slow, but is nevertheless usable for matrices of small orders. (Comment: a heavily revised version with 32 pages and 4 figures)
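
    The $J$-unitary factor in the HSVD satisfies $V^* J V = J$, and the elementary building block of such factors is a 2x2 hyperbolic rotation. The snippet below only verifies that defining property; it is not the Kogbetliantz-type algorithm of the paper.

```python
import numpy as np

# signature matrix for the 2x2 hyperbolic case
J = np.diag([1.0, -1.0])

def hyperbolic_rotation(t):
    """2x2 hyperbolic rotation: J-orthogonal, i.e. H.T @ J @ H == J."""
    ch, sh = np.cosh(t), np.sinh(t)
    return np.array([[ch, sh], [sh, ch]])

H = hyperbolic_rotation(0.7)
print(np.allclose(H.T @ J @ H, J))         # True: H preserves the indefinite form
print(np.allclose(np.linalg.det(H), 1.0))  # cosh^2 - sinh^2 = 1
```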