
    Solving large sparse eigenvalue problems on supercomputers

    An important problem in scientific computing is finding a few eigenvalues and corresponding eigenvectors of a very large, sparse matrix. The most popular methods for these problems are based on projection onto appropriate subspaces; their main attraction is that they require the matrix only in the form of matrix-by-vector multiplications. Supercomputer implementations of two such methods for symmetric matrices, Lanczos' method and Davidson's method, are compared. Since one of the most important operations in both methods is the multiplication of vectors by the sparse matrix, ways of performing this operation efficiently are discussed. The advantages and disadvantages of each method are weighed, implementation aspects are discussed, and numerical experiments on a single processor of a CRAY-2 and a CRAY X-MP are reported. Possible parallel implementations are also discussed.
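
    A minimal sketch, not from the paper (which targeted Cray vector machines): projection-based eigensolvers of this kind need the matrix only through its action on a vector. The example below uses SciPy's ARPACK-based eigsh (an implicitly restarted Lanczos method) with a LinearOperator wrapper around a CSR sparse matrix-vector product; the 1-D Laplacian test matrix, size, and tolerance are illustrative assumptions.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, eigsh

    n = 1_000
    # Example matrix: 1-D Laplacian stored in CSR, so A @ v is a fast sparse matvec.
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

    # Expose the matrix only through matrix-by-vector products, as projection
    # methods require.
    op = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=np.float64)

    # A few of the largest (algebraic) eigenvalues and their eigenvectors.
    vals, vecs = eigsh(op, k=6, which="LA", tol=1e-8)
    print(vals)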

    A Self-learning Algebraic Multigrid Method for Extremal Singular Triplets and Eigenpairs

    A self-learning algebraic multigrid method for dominant and minimal singular triplets and eigenpairs is described. The method consists of two multilevel phases. In the first, multiplicative phase (setup phase), tentative singular triplets are calculated along with a multigrid hierarchy of interpolation operators that approximately fit the tentative singular vectors in a collective and self-learning manner, using multiplicative update formulas. In the second, additive phase (solve phase), the tentative singular triplets are improved up to the desired accuracy by using an additive correction scheme with fixed interpolation operators, combined with a Ritz update. A suitable generalization of the singular value decomposition is formulated that applies to the coarse levels of the multilevel cycles. The proposed algorithm combines and extends two existing multigrid approaches for symmetric positive definite eigenvalue problems to the case of dominant and minimal singular triplets. Numerical tests on model problems from different areas show that the algorithm converges to high accuracy in a modest number of iterations, and is flexible enough to deal with a variety of problems due to its self-learning properties.
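
    As an illustration of the Ritz update mentioned in the abstract only (this is a standard Rayleigh-Ritz refinement for singular triplets, not the paper's self-learning multigrid algorithm), the sketch below projects A onto assumed orthonormal trial subspaces U and V, solves the small projected SVD exactly, and lifts the result back. The test matrix, subspace dimension, and the helper name ritz_update_svd are all hypothetical.

    import numpy as np

    def ritz_update_svd(A, U, V):
        # Rayleigh-Ritz-style refinement: given orthonormal bases U (m x k) and
        # V (n x k) for tentative left/right singular subspaces of A, project A,
        # take the small SVD, and lift back to improved singular triplets.
        B = U.T @ (A @ V)              # small k x k projected matrix
        Ub, s, Vbt = np.linalg.svd(B)  # exact SVD of the projected problem
        return U @ Ub, s, V @ Vbt.T    # improved (left vectors, values, right vectors)

    # Toy usage on a random matrix with random orthonormal trial subspaces.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 120))
    U, _ = np.linalg.qr(rng.standard_normal((200, 5)))
    V, _ = np.linalg.qr(rng.standard_normal((120, 5)))
    U_new, sigma, V_new = ritz_update_svd(A, U, V)
    print(sigma)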

    The Anderson model of localization: a challenge for modern eigenvalue methods

    We present a comparative study of the application of modern eigenvalue algorithms to an eigenvalue problem arising in quantum physics, namely, the computation of a few interior eigenvalues and their associated eigenvectors for the large, sparse, real, symmetric, and indefinite matrices of the Anderson model of localization. We compare the Lanczos algorithm in the 1987 implementation of Cullum and Willoughby with the implicitly restarted Arnoldi method coupled with polynomial and several shift-and-invert convergence accelerators as well as with a sparse hybrid tridiagonalization method. We demonstrate that for our problem the Lanczos implementation is faster and more memory efficient than the other approaches. This seemingly innocuous problem presents a major challenge for all modern eigenvalue algorithms.
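
    A rough illustration of the shift-and-invert acceleration discussed above, not the authors' code: SciPy's eigsh supports shift-and-invert through its sigma argument, so interior eigenvalues near a target can be computed directly. The matrix below is a toy 1-D Hamiltonian with nearest-neighbour hopping and random on-site disorder, not the 3-D Anderson model studied in the paper, and the disorder strength and problem size are arbitrary.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    rng = np.random.default_rng(42)
    n, W = 5_000, 16.0                    # number of sites and disorder strength (illustrative)
    onsite = rng.uniform(-W / 2, W / 2, n)
    H = sp.diags([np.ones(n - 1), onsite, np.ones(n - 1)],
                 offsets=[-1, 0, 1], format="csc")

    # Shift-and-invert around sigma = 0: (H - sigma*I) is factorized once and
    # Lanczos runs on its inverse, so eigenvalues nearest the band centre
    # converge first.
    vals, vecs = eigsh(H, k=5, sigma=0.0, which="LM")
    print(np.sort(vals))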