
    Structure Preserving Parallel Algorithms for Solving the Bethe-Salpeter Eigenvalue Problem

    The Bethe-Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe-Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. We establish the equivalence between Bethe-Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe-Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm-Dancoff approximation are overestimated. In order to solve large scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
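
    The paper's structure preserving algorithms are not reproduced here, but the Tamm-Dancoff overestimation claim can be illustrated on a toy instance. The sketch below assumes the standard Bethe-Salpeter block structure H = [[A, B], [-conj(B), -conj(A)]] with A Hermitian and B complex symmetric, and a diagonal shift chosen so the random example falls in the definite case the abstract refers to; sizes and matrices are arbitrary.

```python
import numpy as np

# Toy Bethe-Salpeter-structured matrix (assumed standard form, not the paper's code):
# H = [[A, B], [-conj(B), -conj(A)]], with A Hermitian and B complex symmetric.
rng = np.random.default_rng(0)
n = 4

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2 + 4 * n * np.eye(n)   # Hermitian, shifted into the definite case
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2                              # complex symmetric

H = np.block([[A, B], [-B.conj(), -A.conj()]])

# In the definite case the spectrum is real and comes in +/- pairs; keep the positive half.
bse = np.sort(np.linalg.eigvals(H).real)[n:]

# Tamm-Dancoff approximation: drop the coupling block B and diagonalize A alone.
tda = np.linalg.eigvalsh(A)

print("positive BSE eigenvalues :", bse)
print("TDA eigenvalues          :", tda)
print("TDA - BSE (expected >= 0):", tda - bse)
```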

    Minimizing Communication for Eigenproblems and the Singular Value Decomposition

    Algorithms have two costs: arithmetic and communication. The latter represents the cost of moving data, either between levels of a memory hierarchy or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \cite{BDHS10}, lower bounds were presented on the amount of communication required for essentially all $O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that do attain these lower bounds, and we analyze their convergence and communication costs. Comment: 43 pages, 11 figures.
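
    As a rough, illustrative reading of the lower bound mentioned above (not the paper's analysis), the sketch below compares the asymptotic n^3 / sqrt(M) words-moved bound for a fast memory of M words against the roughly n^3 words a conventional unblocked sweep moves; the constants, matrix size, and memory size are hypothetical.

```python
# Back-of-the-envelope comparison of communication volumes for O(n^3)-like
# dense linear algebra (illustrative only; constants omitted, M is assumed).

def words_lower_bound(n: int, fast_memory_words: int) -> float:
    # Asymptotic lower bound: on the order of n^3 / sqrt(M) words moved
    # between slow and fast memory for O(n^3) arithmetic.
    return n ** 3 / fast_memory_words ** 0.5

def words_unblocked(n: int) -> float:
    # A conventional unblocked sweep re-reads O(n) data per step,
    # moving on the order of n^3 words in total.
    return float(n ** 3)

n = 10_000            # matrix dimension (example)
M = 1 << 20           # fast memory of about 10^6 words (example)
print(f"lower bound : ~{words_lower_bound(n, M):.2e} words")
print(f"unblocked   : ~{words_unblocked(n):.2e} words")
```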

    Fast and accurate con-eigenvalue algorithm for optimal rational approximations

    The need to compute small con-eigenvalues and the associated con-eigenvectors of positive-definite Cauchy matrices naturally arises when constructing rational approximations with a (near) optimally small $L^{\infty}$ error. Specifically, given a rational function with $n$ poles in the unit disk, a rational approximation with $m \ll n$ poles in the unit disk may be obtained from the $m$th con-eigenvector of an $n \times n$ Cauchy matrix, where the associated con-eigenvalue $\lambda_{m} > 0$ gives the approximation error in the $L^{\infty}$ norm. Unfortunately, standard algorithms do not accurately compute small con-eigenvalues (and the associated con-eigenvectors) and, in particular, yield few or no correct digits for con-eigenvalues smaller than the machine roundoff. We develop a fast and accurate algorithm for computing con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices, yielding even the tiniest con-eigenvalues with high relative accuracy. The algorithm computes the $m$th con-eigenvalue in $\mathcal{O}(m^{2}n)$ operations and, since the con-eigenvalues of positive-definite Cauchy matrices decay exponentially fast, we obtain (near) optimal rational approximations in $\mathcal{O}(n(\log\delta^{-1})^{2})$ operations, where $\delta$ is the approximation error in the $L^{\infty}$ norm. We derive error bounds demonstrating high relative accuracy of the computed con-eigenvalues and the high accuracy of the unit con-eigenvectors. We also provide examples of using the algorithm to compute (near) optimal rational approximations of functions with singularities and sharp transitions, where approximation errors close to machine precision are obtained. Finally, we present numerical tests on random (complex-valued) Cauchy matrices to show that the algorithm computes all the con-eigenvalues and con-eigenvectors with nearly full precision.
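
    The high-relative-accuracy algorithm itself is not reproduced here, but the con-eigenvalue problem can be stated concretely with a naive baseline. The sketch below builds a small positive-definite Cauchy matrix of the kind that arises for poles inside the unit disk (the node and weight choices are made up) and computes its con-eigenvalues, C conj(u) = lambda u, by the straightforward route via C conj(C); this baseline attains only absolute accuracy for the tiny con-eigenvalues, which is exactly the limitation the abstract points out.

```python
import numpy as np

# Naive baseline for the con-eigenvalue problem C conj(u) = lambda u, lambda >= 0,
# on a positive-definite Cauchy matrix.  Nodes and weights below are hypothetical;
# this is NOT the paper's high-relative-accuracy algorithm.
rng = np.random.default_rng(1)
n = 12
gamma = 0.9 * rng.random(n) * np.exp(2j * np.pi * rng.random(n))   # poles in the unit disk
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)           # residue-like weights

# Positive-definite Cauchy (Pick) matrix: C_ij = a_i conj(a_j) / (1 - gamma_i conj(gamma_j))
C = np.outer(a, a.conj()) / (1.0 - np.outer(gamma, gamma.conj()))

# If C conj(u) = lambda u, then (C conj(C)) u = |lambda|^2 u, so the con-eigenvalues are
# the square roots of the eigenvalues of C conj(C).  Computed this way in double precision,
# con-eigenvalues near or below machine roundoff carry few or no correct digits.
con_eigs = np.sort(np.sqrt(np.abs(np.linalg.eigvals(C @ C.conj()))))[::-1]
print(con_eigs)
```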

    The Anderson model of localization: a challenge for modern eigenvalue methods

    We present a comparative study of the application of modern eigenvalue algorithms to an eigenvalue problem arising in quantum physics, namely, the computation of a few interior eigenvalues and their associated eigenvectors for the large, sparse, real, symmetric, and indefinite matrices of the Anderson model of localization. We compare the Lanczos algorithm in the 1987 implementation of Cullum and Willoughby with the implicitly restarted Arnoldi method coupled with polynomial and several shift-and-invert convergence accelerators, as well as with a sparse hybrid tridiagonalization method. We demonstrate that for our problem the Lanczos implementation is faster and more memory efficient than the other approaches. This seemingly innocuous problem presents a major challenge for all modern eigenvalue algorithms. Comment: 16 LaTeX pages with 3 figures included.
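
    For concreteness, a minimal version of the underlying matrix and of one of the compared approaches (shift-and-invert Arnoldi/Lanczos for interior eigenvalues) can be set up as below. The lattice size, disorder strength, periodic boundary conditions, and SciPy's ARPACK wrapper are illustrative choices, not the implementations benchmarked in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Anderson model of localization on an L x L x L cubic lattice (periodic boundaries):
# random on-site disorder on the diagonal, unit nearest-neighbour hopping off it.
def anderson_hamiltonian(L: int, W: float, seed: int = 0) -> sp.csr_matrix:
    rng = np.random.default_rng(seed)
    n = L ** 3
    H = sp.lil_matrix((n, n))
    H.setdiag(rng.uniform(-W / 2, W / 2, size=n))      # diagonal disorder in [-W/2, W/2]
    idx = np.arange(n).reshape(L, L, L)
    for axis in range(3):                              # hopping to nearest neighbours
        for i, j in zip(idx.ravel(), np.roll(idx, -1, axis=axis).ravel()):
            H[i, j] = H[j, i] = 1.0
    return H.tocsr()

H = anderson_hamiltonian(L=10, W=16.5)                 # example size and disorder strength

# A few interior eigenpairs near the band centre via shift-and-invert (ARPACK).
vals, vecs = eigsh(H, k=5, sigma=0.0, which="LM")
print(np.sort(vals))
```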