66 research outputs found
Linear algebra meets Lie algebra: the Kostant-Wallach theory
In two languages, Linear Algebra and Lie Algebra, we describe the results of Kostant and Wallach on the fibre of matrices with prescribed eigenvalues of all leading principal submatrices. In addition, we present a brief introduction to basic notions in Algebraic Geometry, Integrable Systems, and Lie Algebra aimed at specialists in Linear Algebra. Comment: 27 pages, LaTeX; abstract added
From qd to LR, or, how were the qd and LR algorithms discovered?
Perhaps the most astonishing idea in eigenvalue computation is Rutishauser's idea of applying the LR transform to a matrix to generate a sequence of similar matrices that become more and more triangular. The same idea is the foundation of the ubiquitous QR algorithm. It is well known that this idea originated in Rutishauser's qd algorithm, which precedes the LR algorithm and can be understood as applying LR to a tridiagonal matrix. But how did Rutishauser discover qd, and when did he find the qd-LR connection? We checked some of the early sources and have come up with an explanation.
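The LR transform described above can be sketched in a few lines: factor A = LR (unit lower triangular times upper triangular, i.e. an LU decomposition without pivoting), form the similar matrix RL, and repeat; the diagonal drifts toward the eigenvalues. A minimal pure-Python sketch, where the test matrix, the iteration count, and the absence of pivoting and shifts are illustrative assumptions, not the practical algorithm:

```python
def lu(a):
    # Doolittle LU decomposition without pivoting: a = L @ U
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in a]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lr_step(a):
    # one LR step: a = L R  ->  return the similar matrix R L
    L, R = lu(a)
    n = len(a)
    return [[sum(R[i][k] * L[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[4.0, 1.0], [2.0, 3.0]]   # eigenvalues 5 and 2
for _ in range(50):
    A = lr_step(A)
# diagonal converges toward the eigenvalues; subdiagonal -> 0
```

The subdiagonal decays like (2/5)^k here, which is why real implementations add shifts to accelerate convergence.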
Decomposition of a Symmetric Matrix ; CU-CS-080-75
An algorithm is presented to compute a triangular factorization and the inertia of a symmetric matrix. The algorithm is stable even when the matrix is not positive definite and is as fast as Choleski. Programs for solving the associated systems of linear equations are included.
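The connection between a triangular factorization and the inertia can be sketched directly: by Sylvester's law of inertia, the signs of the diagonal of D in A = LDLᵀ give the numbers of positive, negative, and zero eigenvalues of A. The sketch below omits the pivoting with 1×1/2×2 blocks that makes the actual algorithm stable for indefinite matrices; it is an assumption-laden illustration that works only when all leading principal minors are nonzero:

```python
def ldlt(a):
    # simplified LDL^T without pivoting (the stable algorithm
    # adds symmetric pivoting with 1x1 and 2x2 diagonal blocks)
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = a[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (a[i][j]
                       - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d

def inertia(d):
    # Sylvester's law of inertia: signs of D give the inertia of A
    pos = sum(x > 0 for x in d)
    neg = sum(x < 0 for x in d)
    return pos, neg, len(d) - pos - neg

A = [[2.0, 1.0], [1.0, -1.0]]   # indefinite symmetric matrix
L, d = ldlt(A)                  # d = [2.0, -1.5]
# inertia(d) = (1, 1, 0): one positive, one negative eigenvalue
```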
Performance and Accuracy of LAPACK's Symmetric Tridiagonal Eigensolvers
We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study includes a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n ε), where ε denotes machine precision. The accuracy of BI and MR is generally O(n ε). (5) MR is preferable to BI for subset computations.
For tridiagonals T replace T with LDLt
Abstract: The same number of parameters determine a tridiagonal matrix T and its triangular factors L, D and U. The mapping T→LDU is not well defined for all tridiagonals but, in finite precision arithmetic, L, D and U determine the entries of T to more than working precision. For the solution of linear equations LDUx=b the advantages of factorization are clear. Recent work has shown that LDU is also preferable for the eigenproblem, particularly in the symmetric case. This essay describes two of the ideas needed to compute eigenvectors that are orthogonal without recourse to the Gram–Schmidt procedure when some of the eigenvalues are tightly clustered. In the symmetric case we must replace T, or a translate of T, by its triangular factors LDLt.
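The parameter count mentioned above is easy to see in the symmetric case: a tridiagonal T with diagonal a (n entries) and off-diagonal b (n-1 entries) maps to n pivots d and n-1 multipliers l, via d[0] = a[0], l[i] = b[i]/d[i], d[i+1] = a[i+1] - l[i]·b[i]. A minimal sketch, with an illustrative example matrix (this is the bare recurrence, without the shifting and representation choices the essay discusses):

```python
def tridiag_ldlt(a, b):
    # LDL^T of a symmetric tridiagonal T with diagonal a and
    # off-diagonal b: same parameter count, n + (n - 1)
    n = len(a)
    d = [0.0] * n        # pivots: diagonal of D
    l = [0.0] * (n - 1)  # multipliers: subdiagonal of unit L
    d[0] = a[0]
    for i in range(n - 1):
        l[i] = b[i] / d[i]
        d[i + 1] = a[i + 1] - l[i] * b[i]
    return l, d

a, b = [2.0, 2.0, 2.0], [-1.0, -1.0]
l, d = tridiag_ldlt(a, b)
# the factors reproduce T exactly: b[i] = l[i] * d[i],
# a[i+1] = d[i+1] + l[i]**2 * d[i]
```

The map fails exactly when some pivot d[i] vanishes, which is the sense in which T→LDU is not well defined for all tridiagonals.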