Differential qd algorithm with shifts for rank-structured matrices
Although QR iterations dominate eigenvalue computations, there are several important cases in which alternative LR-type algorithms may be preferable. In particular, in the symmetric tridiagonal case the differential qd algorithm with shifts (dqds) proposed by Fernando and Parlett often enjoys faster convergence while preserving high relative accuracy (which the QR algorithm does not guarantee). In eigenvalue computations for rank-structured matrices the QR algorithm is also a popular choice since, in the symmetric case, the rank structure is preserved. In the unsymmetric case, however, the QR algorithm destroys the rank structure and, hence, LR-type algorithms come into play once again. In the current paper we derive several variants of qd algorithms for quasiseparable matrices. Remarkably, one of them, when applied to Hessenberg matrices, becomes a direct generalization of the dqds algorithm for tridiagonal matrices. It can therefore be applied to such important matrices as companion and confederate matrices, and provides an alternative algorithm for finding the roots of a polynomial represented in a basis of orthogonal polynomials. Results of preliminary numerical experiments are presented.
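For reference, a minimal Python/NumPy sketch of one step of the classical dqds transform in the tridiagonal setting, i.e. the Fernando-Parlett algorithm that the paper generalizes; the variable names q, e and the shift s follow the usual qd notation and the test data are illustrative, not taken from the paper.

import numpy as np

def dqds_step(q, e, shift):
    """One dqds transform (Fernando-Parlett) on the qd variables q (length n)
    and e (length n-1) of the current bidiagonal/LDL^T-type factorization,
    with shift s. Returns the transformed arrays (q_hat, e_hat)."""
    n = len(q)
    q_hat = np.empty(n)
    e_hat = np.empty(n - 1)
    d = q[0] - shift
    for i in range(n - 1):
        q_hat[i] = d + e[i]
        t = q[i + 1] / q_hat[i]
        e_hat[i] = e[i] * t
        d = d * t - shift
    q_hat[n - 1] = d
    return q_hat, e_hat

# toy data: qd variables of a small positive example
q = np.array([4.0, 3.0, 2.0, 1.0])
e = np.array([0.5, 0.5, 0.5])
qh, eh = dqds_step(q, e, shift=0.1)
# qd identity q_hat_i + e_hat_{i-1} = q_i + e_i - s, summed over i:
assert abs(qh.sum() + eh.sum() - (q.sum() + e.sum() - 4 * 0.1)) < 1e-12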
A Hamiltonian Krylov-Schur-type method based on the symplectic Lanczos process
We discuss a Krylov-Schur-like restarting technique applied within the symplectic Lanczos algorithm for the Hamiltonian eigenvalue problem. This makes it easy to implement a purging and locking strategy that improves the convergence properties of the symplectic Lanczos algorithm. The Krylov-Schur-like restarting is based on the SR algorithm; some of its ingredients need to be adapted to the structure of the symplectic Lanczos recursion. We demonstrate the efficiency of the new method on several Hamiltonian eigenproblems.
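As background on the structure the symplectic Lanczos process exploits: H is Hamiltonian when H = [[A, G], [Q, -A^T]] with G and Q symmetric, equivalently (JH)^T = JH for J = [[0, I], [-I, 0]]. A small sketch, not the paper's algorithm, that builds such a matrix and checks the structure and the (lambda, -lambda) eigenvalue pairing:

import numpy as np

def random_hamiltonian(n, rng=np.random.default_rng(0)):
    """Build a 2n x 2n Hamiltonian matrix H = [[A, G], [Q, -A^T]]
    with G and Q symmetric (illustrative random data)."""
    A = rng.standard_normal((n, n))
    G = rng.standard_normal((n, n)); G = (G + G.T) / 2
    Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
    return np.block([[A, G], [Q, -A.T]])

n = 4
H = random_hamiltonian(n)
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
# Hamiltonian structure: J @ H is symmetric
assert np.allclose((J @ H).T, J @ H)
# spectrum is symmetric with respect to lambda -> -lambda
lam = np.linalg.eigvals(H)
assert all(np.min(np.abs(lam + mu)) < 1e-8 for mu in lam)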
Minimizing Communication in Linear Algebra
In 1981 Hong and Kung proved a lower bound on the amount of communication needed to perform dense matrix multiplication using the conventional algorithm, where the input matrices were too large to fit in the small, fast memory. In 2004 Irony, Toledo and Tiskin gave a new proof of this result and extended it to the parallel case. In both cases the lower bound may be expressed as Ω(#arithmetic operations / √M), where M is the size of the fast memory (or local memory in the parallel case). Here we generalize these results to a much wider variety of algorithms, including LU factorization, Cholesky factorization, LDL^T factorization, QR factorization,
algorithms for eigenvalues and singular values, i.e., essentially all direct
methods of linear algebra. The proof works for dense or sparse matrices, and
for sequential or parallel algorithms. In addition to lower bounds on the
amount of data moved (bandwidth) we get lower bounds on the number of messages
required to move it (latency). We illustrate how to extend our lower bound
technique to compositions of linear algebra operations (like computing powers
of a matrix), to decide whether it is enough to call a sequence of simpler
optimal algorithms (like matrix multiplication) to minimize communication, or
if we can do better. We give examples of both. We also show how to extend our
lower bounds to certain graph theoretic problems.
We point out recently designed algorithms for dense LU, Cholesky, QR,
eigenvalue and the SVD problems that attain these lower bounds; implementations
of LU and QR show large speedups over conventional linear algebra algorithms in
standard libraries like LAPACK and ScaLAPACK. Many open problems remain.
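To make the bound concrete, a back-of-the-envelope sketch comparing the Ω(#flops/√M) word-count lower bound for classical matrix multiplication with the traffic of a standard blocked algorithm; the dimension n and memory size M below are assumed example values, not figures from the paper.

import math

def matmul_bandwidth(n, M):
    """Estimate words moved for n x n matrix multiplication with fast memory
    of M words: the Omega(#flops / sqrt(M)) lower bound vs. a blocked algorithm."""
    flops = 2 * n**3                      # classical O(n^3) multiplication
    lower_bound = flops / math.sqrt(M)    # Omega(#flops / sqrt(M)) words
    b = int(math.sqrt(M / 3))             # block size so three b x b tiles fit in M
    # blocked algorithm: each of the (n/b)^3 tile products reads an A tile and
    # a B tile; each C tile is read and written once per (i, j) pair (2*n^2 words)
    words_blocked = (n / b) ** 3 * 2 * b**2 + 2 * n**2
    return lower_bound, words_blocked

lb, moved = matmul_bandwidth(n=4096, M=2**20)
print(f"lower bound ~ {lb:.3e} words, blocked algorithm ~ {moved:.3e} words")

With these assumed values the blocked algorithm moves only a small constant factor more words than the lower bound, illustrating what "attaining the lower bound" means.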
Computing the Kalman form
We present two algorithms for the computation of the Kalman form of a linear
control system. The first one is based on the technique developed by
Keller-Gehrig for the computation of the characteristic polynomial. The cost is
a logarithmic number of matrix multiplications. To our knowledge, this improves
the best previously known algebraic complexity by an order of magnitude. Then
we also present a cubic algorithm proven to be more efficient in practice.
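For orientation only, a numerical sketch of the object the Kalman form exposes, not the algebraic (Keller-Gehrig-based) algorithms of the paper: the size of the controllable block equals the rank of the Krylov/controllability matrix [B, AB, ..., A^(n-1)B]. The toy system below is an assumed example.

import numpy as np

def controllability_rank(A, B):
    """Rank of the controllability (Krylov) matrix [B, AB, ..., A^(n-1)B];
    this equals the dimension of the controllable block in the Kalman form."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    K = np.hstack(blocks)
    return np.linalg.matrix_rank(K)

# toy system: the third state is never excited by the input
A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0], [1.0], [0.0]])
print(controllability_rank(A, B))   # -> 2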
Computing a partial Schur factorization of nonlinear eigenvalue problems using the infinite Arnoldi method
The partial Schur factorization can be used to represent several eigenpairs
of a matrix in a numerically robust way. Different adaptations of the Arnoldi
method are often used to compute partial Schur factorizations. We propose here
a technique to compute a partial Schur factorization of a nonlinear eigenvalue
problem (NEP). The technique is inspired by the algorithm in [8], now called
the infinite Arnoldi method. The infinite Arnoldi method is a method designed
for NEPs, and can be interpreted as Arnoldi's method applied to a linear
infinite-dimensional operator, whose reciprocal eigenvalues are the solutions
to the NEP. As a first result we show that the invariant pairs of the operator
are equivalent to invariant pairs of the NEP. We characterize the structure of
the invariant pairs of the operator and show how one can carry out a
modification of the infinite Arnoldi method by respecting the structure. This
also allows us to naturally add the feature known as locking. We nest this
algorithm with an outer iteration, where the infinite Arnoldi method for a
particular type of structured functions is appropriately restarted. The
restarting exploits the structure and is inspired by the well-known implicitly
restarted Arnoldi method for standard eigenvalue problems. The final algorithm
is applied to examples from a benchmark collection, showing that both
processing time and memory consumption can be considerably reduced with the
restarting technique.
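As background, a minimal sketch of the standard finite-dimensional Arnoldi iteration that the infinite Arnoldi method mimics at the operator level; this is the textbook linear algorithm, not the paper's NEP variant or its structured restarting, and the test matrix is an assumed example.

import numpy as np

def arnoldi(A, b, k):
    """k Arnoldi steps: A @ V[:, :k] = V @ H with V (n x (k+1)) orthonormal
    and H ((k+1) x k) upper Hessenberg. Eigenvalues of H[:k, :k] are Ritz
    approximations to eigenvalues of A."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
k = 30
V, H = arnoldi(A, rng.standard_normal(200), k)
ritz = np.linalg.eigvals(H[:k, :k])            # Ritz values approximating eig(A)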
An atlas for tridiagonal isospectral manifolds
Let $\mathcal{T}_\Lambda$ be the compact manifold of real symmetric tridiagonal matrices conjugate to a given diagonal matrix $\Lambda$ with simple spectrum. We introduce {\it bidiagonal coordinates}, charts defined on open dense domains forming an explicit atlas for $\mathcal{T}_\Lambda$. In contrast to the standard inverse variables, consisting of eigenvalues and norming constants, every matrix in $\mathcal{T}_\Lambda$ now lies in the interior of some chart domain. We
provide examples of the convenience of these new coordinates for the study of
asymptotics of isospectral dynamics, both for continuous and discrete time.
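A small numerical illustration, with assumed example data, of a point of the isospectral manifold described above: conjugating the diagonal matrix by a random orthogonal matrix and tridiagonalizing the result stays within the set of real symmetric tridiagonal matrices with the same simple spectrum.

import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
lam = np.array([1.0, 2.0, 3.0, 5.0])        # assumed simple spectrum
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
S = Q @ np.diag(lam) @ Q.T                   # symmetric, isospectral to diag(lam)
T = hessenberg(S)                            # symmetric Hessenberg = tridiagonal
T = (T + T.T) / 2                            # clean up rounding asymmetry
assert np.allclose(np.linalg.eigvalsh(T), lam)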