Orthogonal Cauchy-like matrices
Cauchy-like matrices often arise as building blocks in decomposition formulas and fast algorithms for various displacement-structured matrices. A complete characterization of orthogonal Cauchy-like matrices is given here. In particular, we show that orthogonal Cauchy-like matrices correspond to eigenvector matrices of certain symmetric matrices related to the solution of secular equations. Moreover, the construction of orthogonal Cauchy-like matrices is related to that of orthogonal rational functions with variable poles.
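The link to secular equations can be made concrete in a small experiment. The sketch below is illustrative only (the diagonal-plus-rank-one setup and all names are assumptions, not taken from the paper): the eigenvector matrix of a symmetric matrix D + zz^T, whose eigenvalues solve a secular equation, is orthogonal and has Cauchy-like columns proportional to z_i / (d_i - lambda_j).

```python
# Illustrative sketch (assumed setup, not the paper's construction): the
# eigenvector matrix of the symmetric diagonal-plus-rank-one matrix
# A = D + z z^T -- the setting in which secular equations arise -- is
# orthogonal, and its columns are Cauchy-like: column j is proportional
# to z_i / (d_i - lambda_j).
import numpy as np

rng = np.random.default_rng(0)
n = 6
d = np.sort(rng.uniform(0.0, 1.0, n))       # distinct diagonal entries
z = rng.uniform(0.5, 1.0, n)                # nonzero rank-one vector
A = np.diag(d) + np.outer(z, z)

lam, V = np.linalg.eigh(A)                  # V is orthogonal

# Cauchy-like reconstruction: column j proportional to z / (d - lam_j)
C = z[:, None] / (d[:, None] - lam[None, :])
C = C / np.linalg.norm(C, axis=0)           # normalize columns

signs = np.sign(np.sum(C * V, axis=0))      # match column signs
print(np.max(np.abs(C * signs - V)))        # small: V is Cauchy-like
print(np.max(np.abs(V.T @ V - np.eye(n))))  # small: V is orthogonal
```

The reconstruction follows from the eigenvector equation: (d_i - lambda) v_i + z_i (z^T v) = 0 forces v_i to be a multiple of z_i / (d_i - lambda), and strict interlacing of the d_i and lambda_j keeps the denominators nonzero.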
Computing a partial Schur factorization of nonlinear eigenvalue problems using the infinite Arnoldi method
The partial Schur factorization can be used to represent several eigenpairs
of a matrix in a numerically robust way. Different adaptations of the Arnoldi
method are often used to compute partial Schur factorizations. We propose here
a technique to compute a partial Schur factorization of a nonlinear eigenvalue
problem (NEP). The technique is inspired by the algorithm in [8], now called
the infinite Arnoldi method. The infinite Arnoldi method is a method designed
for NEPs, and can be interpreted as Arnoldi's method applied to a linear
infinite-dimensional operator, whose reciprocal eigenvalues are the solutions
to the NEP. As a first result we show that the invariant pairs of the operator
are equivalent to invariant pairs of the NEP. We characterize the structure of
the invariant pairs of the operator and show how one can carry out a
modification of the infinite Arnoldi method by respecting the structure. This
also allows us to naturally add the feature known as locking. We nest this
algorithm with an outer iteration, where the infinite Arnoldi method for a
particular type of structured functions is appropriately restarted. The
restarting exploits the structure and is inspired by the well-known implicitly
restarted Arnoldi method for standard eigenvalue problems. The final algorithm
is applied to examples from a benchmark collection, showing that both
processing time and memory consumption can be considerably reduced with the
restarting technique.
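For orientation, here is a minimal sketch of the standard Arnoldi iteration that the infinite Arnoldi method adapts; the test matrix and all names are illustrative assumptions, not from the paper.

```python
# Minimal sketch of the standard Arnoldi iteration: build an orthonormal
# Krylov basis Q and a small upper Hessenberg matrix H whose eigenvalues
# (Ritz values) approximate outer eigenvalues of A.
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: A @ Q[:, :k] == Q @ H with H upper Hessenberg."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(1)
n, k = 100, 30
A = np.diag(np.arange(1.0, n + 1.0)) + 1e-3 * rng.standard_normal((n, n))
Q, H = arnoldi(A, rng.standard_normal(n), k)

ritz = np.linalg.eigvals(H[:-1, :])         # Ritz values from the small H
print(np.max(ritz.real))                    # close to the dominant eigenvalue
```

Implicit restarting, which the paper adapts to its structured setting, keeps only the Ritz information of interest and compresses the basis Q, which is what bounds both processing time and memory.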
A framework for structured linearizations of matrix polynomials in various bases
We present a framework for the construction of linearizations for scalar and
matrix polynomials based on dual bases which, in the case of orthogonal
polynomials, can be described by the associated recurrence relations. The
framework provides an extension of the classical linearization theory for
polynomials expressed in non-monomial bases and allows to represent polynomials
expressed in product families, that is as a linear combination of elements of
the form , where and
can either be polynomial bases or polynomial families
which satisfy some mild assumptions. We show that this general construction can
be used for many different purposes. Among them, we show how to linearize sums
of polynomials and rational functions expressed in different bases. As an
example, this allows one to look for intersections of functions interpolated on
different nodes without converting them to the same basis. We then provide some
constructions of structured linearizations for ⋆-even and
⋆-palindromic matrix polynomials. The extension of these constructions
to ⋆-odd and ⋆-antipalindromic matrix polynomials of odd degree is
discussed and follows immediately from the previous results.
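As background, the classical monomial-basis linearization that this framework generalizes can be sketched as follows; the degree-2 example and all names are illustrative textbook material, not the paper's dual-basis construction.

```python
# First companion linearization (monomial basis) of a degree-2 matrix
# polynomial P(l) = A0 + l*A1 + l^2*A2: its eigenvalues are recovered
# from a 2n x 2n linear pencil.
import numpy as np

rng = np.random.default_rng(2)
n = 3
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

I, Z = np.eye(n), np.zeros((n, n))
# Pencil L(l) = l*X + Y with X = diag(A2, I) and Y = [[A1, A0], [-I, 0]];
# then L(l) @ [l*v; v] = [P(l) v; 0], so eigenvalues of P solve L(l) w = 0.
X = np.block([[A2, Z], [Z, I]])
Y = np.block([[A1, A0], [-I, Z]])

lam = np.linalg.eigvals(np.linalg.solve(X, -Y))   # the 2n eigenvalues of P

def P(l):
    return A0 + l * A1 + l * l * A2

# Each computed eigenvalue makes P(l) (numerically) singular:
worst = max(np.linalg.svd(P(l), compute_uv=False)[-1] for l in lam)
print(worst)
```

The framework in the paper replaces the monomial shift structure of this pencil with dual-basis (e.g. recurrence-based) blocks, so the same idea applies to polynomials expressed in non-monomial or product families.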
Fast and accurate con-eigenvalue algorithm for optimal rational approximations
The need to compute small con-eigenvalues and the associated con-eigenvectors
of positive-definite Cauchy matrices naturally arises when constructing
rational approximations with a (near) optimally small error.
Specifically, given a rational function with n poles in the unit disk, a
rational approximation with m < n poles in the unit disk may be obtained
from the m-th con-eigenvector of an n x n Cauchy matrix, where the
associated con-eigenvalue gives the approximation error in the L^∞
norm. Unfortunately, standard algorithms do not accurately compute
small con-eigenvalues (and the associated con-eigenvectors) and, in particular,
yield few or no correct digits for con-eigenvalues smaller than the machine
roundoff. We develop a fast and accurate algorithm for computing
con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices,
yielding even the tiniest con-eigenvalues with high relative accuracy. The
algorithm computes the m-th con-eigenvalue in O(m^2 n) operations
and, since the con-eigenvalues of positive-definite Cauchy matrices decay
exponentially fast, we obtain (near) optimal rational approximations in
O(n (log δ^-1)^2) operations, where δ is the
approximation error in the L^∞ norm. We derive error bounds
demonstrating high relative accuracy of the computed con-eigenvalues and the
high accuracy of the unit con-eigenvectors. We also provide examples of using
the algorithm to compute (near) optimal rational approximations of functions
with singularities and sharp transitions, where approximation errors close to
machine precision are obtained. Finally, we present numerical tests on random
(complex-valued) Cauchy matrices to show that the algorithm computes all the
con-eigenvalues and con-eigenvectors with nearly full precision.
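For a complex symmetric matrix such as a Cauchy matrix with entries 1/(x_i + x_j), con-eigenpairs C conj(u) = lam*u with lam >= 0 can be obtained naively from an SVD via a Takagi-style phase correction. The sketch below is illustrative only: it is not the paper's algorithm, and it is exactly the kind of approach that loses relative accuracy once con-eigenvalues approach machine roundoff; it does, however, show the construction and the fast decay of the con-eigenvalues.

```python
# Naive con-eigenpair computation for a complex symmetric Cauchy matrix
# (illustrative; not the paper's high-relative-accuracy algorithm).
import numpy as np

rng = np.random.default_rng(3)
n = 6
x = rng.uniform(0.5, 2.0, n) + 1j * rng.uniform(-1.0, 1.0, n)  # Re x > 0
C = 1.0 / (x[:, None] + x[None, :])         # complex symmetric Cauchy matrix

U, s, Vh = np.linalg.svd(C)
V = Vh.conj().T
# Since C = C^T, conj(v_k) = e^{i a_k} u_k for distinct singular values;
# then t_k = e^{i a_k / 2} u_k satisfies C conj(t_k) = s_k t_k, i.e.
# (s_k, t_k) is a con-eigenpair with con-eigenvalue s_k >= 0.
phases = np.array([U[:, k].conj() @ V[:, k].conj() for k in range(n)])
T = U * np.exp(1j * np.angle(phases) / 2)[None, :]

residual = np.max(np.abs(C @ T.conj() - T * s[None, :]))
print(residual)                             # small absolute residual
print(s[0] / s[-1])                         # con-eigenvalues decay fast
```

The residual here is small only in an absolute sense; for con-eigenvalues far below the largest one, an SVD-based route like this yields few or no correct relative digits, which is the accuracy gap the paper's algorithm closes.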