A note on generalized companion pencils in the monomial basis
In this paper, we introduce a new notion of generalized companion pencils for scalar polynomials over an arbitrary field expressed in the monomial basis. Our definition is quite general and extends the notions of companion pencil in De Terán et al. (Linear Algebra Appl 459:264–333, 2014), generalized companion matrix in Garnett et al. (Linear Algebra Appl 498:360–365, 2016), and Ma–Zhan companion matrices in Ma and Zhan (Linear Algebra Appl 438:621–625, 2013), as well as the class of quasi-sparse companion pencils introduced in De Terán and Hernando (INdAM Series, Springer, Berlin, pp 157–179, 2019). We analyze some algebraic properties of generalized companion pencils. We determine their Smith canonical form and we prove that they are all nonderogatory. In the last part of the work we pay attention to the sparsity of these constructions. In particular, by imposing some natural conditions on their entries, we determine the smallest number of nonzero entries of a generalized companion pencil. This work has been partially supported by the Ministerio de Economía y Competitividad of Spain through Grants MTM2017-90682-REDT and MTM2015-65798-P.
The Ehrlich-Aberth method for palindromic matrix polynomials represented in the Dickson basis
An algorithm based on the Ehrlich-Aberth root-finding method is presented for
the computation of the eigenvalues of a T-palindromic matrix polynomial. A
structured linearization of the polynomial represented in the Dickson basis is
introduced in order to exploit the symmetry of the roots by halving the total
number of the required approximations. The rank structure properties of the
linearization allow the design of a fast and numerically robust implementation
of the root-finding iteration. Numerical experiments that confirm the
effectiveness and the robustness of the approach are provided. Comment: in press in Linear Algebra Appl. (2011).
Eigenvectors and minimal bases for some families of Fiedler-like linearizations
In this paper, we obtain formulas for the left and right eigenvectors and minimal bases of some families of Fiedler-like linearizations of square matrix polynomials; in particular, for the families of Fiedler pencils, generalized Fiedler pencils, and Fiedler pencils with repetition. These formulas allow us to relate the eigenvectors and minimal bases of the linearizations with those of the polynomial. Since the eigenvectors appear in the standard formula for the condition number of eigenvalues of matrix polynomials, our results may be used to compare the condition numbers of eigenvalues of the linearizations within these families with the corresponding condition number of the polynomial eigenvalue problem.
Palindromic Companion Forms for Matrix Polynomials of Odd Degree
The standard way to solve polynomial eigenvalue problems P(λ)x = 0 is to convert the matrix polynomial P(λ) into a matrix pencil that preserves its spectral information, a process known as linearization. When P(λ) is palindromic, the eigenvalues, elementary divisors, and minimal indices of P(λ) have certain symmetries that can be lost when using the classical first and second Frobenius companion linearizations for numerical computations, since these linearizations do not preserve the palindromic structure. Recently new families of pencils have been introduced with the goal of finding linearizations that retain whatever structure the original P(λ) might possess, with particular attention to the preservation of palindromic structure. However, no general construction of palindromic linearizations valid for all palindromic polynomials has as yet been achieved. In this paper we present a family of linearizations for odd degree polynomials P(λ) which are palindromic whenever P(λ) is, and which are valid for all palindromic polynomials of odd degree. We illustrate our construction with several examples. In addition, we establish a simple way to recover the minimal indices of the polynomial from those of the linearizations in the new family.
Polynomial Eigenproblems: a Root-Finding Approach
A matrix polynomial, also known as a polynomial matrix,
is a polynomial whose coefficients are matrices; or, equivalently, a matrix whose elements are polynomials.
If the matrix polynomial P(x) is regular, that is, if p(x) := det(P(x)) is not identically zero,
the polynomial eigenvalue problem associated with P(x) is equivalent to the
computation of the roots of the polynomial p(x); such roots are called the
eigenvalues of the regular matrix polynomial P(x). Sometimes, one is also interested in computing
the corresponding (left and right) eigenvectors.
Recently, much attention in the literature has been devoted to the polynomial
eigenvalue problem (PEP). This line of research is currently very active: the theoretical properties of PEPs are studied,
and fast and numerically stable methods are sought
for their numerical solution. The most commonly encountered
case is that of degree 2 polynomials,
but there exist applications where higher degree polynomials appear. More generally, PEPs are special
cases belonging to the wider class of nonlinear eigenvalue problems. Amongst nonlinear eigenvalue problems, rational eigenvalue
problems
can be brought immediately to polynomial form by multiplying them by their least common denominator; truly
nonlinear eigenvalue problems may be approximated with PEPs, by truncating a matrix power series, or with
rational eigenproblems, using rational approximants such as Padé approximants.
To approximate numerically the solutions of PEPs, several
algorithms have been introduced based on the technique of
linearization, where the polynomial problem is replaced by a linear
pencil of larger size and the customary methods for the generalised
eigenvalue problem, such as the QZ algorithm, are applied.
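As a small illustration (a sketch with arbitrarily chosen matrices, not taken from the thesis), both routes to the eigenvalues of a quadratic matrix polynomial can be checked against each other: root-finding on p(x) = det(P(x)), and the first companion linearization. Since the leading coefficient below is invertible, the pencil is reduced to a standard eigenproblem instead of calling a QZ routine:

```python
import numpy as np

# Arbitrary 2x2 quadratic matrix polynomial P(x) = A2 x^2 + A1 x + A0
# (matrices chosen only for illustration).
A0 = np.array([[2.0, 1.0], [0.0, 3.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
A2 = np.eye(2)
n, k = 2, 2

# Route 1: eigenvalues as roots of the scalar polynomial p(x) = det(P(x)),
# whose degree is at most n*k = 4; recover its coefficients by
# interpolating det(P(x)) at n*k + 1 points.
xs = np.linspace(-3.0, 3.0, n * k + 1)
dets = [np.linalg.det(A2 * x**2 + A1 * x + A0) for x in xs]
coeffs = np.polyfit(xs, dets, n * k)
eigs_roots = np.sort_complex(np.roots(coeffs))

# Route 2: first companion linearization L(x) = x*X + Y with
# X = diag(A2, I) and Y = [[A1, A0], [-I, 0]]; det(L(x)) = det(P(x)).
I2, Z = np.eye(n), np.zeros((n, n))
X = np.block([[A2, Z], [Z, I2]])
Y = np.block([[A1, A0], [-I2, Z]])
# In practice one applies a QZ-type method to the pencil; since X is
# invertible here, we may reduce to a standard eigenvalue problem.
eigs_lin = np.sort_complex(np.linalg.eigvals(np.linalg.solve(X, -Y)))
```

The two sorted spectra agree to machine precision, which is exactly the equivalence exploited by the root-finding approach.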
This thesis is devoted to the design and analysis of
algorithms for the polynomial eigenvalue problem based on a root-finding approach.
A root-finder is applied to the
characteristic equation p(x)=0. In particular, we discuss algorithms based on the
Ehrlich-Aberth iteration.
The Ehrlich-Aberth iteration (EAI) is a method that simultaneously approximates all
the roots of a (scalar) polynomial.
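In a minimal form, the EAI combines a Newton correction with a pairwise "repulsion" term that keeps the approximations apart. The sketch below is illustrative only: the starting circle and angular offset are arbitrary choices, and production solvers such as MPSolve are far more careful about initial guesses, scaling, and precision.

```python
import numpy as np

def ehrlich_aberth(coeffs, tol=1e-12, maxit=200):
    """Simultaneously approximate all roots of a scalar polynomial.

    A minimal sketch, not the thesis implementation; coeffs are given
    highest-degree first.
    """
    k = len(coeffs) - 1
    poly = np.polynomial.Polynomial(coeffs[::-1])
    dpoly = poly.deriv()
    # Initial guesses on a circle; the angular offset 0.7 is an arbitrary
    # choice that avoids symmetric starting configurations.
    z = 1.5 * np.exp(1j * (2.0 * np.pi * np.arange(k) / k + 0.7))
    mask = ~np.eye(k, dtype=bool)
    for _ in range(maxit):
        newton = poly(z) / dpoly(z)        # Newton correction p/p'
        diff = z[:, None] - z[None, :]     # pairwise differences
        inv = np.zeros_like(diff)
        inv[mask] = 1.0 / diff[mask]       # skip the i == j terms
        aberth = inv.sum(axis=1)           # Aberth repulsion term
        step = newton / (1.0 - newton * aberth)
        z = z - step
        if np.max(np.abs(step)) < tol:
            break
    return z

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = ehrlich_aberth([1.0, -6.0, 11.0, -6.0])
```

Each update z_i ← z_i − N_i / (1 − N_i Σ_{j≠i} 1/(z_i − z_j)), with N_i = p(z_i)/p'(z_i), is Newton's step on the polynomial deflated by the other current approximations.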
In order to adapt the EAI to the numerical solution of a PEP,
we propose a method based on the Jacobi formula; two implementations of the EAI are
discussed: one uses a linearization, while the other works directly on the matrix polynomial.
The algorithm that we propose has quadratic computational complexity with respect to
the degree k of the matrix polynomial. This leads to a computational advantage when the ratio k^2/n, where
n is the dimension of the matrix coefficients, is large.
Cases of this kind can be encountered, for instance,
in the truncation of matrix power series. If k^2/n is small, the EAI can be implemented
in such a way that its asymptotic complexity is cubic (or slightly supercubic) in nk, but QZ-based methods appear to be
faster in this case. Nevertheless, experiments suggest that the EAI can improve the approximations computed by the QZ in terms of forward
error, so that even when it is not as fast as other algorithms it is still suitable as a refinement method.
The EAI does not compute the eigenvectors. If they are needed, the EAI
can be combined with other methods such as the SVD or the inverse iteration. In the
experiments we performed, eigenvectors were computed in this way, and
they were approximated with higher accuracy than with the QZ.
Another root-finding approach to PEPs, similar to the EAI, is to
apply the Newton method sequentially to each eigenvalue, using an implicit deflation of the previously
computed roots of the determinant
in order to avoid approximating the same eigenvalue twice. Our numerical experience
suggests that, in terms of efficiency, the EAI is superior to the sequential Newton method with deflation.
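The implicit deflation idea can be sketched for a scalar polynomial (illustrative code with made-up names, not the thesis implementation): Newton is applied to p(x)/∏_j(x − r_j), whose correction is 1/(p'/p − Σ_j 1/(x − r_j)), so the deflated polynomial is never formed explicitly:

```python
def newton_deflated(p, dp, found, x0, tol=1e-12, maxit=100):
    """One Newton run with implicit deflation of already-found roots.

    Newton applied to p(x) / prod_j (x - r_j) has the correction
    1 / (p'(x)/p(x) - sum_j 1/(x - r_j)), so earlier roots are
    avoided without ever dividing the polynomial through.
    """
    x = x0
    for _ in range(maxit):
        fx = p(x)
        if fx == 0.0:            # landed exactly on a root
            break
        defl = sum(1.0 / (x - r) for r in found)
        step = 1.0 / (dp(x) / fx - defl)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative scalar polynomial: p(x) = (x - 1)(x - 2)(x - 3)
p = lambda x: x**3 - 6 * x**2 + 11 * x - 6
dp = lambda x: 3 * x**2 - 12 * x + 11
found = []
for _ in range(3):
    found.append(newton_deflated(p, dp, found, 0.0))
```

Starting every run from the same point, the deflation terms steer each run to a root not yet computed.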
Specific attention is devoted to structured problems where the matrix
coefficients have some additional feature which is reflected in
structural properties of the roots. For instance, in the case of
T-palindromic polynomials, the roots are encountered in pairs (x,1/x).
In this case the goal is to design
algorithms which take advantage of this additional information about
the eigenvalues and deliver approximations to the eigenvalues which
respect these symmetries independently of the rounding
errors.
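As a tiny scalar illustration of this pairing (an example chosen for this note, not taken from the thesis), a polynomial with a palindromic coefficient vector satisfies x^k p(1/x) = p(x), so its root set is closed under x ↦ 1/x:

```python
import numpy as np

# Palindromic coefficient vector: it reads the same in both directions,
# hence x^4 * p(1/x) = p(x) and the roots pair up as (x, 1/x).
p = np.array([2.0, 5.0, 3.0, 5.0, 2.0])
roots = np.roots(p)

# The set of reciprocals of the roots coincides with the set of roots.
recips = np.sort_complex(1.0 / roots)
```

A structure-preserving solver aims to return approximations for which this pairing holds exactly, not merely up to rounding.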
Within this setting, we study the case of polynomials endowed with specific properties
like, for instance, palindromic, T-palindromic, Hamiltonian, symplectic, even/odd, etc.,
whose eigenvalues have special symmetries in the complex plane.
In general, we
may consider the case of structures where the roots can be grouped in
pairs as (x, f(x)), where f(x) is any self-inverse analytic function, that is, f(f(x)) = x.
We propose a unifying treatment of structured polynomials belonging to this class
and show how the EAI can be adapted to deal with them in a
very effective way. Several structured variants of the EAI are available for this goal;
they are described in this thesis.
All of these variants enable one to
compute only a subset of eigenvalues and to recover the remaining part
of the spectrum by means of the symmetries satisfied by the
eigenvalues. By exploiting the structure of the problem, this
approach leads to savings in the number of floating point operations and provides
algorithms which yield numerical approximations fulfilling
the symmetry properties. Our research on the structured EAI can of course also be applied to scalar polynomials: in the
near future, we plan to exploit our results and design new features for the software MPSolve.
While studying the theoretical properties of the change of variable, which is useful in designing one of the structured EAI methods, we
discovered some theorems on the behaviour of the complete eigenstructure of a matrix polynomial under a rational change of
variable. Such results are discussed in this thesis.
Some, but not all, of the different structured versions of the EAI algorithm have a drawback: accuracy is lost for eigenvalues that
are close to a finite number of critical values, called exceptional eigenvalues. On the other
hand, it turns out that, at least for some specific structures,
the versions that suffer from this problem are also the most efficient ones; thus, it is desirable to circumvent the
loss of accuracy. This can
be done by designing a structured refinement Newton algorithm. Besides
its application to structured PEPs, this algorithm can find further application in the computation of the
roots of scalar polynomials whose roots appear in pairs.
In this thesis, we also present the results of several
numerical experiments performed in order to test the effectiveness of
our approach in terms of speed and of accuracy. We have compared the
Ehrlich-Aberth iteration with the Matlab functions polyeig and quadeig. In the structured case, we have also considered, when
available, other structured methods, such as the URV algorithm by
Schroeder. Moreover, the different versions of our algorithm are compared with one another.