
    Using deflation in the pole assignment problem with output feedback

    A direct algorithm is suggested for computing a linear output feedback for a multi-input, multi-output system such that the resulting closed-loop matrix has eigenvalues that include a specified set. The algorithm uses deflation based on unitary similarity transformations; it is therefore hoped to be numerically stable, although this has not yet been proven.
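    As a hedged illustration of the deflation step mentioned above (not the paper's output-feedback algorithm itself): once an eigenpair of a matrix is known, a single unitary (here Householder) similarity moves that eigenvalue to the leading position and confines the remaining eigenvalues to the trailing block. The function name and test matrix below are illustrative choices.

```python
import numpy as np

def deflate_once(A, lam, v):
    """Deflate a known eigenpair (lam, v) of A by a Householder (unitary)
    similarity: the result has lam in position (0, 0), zeros below it, and
    the remaining eigenvalues of A in the trailing (n-1) x (n-1) block."""
    n = len(v)
    v = v / np.linalg.norm(v)
    sigma = -np.copysign(1.0, v[0])
    w = v - sigma * np.eye(n)[:, 0]
    Q = np.eye(n) - 2.0 * np.outer(w, w) / (w @ w)   # Householder reflector, Q v = sigma * e1
    return Q @ A @ Q                                 # Q is symmetric orthogonal, so Q^T = Q

# Tiny check: deflate the smallest eigenpair of a random symmetric matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                                          # symmetric, so eigenpairs are real
evals, evecs = np.linalg.eigh(A)
B = deflate_once(A, evals[0], evecs[:, 0])
print(np.round(B[:, 0], 8))                          # ~ [evals[0], 0, 0, 0]
print(np.round(np.linalg.eigvalsh(B[1:, 1:]), 8))    # ~ the remaining eigenvalues
```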

    Inverse eigenvalue problems for extended Hessenberg and extended tridiagonal matrices

    In inverse eigenvalue problems one tries to reconstruct a matrix, satisfying some constraints, from given spectral information. Here, two inverse eigenvalue problems are solved. First, given the eigenvalues and the first components of the associated eigenvectors (called the weight vector), an extended Hessenberg matrix with prescribed poles is computed that possesses these eigenvalues and satisfies the eigenvector constraints. The extended Hessenberg matrix is retrieved by executing specially designed unitary similarity transformations on the diagonal matrix containing the eigenvalues. This inverse problem is closely linked to orthogonal rational functions: the extended Hessenberg matrix contains the recurrence coefficients, given the nodes (eigenvalues), the poles (poles of the extended Hessenberg matrix), and the weight vector (first eigenvector components) determining the discrete inner product. Moreover, it can be viewed as the inverse of the (rational) Arnoldi algorithm: instead of using the (rational) Arnoldi method to compute a Krylov basis for approximating the spectrum, we reconstruct the orthogonal Krylov basis from the spectral information. In the second inverse eigenvalue problem we do the same, but drop the unitarity requirement. As a result we execute possibly non-unitary similarity transformations on the diagonal matrix of eigenvalues to retrieve a (possibly nonsymmetric) extended tridiagonal matrix. This algorithm is less stable but faster, as the extended tridiagonal matrix admits a factorization of cost O(n), where n is the number of eigenvalues, whereas the extended Hessenberg matrix does not. Again there is a close link with orthogonal function theory: the extended tridiagonal matrix captures the recurrence coefficients of biorthogonal rational functions. Moreover, it can in turn be viewed as the inverse of the nonsymmetric Lanczos algorithm: given the spectral properties, we reconstruct the two Krylov basis matrices linked to the nonsymmetric Lanczos algorithm. © 2014 Elsevier B.V. All rights reserved.
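    As a hedged sketch of the simplest special case of such an inverse eigenvalue problem (all poles at infinity, so no rational part): running an Arnoldi/Lanczos recurrence on the diagonal matrix of prescribed eigenvalues, with the weight vector as starting vector, produces a projected matrix with those eigenvalues whose eigenvectors have the normalized weights as first components. The extended Hessenberg structure with finite poles from the paper is not reproduced here; the function name is an illustrative choice.

```python
import numpy as np

def hessenberg_from_spectrum(lams, w):
    """Run a full Arnoldi recurrence on D = diag(lams) with starting vector w.
    The projected matrix H = Q^T D Q has eigenvalues lams, and the first
    components of its eigenvectors equal w / ||w|| (up to sign).  Since D is
    symmetric, H comes out (numerically) symmetric tridiagonal."""
    n = len(lams)
    D = np.diag(lams)
    Q = np.zeros((n, n))
    H = np.zeros((n, n))
    Q[:, 0] = w / np.linalg.norm(w)
    for k in range(n):
        u = D @ Q[:, k]
        for j in range(k + 1):               # modified Gram-Schmidt orthogonalization
            H[j, k] = Q[:, j] @ u
            u -= H[j, k] * Q[:, j]
        if k + 1 < n:
            H[k + 1, k] = np.linalg.norm(u)
            Q[:, k + 1] = u / H[k + 1, k]
    return H

lams = np.array([1.0, 2.0, 4.0, 7.0])        # prescribed eigenvalues (nodes)
w = np.array([0.5, 0.3, 0.8, 0.1])           # prescribed weight vector
H = hessenberg_from_spectrum(lams, w)
evals, evecs = np.linalg.eigh(H)
print(np.allclose(evals, lams))                      # True: eigenvalues recovered
print(np.abs(evecs[0, :]), w / np.linalg.norm(w))    # first eigenvector components = weights
```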

    A Classification of Non-Hermitian Random Matrices

    We present a classification of non-Hermitian random matrices based on implementing commuting discrete symmetries. It contains 38 classes. This generalizes the classification of Hermitian random matrices due to Altland and Zirnbauer and also extends the Ginibre ensembles of non-Hermitian matrices. Comment: 8 pages, contribution to the proceedings of the NATO Advanced Research Workshop on Statistical Field Theories, Como (Italy), 18-23 June 2001. Compared to the 2001 version, two misprints in one table have been corrected; in the previous version these led to miscounting the number of classes as 43 instead of 38. The explicit details of the classification are unchanged.
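    For concreteness, the Ginibre ensembles mentioned above are the prototypical non-Hermitian case: independent complex Gaussian entries with no Hermiticity constraint. A minimal sampling sketch (the normalization below is chosen so that, by the circular law, the spectrum approximately fills the unit disc):

```python
import numpy as np

# Sample one matrix from the complex Ginibre ensemble: i.i.d. complex Gaussian
# entries, no Hermiticity imposed, scaled so the entry variance is 1/n.
rng = np.random.default_rng(2)
n = 500
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
evals = np.linalg.eigvals(G)
# Unlike a Hermitian ensemble, the eigenvalues are genuinely complex and
# spread over the unit disc rather than lying on the real axis.
print(np.max(np.abs(evals)))              # close to 1 for large n
print(np.max(np.abs(evals.imag)) > 0.5)   # True: spectrum is far from the real axis
```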

    Computing approximate (block) rational Krylov subspaces without explicit inversion with extensions to symmetric matrices

    It has been shown that approximate extended Krylov subspaces can be computed, under certain assumptions, without any explicit inversion or system solves. Instead, the vectors spanning the extended Krylov subspace are retrieved in an implicit way, via unitary similarity transformations, from an enlarged Krylov subspace. In this paper the approach is generalized to rational Krylov subspaces, which, besides poles at infinity and zero, also contain finite nonzero poles. Furthermore, the algorithms are generalized to deal with block rational Krylov subspaces, and techniques to exploit the symmetry when working with Hermitian matrices are also presented. For each variant of the algorithm, numerical experiments illustrate the power of the new approach. The experiments involve matrix functions, Ritz-value computations, and the solution of matrix equations.
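    For orientation, the sketch below builds a small rational Krylov basis explicitly, using one direct system solve per finite pole; this is precisely the kind of computation the implicit, inversion-free approach of the paper avoids, and it is not that algorithm. The function name is an illustrative choice.

```python
import numpy as np

def rational_krylov_basis(A, v, poles):
    """Orthonormal basis of the rational Krylov subspace
        span{ v, (A - s1 I)^{-1} v, (A - s2 I)^{-1} (A - s1 I)^{-1} v, ... },
    built explicitly with one direct solve per finite pole."""
    n = len(v)
    Q = np.zeros((n, len(poles) + 1))
    Q[:, 0] = v / np.linalg.norm(v)
    x = v.copy()
    for k, s in enumerate(poles, start=1):
        x = np.linalg.solve(A - s * np.eye(n), x)   # explicit solve: what the paper avoids
        y = x.copy()
        for j in range(k):                          # modified Gram-Schmidt
            y -= (Q[:, j] @ y) * Q[:, j]
        Q[:, k] = y / np.linalg.norm(y)
    return Q

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
v = rng.standard_normal(50)
Q = rational_krylov_basis(A, v, poles=[2.0, -1.0, 0.5])
print(np.max(np.abs(Q.T @ Q - np.eye(Q.shape[1]))))   # near machine precision: orthonormal
```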

    A canonical form for nonderogatory matrices under unitary similarity

    A square matrix is nonderogatory if its Jordan blocks have distinct eigenvalues. We give canonical forms (i) for nonderogatory complex matrices up to unitary similarity and (ii) for pairs of complex matrices up to similarity, in which one matrix has distinct eigenvalues. The types of these canonical forms are given by undirected and, respectively, directed graphs with no undirected cycles. Comment: 18 pages.
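    A minimal numerical check of the definition used above (a matrix is nonderogatory exactly when every eigenvalue has a single Jordan block, i.e. geometric multiplicity one); the tolerance-based eigenvalue grouping is a rough illustrative choice and not part of the paper.

```python
import numpy as np

def is_nonderogatory(A, tol=1e-8):
    """Rough numerical test: A is nonderogatory iff every distinct eigenvalue
    has geometric multiplicity 1.  Eigenvalues closer than `tol` are treated
    as equal, so this is only an illustration, not an exact decision."""
    n = A.shape[0]
    distinct = []
    for lam in np.linalg.eigvals(A):
        if all(abs(lam - mu) > tol for mu in distinct):
            distinct.append(lam)
    for lam in distinct:
        geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if geo_mult != 1:
            return False
    return True

A1 = np.array([[2., 1., 0.],
               [0., 3., 4.],
               [0., 0., 5.]])      # distinct eigenvalues 2, 3, 5: nonderogatory
A2 = np.diag([2., 2., 5.])         # eigenvalue 2 has two Jordan blocks: derogatory
print(is_nonderogatory(A1), is_nonderogatory(A2))   # True False
```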