14 research outputs found

    Differential qd algorithm with shifts for rank-structured matrices

    Although QR iterations dominate eigenvalue computations, there are several important cases in which alternative LR-type algorithms may be preferable. In particular, in the symmetric tridiagonal case, the differential qd algorithm with shifts (dqds) proposed by Fernando and Parlett often converges faster while preserving high relative accuracy, which the QR algorithm does not guarantee. In eigenvalue computations for rank-structured matrices the QR algorithm is also a popular choice since, in the symmetric case, it preserves the rank structure. In the unsymmetric case, however, the QR algorithm destroys the rank structure, and LR-type algorithms come into play once again. In the present paper we derive several variants of qd algorithms for quasiseparable matrices. Remarkably, one of them, when applied to Hessenberg matrices, becomes a direct generalization of the dqds algorithm for tridiagonal matrices. It can therefore be applied to such important matrices as companion and confederate matrices, and provides an alternative algorithm for finding the roots of a polynomial represented in a basis of orthogonal polynomials. Results of preliminary numerical experiments are presented.
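    For intuition, one zero-shift sweep of the tridiagonal dqd recurrence (the s = 0 special case of dqds, equivalent to one Cholesky-LR step) can be sketched as below; repeated sweeps drive the q-values toward the eigenvalues. This is an illustrative sketch only, with the shift strategy and deflation that make dqds fast omitted, and it is not the quasiseparable generalization of the paper.

```python
import numpy as np

def dqd_sweep(q, e):
    """One zero-shift dqd sweep on the qd variables (q, e) of a positive
    definite tridiagonal T = R^T R, with R upper bidiagonal, r_ii^2 = q_i
    and r_{i,i+1}^2 = e_i."""
    n = len(q)
    qh, eh = np.empty(n), np.empty(n - 1)
    d = q[0]
    for k in range(n - 1):
        qh[k] = d + e[k]
        t = q[k + 1] / qh[k]
        eh[k] = e[k] * t
        d = d * t
    qh[n - 1] = d
    return qh, eh

# T = tridiag(1, 2, 1) of order 3, eigenvalues 2 + sqrt(2), 2, 2 - sqrt(2);
# qd variables taken from its Cholesky factor:
q = np.array([2.0, 1.5, 4.0 / 3.0])
e = np.array([0.5, 2.0 / 3.0])
for _ in range(80):
    q, e = dqd_sweep(q, e)
# q now approximates the eigenvalues in decreasing order; e has decayed to zero
```

Since each sweep is a similarity transformation in factored form, the eigenvalues are preserved while the off-diagonal quantities e converge linearly with ratios governed by eigenvalue gaps.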

    Row Compression and Nested Product Decomposition of a Hierarchical Representation of a Quasiseparable Matrix

    This research introduces a row compression and nested product decomposition of an n×n hierarchical representation of a rank-structured matrix A, which extends the compression and nested product decomposition of a quasiseparable matrix. The hierarchical parameter extraction algorithm for a quasiseparable matrix is efficient, requiring only O(n log n) operations, and is proven backward stable. The row compression consists of a sequence of small Householder transformations formed from the low-rank, lower triangular, off-diagonal blocks of the hierarchical representation. It yields a factorization A = QC, where Q is the product of the Householder transformations and C preserves the low-rank structure in both the lower and upper triangular parts of A. The nested product decomposition is obtained by applying a sequence of orthogonal transformations to the low-rank, upper triangular, off-diagonal blocks of the compressed matrix C. Both the compression and decomposition algorithms are stable and require O(n log n) operations. At present, the matrix-vector product and solver algorithms are the only ones fully proven to be backward stable for quasiseparable matrices. By combining the fast matrix-vector product and system solver, linear systems involving the hierarchical representation are solved directly via the nested product decomposition, with linear complexity and unconditional stability. Applications in image deblurring and compression that capitalize on the concepts from the row compression and nested product decomposition algorithms are shown.
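    The row-compression idea can be illustrated on a dense matrix, where the sequence of Householder reflectors reduces to an ordinary QR-style factorization A = QC; in the hierarchical setting the reflectors stay small because they act only on the low-rank off-diagonal blocks. A minimal dense sketch (function names are ours, not from the paper):

```python
import numpy as np

def householder_vec(x):
    """Householder vector v with (I - 2 v v^T / v^T v) x proportional to e_1."""
    v = x.astype(float).copy()
    v[0] += np.copysign(np.linalg.norm(x), v[0])
    return v

def row_compress(A):
    """Toy dense analogue of row compression: A = Q C with Q a product of
    Householder reflectors and C upper triangular."""
    m, n = A.shape
    C = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m, n)):
        v = householder_vec(C[k:, k])
        vv = v @ v
        if vv == 0.0:               # column already zero below the diagonal
            continue
        beta = 2.0 / vv
        C[k:, :] -= beta * np.outer(v, v @ C[k:, :])   # C <- H_k C
        Q[:, k:] -= beta * np.outer(Q[:, k:] @ v, v)   # Q <- Q H_k
    return Q, C
```

Each reflector touches only rows k and below, mirroring how the paper's reflectors touch only the block being compressed.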

    Orthogonal Cauchy-like matrices

    Cauchy-like matrices often arise as building blocks in decomposition formulas and fast algorithms for various displacement-structured matrices. A complete characterization of orthogonal Cauchy-like matrices is given here. In particular, we show that orthogonal Cauchy-like matrices correspond to eigenvector matrices of certain symmetric matrices related to the solution of secular equations. Moreover, the construction of orthogonal Cauchy-like matrices is related to that of orthogonal rational functions with variable poles.
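    The stated connection can be checked numerically in a classical special case: for a symmetric diagonal-plus-rank-one matrix D + zz^T, secular-equation theory gives eigenvectors proportional to z_j/(d_j − λ_i), so the normalized eigenvector matrix is an orthogonal Cauchy-like matrix. A small sketch (our own toy data):

```python
import numpy as np

# Eigenvector matrices of symmetric diagonal-plus-rank-one matrices
# D + z z^T are orthogonal Cauchy-like: column i is proportional to
# z_j / (d_j - lambda_i), where lambda_i solves the secular equation.
rng = np.random.default_rng(0)
d = np.sort(rng.uniform(0.0, 1.0, 5))     # distinct diagonal of D
z = rng.uniform(0.5, 1.0, 5)              # nonzero rank-one vector
A = np.diag(d) + np.outer(z, z)

lam, V = np.linalg.eigh(A)                # V is orthogonal
C = z[:, None] / (d[:, None] - lam[None, :])   # raw Cauchy-like matrix
C /= np.linalg.norm(C, axis=0)            # normalize each column
# up to column signs, C coincides with the orthogonal eigenvector matrix V
```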

    Structured generalized eigenvalue condition numbers for parameterized quasiseparable matrices

    In this paper, when A and B are {1,1}-quasiseparable matrices, we consider the structured generalized relative eigenvalue condition numbers of the pair (A, B) with respect to relative perturbations of the parameters defining A and B in the quasiseparable and the Givens-vector representations of these matrices. A general expression is derived for the condition number of the generalized eigenvalue problem for the pair (A, B), where A and B are any differentiable functions of a vector of parameters, with respect to perturbations of those parameters. Moreover, the explicit expressions of the corresponding structured condition numbers with respect to the quasiseparable and Givens-vector...
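    For intuition, a relative eigenvalue condition number with respect to a single parameter, κ(p) = |p/λ · ∂λ/∂p|, can be estimated by finite differences on a toy parameterized pair. The matrices below are our own illustration, not the {1,1}-quasiseparable setting or the formulas of the paper:

```python
import numpy as np

def rel_eig_cond(build_pair, p, idx, h=1e-7):
    """Finite-difference estimate of kappa = |p / lambda * d lambda / d p|
    for the idx-th (ascending) eigenvalue of the pair (A(p), B(p))."""
    A, B = build_pair(p)
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
    A2, B2 = build_pair(p * (1.0 + h))
    lam2 = np.sort(np.linalg.eigvals(np.linalg.solve(B2, A2)).real)
    return abs((lam2[idx] - lam[idx]) / lam[idx]) / h

def build_pair(p):
    """Toy pair: A(p) = tridiag(p, 2, p) of order 3, B = I."""
    A = 2.0 * np.eye(3) + p * (np.eye(3, k=1) + np.eye(3, k=-1))
    return A, np.eye(3)

kappa = rel_eig_cond(build_pair, 0.5, 0)
# smallest eigenvalue is 2 - p*sqrt(2), so kappa = p*sqrt(2)/(2 - p*sqrt(2))
```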

    Structured condition numbers for parameterized quasiseparable matrices

    Low-rank structured matrices have attracted much attention in the last decades, since they arise in many applications and all share the fundamental property that they can be represented by O(n) parameters, where n × n is the size of the matrix. This property has allowed the development of fast algorithms for solving numerically many problems involving low-rank structured matrices by performing operations on the parameters describing the matrices, instead of directly on the matrix entries. Among these problems, the solution of linear systems of equations and the computation of eigenvalues are probably the most basic and relevant ones. It is therefore important to measure, via computable structured condition numbers, the relative sensitivity of the solutions of linear systems with low-rank structured coefficient matrices, and of the eigenvalues of those matrices, with respect to relative perturbations of the parameters representing such matrices, since this sensitivity determines the maximum accuracy attainable by fast algorithms and allows us to decide which set of parameters is the most convenient from the point of view of accuracy. In this PhD thesis we develop and analyze condition numbers for eigenvalues of low-rank matrices and for the solutions of linear systems involving such matrices. To this end, general expressions are obtained for the condition numbers of the solution of a linear system of equations whose coefficient matrix is any differentiable function of a vector of parameters, with respect to perturbations of such parameters, and also for the eigenvalues of those matrices. Since there are many different classes of low-rank structured matrices and many different types of parameters describing them, it is not possible to cover all of them in this thesis. Therefore, the general expressions of the condition numbers are particularized to the important case of quasiseparable matrices and to the quasiseparable and the Givens-vector representations.
In the case of {1,1}-quasiseparable matrices, we provide explicit expressions of the corresponding condition numbers for these two representations that can be estimated in O(n) operations. In addition, detailed theoretical and numerical comparisons are provided of the condition numbers for these two representations between themselves, and with respect to unstructured condition numbers. These comparisons show that there are situations in which the unstructured condition numbers are much larger than the structured ones, but that the opposite never happens (...). The approach presented in this dissertation can be generalized to other classes of low-rank structured matrices and parameterizations, as well as to any class of structured matrices that can be represented by parameters, independently of whether or not they enjoy a "low-rank" structure.
Programa Oficial de Doctorado en Ingeniería Matemática. Committee: President: Ana María Urbano Salvador; Secretary: Fernando de Terán Vergara; Member: J. Javier Martínez Fernández-Hera.
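    As an illustration of the O(n)-parameter idea, a {1,1}-quasiseparable matrix can be multiplied by a vector in linear time directly from its parameters. The sketch below uses a generic parameter naming of our own (p, a, q for the lower part, d for the diagonal, g, b, h for the upper part) and compares the O(n) sweeps against an entrywise assembly:

```python
import numpy as np

def qs_matvec(d, p, a, q, g, b, h, x):
    """O(n) product y = A x for a {1,1}-quasiseparable A with entries
    A[i,j] = p[i]*a[i-1]*...*a[j+1]*q[j] below the diagonal, A[i,i] = d[i],
    and A[i,j] = g[i]*b[i+1]*...*b[j-1]*h[j] above it."""
    n = len(d)
    y = d * x
    t = 0.0                                  # forward sweep: lower part
    for i in range(1, n):
        t = a[i - 1] * t + q[i - 1] * x[i - 1]
        y[i] += p[i] * t
    u = 0.0                                  # backward sweep: upper part
    for i in range(n - 2, -1, -1):
        u = b[i + 1] * u + h[i + 1] * x[i + 1]
        y[i] += g[i] * u
    return y

def qs_dense(d, p, a, q, g, b, h):
    """Assemble the same matrix entrywise (slow; for checking only)."""
    n = len(d)
    A = np.diag(d.astype(float))
    for i in range(n):
        for j in range(n):
            if i > j:
                A[i, j] = p[i] * np.prod(a[j + 1:i]) * q[j]
            elif i < j:
                A[i, j] = g[i] * np.prod(b[i + 1:j]) * h[j]
    return A
```

The two single-term recursions are what make fast, parameter-based algorithms possible; the thesis studies how sensitive the results are to perturbations of exactly these parameters.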

    Exploiting rank structures for the numerical treatment of matrix polynomials


    Numerical solution of large-scale linear matrix equations

    We are interested in the numerical solution of large-scale linear matrix equations. In particular, due to their occurrence in many applications, we study the so-called Sylvester and Lyapunov equations. A characteristic aspect of the large-scale setting is that although the data are sparse, the solution is in general dense, so that storing it may be infeasible. It is therefore necessary that the solution admit a memory-saving approximation that can be cheaply stored. An extensive literature treats the case of the aforementioned equations with low-rank right-hand side. This assumption, together with certain hypotheses on the spectral distribution of the matrix coefficients, is a sufficient condition for proving a fast decay in the singular values of the solution. This decay motivates the search for a low-rank approximation, so that only low-rank matrices are actually computed and stored, remarkably reducing the storage demand. This is the task of so-called low-rank methods, and a large amount of work in this direction has been carried out in recent years. Projection methods have been shown to be among the most effective low-rank methods, and in the first part of this thesis we propose some computational enhancements of the classical algorithms. The case of equations whose right-hand side is not necessarily of low rank has not been deeply analyzed so far, and efficient methods are still lacking in the literature. In this thesis we aim to contribute significantly to this open problem by introducing solution methods for this kind of equations. In particular, we address the case when the coefficient matrices and the right-hand side are banded, and we further generalize this structure by considering quasiseparable data. In the last part of the thesis we study large-scale generalized Sylvester equations and, under some assumptions on the coefficient matrices, propose novel approximation spaces for their solution by projection.
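    The singular-value decay described above is easy to observe in a toy setting. For diagonal A and B the Sylvester solution of A X + X B = C is explicit, X[i,j] = C[i,j]/(a_i + b_j), and with a rank-one right-hand side the singular values of X fall off rapidly. The data below are illustrative, not from the thesis:

```python
import numpy as np

n = 40
a = np.linspace(1.0, 10.0, n)            # eigenvalues of diagonal A
b = np.linspace(1.0, 10.0, n)            # eigenvalues of diagonal B
rng = np.random.default_rng(1)
u, v = rng.standard_normal(n), rng.standard_normal(n)

# explicit solution of A X + X B = u v^T for diagonal A, B
X = np.outer(u, v) / (a[:, None] + b[None, :])

# residual of the Sylvester equation (zero up to rounding)
res = a[:, None] * X + X * b[None, :] - np.outer(u, v)

# the singular values of X decay rapidly: X is numerically low rank,
# so a truncated factorization stores it in O(n) memory per retained term
s = np.linalg.svd(X, compute_uv=False)
```

The decay rate is governed by the separation between the spectra of A and −B, which is exactly the kind of spectral hypothesis the abstract mentions.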

    TR-2013010: Transformations of Matrix Structures Work Again II
