8 research outputs found

    Row Compression and Nested Product Decomposition of a Hierarchical Representation of a Quasiseparable Matrix

    This research introduces a row compression and nested product decomposition of an n × n hierarchical representation of a rank-structured matrix A, which extends the compression and nested product decomposition of a quasiseparable matrix. The hierarchical parameter extraction algorithm of a quasiseparable matrix is efficient, requiring only O(n log n) operations, and is proven backward stable. The row compression consists of a sequence of small Householder transformations formed from the low-rank, lower triangular, off-diagonal blocks of the hierarchical representation. The row compression yields a factorization A = QC, where Q is the product of the Householder transformations and C preserves the low-rank structure in both the lower and upper triangular parts of A. The nested product decomposition is obtained by applying a sequence of orthogonal transformations to the low-rank, upper triangular, off-diagonal blocks of the compressed matrix C. Both the compression and decomposition algorithms are stable and require O(n log n) operations. At present, the matrix-vector product and solver algorithms are the only ones fully proven to be backward stable for quasiseparable matrices. By combining the fast matrix-vector product and system solver, linear systems involving the hierarchical representation and its nested product decomposition are solved directly with linear complexity and unconditional stability. Applications in image deblurring and compression that capitalize on the concepts from the row compression and nested product decomposition algorithms are also shown.
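    To illustrate the core idea behind a Householder-based row compression of a low-rank off-diagonal block, here is a minimal NumPy sketch. The single-level 2 × 2 block partition, the block size, and the random rank-one data are illustrative assumptions; this is not the thesis's hierarchical O(n log n) algorithm, only a toy instance of the orthogonal factorization A = QC it describes.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8  # size of each block; A is 2n x 2n

        # Toy matrix whose lower-left off-diagonal block has rank one,
        # loosely mimicking the low-rank structure of a quasiseparable matrix.
        u = rng.standard_normal((n, 1))
        v = rng.standard_normal((1, n))
        A = np.block([[np.triu(rng.standard_normal((n, n))), rng.standard_normal((n, n))],
                      [u @ v,                                np.triu(rng.standard_normal((n, n)))]])

        # Orthogonal factor built from Householder reflectors (via QR) that maps
        # the column factor u to a multiple of the first unit vector.
        Qu, _ = np.linalg.qr(u, mode="complete")

        # Apply the reflectors to the rows holding the low-rank block, so A = Q^T C.
        Q = np.block([[np.eye(n), np.zeros((n, n))],
                      [np.zeros((n, n)), Qu.T]])
        C = Q @ A

        print(np.allclose(A, Q.T @ C))       # True: exact orthogonal factorization
        print(np.abs(C[n + 1:, :n]).max())   # ~0: the rank-one block is compressed to a single row

    In this sketch Q plays the role of the product of small Householder transformations, and the compressed block in C retains only one nonzero row; per the abstract, the actual algorithm applies such transformations to all low-rank, lower triangular, off-diagonal blocks of the hierarchical representation.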

    Author index to volumes 301–400

    Numerical solution of large-scale linear matrix equations

    We are interested in the numerical solution of large-scale linear matrix equations. In particular, due to their occurrence in many applications, we study the so-called Sylvester and Lyapunov equations. A characteristic aspect of the large-scale setting is that, although the data are sparse, the solution is in general dense, so that storing it may be unfeasible. It is therefore necessary that the solution admit a memory-saving approximation that can be cheaply stored. An extensive literature treats the case of the aforementioned equations with a low-rank right-hand side. This assumption, together with certain hypotheses on the spectral distribution of the matrix coefficients, is sufficient to prove a fast decay in the singular values of the solution. This decay motivates the search for a low-rank approximation, so that only low-rank matrices are actually computed and stored, remarkably reducing the storage demand. This is the task of so-called low-rank methods, and a large amount of work in this direction has been carried out in recent years. Projection methods have been shown to be among the most effective low-rank methods, and in the first part of this thesis we propose some computational enhancements of the classical algorithms. The case of equations whose right-hand side is not necessarily of low rank has not been deeply analyzed so far, and efficient methods are still lacking in the literature. In this thesis we aim to contribute significantly to this open problem by introducing solution methods for this kind of equation. In particular, we address the case where the coefficient matrices and the right-hand side are banded, and we further generalize this structure by considering quasiseparable data. In the last part of the thesis we study large-scale generalized Sylvester equations and, under some assumptions on the coefficient matrices, propose novel approximation spaces for their solution by projection.
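    As a simplified illustration of the projection-based low-rank methods mentioned in the abstract, the sketch below approximates the solution of a Lyapunov equation AX + XAᵀ + bbᵀ = 0 with a rank-one right-hand side by projecting onto a polynomial Krylov subspace. The coefficient matrix, the subspace dimension, and the choice of a plain (rather than extended or rational) Krylov space are illustrative assumptions, not the specific algorithms developed in the thesis.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        rng = np.random.default_rng(1)
        n, m = 400, 20                      # problem size and Krylov subspace dimension

        # Stable, banded (tridiagonal) coefficient matrix and rank-one right-hand side.
        A = -3.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
        b = rng.standard_normal((n, 1))

        # Orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^(m-1) b},
        # built with modified Gram-Schmidt (Arnoldi-style).
        V = np.zeros((n, m))
        V[:, 0] = b[:, 0] / np.linalg.norm(b)
        for j in range(1, m):
            w = A @ V[:, j - 1]
            for i in range(j):
                w -= (V[:, i] @ w) * V[:, i]
            V[:, j] = w / np.linalg.norm(w)

        # Project, solve the small m x m Lyapunov equation, and keep only the factors.
        Am = V.T @ A @ V
        bm = V.T @ b
        Ym = solve_continuous_lyapunov(Am, -bm @ bm.T)   # Am Ym + Ym Am^T + bm bm^T = 0

        # The approximation X ~ V Ym V^T is stored through V and Ym only;
        # the dense X is formed here just to check the residual of the full equation.
        X = V @ Ym @ V.T
        res = A @ X + X @ A.T + b @ b.T
        print(np.linalg.norm(res) / np.linalg.norm(b @ b.T))

    The memory savings come from never storing the dense n × n solution: only the n × m basis V and the small m × m matrix Ym are kept, which is the essence of the low-rank projection framework the thesis builds on.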