213 research outputs found

    Quasiseparable Hessenberg reduction of real diagonal plus low rank matrices and applications

    We present a novel algorithm to perform the Hessenberg reduction of an n×n matrix A of the form A = D + UV*, where D is diagonal with real entries and U and V are n×k matrices with k ≤ n. The algorithm has a cost of O(n²k) arithmetic operations and is based on quasiseparable matrix technology. Applications to solving polynomial eigenvalue problems are shown, and some numerical experiments are reported in order to analyze the stability of the approach.
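
    A minimal sketch of the matrix structure in question, using SciPy's dense O(n³) Hessenberg reduction as a reference check; the example sizes are made up, and nothing below reproduces the paper's O(n²k) quasiseparable algorithm.

```python
import numpy as np
from scipy.linalg import hessenberg

# Diagonal-plus-low-rank matrix A = D + U V* with real diagonal D and k << n.
rng = np.random.default_rng(0)
n, k = 8, 2
D = np.diag(rng.standard_normal(n))
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A = D + U @ V.conj().T

# Dense O(n^3) reference reduction; the paper's quasiseparable algorithm computes
# the same upper Hessenberg form H = Q* A Q in O(n^2 k) arithmetic operations.
H, Q = hessenberg(A, calc_q=True)

assert np.allclose(np.tril(H, -2), 0)        # H is upper Hessenberg
assert np.allclose(Q @ H @ Q.conj().T, A)    # similarity with A is preserved
```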

    Multilevel quasiseparable matrices in PDE-constrained optimization

    Optimization problems with constraints in the form of a partial differential equation arise frequently in the process of engineering design. The discretization of PDE-constrained optimization problems results in large-scale linear systems of saddle-point type. In this paper we propose and develop a novel approach to solving such systems by exploiting so-called quasiseparable matrices. One may think of a quasiseparable matrix as a discrete analog of the Green's function of a one-dimensional differential operator. A nice feature of such matrices is that almost every algorithm that employs them has linear complexity. We extend the application of quasiseparable matrices to problems in higher dimensions. Namely, we construct a class of preconditioners which can be computed and applied at a linear computational cost. Their use with appropriate Krylov methods leads to algorithms of nearly linear complexity.
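
    A minimal sketch of the saddle-point structure the paper targets, with a generic block-diagonal Schur-complement preconditioner applied through dense solves inside SciPy's MINRES; the constraint block and problem sizes here are illustrative choices, and the paper's quasiseparable preconditioners are not implemented.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, LinearOperator

# Toy saddle-point system K x = b with K = [[A, B^T], [B, 0]], the structure produced
# by discretizing a PDE-constrained optimization problem (A: PDE block, B: constraints).
m, p = 200, 50
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csr")  # 1-D Laplacian
B = sp.eye(p, m, format="csr")                                           # toy constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csr")
b = np.ones(m + p)

# Illustrative block-diagonal preconditioner diag(A, S) with Schur complement
# S = B A^{-1} B^T, applied here by dense solves.  The paper replaces these blocks
# with (multilevel) quasiseparable approximations so that construction and
# application both cost a linear amount of work.
Ad = A.toarray()
S = B.toarray() @ np.linalg.solve(Ad, B.toarray().T)

def apply_prec(r):
    return np.concatenate([np.linalg.solve(Ad, r[:m]),
                           np.linalg.solve(S, r[m:])])

M = LinearOperator((m + p, m + p), matvec=apply_prec)
x, info = minres(K, b, M=M)
print("MINRES converged:", info == 0)
```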

    Groups of banded matrices with banded inverses

    A product A = F_1 ⋯ F_N of invertible block-diagonal matrices will be banded with a banded inverse. We establish this factorization with the number N controlled by the bandwidth w and not by the matrix size n. When A is an orthogonal matrix, or a permutation, or banded plus finite rank, the factors F_i have w = 1 and generate the corresponding group. In the case of infinite matrices, conjectures remain open.
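
    A minimal numerical illustration, assuming 2×2 blocks (bandwidth w = 1) and a handful of interlocking factors; it only checks the easy direction of the statement, that a product of block-diagonal factors is banded with a banded inverse, not the constructive factorization established in the paper.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
n = 8

def block_diag_factor(offset):
    """Invertible block-diagonal factor with 2x2 blocks (bandwidth w = 1); an
    offset of 1 shifts the blocks so that successive factors interlock."""
    blocks = [rng.standard_normal((2, 2)) + 2.0 * np.eye(2)
              for _ in range((n - 2 * offset) // 2)]
    pads = [np.eye(offset)] if offset else []
    return block_diag(*(pads + blocks + pads))

def bandwidth(M, tol=1e-12):
    i, j = np.nonzero(np.abs(M) > tol)
    return int(np.max(np.abs(i - j)))

# Product A = F_1 F_2 F_3 of interlocking block-diagonal factors.
A = block_diag_factor(0) @ block_diag_factor(1) @ block_diag_factor(0)

# Both A and its inverse stay banded, with bandwidth bounded by the number of
# factors N rather than by the matrix size n.
print("bandwidth of A      :", bandwidth(A))
print("bandwidth of inv(A) :", bandwidth(np.linalg.inv(A)))
```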

    A Parallel Hierarchical Blocked Adaptive Cross Approximation Algorithm

    This paper presents a hierarchical low-rank decomposition algorithm assuming that any matrix element can be computed in O(1) time. The proposed algorithm computes rank-revealing decompositions of sub-matrices with a blocked adaptive cross approximation (BACA) algorithm, followed by a hierarchical merge operation via truncated singular value decompositions (H-BACA). The proposed algorithm significantly improves the convergence of the baseline ACA algorithm and achieves reduced computational complexity compared to full decompositions such as rank-revealing QR. Numerical results demonstrate the efficiency, accuracy and parallel efficiency of the proposed algorithm.
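
    As a point of reference, here is a minimal sketch of the plain (non-blocked, non-hierarchical) ACA with partial pivoting that the paper takes as its baseline; the function names, kernel, and stopping test below are illustrative choices, not the H-BACA algorithm itself.

```python
import numpy as np

def aca(get_row, get_col, tol=1e-8, max_rank=50):
    """Plain adaptive cross approximation with partial pivoting: builds A ~ U @ V
    from O(rank) sampled rows and columns.  Baseline sketch only, not H-BACA."""
    us, vs, used_rows = [], [], set()
    i, norm2 = 0, 0.0
    for _ in range(max_rank):
        # Row i of the current residual A - U V.
        r = get_row(i) - sum(u[i] * v for u, v in zip(us, vs))
        used_rows.add(i)
        j = int(np.argmax(np.abs(r)))
        if abs(r[j]) < 1e-14:
            break
        v = r / r[j]
        # Column j of the current residual becomes the next column of U.
        c = get_col(j) - sum(v_[j] * u for u, v_ in zip(us, vs))
        us.append(c)
        vs.append(v)
        # Crude stopping test: the latest cross is small relative to the accumulated norm.
        norm2 += np.linalg.norm(c) ** 2 * np.linalg.norm(v) ** 2
        if np.linalg.norm(c) * np.linalg.norm(v) < tol * np.sqrt(norm2):
            break
        # Next pivot row: largest entry of the new column among unused rows.
        i = next(int(k) for k in np.argsort(-np.abs(c)) if int(k) not in used_rows)
    return np.array(us).T, np.array(vs)

# Usage on a smooth kernel matrix, which is numerically low rank.
x, y = np.linspace(0.0, 1.0, 300), np.linspace(2.0, 3.0, 300)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))
U, V = aca(lambda i: A[i, :].copy(), lambda j: A[:, j].copy())
print("rank:", U.shape[1], "relative error:", np.linalg.norm(A - U @ V) / np.linalg.norm(A))
```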

    Rank and inertia of submatrices of the Moore-Penrose inverse of a Hermitian matrix


    The Main Diagonal of a Permutation Matrix

    By counting 1's in the "right half" of 2w consecutive rows, we locate the main diagonal of any doubly infinite permutation matrix with bandwidth w. Then the matrix can be correctly centered and factored into block-diagonal permutation matrices. Part II of the paper discusses the same questions for the much larger class of band-dominated matrices. The main diagonal is determined by the Fredholm index of a singly infinite submatrix. Thus the main diagonal is determined "at infinity" in general, but from only 2w rows for banded permutations.
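
    A small sketch of the factorization claim on a finite banded permutation, using odd-even transposition sort to produce block-diagonal permutation factors with 1×1 and 2×2 blocks; the centering criterion itself, counting 1's in 2w consecutive rows of a doubly infinite matrix, is not reproduced here.

```python
import numpy as np

def factor_banded_permutation(perm):
    """Factor the permutation matrix P with P[i, perm[i]] = 1 into block-diagonal
    permutation factors (1x1 and 2x2 blocks) via odd-even transposition sort."""
    p, n = list(perm), len(perm)
    factors, parity = [], 0
    while p != sorted(p):
        F = np.eye(n)
        for i in range(parity, n - 1, 2):
            if p[i] > p[i + 1]:
                p[i], p[i + 1] = p[i + 1], p[i]
                F[i:i + 2, i:i + 2] = [[0, 1], [1, 0]]   # adjacent transposition
        factors.append(F)
        parity ^= 1
    return factors                                        # P = factors[0] @ ... @ factors[-1]

# A banded permutation with bandwidth w = 2; the number of factors is governed
# by w, not by the matrix size n.
perm = [2, 0, 1, 4, 3, 6, 5, 7]
P = np.eye(len(perm))[perm]
factors = factor_banded_permutation(perm)

Q = np.eye(len(perm))
for F in factors:
    Q = Q @ F
print("factors:", len(factors), " reproduces P:", np.allclose(Q, P))
```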