
    On the decay of the inverse of matrices that are sum of Kronecker products

    Decay patterns of matrix inverses have recently attracted considerable interest, due to their relevance in numerical analysis and in applications requiring matrix function approximations. In this paper we analyze the decay pattern of the inverse of banded matrices of the form $S = M \otimes I_n + I_n \otimes M$, where $M$ is tridiagonal, symmetric and positive definite, $I_n$ is the identity matrix, and $\otimes$ stands for the Kronecker product. It is well known that the inverses of banded matrices exhibit an exponential decay pattern away from the main diagonal. However, the entries of $S^{-1}$ show a non-monotonic decay, which is not captured by classical bounds. By using an alternative expression for $S^{-1}$, we derive computable upper bounds that closely capture the actual behavior of its entries. We also show that similar estimates can be obtained when $M$ has a larger bandwidth, or when the sum of Kronecker products involves two different matrices. Numerical experiments illustrating the new bounds are also reported.
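    The non-monotonic decay is easy to observe numerically. The sketch below (an illustration, not the paper's bounds) builds $S$ for the standard tridiagonal SPD choice $M = \mathrm{tridiag}(-1, 2, -1)$ and inspects one row of $S^{-1}$: since column index $k$ of row $0$ corresponds to grid point $(k / n, k \bmod n)$, entries far apart in column index can be close in grid distance, so the decay along the row is not monotonic.

```python
import numpy as np

n = 10
# Tridiagonal, symmetric positive definite M (discrete 1-D Laplacian as an example)
M = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)

# S = M (x) I_n + I_n (x) M, a banded n^2 x n^2 matrix
S = np.kron(M, I) + np.kron(I, M)
Sinv = np.linalg.inv(S)

# Entry (0, k) of S^{-1} couples grid points (0, 0) and (k // n, k % n):
# column n is at grid distance 1, while column n-1 is at grid distance n-1,
# so |S^{-1}[0, n]| > |S^{-1}[0, n-1]| even though n > n-1 -- the decay
# along row 0 is not monotonic in the column index.
row = np.abs(Sinv[0, :])
print(row[n - 1], row[n])
```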

    Efficient approximation of functions of some large matrices by partial fraction expansions

    Some important applied problems require the evaluation of functions $\Psi$ of large and sparse and/or \emph{localized} matrices $A$. Popular techniques for computing $\Psi(A)$ and $\Psi(A)\mathbf{v}$, where $\mathbf{v}$ is a vector, are based on partial fraction expansions. However, some of these techniques require solving several linear systems whose matrices differ from $A$ by a complex multiple of the identity matrix $I$ in order to compute $\Psi(A)\mathbf{v}$, or require inverting sequences of matrices with the same characteristics in order to compute $\Psi(A)$. Here we study the use and the convergence of a recent technique for generating sequences of incomplete factorizations of matrices in order to address both of these issues. The solutions of the sequences of linear systems and the approximate matrix inversions above can be computed efficiently, provided that $A^{-1}$ exhibits certain decay properties. These strategies also have good parallel potential. Our claims are confirmed by numerical tests.
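    The shifted-systems structure the abstract refers to can be sketched in a few lines. The example below uses a toy rational function $\Psi(x) = 1/((x+1)(x+2))$ with hand-computed poles and weights (a real method would derive them from a rational approximation of the target function); the point is only that $\Psi(A)\mathbf{v} = \sum_i w_i (A - z_i I)^{-1}\mathbf{v}$ reduces to one shifted linear solve per pole.

```python
import numpy as np

# Toy rational function: psi(x) = 1/((x+1)(x+2)) = 1/(x+1) - 1/(x+2),
# i.e. poles z = -1, -2 with weights w = 1, -1 (hand-computed for this sketch)
poles = np.array([-1.0, -2.0])
weights = np.array([1.0, -1.0])

n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # SPD, so A - z*I is nonsingular here
v = np.random.default_rng(0).standard_normal(n)

# psi(A) v = sum_i w_i (A - z_i I)^{-1} v: one shifted solve per pole
pf = sum(w * np.linalg.solve(A - z * np.eye(n), v) for w, z in zip(weights, poles))

# Reference: apply psi(A) directly from its definition
ref = np.linalg.solve((A + np.eye(n)) @ (A + 2 * np.eye(n)), v)
print(np.linalg.norm(pf - ref))
```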

    Minimizing Communication for Eigenproblems and the Singular Value Decomposition

    Algorithms have two costs: arithmetic and communication. The latter is the cost of moving data, either between levels of a memory hierarchy or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \cite{BDHS10}, lower bounds were presented on the amount of communication required for essentially all $O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that attain these lower bounds, and analyze their convergence and communication costs.

    Comment: 43 pages, 11 figures

    Efficient cyclic reduction for QBDs with rank structured blocks

    We provide effective algorithms for solving block tridiagonal block Toeplitz systems with $m \times m$ quasiseparable blocks, as well as quadratic matrix equations with $m \times m$ quasiseparable coefficients, based on cyclic reduction and on the technology of rank-structured matrices. The algorithms rely on the exponential decay of the singular values of the off-diagonal submatrices generated by cyclic reduction. We provide a formal proof of this decay in the Markovian framework. The results of the numerical experiments that we report confirm a significant speed-up over the general algorithms, already starting from the moderately small size $m \approx 10^2$.
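    As a point of reference for the recursion involved, here is the scalar analogue ($m = 1$) of cyclic reduction for a tridiagonal Toeplitz system: each step eliminates the even-indexed unknowns, leaving a tridiagonal Toeplitz system of half the size. The paper's contribution lies in replacing these scalars with $m \times m$ quasiseparable blocks; this sketch only shows the plain recursion.

```python
import numpy as np

def cyclic_reduction(a, b, c, f):
    """Solve the tridiagonal Toeplitz system with subdiagonal a, diagonal b,
    superdiagonal c and right-hand side f, for size n = 2^k - 1."""
    n = len(f)
    if n == 1:
        return np.array([f[0] / b])
    alpha, gamma = -a / b, -c / b
    odd = np.arange(1, n, 2)  # unknowns kept in the reduced system
    # Combining equation i with alpha*(eq i-1) + gamma*(eq i+1) cancels the
    # even-indexed unknowns; the reduced system is again tridiagonal Toeplitz
    f_red = f[odd] + alpha * f[odd - 1] + gamma * f[odd + 1]
    x = np.zeros(n)
    x[odd] = cyclic_reduction(alpha * a, b + alpha * c + gamma * a,
                              gamma * c, f_red)
    # Back-substitute the eliminated unknowns (boundary values are zero)
    even = np.arange(0, n, 2)
    xl = np.zeros(len(even)); xl[1:] = x[even[1:] - 1]
    xr = np.zeros(len(even)); xr[:-1] = x[even[:-1] + 1]
    x[even] = (f[even] - a * xl - c * xr) / b
    return x

n = 2**5 - 1
f = np.random.default_rng(1).standard_normal(n)
x = cyclic_reduction(-1.0, 2.0, -1.0, f)

# Check the residual against the explicitly assembled tridiagonal matrix
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
print(np.linalg.norm(T @ x - f))
```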

    Fast computation of spectral projectors of banded matrices

    We consider the approximate computation of spectral projectors for symmetric banded matrices. While this problem has received considerable attention, especially in the context of linear scaling electronic structure methods, the presence of small relative spectral gaps challenges existing methods based on approximate sparsity. In this work, we show how a data-sparse approximation based on hierarchical matrices can be used to overcome this problem. We prove a priori bounds on the approximation error and propose a fast algorithm based on the QDWH algorithm, following the works of Nakatsukasa et al. Numerical experiments demonstrate that the performance of our algorithm is robust with respect to the spectral gap. A preliminary Matlab implementation becomes faster than eig already for matrix sizes of a few thousand.

    Comment: 27 pages, 10 figures
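    To make the object being computed concrete: the spectral projector onto the eigenvalues of a symmetric $A$ below a shift $\mu$ is $P = (I - \mathrm{sign}(A - \mu I))/2$. The sketch below uses the plain dense Newton iteration for the matrix sign function as a stand-in; the paper instead uses the QDWH iteration combined with hierarchical-matrix arithmetic, and it assumes $\mu$ lies in a spectral gap (no eigenvalue equals $\mu$).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2  # symmetric test matrix
mu = 0.0  # shift, assumed to lie in a gap of the spectrum

# Newton iteration X <- (X + X^{-1})/2 converges to sign(A - mu*I);
# the paper's algorithm uses QDWH with hierarchical matrices instead
X = A - mu * np.eye(n)
for _ in range(60):
    X = (X + np.linalg.inv(X)) / 2

P = (np.eye(n) - X) / 2  # spectral projector onto eigenvalues below mu

# Reference projector assembled from the eigendecomposition
lam, V = np.linalg.eigh(A)
P_ref = V[:, lam < mu] @ V[:, lam < mu].T
print(np.linalg.norm(P - P_ref))
```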