On the decay of the inverse of matrices that are sum of Kronecker products
Decay patterns of matrix inverses have recently attracted considerable
interest, due to their relevance in numerical analysis, and in applications
requiring matrix function approximations. In this paper we analyze the decay
pattern of the inverse of banded matrices of the form $\mathcal{A} = M \otimes I + I \otimes M$, where $M$ is tridiagonal, symmetric and positive definite, $I$ is
the identity matrix, and $\otimes$ stands for the Kronecker product. It is well
known that the inverses of banded matrices exhibit an exponential decay pattern
away from the main diagonal. However, the entries of $\mathcal{A}^{-1}$ show a
non-monotonic decay, which is not captured by classical bounds. By using an
alternative expression for $\mathcal{A}^{-1}$, we derive computable upper bounds that
closely capture the actual behavior of its entries. We also show that similar
estimates can be obtained when $M$ has a larger bandwidth, or when the sum of
Kronecker products involves two different matrices. Numerical experiments
illustrating the new bounds are also reported.
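The non-monotonic decay described above is easy to observe numerically. A minimal sketch (the size $n$ and the choice of $M$ as the tridiagonal $(-1, 2, -1)$ matrix are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Build A = M (x) I + I (x) M for a tridiagonal SPD M and inspect one
# row of A^{-1}; the names n, M, A are assumptions for this demo.
n = 10
M = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) \
    + np.diag(-np.ones(n - 1), -1)                 # tridiagonal, SPD
A = np.kron(M, np.eye(n)) + np.kron(np.eye(n), M)  # Kronecker sum, banded
row = np.abs(np.linalg.inv(A)[0])

# Overall decay away from the diagonal ...
print(row[1] > row[-1])       # True: far entries are smaller
# ... but not monotonic: the entry at column n (a near neighbor in the
# underlying 2D grid structure) exceeds the entry at column n-1.
print(row[n] > row[n - 1])    # True: the decay has a bump at column n
```

The bump at every multiple of $n$ is exactly the structure that classical single-band decay bounds miss.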
Efficient approximation of functions of some large matrices by partial fraction expansions
Some important application problems require the evaluation of functions
of large and sparse and/or \emph{localized} matrices $A$. Popular and
interesting techniques for computing $f(A)$ and $f(A)\mathbf{v}$, where
$\mathbf{v}$ is a vector, are based on partial fraction expansions. However,
some of these techniques require solving several linear systems whose matrices
differ from $A$ by a complex multiple of the identity matrix for computing
$f(A)\mathbf{v}$, or require inverting sequences of matrices with the same
characteristics for computing $f(A)$. Here we study the use and the
convergence of a recent technique for generating sequences of incomplete
factorizations of such matrices in order to address both these issues. The
solution of the sequences of linear systems and approximate matrix inversions
above can be computed efficiently provided that $A^{-1}$ shows certain decay
properties. These strategies also have good parallel potential. Our claims are
confirmed by numerical tests.
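The partial-fraction idea itself can be sketched in a few lines (the function $f$ and the test matrix are illustrative assumptions): for $f(z) = 1/((z+1)(z+2)) = 1/(z+1) - 1/(z+2)$, computing $f(A)\mathbf{v}$ reduces to one shifted linear solve per pole.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# A banded SPD example matrix; any matrix with no eigenvalue at the
# poles -1, -2 would do.
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) \
    + np.diag(-np.ones(n - 1), -1)
v = rng.standard_normal(n)
I = np.eye(n)

# f(A) v via the partial fraction expansion: two shifted solves,
# each with A plus a multiple of the identity.
fAv = np.linalg.solve(A + I, v) - np.linalg.solve(A + 2 * I, v)

# Reference: form f(A) densely and apply it to v.
fA = np.linalg.inv((A + I) @ (A + 2 * I))
print(np.allclose(fAv, fA @ v))  # True
```

In practice the shifts are complex and there are many of them, which is why reusing one (incomplete) factorization across the whole sequence of shifted systems pays off.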
Minimizing Communication for Eigenproblems and the Singular Value Decomposition
Algorithms have two costs: arithmetic and communication. The latter
represents the cost of moving data, either between levels of a memory
hierarchy, or between processors over a network. Communication often dominates
arithmetic and represents a rapidly increasing proportion of the total cost, so
we seek algorithms that minimize communication. In \cite{BDHS10}, lower bounds
were presented on the amount of communication required for essentially all
$O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and
the SVD. Conventional algorithms, including those currently implemented in
(Sca)LAPACK, perform asymptotically more communication than these lower bounds
require. In this paper we present parallel and sequential eigenvalue algorithms
(for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms
that do attain these lower bounds, and analyze their convergence and
communication costs. Comment: 43 pages, 11 figures
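The premise above, that algorithms moving fewer words between memory levels are preferable, can be illustrated with a toy traffic count for blocked matrix multiplication (the counting rules and names here are assumptions for illustration, not the paper's model):

```python
# Words moved by tiled C += A*B on an n x n problem with b x b tiles:
# each C tile is loaded and stored once, and the A and B tiles are
# streamed in for every k.  Traffic is about 2n^2 + 2n^3/b, so it shrinks
# as b grows toward ~sqrt(M/3) for fast memory of M words, approaching
# the Omega(n^3 / sqrt(M)) lower-bound regime; an unblocked triple loop
# (b = 1) moves Theta(n^3) words.
def blocked_traffic(n: int, b: int) -> int:
    tiles = n // b
    c_traffic = tiles * tiles * 2 * b * b            # load + store C tiles
    ab_traffic = tiles * tiles * tiles * 2 * b * b   # A and B tiles per k
    return c_traffic + ab_traffic

n = 4096
print(blocked_traffic(n, 1))    # unblocked: 2n^3 + 2n^2 words
print(blocked_traffic(n, 64))   # tiled: far fewer words moved
```

The eigenvalue and SVD algorithms in the paper attain the analogous lower bounds, which conventional (Sca)LAPACK routines exceed asymptotically.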
Efficient cyclic reduction for QBDs with rank structured blocks
We provide effective algorithms for solving block tridiagonal block Toeplitz
systems with quasiseparable blocks, as well as quadratic matrix
equations with quasiseparable coefficients, based on cyclic
reduction and on the technology of rank-structured matrices. The algorithms
rely on the exponential decay of the singular values of the off-diagonal
submatrices generated by cyclic reduction. We provide a formal proof of this
decay in the Markovian framework. The results of the numerical experiments that
we report confirm a significant speedup over the general algorithms, already
starting from moderately small sizes.
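One step of cyclic reduction on a block tridiagonal block Toeplitz system can be sketched directly (the block values and sizes are illustrative assumptions): eliminating the odd-indexed block unknowns of $T\mathbf{x} = \mathbf{f}$, with sub-, main, and superdiagonal blocks $(B, A, C)$, yields a new block tridiagonal system on the even indices with blocks $B' = -BA^{-1}B$, $C' = -CA^{-1}C$, $A' = A - BA^{-1}C - CA^{-1}B$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, nblk = 3, 7                   # block size, number of block rows
A = 10 * np.eye(m) + rng.standard_normal((m, m))  # diagonally dominant
B = rng.standard_normal((m, m))
C = rng.standard_normal((m, m))

# Assemble the full block tridiagonal block Toeplitz system and solve
# it densely for reference.
T = np.kron(np.eye(nblk), A) + np.kron(np.eye(nblk, k=-1), B) \
    + np.kron(np.eye(nblk, k=1), C)
f = rng.standard_normal(m * nblk)
x = np.linalg.solve(T, f).reshape(nblk, m)
fb = f.reshape(nblk, m)

# One cyclic reduction step: new blocks for the even-index system.
Ainv = np.linalg.inv(A)
Bp = -B @ Ainv @ B
Cp = -C @ Ainv @ C
Ap = A - B @ Ainv @ C - C @ Ainv @ B

# The reduced interior equations hold for the true solution.
for i in (2, 4):
    fp = fb[i] - B @ Ainv @ fb[i - 1] - C @ Ainv @ fb[i + 1]
    print(np.allclose(Bp @ x[i - 2] + Ap @ x[i] + Cp @ x[i + 2], fp))  # True
```

The paper's contribution is to keep the blocks generated by repeating this step in compressed quasiseparable form, which is what the decay of their off-diagonal singular values makes possible.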
Fast computation of spectral projectors of banded matrices
We consider the approximate computation of spectral projectors for symmetric
banded matrices. While this problem has received considerable attention,
especially in the context of linear scaling electronic structure methods, the
presence of small relative spectral gaps challenges existing methods based on
approximate sparsity. In this work, we show how a data-sparse approximation
based on hierarchical matrices can be used to overcome this problem. We prove a
priori bounds on the approximation error and propose a fast algorithm based
on the QDWH algorithm, along the lines of the works by Nakatsukasa et al. Numerical
experiments demonstrate that the performance of our algorithm is robust with
respect to the spectral gap. A preliminary Matlab implementation becomes faster
than eig already for matrix sizes of a few thousand. Comment: 27 pages, 10 figures
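The underlying object can be sketched with a simpler method than the paper's QDWH-based, hierarchical-matrix algorithm: here the plain Newton iteration for the matrix sign function (the test matrix and split point $\mu$ are illustrative assumptions). For symmetric $A$ and a shift $\mu$ that is not an eigenvalue, $P = (I - \mathrm{sign}(A - \mu I))/2$ is the spectral projector onto the invariant subspace of the eigenvalues below $\mu$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.linspace(-2.0, 3.0, n)
A = Q @ np.diag(evals) @ Q.T        # symmetric with known spectrum
mu = 0.5                            # split point, not an eigenvalue here

# Newton iteration for sign(A - mu*I): X <- (X + X^{-1}) / 2.
X = A - mu * np.eye(n)
for _ in range(40):
    X = 0.5 * (X + np.linalg.inv(X))
P = 0.5 * (np.eye(n) - X)           # spectral projector below mu

# Reference projector from the eigendecomposition.
P_ref = Q @ np.diag((evals < mu).astype(float)) @ Q.T
print(np.allclose(P, P_ref))  # True
```

For banded $A$ with a small spectral gap at $\mu$, the iterates of such sign-based methods fill in and lose approximate sparsity, which is precisely the regime where the hierarchical (data-sparse) representation studied in the paper takes over.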