Accelerated filtering on graphs using Lanczos method
Signal processing on graphs has developed into a very active field of
research during the last decade. In particular, the number of applications
using frames constructed from graphs, like wavelets on graphs, has
substantially increased. To attain scalability for large graphs, fast
graph-signal filtering techniques are needed. In this contribution, we propose
an accelerated algorithm based on the Lanczos method that adapts to the
Laplacian spectrum without explicitly computing it. The result is an accurate,
robust, scalable and efficient algorithm. Compared to existing methods based on
Chebyshev polynomials, our solution achieves higher accuracy without increasing
the overall complexity significantly. Furthermore, it is particularly well
suited for graphs with large spectral gaps.
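The core idea of such Lanczos-based filtering — approximating h(L)x from a small Krylov subspace without ever computing the spectrum of L — can be sketched as follows. This is a minimal NumPy illustration under generic assumptions, not the paper's implementation; the path-graph Laplacian, the heat-kernel filter, and the step count k are illustrative choices.

```python
import numpy as np

def lanczos(A, x, k):
    """k-step Lanczos: orthonormal basis V and tridiagonal T such that
    f(A) @ x is approximated by ||x|| * V @ f(T) @ e1."""
    n = len(x)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = x / np.linalg.norm(x)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

def filter_signal(L, x, h, k=10):
    """Approximate h(L) @ x; only the small k-by-k matrix T is diagonalized."""
    V, T = lanczos(L, x, k)
    lam, U = np.linalg.eigh(T)            # cheap: T is k x k
    hT = U @ np.diag(h(lam)) @ U.T        # h(T)
    return np.linalg.norm(x) * V @ hT[:, 0]

# Laplacian of the path graph on 5 nodes, heat-kernel filter h(s) = exp(-s)
n = 5
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
x = np.random.default_rng(0).standard_normal(n)
y = filter_signal(L, x, lambda lam: np.exp(-lam), k=n)
```

With k equal to the graph size, the approximation is exact up to rounding; in practice k is chosen much smaller than n, which is where the scalability comes from.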
Fast computation of spectral projectors of banded matrices
We consider the approximate computation of spectral projectors for symmetric
banded matrices. While this problem has received considerable attention,
especially in the context of linear scaling electronic structure methods, the
presence of small relative spectral gaps challenges existing methods based on
approximate sparsity. In this work, we show how a data-sparse approximation
based on hierarchical matrices can be used to overcome this problem. We prove a
priori bounds on the approximation error and propose a fast algorithm based on
the QDWH algorithm, following the works by Nakatsukasa et al. Numerical
experiments demonstrate that the performance of our algorithm is robust with
respect to the spectral gap. A preliminary Matlab implementation becomes faster
than eig already for matrix sizes of a few thousand.
Comment: 27 pages, 10 figures
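The principle underlying such methods — obtaining a spectral projector from the matrix sign function — can be sketched densely as follows. This is a minimal illustration only: the unscaled Newton iteration stands in for the structured QDWH iteration of the paper, and the shift mu, tolerance, and tridiagonal test matrix are assumptions for the example.

```python
import numpy as np

def spectral_projector(A, mu, tol=1e-12, maxit=50):
    """Projector onto the invariant subspace of symmetric A for
    eigenvalues below mu, via P = (I - sign(A - mu*I)) / 2.
    sign() is computed with the Newton iteration X <- (X + X^{-1}) / 2,
    a dense stand-in for the QDWH-based iteration of the paper."""
    n = A.shape[0]
    X = A - mu * np.eye(n)
    for _ in range(maxit):
        Xnew = 0.5 * (X + np.linalg.inv(X))
        done = np.linalg.norm(Xnew - X, 1) <= tol * np.linalg.norm(Xnew, 1)
        X = Xnew
        if done:
            break
    return 0.5 * (np.eye(n) - X)

# Symmetric banded (tridiagonal) test matrix
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = spectral_projector(A, mu=2.0)
```

The iteration is only well behaved when mu is not too close to an eigenvalue; the small-relative-gap regime that breaks sparsity-based methods is exactly what the hierarchical-matrix approach of the paper targets.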
A Householder-based algorithm for Hessenberg-triangular reduction
The QZ algorithm for computing eigenvalues and eigenvectors of a matrix
pencil requires that the matrices first be reduced to
Hessenberg-triangular (HT) form. The current method of choice for HT reduction
relies entirely on Givens rotations regrouped and accumulated into small dense
matrices which are subsequently applied using matrix multiplication routines. A
non-vanishing fraction of the total flop count must nevertheless still be
performed as sequences of overlapping Givens rotations alternately applied from
the left and from the right. The many data dependencies associated with this
computational pattern lead to inefficient use of the processor and poor
scalability.
In this paper, we therefore introduce a fundamentally different approach that
relies entirely on (large) Householder reflectors partially accumulated into
block reflectors, by using (compact) WY representations. Even though the new
algorithm requires more floating-point operations than the state-of-the-art
algorithm, extensive experiments on both real and synthetic data indicate that
it is still competitive, even in a sequential setting. The new algorithm is
conjectured to have better parallel scalability, an idea which is partially
supported by early small-scale experiments using multi-threaded BLAS. The
design and evaluation of a parallel formulation is future work.
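The contrast drawn above — individual rotations versus blocked reflectors applied with matrix multiplication — rests on the compact WY representation, which accumulates a product of Householder reflectors into the form I - V T V^T so it can be applied with two GEMMs. A small NumPy sketch (illustrative sizes and helper names, not the paper's HT-reduction code):

```python
import numpy as np

def house(x):
    """Unit Householder vector v with (I - 2 v v^T) x = -+ ||x|| e1."""
    v = x.astype(float).copy()
    s = 1.0 if v[0] >= 0 else -1.0
    v[0] += s * np.linalg.norm(x)
    return v / np.linalg.norm(v)

def compact_wy(vs):
    """Accumulate reflectors H_i = I - 2 v_i v_i^T (unit v_i) into
    Q = H_0 H_1 ... H_{k-1} = I - V T V^T (compact WY form), so that a
    whole block of reflectors is applied via matrix multiplication."""
    V = np.column_stack(vs)
    k = V.shape[1]
    T = np.zeros((k, k))
    for i in range(k):
        T[i, i] = 2.0
        if i > 0:
            # append H_i on the right: new column of the triangular factor
            T[:i, i] = -2.0 * T[:i, :i] @ (V[:, :i].T @ V[:, i])
    return V, T

rng = np.random.default_rng(1)
vs = [house(rng.standard_normal(6)) for _ in range(3)]
V, T = compact_wy(vs)
Q = np.eye(6) - V @ T @ V.T   # equals the product of the three reflectors
```

Applying Q to a matrix then costs two dense multiplications instead of a sequence of dependent rank-one updates, which is the efficiency argument made in the abstract.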
Computing Functions of Symmetric Hierarchically Semiseparable Matrices
The aim of this work is to develop a fast algorithm for approximating the
matrix function f(A) of a square matrix A that is symmetric and has
hierarchically semiseparable (HSS) structure. Appearing in a wide variety of
applications, often in the context of discretized (fractional) differential and
integral operators, HSS matrices have a number of attractive properties
facilitating the development of fast algorithms. In this work, we use an
unconventional telescopic decomposition of A, inspired by recent work of
Levitt and Martinsson on approximating an HSS matrix from matrix-vector
products with a few random vectors. This telescopic decomposition allows us to
approximate f(A) by recursively performing low-rank updates with rational
Krylov subspaces while keeping the size of the matrices involved in the
rational Krylov subspaces small. In particular, no large-scale linear system
needs to be solved, which yields favorable complexity estimates and reduced
execution times compared to existing methods, including an existing
divide-and-conquer strategy. The advantages of our newly proposed algorithms
are demonstrated for a number of examples from the literature, featuring the
exponential, the inverse square root, and the sign function of a matrix. Even
for matrix inversion, our algorithm exhibits superior performance, although it
was not specifically designed for this task.
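One building block of such methods — approximating f(A)b from a rational (shift-and-invert) Krylov subspace so that only a small projected problem is diagonalized — can be sketched as follows. This minimal NumPy version with a single repeated pole sigma is an assumption-laden stand-in, not the paper's algorithm: it solves the shifted systems densely, whereas the point of the telescopic decomposition is precisely to keep the matrices in these subspaces small.

```python
import numpy as np

def rational_krylov_fAb(A, b, f, sigma=-1.0, m=8):
    """Approximate f(A) @ b from the shift-and-invert Krylov space
    span{b, (A - sigma I)^{-1} b, ...} via the Rayleigh-Ritz projection
    f(A) b ~= V f(V^T A V) V^T b.  Illustrative single-pole variant."""
    n = len(b)
    shifted = A - sigma * np.eye(n)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = np.linalg.solve(shifted, V[:, j - 1])
        w = w - V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt
        w = w - V[:, :j] @ (V[:, :j].T @ w)   # reorthogonalize
        V[:, j] = w / np.linalg.norm(w)
    Am = V.T @ A @ V                          # small m x m projection
    lam, U = np.linalg.eigh(Am)
    return V @ (U @ (f(lam) * (U.T @ (V.T @ b))))

# Symmetric positive definite test matrix, f = exp(-x)
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(2).standard_normal(n)
y = rational_krylov_fAb(A, b, lambda t: np.exp(-t), sigma=-1.0, m=n)
```

Only the m-by-m matrix Am is ever diagonalized; for m much smaller than n this is what makes Krylov-based evaluation of matrix functions cheap.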