3 research outputs found
Spectrum-Adapted Polynomial Approximation for Matrix Functions
We propose and investigate two new methods to approximate f(A)b
for large, sparse, Hermitian matrices A. The main idea behind both
methods is to first estimate the spectral density of A, and then find
polynomials of a fixed order that better approximate the function f on areas
of the spectrum with a higher density of eigenvalues. Compared to
state-of-the-art methods such as the Lanczos method and truncated Chebyshev
expansion, the proposed methods tend to provide more accurate approximations of
f(A)b at lower polynomial orders, and for matrices with
a large number of distinct interior eigenvalues and a small spectral width.
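To make the idea concrete, here is a minimal Python sketch of the general approach described above (not the authors' exact algorithm): estimate a spectral density, fit a density-weighted least-squares polynomial to f, and apply it to a vector using only sparse matrix-vector products. The diagonal toy matrix, the histogram density estimate, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags

rng = np.random.default_rng(0)

# Toy Hermitian matrix: a sparse diagonal stands in for a large sparse A,
# with a dense cluster of interior eigenvalues (an illustrative assumption).
eigs = np.concatenate([rng.normal(0.0, 0.05, 900),    # interior cluster
                       rng.uniform(-1.0, 1.0, 100)])  # spread-out remainder
A = diags(eigs)
b = rng.standard_normal(eigs.size)

f = np.exp   # the function to apply to A
order = 10   # fixed polynomial order

# Step 1: estimate the spectral density. Here we cheat with a histogram of
# the known eigenvalues; in practice one would use a stochastic estimator.
density, edges = np.histogram(eigs, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Step 2: weighted least-squares fit -- heavier weight where the estimated
# eigenvalue density is higher, so the polynomial is accurate there.
coeffs = np.polynomial.polynomial.polyfit(
    centers, f(centers), deg=order, w=np.sqrt(density) + 1e-8)

# Step 3: evaluate p(A) b using sparse mat-vecs only (v_k = A v_{k-1}).
v, y = b, coeffs[0] * b
for c in coeffs[1:]:
    v = A @ v
    y = y + c * v

exact = f(eigs) * b   # exact f(A) b for the diagonal toy matrix
print("relative error:", np.linalg.norm(y - exact) / np.linalg.norm(exact))
```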
Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks
Recently, neural network based approaches have achieved significant
improvement for solving large, complex, graph-structured problems. However,
their bottlenecks still need to be addressed, and the advantages of multi-scale
information and deep architectures have not been sufficiently exploited. In
this paper, we theoretically analyze how existing Graph Convolutional Networks
(GCNs) have limited expressive power due to the constraint of the activation
functions and their architectures. We generalize spectral graph convolution and
deep GCN in block Krylov subspace forms and devise two architectures, both with
the potential to be scaled deeper but each making use of the multi-scale
information in different ways. We further show that the equivalence of these
two architectures can be established under certain conditions. On several node
classification tasks, with or without the help of validation, the two new
architectures achieve better performance compared to many state-of-the-art
methods.
Comment: Accepted and to be published by NeurIPS 2019.
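The following numpy sketch illustrates the block Krylov idea in its simplest dense form: stacking the blocks X, AX, ..., A^{m-1}X (for a symmetrically renormalized adjacency A) gives a single linear layer access to multi-scale neighborhood information. It is a simplified illustration, not a faithful reproduction of the paper's two architectures; block_krylov_features and the random toy graph are assumptions made for the example.

```python
import numpy as np

def normalized_adjacency(adj):
    """Symmetric renormalized adjacency: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def block_krylov_features(adj, X, m):
    """Stack the block Krylov basis [X, AX, ..., A^{m-1}X] column-wise,
    exposing multi-scale neighborhood information in one linear map."""
    a_hat = normalized_adjacency(adj)
    blocks, V = [X], X
    for _ in range(m - 1):
        V = a_hat @ V
        blocks.append(V)
    return np.concatenate(blocks, axis=1)   # shape (n, m * d)

# One layer in this style: a linear transform of the stacked multi-scale
# block followed by a nonlinearity (W would be learned in a real model).
rng = np.random.default_rng(0)
n, d, m, h = 5, 4, 3, 8
adj = (rng.random((n, n)) < 0.4).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                     # symmetric, no self-loops
X = rng.standard_normal((n, d))

K = block_krylov_features(adj, X, m)
W = rng.standard_normal((m * d, h))
H = np.tanh(K @ W)
print(H.shape)                        # (5, 8)
```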
Limited-memory polynomial methods for large-scale matrix functions
Matrix functions are a central topic of linear algebra, and problems
requiring their numerical approximation appear increasingly often in scientific
computing. We review various limited-memory methods for the approximation of
the action of a large-scale matrix function on a vector. Emphasis is put on
polynomial methods, whose memory requirements are known or prescribed a priori.
Methods based on explicit polynomial approximation or interpolation, as well as
restarted Arnoldi methods, are treated in detail. An overview of existing
software is also given, as well as a discussion of challenging open problems.
Comment: 25 pages, 2 figures, 4 algorithms.
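As a concrete instance of a polynomial method whose memory footprint is fixed a priori, the sketch below approximates f(A)b with a truncated Chebyshev expansion, keeping only three recurrence vectors in memory at any time. The helper name chebyshev_fAb, the coefficient quadrature, and the toy diagonal test are assumptions made for illustration, not an implementation from the reviewed software.

```python
import numpy as np
from scipy.sparse import diags

def chebyshev_fAb(matvec, b, f, lam_min, lam_max, order):
    """Approximate f(A) b with a truncated Chebyshev expansion.

    Only three recurrence vectors are kept at any time, so the memory
    requirement is known a priori, independent of the polynomial order.
    """
    # Affine map taking spec(A), contained in [lam_min, lam_max], to [-1, 1].
    c = 0.5 * (lam_max + lam_min)
    h = 0.5 * (lam_max - lam_min)

    # Chebyshev interpolation coefficients of f on the mapped interval,
    # computed by the discrete cosine quadrature at Chebyshev points.
    k = np.arange(order + 1)
    theta = np.pi * (k + 0.5) / (order + 1)
    fvals = f(c + h * np.cos(theta))
    coeffs = 2.0 / (order + 1) * np.cos(np.outer(k, theta)) @ fvals
    coeffs[0] *= 0.5

    # Three-term recurrence T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x), applied
    # to the shifted-and-scaled matrix (A - c I) / h acting on b.
    t_prev, t_curr = b, (matvec(b) - c * b) / h
    y = coeffs[0] * t_prev + coeffs[1] * t_curr
    for ck in coeffs[2:]:
        t_next = 2.0 * (matvec(t_curr) - c * t_curr) / h - t_prev
        y = y + ck * t_next
        t_prev, t_curr = t_curr, t_next
    return y

# Toy check on a diagonal matrix, where f(A) b is known exactly.
rng = np.random.default_rng(1)
eigs = rng.uniform(0.1, 2.0, 1000)
A = diags(eigs)
b = rng.standard_normal(eigs.size)
y = chebyshev_fAb(lambda v: A @ v, b, np.exp, 0.1, 2.0, order=20)
exact = np.exp(eigs) * b
print("relative error:", np.linalg.norm(y - exact) / np.linalg.norm(exact))
```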