
    Spectrum-Adapted Polynomial Approximation for Matrix Functions

    We propose and investigate two new methods to approximate $f(\mathbf{A})\mathbf{b}$ for large, sparse, Hermitian matrices $\mathbf{A}$. The main idea behind both methods is to first estimate the spectral density of $\mathbf{A}$, and then find polynomials of a fixed order that better approximate the function $f$ on areas of the spectrum with a higher density of eigenvalues. Compared to state-of-the-art methods such as the Lanczos method and truncated Chebyshev expansion, the proposed methods tend to provide more accurate approximations of $f(\mathbf{A})\mathbf{b}$ at lower polynomial orders, and for matrices $\mathbf{A}$ with a large number of distinct interior eigenvalues and a small spectral width.
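
    As a rough illustration of the idea in this abstract, the sketch below (Python/NumPy) fits a low-order polynomial to $f$ by least squares weighted toward high-density regions of an estimated spectral density, then applies the polynomial to $\mathbf{b}$ using only matrix-vector products. The test matrix, the choice $f = \exp$, the polynomial order, and the histogram-based density stand-in are illustrative assumptions, not the authors' method or setup.

```python
# Sketch of spectrum-adapted polynomial approximation of f(A)b (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, order = 200, 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # Hermitian (real symmetric) test matrix
A /= np.linalg.norm(A, 2)               # scale so the spectrum lies roughly in [-1, 1]
b = rng.standard_normal(n)
f = np.exp                              # example choice of f; target is f(A) b = exp(A) b

# Stand-in for a spectral density estimate: exact eigenvalues binned into a histogram.
# In practice one would use a cheap stochastic estimator (e.g., KPM or Lanczos quadrature).
eigvals = np.linalg.eigvalsh(A)
lo, hi = eigvals.min(), eigvals.max()
density, edges = np.histogram(eigvals, bins=40, range=(lo, hi), density=True)
grid = np.linspace(lo, hi, 400)
weights = np.interp(grid, 0.5 * (edges[:-1] + edges[1:]), density) + 1e-3

# Weighted least-squares fit of f by a degree-`order` polynomial, emphasizing
# regions of the spectrum with a higher (estimated) density of eigenvalues.
coeffs = np.polynomial.polynomial.polyfit(grid, f(grid), order, w=np.sqrt(weights))

# Apply p(A) to b using only matrix-vector products: p(A) b = sum_k c_k A^k b.
pAb = np.zeros(n)
v = b.copy()
for c_k in coeffs:
    pAb += c_k * v
    v = A @ v

# Reference value via a full eigendecomposition, for the error check only.
w_, V = np.linalg.eigh(A)
fAb = V @ (f(w_) * (V.T @ b))
print("relative error:", np.linalg.norm(pAb - fAb) / np.linalg.norm(fAb))
```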

    Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks

    Recently, neural network based approaches have achieved significant improvement for solving large, complex, graph-structured problems. However, their bottlenecks still need to be addressed, and the advantages of multi-scale information and deep architectures have not been sufficiently exploited. In this paper, we theoretically analyze how existing Graph Convolutional Networks (GCNs) have limited expressive power due to the constraint of the activation functions and their architectures. We generalize spectral graph convolution and deep GCNs in block Krylov subspace forms and devise two architectures, both with the potential to be scaled deeper but each making use of the multi-scale information in different ways. We further show that the equivalence of these two architectures can be established under certain conditions. On several node classification tasks, with or without the help of validation, the two new architectures achieve better performance than many state-of-the-art methods. Comment: Accepted and to be published by NeurIPS 2019.
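
    The abstract does not spell out the two architectures, but the underlying block Krylov idea of combining several propagation scales of the graph operator can be illustrated with a minimal NumPy sketch: stack $[X, SX, S^2X, \dots]$ for a normalized adjacency $S$ and mix the scales with one learned linear map. The graph, feature sizes, and the single untrained weight matrix below are illustrative placeholders, not the paper's models.

```python
# Minimal multi-scale / block Krylov style feature propagation sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feats, n_classes, n_scales = 6, 4, 3, 3

A = rng.integers(0, 2, size=(n_nodes, n_nodes))
A = np.triu(A, 1); A = A + A.T                       # random undirected graph (0/1 adjacency)
A_hat = A + np.eye(n_nodes)                          # add self-loops
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))                  # symmetric normalization D^-1/2 (A+I) D^-1/2

X = rng.standard_normal((n_nodes, n_feats))          # node features

# Block Krylov style feature block: concatenate propagation scales [X, SX, S^2 X, ...].
blocks, cur = [X], X
for _ in range(n_scales - 1):
    cur = S @ cur
    blocks.append(cur)
K = np.concatenate(blocks, axis=1)                   # shape (n_nodes, n_scales * n_feats)

W = rng.standard_normal((n_scales * n_feats, n_classes))  # untrained weights, shapes only
logits = K @ W
print(logits.shape)                                  # (n_nodes, n_classes)
```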

    Limited-memory polynomial methods for large-scale matrix functions

    Matrix functions are a central topic of linear algebra, and problems requiring their numerical approximation appear increasingly often in scientific computing. We review various limited-memory methods for approximating the action of a large-scale matrix function on a vector. Emphasis is put on polynomial methods, whose memory requirements are known or prescribed a priori. Methods based on explicit polynomial approximation or interpolation, as well as restarted Arnoldi methods, are treated in detail. An overview of existing software is also given, as well as a discussion of challenging open problems. Comment: 25 pages, 2 figures, 4 algorithms.
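
    One concrete instance of a limited-memory polynomial method is a truncated Chebyshev expansion of $f$ on the (estimated) spectral interval, applied to $\mathbf{b}$ through the three-term recurrence, so only a handful of work vectors are kept regardless of the degree. The sketch below (Python/NumPy) demonstrates this; the test matrix, the choice $f = \exp$, the degree, and the use of exact spectral bounds are illustrative assumptions rather than prescriptions from the survey.

```python
# Limited-memory sketch: Chebyshev approximation of f(A)b via the three-term recurrence.
import numpy as np

rng = np.random.default_rng(0)
n, degree = 300, 25
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix
A /= np.linalg.norm(A, 2)                            # scale so the spectrum is roughly [-1, 1]
b = rng.standard_normal(n)
f = np.exp

# Spectral interval [lo, hi]; computed exactly here for simplicity, whereas a
# practical code would use cheap bounds (e.g., Gershgorin or a few Lanczos steps).
eigvals = np.linalg.eigvalsh(A)
lo, hi = eigvals.min(), eigvals.max()
c, h = (hi + lo) / 2, (hi - lo) / 2                  # affine map of [lo, hi] onto [-1, 1]

# Chebyshev coefficients of f on [-1, 1] after the change of variable.
coeffs = np.polynomial.chebyshev.chebinterpolate(lambda t: f(c + h * t), degree)

# Three-term Chebyshev recurrence in the shifted/scaled matrix B = (A - c I) / h,
# keeping only the vectors t_prev, t_cur and the accumulator y.
def B(v):
    return (A @ v - c * v) / h

t_prev, t_cur = b, B(b)
y = coeffs[0] * t_prev + coeffs[1] * t_cur
for k in range(2, degree + 1):
    t_next = 2 * B(t_cur) - t_prev
    y += coeffs[k] * t_next
    t_prev, t_cur = t_cur, t_next

# Reference value via a full eigendecomposition, for the error check only.
w, V = np.linalg.eigh(A)
fAb = V @ (f(w) * (V.T @ b))
print("relative error:", np.linalg.norm(y - fAb) / np.linalg.norm(fAb))
```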