Constant Time Graph Neural Networks
Recent advances in graph neural networks (GNNs) have led to
state-of-the-art performance in various applications, including
chemo-informatics, question-answering systems, and recommender systems.
However, scaling up these methods to huge graphs, such as social networks and
Web graphs, remains a challenge. In particular, existing methods for
accelerating GNNs either lack a theoretical guarantee on the approximation
error or incur at least a linear-time computation cost. In this study, we
derive the query complexity of the uniform node sampling scheme for message
passing neural networks, including GraphSAGE, graph attention networks
(GATs), and graph convolutional networks (GCNs). Surprisingly, our analysis
shows that the complexity of the node sampling method is completely
independent of the number of nodes, edges, and neighbors in the input graph,
depending only on the error tolerance and confidence probability, while still
providing a theoretical guarantee on the approximation error. To the best of our
knowledge, this is the first paper to provide a theoretical guarantee of
approximation for GNNs within constant time. Through experiments with synthetic
and real-world datasets, we investigated the speed and precision of the node
sampling scheme and validated our theoretical results.
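To make the sampling idea concrete, here is a minimal Python sketch of a
uniformly sampled mean aggregator in the GraphSAGE style; the function and
parameter names (`sampled_mean_aggregate`, `sample_size`) are illustrative
assumptions, not the paper's API.

```python
import random
import numpy as np

def sampled_mean_aggregate(features, neighbors, node, sample_size):
    """Approximate a node's mean neighbor feature by uniformly sampling
    a fixed number of neighbors with replacement.

    The cost is O(sample_size) regardless of the node's true degree,
    which is what makes the estimate constant-time: sample_size is set
    from the error tolerance and confidence probability alone, not from
    the size of the graph.
    """
    nbrs = neighbors[node]
    if not nbrs:
        return np.zeros_like(features[node])
    sampled = [random.choice(nbrs) for _ in range(sample_size)]
    return np.mean([features[v] for v in sampled], axis=0)

# Toy usage on a 4-node star graph with 2-d features.
features = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]),
            2: np.array([1.0, 1.0]), 3: np.array([0.5, 0.5])}
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
estimate = sampled_mean_aggregate(features, neighbors, 0, sample_size=2)
```

Under a standard Hoeffding-style argument for bounded features, a sample size
on the order of log(1/delta)/epsilon^2 already yields an epsilon-accurate mean
with probability 1 - delta, which is consistent with the abstract's claim that
the complexity depends only on the error tolerance and confidence probability.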
Adaptive Propagation Graph Convolutional Network
Graph convolutional networks (GCNs) are a family of neural network models
that perform inference on graph data by interleaving vertex-wise operations and
message-passing exchanges across nodes. Concerning the latter, two key
questions arise: (i) how to design a differentiable exchange protocol (e.g., a
1-hop Laplacian smoothing in the original GCN), and (ii) how to characterize
the trade-off in complexity with respect to the local updates. In this paper,
we show that state-of-the-art results can be achieved by adapting the number of
communication steps independently at every node. In particular, we endow each
node with a halting unit (inspired by Graves' adaptive computation time) that,
after every exchange, decides whether or not to continue communicating. We show
that the proposed adaptive propagation GCN (AP-GCN) achieves results superior
or comparable to the best models proposed so far on a number of benchmarks,
while requiring only a small overhead in additional parameters. We also
investigate a regularization term to enforce an explicit trade-off between
communication and accuracy. The code for the AP-GCN experiments is released as
an open-source library.
Comment: Published in IEEE Transactions on Neural Networks and Learning Systems.
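As a rough illustration of the halting mechanism described above, the
following NumPy sketch applies ACT-style halting to repeated 1-hop smoothing;
the linear halting unit and the names (`w_halt`, `max_steps`, `eps`) are
assumptions for illustration, not AP-GCN's exact parameterization.

```python
import numpy as np

def adaptive_propagate(h, adj_norm, w_halt, b_halt, max_steps=10, eps=0.01):
    """ACT-style adaptive propagation: after each 1-hop smoothing step,
    a per-node halting unit (a sigmoid over a linear readout) decides
    whether that node keeps exchanging messages.

    h        : (n, d) node representations from the vertex-wise model
    adj_norm : (n, n) normalized adjacency used for smoothing
    Returns a per-node convex mixture of the intermediate states,
    mirroring Graves' adaptive computation time.
    """
    out = np.zeros_like(h)
    budget = np.ones(h.shape[0])            # halting mass left to spend
    active = np.ones(h.shape[0], dtype=bool)
    state = h.copy()
    for _ in range(max_steps):
        state = adj_norm @ state                              # one exchange
        p = 1.0 / (1.0 + np.exp(-(state @ w_halt + b_halt)))  # halting prob
        halt_now = active & (budget - p < eps)  # spend the remainder, stop
        use = np.where(halt_now, budget, np.where(active, p, 0.0))
        out += use[:, None] * state
        budget -= use
        active &= ~halt_now
        if not active.any():
            break
    # Nodes that never halted spend their leftover mass on the last state.
    out += np.where(active, budget, 0.0)[:, None] * state
    return out

# Toy usage: 3 nodes, 2-d features, row-normalized adjacency.
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj_norm = np.array([[0.0, 0.5, 0.5], [1.0, 0.0, 0.0], [0.5, 0.5, 0.0]])
rng = np.random.default_rng(0)
out = adaptive_propagate(h, adj_norm, w_halt=rng.normal(size=2), b_halt=0.0)
```

A regularization term like the one mentioned in the abstract would penalize
the expected number of exchanges each node performs; it is omitted from this
sketch.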
Scaling Graph Neural Networks with Approximate PageRank
Graph neural networks (GNNs) have emerged as a powerful approach for solving
many network mining tasks. However, learning on large graphs remains a
challenge: many recently proposed scalable GNN approaches rely on an expensive
message-passing procedure to propagate information through the graph. We
present the PPRGo model, which utilizes an efficient approximation of
information diffusion in GNNs, resulting in significant speed gains while
maintaining state-of-the-art prediction performance. In addition to being
faster, PPRGo is inherently scalable and can be trivially parallelized for
large datasets like those found in industry settings. We demonstrate that PPRGo
outperforms baselines in both distributed and single-machine training
environments on a number of commonly used academic graphs. To better analyze
the scalability of large-scale graph learning methods, we introduce a novel
benchmark graph with 12.4 million nodes, 173 million edges, and 2.8 million
node features. We show that training PPRGo from scratch and predicting labels
for all nodes in this graph takes under 2 minutes on a single machine, far
outpacing other baselines on the same graph. We discuss the practical
application of PPRGo to solve large-scale node classification problems at
Google.
Comment: Published as a conference paper at ACM SIGKDD 2020.
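The "efficient approximation of information diffusion" in PPRGo builds on
precomputed approximate personalized PageRank vectors. Below is a minimal
sketch of the standard forward-push approximation (Andersen et al., 2006)
that such methods rest on; the parameter names (`alpha`, `eps`) are
assumptions, and this is not PPRGo's production implementation.

```python
from collections import defaultdict

def approx_ppr(neighbors, source, alpha=0.15, eps=1e-4):
    """Forward-push approximation of the personalized PageRank vector
    of `source` (Andersen et al., 2006).

    Residual mass is pushed along edges until every node's residual is
    below eps times its degree. The result is a sparse vector whose
    largest entries identify the handful of nodes a GNN would actually
    need to aggregate from, avoiding full message passing.
    """
    p = defaultdict(float)   # approximate PPR scores
    r = defaultdict(float)   # residual mass still to be distributed
    r[source] = 1.0
    queue = [source]
    while queue:
        u = queue.pop()
        deg = max(len(neighbors[u]), 1)
        if r[u] < eps * deg:
            continue
        p[u] += alpha * r[u]
        share = (1.0 - alpha) * r[u] / deg
        r[u] = 0.0
        for v in neighbors[u]:
            r[v] += share
            if r[v] >= eps * max(len(neighbors[v]), 1):
                queue.append(v)
    return dict(p)

# Toy usage on a small path graph 0 - 1 - 2 - 3.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
scores = approx_ppr(neighbors, source=0)
```

Sparse vectors of this kind can then weight per-node predictions, which is
what decouples the cost of inference from repeated message passing over the
full graph.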
A Survey on The Expressive Power of Graph Neural Networks
Graph neural networks (GNNs) are effective machine learning models for
various graph learning problems. Despite their empirical successes, the
theoretical limitations of GNNs have been revealed recently. Consequently, many
GNN models have been proposed to overcome these limitations. In this survey, we
provide a comprehensive overview of the expressive power of GNNs and provably
powerful variants of GNNs.
Comment: 42 pages.
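One standard way to see the theoretical limitations the survey discusses is
the 1-dimensional Weisfeiler-Leman (1-WL) color refinement, which upper-bounds
the expressive power of message passing GNNs. The sketch below uses the
classic counterexample of two triangles versus a 6-cycle; the helper name
`wl_colors` is illustrative.

```python
from collections import Counter

def wl_colors(neighbors, rounds=3):
    """1-dimensional Weisfeiler-Leman color refinement. Message passing
    GNNs can distinguish at most what this procedure distinguishes."""
    colors = {v: 0 for v in neighbors}  # uniform initial colors
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in neighbors[v])))
            for v in neighbors
        }
        # Relabel signatures with fresh compact color ids.
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in neighbors}
    return Counter(colors.values())

# Two triangles vs. a 6-cycle: identical 1-WL color histograms, so
# standard message passing GNNs cannot tell these non-isomorphic
# graphs apart.
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
assert wl_colors(two_triangles) == wl_colors(six_cycle)
```

Provably more powerful GNN variants of the kind surveyed above are designed
precisely to break ties like this one.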