GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection
Graph neural networks are increasingly becoming the framework of choice for
graph-based machine learning. In this paper, we propose a new graph neural
network architecture that substitutes classical message passing with an
analysis of the local distribution of node features. To this end, we extract
the distribution of features over the egonet of each node and compare
it against a set of learned label distributions using the histogram
intersection kernel. The similarity information is then propagated to
other nodes in the network, effectively creating a message passing-like
mechanism where the message is determined by the ensemble of the features. We
perform an ablation study to evaluate the network's performance under different
choices of its hyper-parameters. Finally, we test our model on standard graph
classification and regression benchmarks, and we find that it outperforms
widely used alternative approaches, including both graph kernels and graph
neural networks.
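The central operation in the abstract above is the histogram intersection kernel applied to local feature distributions. As a rough illustrative sketch (not the paper's implementation; the bin count, value range, and reference distribution below are hypothetical), the comparison of an egonet's feature histogram against a learned label distribution could look like this:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Histogram intersection kernel: the sum of bin-wise minima.

    For normalised histograms the value lies in [0, 1], where 1 means
    the two distributions are identical.
    """
    return float(np.minimum(h1, h2).sum())

def egonet_feature_histogram(features, bins, value_range):
    """Normalised histogram of the feature values in a node's egonet.

    `features` stacks the (scalar) feature values of a node and its
    neighbours; bin edges are shared across all egonets so that the
    resulting histograms are comparable.
    """
    hist, _ = np.histogram(features, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)  # normalise to a distribution

# Hypothetical example: one egonet's feature histogram compared
# against a (here hand-picked) reference distribution.
ego = egonet_feature_histogram(
    np.array([0.1, 0.2, 0.2, 0.8]), bins=4, value_range=(0.0, 1.0)
)
ref = np.array([0.25, 0.25, 0.25, 0.25])
sim = histogram_intersection(ego, ref)  # similarity in [0, 1]
```

In the paper the reference distributions are learned parameters and the resulting similarities are what gets propagated to neighbouring nodes, playing the role that messages play in standard message passing.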
On the power of message passing for learning on graph-structured data
This thesis proposes novel approaches for machine learning on irregularly structured input data such as graphs, point clouds and manifolds. Specifically, we break with the regularity restriction of conventional deep learning techniques and propose solutions for designing, implementing and scaling up deep end-to-end representation learning on graph-structured data, known as Graph Neural Networks (GNNs).
GNNs capture local graph structure and feature information by following a neural message passing scheme, in which node representations are recursively updated in a trainable and purely local fashion. In this thesis, we demonstrate the generality of message passing through a unified framework suitable for a wide range of operators and learning tasks. Specifically, we analyze the limitations and inherent weaknesses of GNNs and propose efficient solutions to overcome them, both theoretically and in practice, e.g., by conditioning messages via continuous B-spline kernels, by utilizing hierarchical message passing, or by leveraging positional encodings. In addition, we ensure that our proposed methods scale naturally to large input domains. In particular, we propose novel methods to fully eliminate the exponentially increasing dependency of nodes over layers inherent to message passing GNNs. Lastly, we introduce PyTorch Geometric, a deep learning library for implementing and working with graph-based neural network building blocks, built upon PyTorch.
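The "purely local" message passing scheme this thesis builds on can be sketched in a few lines. The following is a minimal NumPy illustration with mean aggregation and a tanh update (the weight shapes and the aggregation choice are assumptions for the sketch, not the thesis's specific operators):

```python
import numpy as np

def message_passing_layer(x, edges, W_self, W_msg):
    """One round of neural message passing with mean aggregation.

    x      : (num_nodes, d_in) node features
    edges  : list of directed (src, dst) pairs
    W_self : (d_in, d_out) transform of a node's own state
    W_msg  : (d_in, d_out) transform of the aggregated messages
    """
    n = x.shape[0]
    agg = np.zeros_like(x)
    deg = np.zeros(n)
    for src, dst in edges:            # each neighbour sends its state
        agg[dst] += x[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean over incoming messages
    # purely local, trainable update of every node representation
    return np.tanh(x @ W_self + agg @ W_msg)

# Tiny example: a 3-node path graph 0-1-2 with 2-d features
x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
rng = np.random.default_rng(0)
h = message_passing_layer(x, edges,
                          rng.normal(size=(2, 2)),
                          rng.normal(size=(2, 2)))
```

Stacking such layers is what produces the exponentially growing neighbourhood dependency mentioned in the abstract: after L layers a node's representation depends on its full L-hop neighbourhood.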
Dynamic Graph Message Passing Networks
Modelling long-range dependencies is critical for complex scene understanding
tasks such as semantic segmentation and object detection. Although CNNs have
excelled in many computer vision tasks, they are still limited in capturing
long-range structured relationships as they typically consist of layers of
local kernels. A fully-connected graph is beneficial for such modelling,
however, its computational overhead is prohibitive. We propose a dynamic graph
message passing network, based on the message passing neural network framework,
that significantly reduces the computational complexity compared to related
works modelling a fully-connected graph. This is achieved by adaptively
sampling nodes in the graph, conditioned on the input, for message passing.
Based on the sampled nodes, we then dynamically predict node-dependent filter
weights and the affinity matrix for propagating information between them. Using
this model, we show significant improvements with respect to strong,
state-of-the-art baselines on three different tasks and backbone architectures.
Our approach also outperforms fully-connected graphs while using substantially
fewer floating point operations and parameters.
Comment: CVPR 2020 Oral
Dynamic Graph Message Passing Networks for Visual Recognition
Modelling long-range dependencies is critical for scene understanding tasks
in computer vision. Although convolutional neural networks (CNNs) have excelled
in many vision tasks, they are still limited in capturing long-range structured
relationships as they typically consist of layers of local kernels. A
fully-connected graph, such as the self-attention operation in Transformers, is
beneficial for such modelling, however, its computational overhead is
prohibitive. In this paper, we propose a dynamic graph message passing network,
that significantly reduces the computational complexity compared to related
works modelling a fully-connected graph. This is achieved by adaptively
sampling nodes in the graph, conditioned on the input, for message passing.
Based on the sampled nodes, we dynamically predict node-dependent filter
weights and the affinity matrix for propagating information between them. This
formulation allows us to design a self-attention module, and more importantly a
new Transformer-based backbone network, that we use for both image
classification pretraining, and for addressing various downstream tasks (object
detection, instance and semantic segmentation). Using this model, we show
significant improvements with respect to strong, state-of-the-art baselines on
four different tasks. Our approach also outperforms fully-connected graphs
while using substantially fewer floating-point operations and parameters. Code
and models will be made publicly available at
https://github.com/fudan-zvg/DGMN2
Comment: PAMI extension of CVPR 2020 oral work arXiv:1908.0695
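Both DGMN abstracts rest on the same complexity argument: sampling k nodes per query in place of full self-attention. A back-of-the-envelope message count makes the gap concrete (the numbers below are illustrative, not the papers' reported FLOPs):

```python
def message_counts(n, k):
    """Compare message counts for a fully connected graph versus
    sampling k nodes per query node (illustrative arithmetic only)."""
    full = n * (n - 1)   # every node attends to every other node
    sampled = n * k      # each node attends to k sampled nodes
    return full, sampled

# e.g. a 64x64 feature map flattened to n = 4096 nodes, k = 16 samples
full, sampled = message_counts(4096, 16)
# full grows quadratically in n; sampled only linearly
```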