Bilinear Graph Neural Network with Neighbor Interactions
Graph Neural Network (GNN) is a powerful model for learning representations and
making predictions on graph data. Existing efforts on GNNs have largely defined
the graph convolution as a weighted sum of the features of the connected nodes
to form the representation of the target node. Nevertheless, the operation of
weighted sum assumes the neighbor nodes are independent of each other, and
ignores the possible interactions between them. When such interactions exist,
for example when the co-occurrence of two neighbor nodes is a strong signal of
the target node's characteristics, existing GNN models may fail to capture that
signal. In this work, we argue for the importance of modeling the interactions
between neighbor nodes in GNNs. We propose a new graph convolution operator,
which augments the weighted sum with pairwise interactions of the
representations of neighbor nodes. We term this framework the Bilinear Graph
Neural Network (BGNN), which improves GNN representation ability with bilinear
interactions between neighbor nodes. In particular, we specify two BGNN models
named BGCN and BGAT, based on the well-known GCN and GAT, respectively.
Empirical results on three public benchmarks of semi-supervised node
classification verify the effectiveness of BGNN -- BGCN (BGAT) outperforms GCN
(GAT) by 1.6% (1.5%) in classification accuracy. Code is available at:
https://github.com/zhuhm1996/bgnn.
Comment: Accepted by IJCAI 2020. The sole copyright holder is IJCAI
(International Joint Conferences on Artificial Intelligence); all rights
reserved.
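The augmented convolution the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, shapes, and the uniform pair averaging are assumptions. The sum of pairwise element-wise products over all neighbor pairs can be computed in linear time via the square-of-sum identity, which is what makes a bilinear aggregator practical.

```python
import numpy as np

def bilinear_aggregate(H, W):
    """Pairwise bilinear aggregation over a set of neighbor features.

    Hypothetical sketch of the BGNN idea: average the element-wise
    products of transformed representations over all neighbor pairs.
    H: (n, d_in) neighbor feature matrix; W: (d_in, d_out) weights.
    """
    S = H @ W                                   # transformed neighbors, (n, d_out)
    n = S.shape[0]
    # sum_{i<j} s_i * s_j = 0.5 * ((sum_i s_i)^2 - sum_i s_i^2)
    pair_sum = 0.5 * (S.sum(axis=0) ** 2 - (S ** 2).sum(axis=0))
    num_pairs = n * (n - 1) / 2.0
    return pair_sum / num_pairs                 # mean over neighbor pairs
```

A full layer would then mix this term with the usual weighted-sum aggregation (e.g. a convex combination of the two), so the bilinear part augments rather than replaces the standard GCN/GAT convolution.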
Multi-Scale Relational Graph Convolutional Network for Multiple Instance Learning in Histopathology Images
Graph convolutional neural networks have shown significant potential in
natural and histopathology images. However, their use has been studied only at
a single magnification or with multi-magnification late fusion. In order to
leverage the multi-magnification information and early fusion with graph
convolutional networks, we handle different embedding spaces at each
magnification by introducing the Multi-Scale Relational Graph Convolutional
Network (MS-RGCN) as a multiple instance learning method. We model
histopathology image patches and their relation with neighboring patches and
patches at other scales (i.e., magnifications) as a graph. To pass the
information between different magnification embedding spaces, we define
separate message-passing neural networks based on the node and edge type. We
experiment on prostate cancer histopathology images to predict the grade groups
based on the extracted features from patches. We also compare our MS-RGCN with
multiple state-of-the-art methods with evaluations on several source and
held-out datasets. Our method outperforms the state of the art on all of the
datasets and image types, which consist of tissue microarrays, whole-mount
slide regions, and whole-slide images. Through an ablation study, we
demonstrate the value of the pertinent design features of MS-RGCN.
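The per-node/edge-type message passing described above can be sketched as a relational graph-convolution step in the style of R-GCN, with a separate weight matrix for each edge type (e.g. same-magnification vs. cross-magnification edges). All names and the mean normalization here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rgcn_layer(H, adj_by_rel, W_rel, W_self):
    """One relational graph-convolution step with per-edge-type weights.

    H: (n, d) node (patch) features; adj_by_rel: dict mapping an edge-type
    name (hypothetical, e.g. "same_scale", "cross_scale") to an (n, n)
    adjacency matrix; W_rel: dict of per-relation (d, d_out) weights;
    W_self: (d, d_out) self-loop weights.
    """
    out = H @ W_self                            # self-loop term
    for rel, A in adj_by_rel.items():
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # per-node normalizer
        out = out + (A / deg) @ H @ W_rel[rel]  # mean of type-rel neighbor messages
    return np.maximum(out, 0.0)                 # ReLU nonlinearity
```

Using distinct `W_rel` matrices is what lets messages cross the different embedding spaces of each magnification without forcing them through a single shared projection.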