Complete the Missing Half: Augmenting Aggregation Filtering with Diversification for Graph Convolutional Networks
The core operation of current Graph Neural Networks (GNNs) is the aggregation
enabled by the graph Laplacian or message passing, which filters the
neighborhood node information. Though effective for various tasks, in this
paper we show that aggregation is potentially a problematic factor underlying all
GNN methods for learning on certain datasets, as it forces the node
representations to become similar, making the nodes gradually lose their identity
and become indistinguishable. Hence, we augment the aggregation operations with
their dual, i.e. diversification operators that make the nodes more distinct and
preserve their identity. Such augmentation replaces the aggregation with a
two-channel filtering process that, in theory, is beneficial for enriching the
node representations. In practice, the proposed two-channel filters can be
easily patched onto existing GNN methods with diverse training strategies,
including spectral and spatial (message passing) methods. In the experiments,
we observe desired characteristics of the models and significant performance
boost upon the baselines on 9 node classification tasks.
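The two-channel idea can be illustrated with the standard low-pass/high-pass decomposition induced by the normalized graph Laplacian: the normalized adjacency acts as an aggregation (smoothing) filter, and the Laplacian acts as its diversifying complement. The sketch below is an assumption-laden toy, not the paper's exact operators; the function name and the mixing weight `alpha` are illustrative.

```python
import numpy as np

def two_channel_filter(A, X, alpha=0.5):
    """Combine a low-pass (aggregation) channel with a high-pass
    (diversification) channel. `alpha` is an illustrative mixing
    weight, not a parameter from the paper."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                   # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # normalized adjacency (low-pass)
    L = np.eye(n) - S                       # normalized Laplacian (high-pass)
    low = S @ X                             # smooths features over neighbors
    high = L @ X                            # sharpens differences from neighbors
    return alpha * low + (1 - alpha) * high

# toy graph: a triangle with one pendant node
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))
H = two_channel_filter(A, X)
```

Because the two channels sum to the identity (S + L = I), setting `alpha` to 1 recovers pure aggregation and 0 recovers pure diversification, which is one way to see how the second channel "completes the missing half" of the filter.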
When Do Graph Neural Networks Help with Node Classification: Investigating the Homophily Principle on Node Distinguishability
The homophily principle, i.e. that nodes with the same labels are more likely to
be connected, has been believed to be the main reason for the performance
superiority of Graph Neural Networks (GNNs) over node-based Neural Networks on
Node Classification tasks. Recent research suggests that, even in the absence
of homophily, the advantage of GNNs still exists as long as nodes from the same
class share similar neighborhood patterns. However, this argument only
considers intra-class Node Distinguishability (ND) and neglects inter-class ND,
which provides an incomplete understanding of homophily. In this paper, we first
demonstrate the aforementioned insufficiency with examples and argue that an
ideal situation for ND is to have smaller intra-class ND than inter-class ND.
To formulate this idea, we propose Contextual Stochastic Block Model for
Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and
negative generalized Jeffreys divergence, to quantify ND, through which we
reveal how intra- and inter-class ND jointly influence overall ND. We visualize
the results and give a detailed analysis. Through experiments, we verify that the
superiority of GNNs is indeed closely related to both intra- and inter-class ND
regardless of homophily levels, based on which we propose a new performance
metric beyond homophily, which is non-linear and feature-based. Experiments
indicate that it is significantly more effective than existing homophily metrics
at revealing the advantages and disadvantages of GNNs on both synthetic and
real-world benchmark datasets.
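The intra- versus inter-class ND criterion can be made concrete with a minimal contextual stochastic block model: Gaussian features per class plus a block-structured random graph, followed by one mean-aggregation step. This is a hedged sketch under assumed parameters (`p_in`, `p_out`, `sep` are illustrative), using mean pairwise distance as a crude ND proxy rather than the paper's PBE or Jeffreys-divergence metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

def csbm_sample(n_per_class=50, p_in=0.3, p_out=0.1, sep=2.0, dim=4):
    """Toy two-class contextual SBM: class-conditional Gaussian features
    and edge probabilities p_in (same class) / p_out (different class).
    Parameter names are illustrative, not the paper's CSBM-H."""
    n = 2 * n_per_class
    y = np.repeat([0, 1], n_per_class)
    mu = np.zeros((2, dim))
    mu[1, 0] = sep                               # class means separated along dim 0
    X = rng.normal(size=(n, dim)) + mu[y]
    same = y[:, None] == y[None, :]
    P = np.where(same, p_in, p_out)              # block-structured edge probabilities
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                  # symmetric, no self-loops
    return A, X, y

def mean_distances(H, y):
    """Mean pairwise distance within vs. across classes (a crude ND proxy)."""
    D = np.linalg.norm(H[:, None] - H[None, :], axis=-1)
    same = y[:, None] == y[None, :]
    off_diag = ~np.eye(len(y), dtype=bool)
    return D[same & off_diag].mean(), D[~same].mean()

A, X, y = csbm_sample()
deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
H = (A @ X) / deg                                # one mean-aggregation step
intra, inter = mean_distances(H, y)
```

In this homophilic setting (`p_in > p_out`) the aggregated representations land in the "ideal" regime the abstract describes: intra-class ND smaller than inter-class ND, which is the configuration under which GNN aggregation helps classification.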