When Do Graph Neural Networks Help with Node Classification: Investigating the Homophily Principle on Node Distinguishability

Abstract

The homophily principle, i.e., that nodes with the same labels are more likely to be connected, has long been believed to be the main reason for the performance superiority of Graph Neural Networks (GNNs) over node-based Neural Networks on node classification tasks. Recent research suggests that, even in the absence of homophily, the advantage of GNNs persists as long as nodes from the same class share similar neighborhood patterns. However, this argument only considers intra-class Node Distinguishability (ND) and neglects inter-class ND, providing an incomplete understanding of homophily. In this paper, we first demonstrate this insufficiency with examples and argue that the ideal situation for ND is for intra-class ND to be smaller than inter-class ND. To formalize this idea, we propose the Contextual Stochastic Block Model for Homophily (CSBM-H) and define two metrics, Probabilistic Bayes Error (PBE) and the negative generalized Jeffreys divergence, to quantify ND, through which we can study how intra- and inter-class ND jointly influence overall ND. We visualize the results and provide a detailed analysis. Through experiments, we verify that the superiority of GNNs is indeed closely related to both intra- and inter-class ND regardless of homophily level, and based on this we propose a new performance metric beyond homophily, which is non-linear and feature-based. Experiments indicate that it is significantly more effective than existing homophily metrics at revealing the advantages and disadvantages of GNNs on both synthetic and real-world benchmark datasets.
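The "ideal situation" described above, intra-class ND smaller than inter-class ND, can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the paper's PBE or generalized Jeffreys-divergence metrics: it simply compares mean pairwise feature distances within and across classes, assuming well-separated Gaussian class features.

```python
import numpy as np

def intra_inter_nd(features, labels):
    """Illustrative distinguishability proxy (not the paper's metric):
    mean pairwise Euclidean distance within classes (intra-class ND)
    vs. across classes (inter-class ND)."""
    n = len(labels)
    # Full pairwise distance matrix via broadcasting.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(n, dtype=bool)
    intra = dists[same & off_diag].mean()
    inter = dists[~same].mean()
    return intra, inter

# Toy example: two well-separated classes, so intra < inter,
# the favorable regime for node distinguishability described above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(1.0, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
intra, inter = intra_inter_nd(X, y)
print(intra < inter)
```

Under this toy setup, aggregation that shrinks intra-class distances faster than inter-class distances would improve distinguishability regardless of the graph's homophily level, which is the intuition the paper's metrics formalize.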
