Deeper-GXX: Deepening Arbitrary GNNs

Abstract

Shallow GNNs tend to perform sub-optimally on large-scale graphs or graphs with missing features. Therefore, it is necessary to increase the depth (i.e., the number of layers) of GNNs to capture more latent knowledge from the input data. However, adding more layers to GNNs typically decreases their performance due to, e.g., vanishing gradients and oversmoothing. Existing methods (e.g., PairNorm and DropEdge) mainly focus on addressing oversmoothing, but they suffer from drawbacks such as requiring hard-to-acquire knowledge or introducing large training randomness. In addition, these methods simply incorporate ResNet to address vanishing gradients. They ignore an important fact: as more and more layers are stacked with the ResNet architecture, the information collected from faraway neighbors comes to dominate the information collected from 1-hop and 2-hop neighbors, resulting in severe performance degradation. In this paper, we first examine the ResNet architecture in depth and analyze why it is not well suited for deeper GNNs. We then propose a new residual architecture to attenuate the negative impact caused by ResNet. To address the drawbacks of the existing methods, we introduce the Topology-guided Graph Contrastive Loss, named TGCL. It utilizes node topological information and pulls connected node pairs closer via a contrastive learning regularization to obtain discriminative node representations. Combining the new residual architecture with TGCL, we propose an end-to-end framework named Deeper-GXX towards deeper GNNs. Extensive experiments on real-world data sets demonstrate the effectiveness and efficiency of Deeper-GXX compared with state-of-the-art baselines.
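
For intuition, the ResNet-style update the abstract critiques simply adds a skip connection, h_{l+1} = GNNLayer(h_l) + h_l, so deeper stacks keep accumulating information aggregated from increasingly distant hops. The sketch below illustrates a contrastive regularizer of the kind TGCL describes, pulling representations of connected node pairs closer than those of random pairs. It is a minimal illustration rather than the authors' implementation; the function name, temperature, and negative-sampling scheme are assumptions.

```python
# Illustrative sketch (not the authors' code): a topology-guided contrastive
# regularizer that pulls connected node pairs together and pushes randomly
# sampled (likely unconnected) pairs apart.
import torch
import torch.nn.functional as F

def topology_contrastive_loss(z, edge_index, num_neg=5, tau=0.5):
    """z: [N, d] node embeddings; edge_index: [2, E] connected node pairs."""
    z = F.normalize(z, dim=-1)                    # work in cosine-similarity space
    src, dst = edge_index                         # positives: 1-hop neighbor pairs
    pos = (z[src] * z[dst]).sum(dim=-1) / tau     # [E] similarity of connected pairs

    # Negatives: random nodes, unlikely to be neighbors in a sparse graph.
    neg_idx = torch.randint(0, z.size(0), (src.size(0), num_neg), device=z.device)
    neg = torch.einsum('ed,end->en', z[src], z[neg_idx]) / tau   # [E, num_neg]

    # InfoNCE-style objective: a connected pair should be more similar
    # than the sampled random pairs.
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)           # [E, 1 + num_neg]
    labels = torch.zeros(src.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)
```

In practice, a regularizer of this form would be added to the task loss with a weighting coefficient, alongside whatever residual architecture the backbone GNN uses.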
