Graph Neural Networks (GNNs) are a promising deep learning approach for
solving many real-world problems on graph-structured data. However, these
models usually exhibit at least one of four fundamental limitations:
over-smoothing, over-fitting, difficulty in training, and a strong homophily
assumption. For example, Simple Graph Convolution (SGC) is known to suffer from
the first and fourth limitations. To overcome these limitations, we identify
a set of key designs, (D1) dilated convolution, (D2) multi-channel learning,
(D3) a self-attention score, and (D4) a sign factor, that boost learning on
different types (i.e., homophily and heterophily) and scales (i.e., small,
medium, and large) of networks, and combine them into a graph neural network,
GPNet, a simple and efficient one-layer model. We theoretically analyze the
model and show that it can approximate various graph filters by adjusting the
self-attention score and sign factor. Experiments show that GPNet consistently
outperforms baselines in terms of average rank, average accuracy, complexity,
and parameter count on semi-supervised and fully supervised tasks, and achieves
performance competitive with state-of-the-art models on the inductive
learning task.
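
The abstract does not give GPNet's exact formulation, but the claim that a one-layer model can approximate different graph filters by adjusting a self-attention score and a sign factor can be illustrated with a minimal PyTorch sketch. Everything below (the SignedFilterLayer name, the alpha and raw_s parameters, and the tanh squashing) is an assumption for illustration, not the paper's implementation: a positive sign factor yields a low-pass (homophily-friendly) filter, while a negative one yields a high-pass (heterophily-friendly) filter.

```python
import torch
import torch.nn as nn


def normalize_adj(a: torch.Tensor) -> torch.Tensor:
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN normalization."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class SignedFilterLayer(nn.Module):
    """Hypothetical one-layer filter y = (alpha * I + s * A_hat) X W.

    alpha plays the role of a self-attention-like score on the identity
    (self) channel; s in (-1, 1) is a learnable sign factor. s > 0 gives
    low-pass smoothing, s < 0 gives high-pass sharpening. This is an
    illustrative sketch, not the paper's actual GPNet layer.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.alpha = nn.Parameter(torch.tensor(1.0))  # assumed self-score
        self.raw_s = nn.Parameter(torch.tensor(0.5))  # assumed sign factor

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        s = torch.tanh(self.raw_s)             # keep sign factor in (-1, 1)
        h = self.alpha * x + s * (a_hat @ x)   # signed one-hop propagation
        return self.lin(h)


# Toy usage: 4 nodes on a path graph, 3 input features, 2 output classes.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
layer = SignedFilterLayer(3, 2)
out = layer(torch.randn(4, 3), normalize_adj(adj))
print(out.shape)  # torch.Size([4, 2])
```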