To bridge the gap between topology-aware Graph Neural Networks (GNNs) and
inference-efficient Multi-Layer Perceptrons (MLPs), GLNN proposes to distill
knowledge from a well-trained teacher GNN into a student MLP. Despite this
great progress, comparatively little work has explored the reliability of
different knowledge points (nodes) in GNNs, especially the roles they play
during distillation. In this paper, we first quantify the
knowledge reliability in GNNs by measuring the invariance of each node's
information entropy under noise perturbations. From this we observe that
different knowledge points (1) show different distillation speeds (temporally)
and (2) are distributed differently across the graph (spatially). To achieve reliable
distillation, we propose an effective approach, namely Knowledge-inspired
Reliable Distillation (KRD), that models the probability of each node being an
informative and reliable knowledge point, based on which we sample a set of
additional reliable knowledge points as supervision for training student MLPs.
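As an illustrative sketch (not the paper's exact formulation), the entropy-invariance criterion above could be scored as follows; the function names and the `1 / (1 + mean delta)` mapping are hypothetical choices for exposition:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector."""
    return float(-np.sum(p * np.log(p + eps)))

def reliability_score(clean_probs, perturbed_probs):
    # A knowledge point is treated as reliable when its information
    # entropy stays (nearly) invariant under noise perturbations:
    # a smaller average entropy change maps to a score closer to 1.
    h_clean = entropy(clean_probs)
    deltas = [abs(entropy(p) - h_clean) for p in perturbed_probs]
    return 1.0 / (1.0 + float(np.mean(deltas)))

# Toy example: a teacher's softmax output for one node, clean vs. perturbed.
clean = np.array([0.90, 0.05, 0.05])
perturbed = [np.array([0.88, 0.06, 0.06]), np.array([0.91, 0.05, 0.04])]
score = reliability_score(clean, perturbed)  # near 1 -> reliable
```

Any monotone mapping from entropy change to a reliability probability would serve the same purpose; the key point is that invariance under perturbation, not confidence alone, defines reliability.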
Extensive experiments show that KRD improves over the vanilla MLPs by 12.62%
and outperforms its corresponding teacher GNNs by 2.16% averaged over 7
datasets and 3 GNN architectures.
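A minimal sketch of the reliability-weighted sampling step described above, assuming per-node reliability scores are already available (the scores and function name here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_knowledge_points(scores, num_samples):
    # Sample additional nodes (without replacement) to supervise the
    # student MLP, with probability proportional to each node's
    # reliability score.
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=num_samples, replace=False, p=probs)

scores = np.array([0.90, 0.10, 0.80, 0.05, 0.70])  # hypothetical scores
extra_nodes = sample_knowledge_points(scores, 3)
```

In this sketch, highly reliable nodes are preferentially drawn as extra supervision targets, while unreliable ones are rarely selected.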