Distilling high-accuracy Graph Neural Networks~(GNNs) into low-latency
multilayer perceptrons~(MLPs) on graph tasks has become an active research topic.
However, MLPs rely exclusively on node features and fail to capture graph
structural information. Previous methods address this issue by processing
graph edges into extra inputs for MLPs, but such graph structures may be
unavailable in many practical scenarios. To this end, we propose a Prototype-Guided
Knowledge Distillation~(PGKD) method, which does not require graph
edges~(edge-free) yet learns structure-aware MLPs. Specifically, we analyze the
graph structural information captured by GNN teachers and distill it from
GNNs to MLPs via prototypes in an edge-free setting. Experimental results on
popular graph benchmarks demonstrate the effectiveness and robustness of the
proposed PGKD.
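
As a concrete illustration of the idea, below is a minimal sketch of prototype-guided distillation in PyTorch. It assumes details not stated in the abstract: prototypes are taken as class-wise means of the teacher's node embeddings, and structural knowledge is transferred by aligning the student's and teacher's soft assignments over these prototypes with a KL loss. The function names and the temperature `tau` are hypothetical, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def class_prototypes(teacher_emb, labels, num_classes):
    """One prototype per class: the mean teacher embedding of that class."""
    protos = torch.zeros(num_classes, teacher_emb.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():  # guard against classes with no labeled nodes
            protos[c] = teacher_emb[mask].mean(dim=0)
    return protos

def prototype_distill_loss(student_emb, teacher_emb, protos, tau=1.0):
    """KL divergence between student and teacher soft assignments over the
    prototypes; only node embeddings are used, so no edges are required."""
    t_logits = teacher_emb @ protos.t() / tau   # [num_nodes, num_classes]
    s_logits = student_emb @ protos.t() / tau
    return F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1),
                    reduction="batchmean")

# Toy usage: 100 nodes, 16-dimensional embeddings, 3 classes.
if __name__ == "__main__":
    torch.manual_seed(0)
    labels = torch.randint(0, 3, (100,))
    teacher_emb = torch.randn(100, 16)          # frozen GNN teacher outputs
    student_emb = torch.randn(100, 16, requires_grad=True)  # MLP student outputs
    protos = class_prototypes(teacher_emb, labels, 3)
    loss = prototype_distill_loss(student_emb, teacher_emb, protos)
    loss.backward()                             # gradients train the MLP student
    print(float(loss))
```

Because the loss depends only on node embeddings and prototypes, no edge information is needed at distillation or inference time, which is consistent with the edge-free setting described above.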