We present a simple yet effective interpolation-based regularization
technique, aiming to improve the generalization of Graph Neural Networks (GNNs)
on supervised graph classification. We leverage Mixup, an effective regularizer
for vision, where random sample pairs and their labels are interpolated to
create synthetic images for training. Unlike images with grid-like coordinates,
graphs have arbitrary structure and topology, which can be very sensitive to
any modification that alters the graph's semantics. This poses two
unanswered questions for Mixup-like regularization schemes: Can we directly mix
up a pair of graph inputs? If so, how well does such a mixing strategy regularize
the learning of GNNs? To answer these two questions, we propose ifMixup, which
first adds dummy nodes to make two graphs have the same input size and then
simultaneously performs linear interpolation between the aligned node feature
vectors and the aligned edge representations of the two graphs. We empirically
show that such a simple mixing scheme can effectively regularize classification
learning, yielding predictive accuracy superior to that of popular
graph augmentation and GNN methods.
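
The following is a minimal sketch of the mixing step described above, assuming dense adjacency matrices as the edge representations and one-hot label vectors; the function name `if_mixup` and the Beta-distributed mixing ratio (a convention borrowed from image Mixup) are assumptions for illustration, not details confirmed by the abstract.

```python
import numpy as np

def if_mixup(x1, a1, y1, x2, a2, y2, alpha=1.0, rng=None):
    """Sketch of ifMixup: pad both graphs with dummy nodes to a common
    size, then linearly interpolate node features, adjacency matrices,
    and one-hot labels with a single mixing ratio."""
    rng = rng or np.random.default_rng()
    n = max(x1.shape[0], x2.shape[0])  # common node count after padding
    d = x1.shape[1]                    # node feature dimension

    def pad(x, a):
        # Dummy nodes carry zero features and no incident edges.
        xp = np.zeros((n, d)); xp[:x.shape[0]] = x
        ap = np.zeros((n, n)); ap[:a.shape[0], :a.shape[1]] = a
        return xp, ap

    x1p, a1p = pad(x1, a1)
    x2p, a2p = pad(x2, a2)

    lam = rng.beta(alpha, alpha)       # mixing ratio (assumed Beta, as in Mixup)
    x = lam * x1p + (1 - lam) * x2p    # interpolate aligned node features
    a = lam * a1p + (1 - lam) * a2p    # interpolate aligned edge representations
    y = lam * y1 + (1 - lam) * y2      # interpolate one-hot labels
    return x, a, y
```

Note that interpolating dense adjacencies produces weighted (soft) edges; a message-passing GNN can typically consume these as edge weights during training.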