In this paper, we study the \textit{graph condensation} problem: compressing a
large, complex graph into a concise, synthetic graph that preserves the most
essential and discriminative structural and feature information of the
original. To this end, we propose the novel concept of a Shock Absorber (a type
of perturbation) that enhances the robustness and stability of the original
graphs against changes, in an adversarial training fashion. Concretely, (I) we
explicitly match the gradients of pre-selected graph neural networks (GNNs)
trained on a synthetic, simplified graph and on the original training graph at
regularly spaced intervals. (II) Before each update of the synthetic graph, a
Shock Absorber acts as a gradient attacker, maximizing the distance between the
synthetic dataset and the original graph by selectively perturbing the parts of
the synthetic graph that are underrepresented or insufficiently informative. We
iteratively repeat
the above two processes (I and II) in an adversarial training fashion to
maintain the highly-informative context without losing correlation with the
original dataset. More importantly, the Shock Absorber and the synthesized
graph share the backward pass in parallel, in a free-training manner; compared
with standard adversarial training, this introduces almost no additional time
overhead.
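
This two-step procedure can be summarized as a min--max objective. The
formulation below is a schematic sketch rather than the paper's exact notation:
$\mathcal{S}$ denotes the synthetic graph, $\mathcal{T}$ the original training
graph, $\delta$ the Shock Absorber perturbation with budget $\epsilon$,
$\theta_t$ the GNN parameters at matching step $t$, and $D(\cdot,\cdot)$ a
gradient-distance measure:
\begin{equation*}
\min_{\mathcal{S}} \; \max_{\|\delta\| \le \epsilon} \; \sum_{t}
D\Big(\nabla_{\theta}\,\mathcal{L}\big(\mathrm{GNN}_{\theta_t}(\mathcal{S} + \delta)\big),\;
\nabla_{\theta}\,\mathcal{L}\big(\mathrm{GNN}_{\theta_t}(\mathcal{T})\big)\Big),
\end{equation*}
where the inner maximization corresponds to step (II) and the outer
minimization to step (I).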
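
The free-training scheme exploits the fact that $\delta$ and $\mathcal{S}$
enter this loss through the same computational graph, so a single backward pass
yields both $\nabla_{\delta} D$ and $\nabla_{\mathcal{S}} D$. Under the same
illustrative notation, with step sizes $\alpha$ and $\eta$ and $\Pi$ denoting
projection onto the perturbation budget, the two updates
\begin{equation*}
\delta \leftarrow \Pi_{\|\delta\| \le \epsilon}\big(\delta + \alpha\,\nabla_{\delta} D\big),
\qquad
\mathcal{S} \leftarrow \mathcal{S} - \eta\,\nabla_{\mathcal{S}} D
\end{equation*}
reuse the same backward computation rather than requiring separate inner-loop
attack passes.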
We validate our framework on 8 datasets (3 graph classification and 5 node
classification datasets) and achieve strong results: for example, on Cora,
Citeseer and Ogbn-Arxiv, we obtain improvements of roughly 1.13% to 5.03% over
SOTA models. Moreover, our algorithm adds only about 0.2% to 2.2% extra time
overhead on Flickr, Citeseer and Ogbn-Arxiv. Compared with general adversarial
training, our approach improves time efficiency by nearly 4-fold.