Dataset distillation methods aim to compress a large dataset into a small set of synthetic samples, such that models trained on the synthetic set achieve performance competitive with regular training on the entire dataset. Among
recently proposed methods, Matching Training Trajectories (MTT) achieves
state-of-the-art performance on CIFAR-10/100 but has difficulty scaling to the ImageNet-1K dataset due to the large memory required for unrolled gradient computation through back-propagation. Surprisingly, we show
that there exists a procedure to exactly calculate the gradient of the
trajectory matching loss with a constant GPU memory requirement, independent of the number of unrolled steps. With this finding, the proposed memory-efficient
trajectory matching method can easily scale to ImageNet-1K with a 6x memory reduction while introducing only around 2% runtime overhead compared to the original MTT.
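To make the constant-memory idea concrete, here is a minimal PyTorch sketch of a two-pass, per-step accumulation scheme in this spirit: the unrolled SGD steps are replayed once without building a graph to obtain the student endpoint, then replayed again while backpropagating the fixed residual through one step at a time, so peak graph memory does not grow with the number of unrolled steps. This is a simplified illustration, not the paper's exact derivation: it uses a linear student, reuses a single synthetic batch at every step, keeps only each step's first-order term, and omits the loss normalization; all function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def inner_grad(theta, x, y, create_graph):
    """Cross-entropy gradient of a linear student w.r.t. its weights."""
    loss = F.cross_entropy(x @ theta, y)
    return torch.autograd.grad(loss, theta, create_graph=create_graph)[0]

def trajectory_matching_grad(theta0, theta_star, x_syn, y_syn, lr, num_steps):
    """d/dx_syn of ||theta_N - theta*||^2 with per-step (constant) graph memory.

    Pass 1 replays the N inner SGD steps without retaining any graph to
    obtain the student endpoint theta_N and the fixed residual
    v = 2 * (theta_N - theta*). Pass 2 replays the steps and, at each step,
    backpropagates v through that single step only, accumulating the
    per-step contribution -lr * (dg_i/dx_syn)^T v.
    """
    # ---- Pass 1: graph-free replay of the unrolled steps ----
    theta = theta0.detach().requires_grad_(True)
    for _ in range(num_steps):
        g = inner_grad(theta, x_syn.detach(), y_syn, create_graph=False)
        theta = (theta - lr * g).detach().requires_grad_(True)
    v = (2.0 * (theta - theta_star)).detach()  # dL/dtheta_N, now a constant

    # ---- Pass 2: re-run one step at a time, accumulating dL/dx_syn ----
    x_grad = torch.zeros_like(x_syn)
    theta = theta0.detach().requires_grad_(True)
    for _ in range(num_steps):
        x_step = x_syn.detach().requires_grad_(True)
        g = inner_grad(theta, x_step, y_syn, create_graph=True)
        # this step's contribution: -lr * (dg/dx)^T v; the graph for the
        # single step is freed right after this call
        x_grad += torch.autograd.grad(g, x_step, grad_outputs=-lr * v)[0]
        theta = (theta - lr * g).detach().requires_grad_(True)
    return x_grad

# toy usage (64-dim features, 10 classes, 50 synthetic images)
theta0, theta_star = torch.randn(64, 10), torch.randn(64, 10)
x_syn, y_syn = torch.randn(50, 64), torch.randint(0, 10, (50,))
g_x = trajectory_matching_grad(theta0, theta_star, x_syn, y_syn, 0.1, 30)
```

The second replay trades extra forward computation for graph storage, which is consistent with the small runtime overhead reported above.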
Further, we find that assigning soft labels to synthetic images is crucial for performance when scaling to a larger number of categories (e.g., 1,000), and we propose a novel soft-label version of trajectory matching that facilitates better alignment of model training trajectories on large datasets.
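As a hedged illustration of the soft-label component (our simplification, not necessarily the paper's exact labeling scheme), one can label each synthetic image with an expert model's predictive distribution and train the student with a soft cross-entropy; `teacher` and both function names below are hypothetical.

```python
import torch
import torch.nn.functional as F

def assign_soft_labels(teacher, x_syn, temperature=1.0):
    """Label each synthetic image with a teacher's predictive distribution."""
    with torch.no_grad():
        return F.softmax(teacher(x_syn) / temperature, dim=1)

def soft_cross_entropy(student_logits, soft_targets):
    """Cross-entropy against a full distribution instead of a class index."""
    log_probs = F.log_softmax(student_logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```

The intuition is that with 1,000 classes and very few images per class, a one-hot label discards inter-class information, whereas a distribution lets a single synthetic image carry evidence about several classes at once.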
The proposed algorithm not only surpasses the previous SOTA on ImageNet-1K under extremely low IPCs (Images Per Class) but also, for the first time, enables scaling up to 50 IPCs on ImageNet-1K. Our method (TESLA) achieves 27.9% test accuracy, a remarkable +18.2% margin over prior art.