Model-based deep learning has achieved astounding successes due in part to
the availability of large-scale real-world data. However, processing such
massive amounts of data comes at a considerable cost in terms of computation,
storage, training, and the search for good neural architectures. Dataset
distillation has thus recently come to the fore. This paradigm involves
distilling information from large real-world datasets into tiny and compact
synthetic datasets such that processing the latter yields performance
comparable to that of the former. State-of-the-art methods primarily rely on
learning the synthetic dataset by matching the gradients obtained during
training on the real and synthetic data. However, these gradient-matching methods suffer
from the accumulated trajectory error caused by the discrepancy between the
distillation and subsequent evaluation. To alleviate the adverse impact of this
accumulated trajectory error, we propose a novel approach that encourages the
optimization algorithm to seek a flat trajectory. We show that, with
regularization towards a flat trajectory, the weights trained on the synthetic
data are robust against perturbations caused by the accumulated error. Our method,
called Flat Trajectory Distillation (FTD), is shown to boost the performance of
gradient-matching methods by up to 4.7% on a subset of the ImageNet dataset
with higher-resolution images. We also validate the effectiveness and
generalizability of our method with datasets of different resolutions and
demonstrate its applicability to neural architecture search.
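
To make the gradient-matching objective mentioned above concrete, the following is a minimal sketch, not the paper's implementation: it learns a small synthetic set by minimizing the cosine distance between the gradients that a real batch and the synthetic batch induce on a classifier. The toy model, data shapes, and hyper-parameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch of gradient matching for dataset distillation.
# The network, image sizes, and optimizer settings are placeholders,
# not the configuration used in the paper.

def gradient_matching_loss(model, real_x, real_y, syn_x, syn_y):
    """Cosine distance between gradients induced by real and synthetic batches."""
    real_loss = F.cross_entropy(model(real_x), real_y)
    syn_loss = F.cross_entropy(model(syn_x), syn_y)

    params = list(model.parameters())
    # Gradients on real data serve as fixed targets (no higher-order graph).
    real_grads = torch.autograd.grad(real_loss, params)
    # Gradients on synthetic data keep the graph so the matching loss can
    # backpropagate into the synthetic images themselves.
    syn_grads = torch.autograd.grad(syn_loss, params, create_graph=True)

    return sum(1 - F.cosine_similarity(g_r.flatten(), g_s.flatten(), dim=0)
               for g_r, g_s in zip(real_grads, syn_grads))

# Toy setup: a linear classifier and one learnable synthetic image per class.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)  # learnable synthetic images
syn_y = torch.arange(10)
opt_syn = torch.optim.SGD([syn_x], lr=0.1)

real_x = torch.randn(64, 3, 32, 32)                     # stand-in for a real batch
real_y = torch.randint(0, 10, (64,))

loss = gradient_matching_loss(model, real_x, real_y, syn_x, syn_y)
opt_syn.zero_grad()
loss.backward()
opt_syn.step()
```

The flat-trajectory regularization that distinguishes FTD from plain gradient matching is not shown here; the sketch only illustrates the matching objective on which the method builds.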