Pre-training is known to generate universal representations for downstream
tasks in large-scale deep learning models such as large language models. Existing
literature, e.g., \cite{kim2020adversarial}, empirically observes that
downstream tasks can inherit the adversarial robustness of the pre-trained
model. We provide theoretical justifications for this robustness inheritance
phenomenon. Our theoretical results reveal that, in two-layer neural networks,
feature purification plays an important role in connecting the adversarial
robustness of the pre-trained model to that of the downstream tasks. Specifically, we
show that (i) with adversarial training, each hidden node tends to pick up only
one feature (or a few features); (ii) without adversarial training, the hidden
nodes can be vulnerable to attacks. This observation holds for both supervised
pre-training and contrastive learning. With purified nodes, clean training
suffices to achieve adversarial robustness in downstream tasks.