Adversarial training is one of the best-performing methods for improving the
robustness of deep language models. However, robust models come at the cost of
high training time, as they require multi-step gradient ascent or word
substitutions to obtain adversarial samples. In addition, these generated
samples are deficient in grammatical quality and semantic consistency, which
impairs the effectiveness of adversarial training. To address these problems,
we introduce a novel, effective procedure that instead performs adversarial training with
only clean data. Our procedure, distribution shift risk minimization (DSRM),
estimates the adversarial loss by perturbing the probability distribution of the
input data rather than the input embeddings. This formulation results in a robust
model that minimizes the expected global loss under adversarial attacks. Our
approach requires zero adversarial samples for training and reduces time
consumption by up to 70% compared to the current best-performing adversarial
training methods. Experiments demonstrate that DSRM considerably improves
BERT's resistance to textual adversarial attacks and achieves state-of-the-art
robust accuracy on various benchmarks.
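
To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of training under a shifted batch distribution: instead of crafting adversarial inputs, the per-example losses of a clean batch are reweighted toward a worst-case distribution within a small shift budget, and the model is trained on that reweighted loss. The function name `worst_case_batch_loss`, the softmax tilting, and the `eps` temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def worst_case_batch_loss(per_example_losses: torch.Tensor,
                          eps: float = 0.1) -> torch.Tensor:
    """Reweight a clean batch toward its worst-case distribution.

    Illustrative assumption: we take the adversarial distribution within
    a KL ball around the uniform batch weights, which has a closed-form
    softmax solution; `eps` acts as a temperature controlling the shift.
    """
    # Tilt probability mass toward high-loss examples (detached so the
    # weights themselves are not differentiated through).
    weights = F.softmax(per_example_losses.detach() / eps, dim=0)
    # Expected loss under the shifted distribution; no adversarial
    # samples are generated, only clean-data losses are reweighted.
    return (weights * per_example_losses).sum()

# Toy usage: a linear classifier trained on clean data only.
torch.manual_seed(0)
model = torch.nn.Linear(16, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))

for _ in range(10):
    losses = F.cross_entropy(model(x), y, reduction="none")
    loss = worst_case_batch_loss(losses)  # single clean forward pass
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because each step needs only one forward pass over clean data rather than multi-step gradient ascent per sample, this style of objective is where the claimed training-time savings would come from.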