Fast Adversarial Training (FAT) has gained increasing attention within the
research community owing to its efficacy in improving adversarial robustness.
A particularly noteworthy challenge in this field is catastrophic overfitting (CO).
Although existing FAT approaches have made strides in mitigating CO, the gain in
adversarial robustness comes at the cost of a non-negligible decline in
classification accuracy on clean samples. To tackle this issue, we first
employ the feature activation differences between clean and adversarial
examples to analyze the underlying causes of CO. Intriguingly, our findings
reveal that CO can be attributed to the feature coverage induced by a few
specific pathways. By intentionally manipulating the feature activation differences
in these pathways with well-designed regularization terms, we can effectively
either mitigate or induce CO, providing further evidence for this observation.
Notably, models that are trained stably with these terms outperform prior FAT
methods. On this basis, we harness CO to achieve `attack obfuscation', aiming to
bolster model performance. Consequently, models suffering from CO can attain
optimal classification accuracy on both clean and adversarial data when random
noise is added to the inputs during evaluation. We also
validate their robustness against transferred adversarial examples and the
necessity of inducing CO to improve robustness. Hence, CO may not be a problem
that has to be solved.
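
Below is a minimal, illustrative sketch of the evaluation-time input randomization described above. It is not the paper's exact procedure: the Gaussian noise type, the scale `sigma`, and the model/loader names are assumptions introduced here for illustration.

```python
import torch

def evaluate_with_input_noise(model, loader, sigma=8 / 255, device="cuda"):
    """Evaluate a (possibly CO-affected) model with random noise added to inputs.

    Assumptions: Gaussian noise with scale `sigma` and inputs in [0, 1];
    the paper's actual noise distribution and scale may differ.
    """
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            # Add random noise at evaluation time, keeping pixels in range.
            noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
            preds = model(noisy).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total
```

The same routine can be applied to clean and adversarially perturbed batches alike, which is how one would probe whether the noise restores accuracy on both kinds of data.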