Deep Neural Networks are extensively used in communication systems, and in
Automatic Modulation Classification (AMC) in particular. However, they are
highly susceptible to small adversarial perturbations that are carefully
crafted to change the network's decision. In this work, we combine ideas from
knowledge distillation and adversarial training to construct more robust AMC systems. We
first show how the quality of the training data affects both the
accuracy and the robustness of the model. We then propose to use the Maximum
Likelihood function, which can solve the AMC problem in offline settings, to
generate better training labels. These labels teach the model to be uncertain
in challenging conditions, which increases its accuracy and, when combined
with adversarial training, its robustness.
Interestingly, we observe that this increase in performance transfers to online
settings, where the Maximum Likelihood function cannot be used in practice.
Overall, this work highlights the potential of learning to be uncertain in
difficult scenarios, rather than directly removing label noise.
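To make the distillation-plus-adversarial-training recipe concrete, the sketch below (PyTorch; not the authors' implementation) trains a toy AMC classifier against soft labels, standing in for normalized Maximum Likelihood scores, on both clean and FGSM-perturbed inputs. The model architecture, the attack budget, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: distillation-style training on ML-derived soft labels,
# combined with one-step (FGSM) adversarial training. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallAMCNet(nn.Module):
    """Toy 1-D CNN over I/Q frames (2 channels x T samples)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

def fgsm_perturb(model, x, soft_targets, eps):
    """One-step adversarial example crafted against the soft-label loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.kl_div(F.log_softmax(model(x_adv), dim=1), soft_targets,
                    reduction="batchmean")
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def train_step(model, optimizer, x, soft_targets, eps=0.01):
    """Fit the soft labels on clean and adversarial inputs, so the model
    learns to stay uncertain where the likelihoods are ambiguous (low SNR)."""
    x_adv = fgsm_perturb(model, x, soft_targets, eps)
    optimizer.zero_grad()
    loss = 0.0
    for inputs in (x, x_adv):
        log_probs = F.log_softmax(model(inputs), dim=1)
        loss = loss + F.kl_div(log_probs, soft_targets, reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with stand-in data: in the offline setting described above, soft_targets
# would come from per-class Maximum Likelihood scores rather than random values.
model = SmallAMCNet(num_classes=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 2, 128)                           # batch of I/Q frames
soft_targets = F.softmax(torch.randn(8, 4), dim=1)   # placeholder ML posteriors
print(train_step(model, opt, x, soft_targets))
```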