Quantum classifiers have recently been shown to be vulnerable to adversarial
attacks, in which imperceptible perturbations cause them to misclassify inputs.
In this paper, we present a first theoretical study showing that adding quantum
random rotation noise can improve the robustness of quantum classifiers against
adversarial attacks. We draw a connection to differential privacy and
demonstrate that a quantum classifier trained in the natural presence of this
additive noise is differentially private. Finally, we derive a certified
robustness bound that enables quantum classifiers to defend against adversarial
examples, and we support it with experimental results.

Comment: Submitted to IEEE ICASSP 202
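The core idea, random rotation noise acting as a smoothing channel that damps small adversarial perturbations, can be illustrated with a minimal single-qubit simulation. This is an illustrative sketch, not the paper's method: the classifier, the Gaussian rotation-angle noise, and the parameters `sigma` and `shots` are all assumptions introduced here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit rotation about the Y axis (real 2x2 unitary)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(state):
    """Toy classifier output: probability of measuring |0>."""
    return abs(state[0]) ** 2

def noisy_classify(state, sigma=0.3, shots=2000):
    """Average the classifier output under random RY rotation noise.

    The Gaussian-distributed rotation angle plays the role of the
    additive noise channel; averaging over it smooths the classifier's
    response to small input perturbations (illustrative assumption).
    """
    probs = [classify(ry(rng.normal(0.0, sigma)) @ state)
             for _ in range(shots)]
    return float(np.mean(probs))

psi = np.array([1.0, 0.0])   # clean input |0>
adv = ry(0.1) @ psi          # small adversarial rotation of the input

# The noiseless classifier is fully confident on the clean input,
# while the smoothed classifier's confidence is slightly reduced:
# this reduced sensitivity is what the differential-privacy argument
# formalizes into a certified robustness bound.
print(classify(psi), classify(adv))
print(noisy_classify(psi), noisy_classify(adv))
```

The smoothed output is a Monte Carlo estimate; in the paper's setting the analogous averaging arises from the quantum noise channel itself rather than classical sampling.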