Adversarially Robust Distillation
Knowledge distillation is effective for producing small, high-performance
neural networks for classification, but these small networks are vulnerable to
adversarial attacks. This paper studies how adversarial robustness transfers
from teacher to student during knowledge distillation. First, we find that
the student may inherit a large amount of robustness even when distilled on
only clean images. Second, we introduce Adversarially Robust Distillation (ARD)
for distilling robustness onto student networks. In addition to producing small
models with high test accuracy like conventional distillation, ARD also passes
the superior robustness of large networks onto the student. In our experiments,
we find that ARD student models decisively outperform adversarially trained
networks of identical architecture in terms of robust accuracy, surpassing
state-of-the-art methods on standard robustness benchmarks. Finally, we adapt
recent fast adversarial training methods to ARD for accelerated robust
distillation.
Comment: Accepted to the AAAI Conference on Artificial Intelligence, 2020
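To make the flavor of such an objective concrete, here is a minimal sketch assuming PyTorch; the inner PGD attack, the temperature, and the weighting `lam` are illustrative choices of ours, not the paper's exact formulation or hyperparameters:

```python
# Hypothetical ARD-style training objective (a sketch, not the paper's code).
import torch
import torch.nn.functional as F

def pgd_against_teacher(student, teacher_probs, x, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: perturb x so the student's output diverges
    as much as possible from the teacher's output on the clean input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(student(x_adv), dim=1),
                      teacher_probs, reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def ard_loss(student, teacher, x, y, temperature=30.0, lam=0.9):
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher only ever sees clean images
    x_adv = pgd_against_teacher(student, F.softmax(teacher_logits, dim=1), x)
    # Match the student's softened output on adversarial images to the
    # teacher's softened output on clean images, plus a clean CE term.
    kl = F.kl_div(F.log_softmax(student(x_adv) / temperature, dim=1),
                  F.softmax(teacher_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return lam * kl + (1 - lam) * F.cross_entropy(student(x), y)
```

Under these assumptions, only the small student network is attacked during training while the robust teacher is queried on clean data, which also suggests why fast adversarial training methods can be slotted into the inner loop.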
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Spiking Neural Networks (SNNs) claim to present many advantages in terms of
biological plausibility and energy efficiency compared to standard Deep Neural
Networks (DNNs). Recent works have shown that DNNs are vulnerable to
adversarial attacks, i.e., small perturbations added to the input data can lead
to targeted or random misclassifications. In this paper, we investigate the
key research question: "Are SNNs secure?" Towards this, we perform a
comparative study of the security vulnerabilities of SNNs and DNNs under
adversarial noise. We then propose a novel black-box attack methodology,
i.e., one requiring no knowledge of the internal structure of the SNN, which
employs a greedy heuristic to automatically generate imperceptible and
robust adversarial examples (i.e., attack images) for the given SNN. We perform
an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN
having the same number of layers and neurons (to obtain a fair comparison), in
order to study the efficiency of our methodology and to understand the
differences between SNNs and DNNs with respect to adversarial examples. Our
work opens new avenues of research into the robustness of SNNs, considering
their similarities to the human brain's functionality.
Comment: Accepted for publication at the 2020 International Joint Conference
on Neural Networks (IJCNN)
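For intuition only, a greedy query-only attack in this spirit might look like the sketch below; `predict_proba`, the per-pixel `step`, and the random candidate sampling are hypothetical stand-ins of ours, and the paper's specific heuristic and imperceptibility criteria are not reproduced:

```python
# Hypothetical greedy black-box attack sketch: needs only query access
# to the model's class probabilities, not its internal structure.
import numpy as np

def greedy_blackbox_attack(predict_proba, image, true_label,
                           step=0.05, max_iters=200, candidates=16):
    """At each iteration, probe a random sample of pixels with a +/- step
    and commit the single change that most reduces the model's confidence
    in the true class; stop once the top class flips."""
    adv = image.astype(np.float64).copy().reshape(-1)
    shape = image.shape
    for _ in range(max_iters):
        probs = predict_proba(adv.reshape(shape))
        if probs.argmax() != true_label:
            return adv.reshape(shape)        # misclassification achieved
        best_drop, best_idx, best_val = 0.0, None, None
        for idx in np.random.choice(adv.size, size=candidates, replace=False):
            for delta in (step, -step):
                trial = adv.copy()
                trial[idx] = np.clip(trial[idx] + delta, 0.0, 1.0)
                drop = probs[true_label] - predict_proba(trial.reshape(shape))[true_label]
                if drop > best_drop:
                    best_drop, best_idx, best_val = drop, idx, trial[idx]
        if best_idx is None:
            break                            # no probed change helps; give up
        adv[best_idx] = best_val
    return adv.reshape(shape)
```

Because the loop relies only on probability outputs, the same code can be pointed at an SNN's decoded output or at a conventional DNN, which is what makes a like-for-like robustness comparison possible.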