Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Spiking Neural Networks (SNNs) are claimed to offer many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs w.r.t. adversarial noise. We then propose a novel black-box attack methodology, i.e., one requiring no knowledge of the SNN's internal structure, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. adversarial examples. Our work opens new avenues of research into the robustness of SNNs, considering their similarity to the functionality of the human brain.
Comment: Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN).
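The black-box setting described in the abstract, where the attacker only observes the classifier's outputs, can be illustrated with a minimal query-only sketch. The greedy loop below is an assumption for illustration, not the paper's published algorithm: it probes candidate pixel perturbations through an opaque predict function and keeps the single change that most reduces the model's confidence in the true class.

```python
import numpy as np

def greedy_blackbox_attack(predict, image, true_label, eps=0.1, max_steps=200):
    """Generic query-only greedy attack sketch (illustrative, not the paper's method).

    `predict` is assumed to return a vector of class probabilities for a
    single image in [0, 1]; the attacker never sees the model internals.
    """
    adv = image.copy()
    h, w = adv.shape
    for _ in range(max_steps):
        probs = predict(adv)
        if np.argmax(probs) != true_label:
            return adv  # misclassification achieved
        best_drop, best_pixel, best_delta = 0.0, None, 0.0
        # Greedy step: try a small +/- perturbation on a random subset of pixels
        # and keep the one that lowers confidence in the true class the most.
        for idx in np.random.choice(h * w, size=32, replace=False):
            r, c = divmod(int(idx), w)
            for delta in (eps, -eps):
                trial = adv.copy()
                trial[r, c] = np.clip(trial[r, c] + delta, 0.0, 1.0)
                drop = probs[true_label] - predict(trial)[true_label]
                if drop > best_drop:
                    best_drop, best_pixel, best_delta = drop, (r, c), delta
        if best_pixel is None:
            break  # no single-pixel change helps any further
        r, c = best_pixel
        adv[r, c] = np.clip(adv[r, c] + best_delta, 0.0, 1.0)
    return adv
```

A full attack of the kind the abstract describes would additionally bound the total distortion to keep the example imperceptible and test it for robustness; those criteria are omitted here for brevity.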
Weighted-Sampling Audio Adversarial Example Attack
Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition systems. Thorough studies of how to effectively generate adversarial examples are essential to prevent potential attacks. Despite much research on this topic, the efficiency and robustness of existing works are not yet satisfactory. In this paper, we propose weighted-sampling audio adversarial examples, focusing on the number and the weights of distortions to reinforce the attack. Further, we apply a denoising method in the loss function to make the adversarial attack more imperceptible. Experiments show that our method is the first in the field to generate audio adversarial examples with low noise and high audio robustness at a time cost on the order of minutes.
Comment: https://aaai.org/Papers/AAAI/2020GB/AAAI-LiuXL.9260.pd
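The idea of weighting the distortion and adding a denoising term to the loss can be captured by a small hedged sketch. The per-sample weighting, the CTC-based attack term, and the smoothing penalty below are assumptions chosen for illustration; the paper's actual objective may differ.

```python
import torch
import torch.nn.functional as F

def adversarial_audio_loss(model, audio, delta, weights, target, lam=0.05):
    """Hedged sketch of a weighted-distortion audio adversarial objective.

    Assumptions (not from the paper): `model` maps a waveform to per-frame
    logits of shape (time, vocab), `weights` scales the perturbation per
    audio sample, and `lam` trades attack strength against imperceptibility.
    """
    adv = audio + weights * delta                         # weighted distortion
    log_probs = model(adv).log_softmax(-1).unsqueeze(1)   # (T, N=1, vocab)
    input_lengths = torch.tensor([log_probs.size(0)])
    target_lengths = torch.tensor([target.size(0)])
    # Attack term: drive the recognizer toward the adversarial transcript.
    attack = F.ctc_loss(log_probs, target.unsqueeze(0),
                        input_lengths, target_lengths)
    # Denoising-style penalty: compare the perturbation to a smoothed copy of
    # itself, penalizing the high-frequency energy that tends to be most audible.
    smoothed = F.avg_pool1d(delta.view(1, 1, -1),
                            kernel_size=9, stride=1, padding=4).view(-1)
    return attack + lam * (delta - smoothed).pow(2).mean()
```

The smoothing penalty here stands in for the denoising method mentioned in the abstract: it discourages harsh, high-frequency perturbation components while leaving the attack term free to steer the transcription.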