NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips
Due to their proven efficiency, machine-learning systems are deployed in a
wide range of complex real-life problems. In particular, Spiking Neural
Networks (SNNs) have emerged as a promising solution to the accuracy,
resource-utilization, and energy-efficiency challenges in machine-learning
systems. While these systems are going mainstream, they have inherent security
and reliability issues. In this paper, we propose NeuroAttack, a cross-layer
attack that threatens the integrity of SNNs by exploiting low-level reliability
issues through a high-level attack. Particularly, we trigger a fault-injection
based sneaky hardware backdoor through a carefully crafted adversarial input
noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious
integrity threat to state-of-the-art machine-learning techniques.

Comment: Accepted for publication at the 2020 International Joint Conference
on Neural Networks (IJCNN).
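To make the threat model concrete: the attack described above relies on hardware-level bit-flips corrupting stored network parameters. The following toy sketch (not the paper's actual fault-injection mechanism, which uses externally triggered hardware faults) only illustrates why a single bit-flip in a weight's IEEE-754 encoding can be so damaging; the function name `flip_bit` and the chosen bit position are illustrative assumptions.

```python
# Illustrative sketch only: NeuroAttack's real mechanism is a hardware
# fault injection triggered by crafted adversarial inputs. This toy
# example merely shows the magnitude of damage one bit-flip can cause
# to a stored float32 weight.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit                      # toggle the chosen bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

w = 0.5                     # a benign weight
w_faulty = flip_bit(w, 30)  # flip a high bit of the exponent field
print(w, w_faulty)          # the weight's magnitude explodes to ~1.7e38
```

Flipping an exponent bit changes the weight by many orders of magnitude, which is why even a handful of well-chosen bit-flips can subvert a network's predictions.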