Physical Adversarial Attack meets Computer Vision: A Decade Survey
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their vulnerability to adversarial attacks remains a serious concern. A series of works has shown that adding carefully crafted perturbations to images can cause catastrophic drops in DNN performance, and this phenomenon exists not only in the digital space but also in the physical space. Assessing the security of DNN-based systems is therefore critical for deploying them safely in the real world, especially in security-critical applications, e.g., autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, which is essential for performing attacks in the physical world. Next, we present physical adversarial attack methods by task (classification, detection, and re-identification) and describe how they address the trilemma of effectiveness, stealthiness, and robustness. Finally, we discuss current challenges and potential future directions.

Comment: 32 pages. Under Review
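The perturbation mechanism this abstract alludes to can be made concrete with a short digital-space example. The sketch below is not from the survey; it assumes a generic PyTorch image classifier and uses the fast gradient sign method (FGSM) with an illustrative epsilon of 8/255.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """One-step, L-infinity-bounded adversarial perturbation (FGSM)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Move each pixel in the direction that increases the loss,
    # then clamp back to the valid [0, 1] image range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Physical attacks differ in that the perturbation must survive printing, lighting, and viewpoint changes rather than being added directly to pixels, which is where the trilemma of effectiveness, stealthiness, and robustness discussed in the survey comes in.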
DVS-Attacks: Adversarial Attacks on Dynamic Vision Sensors for Spiking Neural Networks
Spiking Neural Networks (SNNs), despite being energy-efficient when implemented on neuromorphic hardware and coupled with event-based Dynamic Vision Sensors (DVS), are vulnerable to security threats such as adversarial attacks, i.e., small perturbations added to the input to induce a misclassification. In this work, we propose DVS-Attacks, a set of stealthy yet efficient adversarial attack methodologies designed to perturb the event sequences that form the input of SNNs. First, we show that noise filters for DVS can be used as defense mechanisms against adversarial attacks. We then implement several attacks and test them in the presence of two types of noise filters for DVS cameras. The experimental results show that the filters can only partially defend the SNNs against our proposed DVS-Attacks. Using the best settings for the noise filters, our proposed Mask Filter-Aware Dash Attack reduces the accuracy by more than 20% on the DVS-Gesture dataset and by more than 65% on the MNIST dataset, compared to the original clean frames. The source code of all the proposed DVS-Attacks and noise filters is
released at https://github.com/albertomarchisio/DVS-Attacks.

Comment: Accepted for publication at IJCNN 2021
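To make the attack surface described above more tangible, here is an illustrative sketch of an event-stream perturbation and a simple background-activity noise filter. It is not the authors' implementation (their code is in the repository linked above); the (x, y, timestamp, polarity) array layout, sensor size, and dt threshold are assumptions.

```python
import numpy as np

def inject_noise_events(events, n_noise, width=128, height=128, seed=None):
    """Perturb a DVS event stream by injecting spurious events.
    `events` has one row per event with columns (x, y, timestamp_us, polarity)."""
    rng = np.random.default_rng(seed)
    t_lo, t_hi = int(events[:, 2].min()), int(events[:, 2].max())
    noise = np.column_stack([
        rng.integers(0, width, n_noise),        # x coordinate
        rng.integers(0, height, n_noise),       # y coordinate
        rng.integers(t_lo, t_hi + 1, n_noise),  # timestamp
        rng.integers(0, 2, n_noise),            # polarity (0 or 1)
    ])
    perturbed = np.vstack([events, noise])
    return perturbed[np.argsort(perturbed[:, 2])]  # keep the stream time-ordered

def background_activity_filter(events, width=128, height=128, dt=5000):
    """Simple DVS noise filter: keep an event only if a pixel in its 3x3
    neighbourhood fired within the last `dt` microseconds."""
    last_ts = np.full((width + 2, height + 2), -np.inf)  # padded timestamp map
    keep = np.zeros(len(events), dtype=bool)
    for i, (x, y, t, _) in enumerate(events):
        xi, yi = int(x) + 1, int(y) + 1
        keep[i] = (t - last_ts[xi - 1:xi + 2, yi - 1:yi + 2]).min() <= dt
        last_ts[xi, yi] = t
    return events[keep]
```

A filtered, attacked stream would be obtained with, e.g., `background_activity_filter(inject_noise_events(events, n_noise=1000))`, mirroring the abstract's setting of attacks evaluated in the presence of DVS noise filters.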