Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs
In the physical world, light affects the performance of deep neural networks.
Many products based on deep neural networks have now been put into daily use,
yet there has been little research on how light affects the performance of
these models. Adversarial perturbations generated by light, however, may have
extremely dangerous effects on such systems. In this work, we propose an attack
method called adversarial neon beam (AdvNB), which executes a physical attack
by obtaining the physical parameters of adversarial neon beams with very few
queries. Experiments show that our algorithm achieves strong attack performance
in both digital and physical tests: the attack success rate reaches 99.3% in
the digital environment and 100% in the physical environment. Compared with
state-of-the-art physical attack methods, our method achieves better
concealment of the physical perturbation. In addition, by analyzing the
experimental data, we reveal some new phenomena brought about by the
adversarial neon beam attack.
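As a rough illustration of the query-based setup described in the abstract (not the authors' implementation), the sketch below randomly searches the physical parameters of a simulated neon beam (endpoints, colour, width, opacity) rendered onto an image until a black-box classifier's prediction flips or a small query budget runs out. The parameter ranges and the `query_model` interface are assumptions made for illustration only.

```python
# Illustrative sketch (not the AdvNB code): random search over simulated
# neon-beam parameters against a black-box classifier under a query budget.
import numpy as np
from PIL import Image, ImageDraw


def render_beam(image, params):
    """Composite a semi-transparent coloured line ("neon beam") onto an image."""
    x0, y0, x1, y1, r, g, b, width, alpha = params
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.line([(x0, y0), (x1, y1)],
              fill=(int(r), int(g), int(b), int(alpha)), width=int(width))
    return Image.alpha_composite(image.convert("RGBA"), overlay).convert("RGB")


def neon_beam_attack(image, true_label, query_model, max_queries=200, seed=0):
    """query_model(img) -> predicted label; returns (params, image) or None."""
    rng = np.random.default_rng(seed)
    w, h = image.size
    for _ in range(max_queries):
        params = (rng.uniform(0, w), rng.uniform(0, h),        # beam endpoints
                  rng.uniform(0, w), rng.uniform(0, h),
                  *rng.uniform(0, 255, size=3),                # beam colour
                  rng.integers(2, 20), rng.integers(80, 200))  # width, opacity
        candidate = render_beam(image, params)
        if query_model(candidate) != true_label:               # prediction flipped
            return params, candidate
    return None                                                # budget exhausted
```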
Adversarial Defense via Neural Oscillation inspired Gradient Masking
Spiking neural networks (SNNs) attract great attention due to their low power
consumption, low latency, and biological plausibility. As they are widely
deployed in neuromorphic devices for low-power brain-inspired computing,
security issues become increasingly important. However, compared to deep neural
networks (DNNs), SNNs currently lack specifically designed defense methods
against adversarial attacks. Inspired by neural membrane potential oscillation,
we propose a novel neural model that incorporates a bio-inspired oscillation
mechanism to enhance the security of SNNs. Our experiments show that SNNs with
neural oscillation neurons resist adversarial attacks better than ordinary SNNs
with LIF neurons across a variety of architectures and datasets. Furthermore,
we propose a defense method that changes the model's gradients by replacing the
form of the oscillation, which hides the original training gradients and
misleads the attacker into using the gradients of 'fake' neurons to generate
invalid adversarial samples. Our experiments suggest that the proposed defense
method effectively resists both single-step and iterative attacks, with defense
effectiveness comparable to adversarial training on DNNs at much lower
computational cost. To the best of our knowledge, this is the first work that
establishes adversarial defense through masking surrogate gradients on SNNs.
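To make the surrogate-gradient-masking idea concrete, the sketch below is a minimal, assumption-laden illustration rather than the paper's oscillation model: it shows one LIF spiking step whose surrogate gradient can be swapped. Training uses a rectangular surrogate, while a 'fake' variant returns a deliberately mismatched surrogate in the backward pass, so a gradient-based attacker computes misleading adversarial directions. The names `SpikeFn` and `MaskedSpikeFn` and the threshold and leak values are illustrative choices, not the authors'.

```python
# Illustrative sketch (not the paper's model): a LIF spike function with a
# surrogate gradient, plus a variant whose backward pass is mismatched so
# that gradient-based attackers see "fake" gradients.
import torch


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (used in training)."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        surrogate = (x.abs() < 0.5).float()                 # true surrogate window
        return grad_out * surrogate


class MaskedSpikeFn(torch.autograd.Function):
    """Identical forward spikes, but the backward surrogate is shifted,
    hiding the gradients actually used for training."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        fake_surrogate = ((x - 1.0).abs() < 0.5).float()    # misleading gradient
        return grad_out * fake_surrogate


def lif_step(v, x, tau=2.0, v_thresh=1.0, spike_fn=SpikeFn.apply):
    """One leaky integrate-and-fire step: leak, integrate, spike, hard reset."""
    v = v + (x - v) / tau                # leaky integration of the input current
    s = spike_fn(v - v_thresh)           # emit a spike when threshold is crossed
    v = v * (1.0 - s)                    # reset membrane potential after a spike
    return v, s
```

At attack time one would expose the network built with `MaskedSpikeFn.apply` to the adversary while keeping the `SpikeFn`-trained weights, so single-step and iterative gradient attacks follow the fake surrogate rather than the training gradients.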
- …