184 research outputs found
Adversarial attacks hidden in plain sight
Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high confidence. Several defensive approaches increase robustness against adversarial attacks, demanding attacks of greater magnitude, which leads to visible artifacts. By taking human visual perception into account, we devise a technique that hides such adversarial attacks in regions of high complexity, so that they are imperceptible even to an astute observer. We carry out a user study on classifying adversarially modified images to validate the perceptual quality of our approach, and find significant evidence for its concealment with regard to human visual perception.
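A minimal sketch of the idea of concentrating a perturbation in visually complex regions, assuming local standard deviation as the complexity measure; the function names, the measure, and the `floor` parameter are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def local_complexity(img, k=3):
    """Per-pixel local standard deviation as a crude complexity map."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # All k-by-k neighborhoods of the padded image, shape (H, W, k, k).
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return windows.std(axis=(-1, -2))

def mask_perturbation(delta, img, floor=0.05):
    """Scale a perturbation so it concentrates in high-complexity regions."""
    c = local_complexity(img)
    c = c / (c.max() + 1e-12)            # normalize to [0, 1]
    return delta * np.maximum(c, floor)  # suppress delta in flat regions
```

On a perfectly flat image the perturbation is scaled down to the floor everywhere, while on textured images it survives mostly where local variation is high, which is the intuition the abstract describes.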
Adversarial Example Detection and Classification With Asymmetrical Adversarial Training
The vulnerabilities of deep neural networks to adversarial examples have become a significant concern for deploying these models in sensitive domains. Devising a definitive defense against such attacks has proven to be challenging, and methods that rely on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism. In this paper we first present an adversarial example detection method that provides a performance guarantee against norm-constrained adversaries. The method is based on the idea of training adversarially robust subspace detectors using asymmetrical adversarial training (AAT). The novel AAT objective presents a minimax problem similar to that of GANs; it has the same convergence property, and consequently supports the learning of class-conditional distributions. We first demonstrate that the minimax problem can be reasonably solved by a PGD attack, and then use the learned class-conditional generative models to define generative detection/classification models that are both robust and more interpretable. We provide comprehensive evaluations of the above methods, and demonstrate their competitive performance and compelling properties on adversarial detection and robust classification problems.
Comment: ICLR 202
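To illustrate the PGD inner maximization the abstract refers to, here is a minimal sketch on a linear logistic model rather than a deep network; the function name, the model, and the hyperparameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient ascent on the logistic loss of a linear model.

    Maximizes the loss for label y within an L-infinity ball of radius eps
    around x, taking signed gradient steps of size alpha.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid probability
        grad = (p - y) * w                      # d(loss)/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv
```

The sign-of-gradient step and the projection back into the epsilon-ball are the two defining ingredients of PGD; in AAT this inner loop would supply the adversarial samples against which each class-conditional detector is trained.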
- …