Adversarial attacks hidden in plain sight
Convolutional neural networks have been used to achieve a string of successes
during recent years, but their lack of interpretability remains a serious
issue. Adversarial examples are designed to deliberately fool neural networks
into making any desired incorrect classification, potentially with very high
certainty. Several defensive approaches increase robustness against adversarial
attacks, requiring attacks of greater magnitude, which in turn produce visible
artifacts. By considering human visual perception, we devise a technique that
hides such adversarial attacks in regions of high complexity, so that they are
imperceptible even to an astute observer. We carry out a user study on
classifying adversarially modified images to validate the perceptual quality of
our approach, and we find significant evidence that the attacks are concealed
with regard to human visual perception.
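To make the idea concrete, the following minimal sketch (not the paper's implementation) concentrates a norm-bounded perturbation in highly textured regions: it weights the perturbation by a local standard-deviation map, used here as a simple stand-in for a perceptual complexity measure. The window size, perturbation budget, and function names are illustrative assumptions.

```python
# Sketch: scale an adversarial perturbation by a local-complexity map so it
# concentrates in textured regions, where human vision is less sensitive.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, window=7):
    """Per-pixel standard deviation in a sliding window (a simple complexity proxy)."""
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image ** 2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def hide_perturbation(image, perturbation, window=7, eps=8 / 255):
    """Modulate `perturbation` by normalized local complexity, then clip to an L-inf budget."""
    complexity = local_std(image, window)
    weight = complexity / (complexity.max() + 1e-12)   # ~0 in flat regions, ~1 in textured ones
    hidden = np.clip(perturbation * weight, -eps, eps)
    return np.clip(image + hidden, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))                          # toy grayscale image
    delta = rng.uniform(-8 / 255, 8 / 255, (32, 32))    # toy perturbation
    adv = hide_perturbation(img, delta)
    print("max pixel change:", np.abs(adv - img).max())
```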
Adversarial Example Detection and Classification With Asymmetrical Adversarial Training
The vulnerabilities of deep neural networks against adversarial examples have
become a significant concern for deploying these models in sensitive domains.
Devising a definitive defense against such attacks has proven to be challenging,
and methods relying on detecting adversarial samples are only valid when the
attacker is oblivious to the detection mechanism. In this paper we first present
an adversarial example detection method that provides a performance guarantee
against norm-constrained adversaries. The method is based on the idea of training
adversarially robust subspace detectors using asymmetrical adversarial training
(AAT). The novel AAT objective presents a minimax problem similar to that of
GANs; it has the same convergence property and consequently supports the
learning of class-conditional distributions. We demonstrate that the minimax
problem can be reasonably solved with a PGD attack, and then use the learned
class-conditional generative models to define generative detection/classification
models that are both robust and more interpretable. We provide comprehensive
evaluations of the above methods and demonstrate their competitive performance
and compelling properties on adversarial detection and robust classification problems.
Comment: ICLR 202
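As a rough illustration of the asymmetric setup, here is a minimal PyTorch sketch (not the authors' code; model, hyperparameters, and data are assumptions): a per-class binary detector is trained on clean in-class samples as positives, while its negatives are PGD-perturbed out-of-class samples crafted to maximize the detector's score, so only the attack side of the minimax is adversarial.

```python
# Sketch of asymmetrical adversarial training for a single class detector.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(detector, x, eps=0.3, alpha=0.02, steps=40):
    """Inner maximizer: perturb off-class inputs to maximize the detector's logit."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = detector(x + delta).sum()          # push the detector's score up
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)               # stay inside the L-inf ball
        delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

def detector_step(detector, optimizer, x_in, x_out, eps=0.3):
    """One asymmetric step: clean in-class positives, adversarial out-of-class negatives."""
    x_adv = pgd_attack(detector, x_out, eps=eps)
    optimizer.zero_grad()
    pos, neg = detector(x_in), detector(x_adv)
    loss = (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos))
            + F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    detector = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.SGD(detector.parameters(), lr=0.01)
    x_in, x_out = torch.rand(8, 1, 28, 28), torch.rand(8, 1, 28, 28)   # toy batches
    print(detector_step(detector, opt, x_in, x_out))
```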
Robust Decision Trees Against Adversarial Examples
Although adversarial examples and model robustness have been extensively
studied in the context of linear models and neural networks, research on this
issue in tree-based models and how to make tree-based models robust against
adversarial examples is still limited. In this paper, we show that tree-based
models are also vulnerable to adversarial examples and develop a novel
algorithm to learn robust trees. At its core, our method aims to optimize the
performance under the worst-case perturbation of input features, which leads to
a max-min saddle point problem. Incorporating this saddle point objective into
the decision tree building procedure is non-trivial due to the discrete nature
of trees: a naive approach to finding the best split according to this saddle
point objective would take exponential time. To make our approach
practical and scalable, we propose efficient tree building algorithms by
approximating the inner minimizer in this saddle point problem, and present
efficient implementations for classical information gain based trees as well as
state-of-the-art tree boosting models such as XGBoost. Experimental results on
real-world datasets demonstrate that the proposed algorithms can substantially
improve the robustness of tree-based models against adversarial examples.
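The inner-minimizer approximation can be illustrated with a one-feature decision stump under an L-infinity budget: points within eps of a candidate threshold can be pushed to either side by the adversary, so a robust split score only credits points that are guaranteed to stay on the correct side. This is a simplified sketch of the idea, not the paper's algorithm; the function names and toy data are assumptions.

```python
# Sketch: worst-case (robust) accuracy of a decision stump under L-inf perturbations.
import numpy as np

def robust_stump_accuracy(x, y, threshold, eps):
    """Worst-case accuracy of the stump 'predict 1 if x > threshold' under |dx| <= eps."""
    certain_left = x <= threshold - eps    # cannot be pushed to the right side
    certain_right = x > threshold + eps    # cannot be pushed to the left side
    # Points within eps of the threshold can be forced onto the wrong side,
    # so only the "certain" points can count as correct in the worst case.
    correct = np.sum((y == 0) & certain_left) + np.sum((y == 1) & certain_right)
    return correct / len(x)

def best_robust_threshold(x, y, eps):
    """Scan candidate thresholds (midpoints of sorted feature values) for the most robust split."""
    xs = np.sort(np.unique(x))
    candidates = (xs[:-1] + xs[1:]) / 2
    scores = [robust_stump_accuracy(x, y, t, eps) for t in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

if __name__ == "__main__":
    x = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 0.9])   # toy feature values
    y = np.array([0, 0, 0, 1, 1, 1])               # toy labels
    print(best_robust_threshold(x, y, eps=0.05))   # expected: threshold 0.5, robust accuracy 1.0
```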