Detecting the Unexpected via Image Resynthesis
Classical semantic segmentation methods, including the recent deep learning
ones, assume that all classes observed at test time have been seen during
training. In this paper, we tackle the more realistic scenario where unexpected
objects of unknown classes can appear at test time. The main trends in this
area either leverage the notion of prediction uncertainty to flag the regions
with low confidence as unknown, or rely on autoencoders and highlight
poorly-decoded regions. Having observed that, in both cases, the detected
regions typically do not correspond to unexpected objects, we introduce a
drastically different strategy: it relies on the intuition that the
network will produce spurious labels in regions depicting unexpected objects.
Therefore, resynthesizing the image from the resulting semantic map will yield
significant appearance differences with respect to the input image. In other
words, we translate the problem of detecting unknown classes to one of
identifying poorly-resynthesized image regions. We show that this outperforms
both uncertainty- and autoencoder-based methods.
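The resynthesize-and-compare idea can be sketched in a few lines. In this toy sketch a plain per-pixel L1 difference stands in for the learned discrepancy module described in the abstract; the function name, array shapes, and threshold are all illustrative assumptions:

```python
import numpy as np

def resynthesis_anomaly_map(image, resynthesized):
    # Per-pixel appearance discrepancy between the input image and the
    # image resynthesized from the predicted semantic map. Regions with
    # spurious labels resynthesize poorly, so large discrepancies flag
    # potential unknown objects. (Toy proxy: a plain L1 difference in
    # place of a learned discrepancy network.)
    diff = np.abs(image.astype(float) - resynthesized.astype(float))
    return diff.mean(axis=-1)  # average over colour channels

# Toy usage: a region depicting an "unexpected object" resynthesizes badly.
img = np.zeros((4, 4, 3))
resynth = img.copy()
resynth[1:3, 1:3] = 1.0                 # poorly resynthesized patch
anomaly = resynthesis_anomaly_map(img, resynth)
unknown_mask = anomaly > 0.5            # illustrative threshold
```

The thresholded map marks exactly the region where resynthesis diverged from the input, which is the signal the method uses to detect unknown classes.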
Artificial intelligence in the cyber domain: Offense and defense
Artificial intelligence techniques have grown rapidly in recent years, and their practical applications can be seen in many fields, ranging from facial recognition to image analysis. In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and help adversaries improve methods of attack. However, malicious actors are aware of the new prospects too and will probably attempt to use them for nefarious purposes. This survey paper aims to provide an overview of how artificial intelligence can be used in the context of cybersecurity, in both offense and defense.
Web of Science 123, art. no. 41
Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses
It has been shown that adversaries can craft inputs to neural networks that
are similar to legitimate inputs but are purposely designed to cause the
network to misclassify them. These adversarial examples are crafted, for
example, by computing gradients of a carefully defined loss function with
respect to the input. As a countermeasure, some
researchers have tried to design robust models by blocking or obfuscating
gradients, even in white-box settings. Another line of research proposes
introducing a separate detector to attempt to detect adversarial examples. This
approach also makes use of gradient obfuscation techniques, for example, to
prevent the adversary from trying to fool the detector. In this paper, we
introduce stochastic substitute training, a gray-box approach that can craft
adversarial examples for defenses that obfuscate gradients. For defenses that
try to make the model more robust, our technique lets an adversary craft
adversarial examples with no knowledge of the defense; for defenses that
attempt to detect adversarial examples, the adversary needs only very limited
information about the defense. We demonstrate the technique by applying it
against two defenses that make models more robust and two that detect
adversarial examples.
Comment: Accepted by AISec '18: 11th ACM Workshop on Artificial Intelligence and Security. Source code at https://github.com/S-Mohammad-Hashemi/SS
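The gray-box recipe the abstract describes (query the defended model, fit a smooth substitute on its outputs, then attack the substitute's gradients and transfer the result) can be sketched on a toy linear classifier. Everything here is an illustrative assumption, not the paper's actual setup: `defended_model` is a stand-in stochastic defense, the substitute is a logistic regression fitted by gradient descent, and the attack step is a single FGSM-style perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)

def defended_model(x):
    # Stand-in for a gray-box defended classifier: the attacker can query
    # its output probabilities but cannot use its (obfuscated) gradients.
    # The added noise mimics a stochastic defense.
    w_true = np.array([2.0, -1.0])
    logits = x @ w_true + rng.normal(0.0, 0.1, size=len(x))
    return 1.0 / (1.0 + np.exp(-logits))

# 1) Query the defense and train a smooth substitute on its soft outputs
#    (logistic regression fitted by plain gradient descent).
X = rng.normal(size=(200, 2))
y = defended_model(X)
w_sub = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w_sub)))
    w_sub -= 0.1 * X.T @ (p - y) / len(X)

# 2) FGSM-style step on the substitute: move against the sign of the
#    gradient of the substitute's logit (which is just w_sub here), then
#    transfer the crafted example back to the defended model.
x0 = np.array([[1.0, 0.0]])        # classified as class 1 (p > 0.5)
eps = 2.0
x_adv = x0 - eps * np.sign(w_sub)
p_clean = defended_model(x0)[0]
p_adv = defended_model(x_adv)[0]   # prediction flips on the defense
```

The key point the sketch illustrates is that the attack never touches the defense's own gradients: the substitute, trained only from query responses, supplies a usable gradient direction that transfers to the defended model.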