Towards Effective Multi-Label Recognition Attacks via Knowledge Graph Consistency
Many real-world applications of image recognition require multi-label
learning, whose goal is to find all labels in an image. Thus, robustness of
such systems to adversarial image perturbations is extremely important.
However, despite a large body of recent research on adversarial attacks, the
scope of the existing works is mainly limited to the multi-class setting, where
each image contains a single label. We show that naive extensions of
multi-class attacks to the multi-label setting violate the label
relationships modeled by a knowledge graph and can thus be detected by a
consistency verification scheme. We therefore propose a graph-consistent
multi-label attack framework, which searches for small image perturbations that
lead to misclassifying a desired target set while respecting label hierarchies.
Through extensive experiments on two datasets and several multi-label
recognition models, we show that our method generates highly successful
attacks that, unlike naive multi-label perturbations, produce model
predictions consistent with the knowledge graph.
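The abstract leaves the attack objective at a high level. Purely as an illustration, and not the paper's actual formulation, a PGD-style search could combine a targeted multi-label loss with a penalty on knowledge-graph violations; the names `model` and `kg_parents`, the BCE target term, and all hyperparameters below are assumptions for this sketch.

```python
# Minimal sketch of a graph-consistent multi-label attack, assuming a
# PGD-style search; the abstract does not give the exact objective, so
# the loss terms and names here are illustrative only.
import torch
import torch.nn.functional as F

def graph_consistent_attack(model, x, target_labels, kg_parents,
                            eps=8/255, alpha=1/255, steps=40, lam=1.0):
    """x: image batch in [0, 1]; target_labels: desired multi-hot float
    targets; kg_parents: (child, parent) label-index pairs from the
    knowledge graph (a child label should not outscore its parent)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        probs = torch.sigmoid(model(x + delta))
        # Push predictions toward the desired target label set.
        attack_loss = F.binary_cross_entropy(probs, target_labels)
        # Penalize hierarchy violations where p(child) > p(parent).
        consistency = sum(F.relu(probs[:, c] - probs[:, p]).mean()
                          for c, p in kg_parents)
        loss = attack_loss + lam * consistency
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on the loss
            delta.clamp_(-eps, eps)              # stay in the L_inf ball
            delta.add_(x).clamp_(0, 1).sub_(x)   # keep x + delta a valid image
        delta.grad.zero_()
    return (x + delta).detach()
```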
When Measures are Unreliable: Imperceptible Adversarial Perturbations toward Top-k Multi-Label Learning
With the great success of deep neural networks, adversarial learning has
received widespread attention in various studies, ranging from multi-class
learning to multi-label learning. However, existing adversarial attacks on
multi-label learning pursue only traditional visual imperceptibility and
ignore a new form of perceptibility arising from measures such as
Precision@k and mAP@k. Specifically, when a well-trained multi-label
classifier performs far below expectation on some samples, the victim can
easily realize that this performance degradation stems from an attack
rather than from the model itself. Therefore, an ideal multi-label
adversarial attack should not only deceive visual perception but also
evade the monitoring of such measures. To this end,
this paper first proposes the concept of measure imperceptibility. Then, a
novel loss function is devised to generate adversarial perturbations that
achieve both visual and measure imperceptibility. Furthermore, an
efficient algorithm with a convex objective is established to optimize
this loss. Finally, extensive experiments on large-scale benchmark
datasets, such as PASCAL VOC 2012, MS COCO, and NUS-WIDE, demonstrate
the superiority of our proposed method in attacking top-k multi-label systems.
Comment: 22 pages, 7 figures, accepted by ACM MM 2023
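The loss that trades off attack success against measure imperceptibility is not given in the abstract. As one assumed reading, the attack could push wrong labels into the top-k predictions while deliberately holding chosen ground-truth labels inside the top-k, so that Precision@k does not collapse and betray the attack; the margin form and every name below are hypothetical.

```python
# Hypothetical sketch of a measure-imperceptible top-k attack loss; the
# paper's actual (convex) objective is not stated in the abstract.
import torch

def topk_attack_loss(logits, target_idx, keep_idx, k=5, margin=0.1):
    """logits: (num_labels,) scores for one image.
    target_idx: wrong labels to push into the top-k predictions.
    keep_idx: true labels deliberately held in the top-k so that
    measures such as Precision@k stay close to normal."""
    kth = logits.topk(k).values[-1]          # k-th largest score
    # Push target labels above the top-k threshold ...
    push_in = torch.relu(kth + margin - logits[target_idx]).sum()
    # ... while keeping selected true labels inside the top-k.
    hold = torch.relu(kth + margin - logits[keep_idx]).sum()
    return push_in + hold
```

Minimizing such a loss over a norm-bounded perturbation (for example, with a PGD loop like the one sketched earlier) would target visual imperceptibility at the same time.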
Semantically Adversarial Learnable Filters
We present an adversarial framework to craft perturbations that mislead classifiers by accounting for the image content and the semantics of the labels. The proposed framework combines a structure loss and a semantic adversarial loss in a multi-task objective function to train a fully convolutional neural network. The structure loss helps generate perturbations whose type and magnitude are defined by a target image-processing filter. The semantic adversarial loss considers groups of (semantic) labels to craft perturbations that prevent the filtered image from being classified with a label in the same group. We validate our framework with three different target filters, namely detail enhancement, log transformation, and gamma correction; and evaluate the adversarially filtered images against three classifiers, ResNet50, ResNet18, and AlexNet, pre-trained on ImageNet. We show that the proposed framework generates filtered images with a high success rate, robustness, and transferability to unseen classifiers. We also discuss objective and subjective evaluations of the adversarial perturbations.
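The abstract names the two loss terms directly, so a compact sketch can mirror that structure; the FCN, the loss weight, the filter parameters, and the group construction below are still assumptions rather than the authors' implementation.

```python
# Sketch of the multi-task objective: a structure loss toward a target
# filter plus a semantic adversarial loss over a label group. Weights,
# filter parameters, and the FCN itself are assumed for illustration.
import torch
import torch.nn.functional as F

def gamma_correction(x, gamma=0.7):
    """One of the three target filters named in the abstract."""
    return x.clamp(min=1e-6) ** gamma

def filter_attack_loss(fcn, classifier, x, label_group, w_sem=1.0):
    """fcn: fully convolutional network producing the filtered image;
    label_group: indices of labels semantically related to the true one."""
    x_adv = fcn(x)
    # Structure loss: keep the output close to the target filter's effect.
    structure = F.mse_loss(x_adv, gamma_correction(x))
    # Semantic adversarial loss: suppress every label in the group so the
    # image is not classified with any semantically similar label.
    probs = torch.softmax(classifier(x_adv), dim=1)
    semantic = probs[:, label_group].sum(dim=1).mean()
    return structure + w_sem * semantic
```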