Robust Backdoor Attacks on Object Detection in Real World
Deep learning models are widely deployed in many applications, such as object detection in various security fields. However, these models are vulnerable to backdoor attacks. Most backdoor attacks have been studied intensively on classification models, but little attention has been paid to object detection. Previous works mainly focused on backdoor attacks in the digital world and neglected the real world, where an attack's effect is easily influenced by physical factors such as distance and illumination. In this paper, we propose a variable-size backdoor trigger that adapts to the different sizes of attacked objects, overcoming the disturbance caused by the distance between the viewing point and the attacked object. In addition, we propose a backdoor training method named malicious adversarial training, which enables the backdoored object detector to learn the features of the trigger under physical noise. Experimental results show that this robust backdoor attack (RBA) enhances the attack success rate in the real world.
Comment: 22 pages, 13 figures
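The variable-size trigger idea above can be illustrated with a short sketch: when poisoning a detection dataset, the trigger patch is rescaled relative to each attacked object's bounding box, so distant (small) objects receive proportionally small triggers. This is only a minimal sketch of the general idea, not the paper's implementation; the use of OpenCV, the (x1, y1, x2, y2) box format, the centre placement, and the 0.2 scale fraction are all assumptions.

```python
import cv2
import numpy as np

def paste_variable_size_trigger(image, bbox, trigger, scale=0.2):
    """Rescale a trigger patch relative to the attacked object's box and paste it.

    image   : HxWx3 uint8 frame to poison
    bbox    : (x1, y1, x2, y2) box of the attacked object
    trigger : hxwx3 uint8 trigger patch
    scale   : trigger width as a fraction of the box width (assumed value)
    """
    H, W = image.shape[:2]
    x1, y1, x2, y2 = map(int, bbox)
    box_w, box_h = x2 - x1, y2 - y1

    # Size the trigger proportionally to the object, so distant (small)
    # objects get small triggers and nearby (large) objects get large ones.
    tw = max(1, int(box_w * scale))
    th = max(1, int(tw * trigger.shape[0] / trigger.shape[1]))
    resized = cv2.resize(trigger, (tw, th), interpolation=cv2.INTER_LINEAR)

    # Paste at the box centre (placement is illustrative), clamped to the frame.
    px = int(np.clip(x1 + box_w // 2 - tw // 2, 0, W - tw))
    py = int(np.clip(y1 + box_h // 2 - th // 2, 0, H - th))
    poisoned = image.copy()
    poisoned[py:py + th, px:px + tw] = resized
    return poisoned
```

During the malicious adversarial training described in the abstract, such poisoned samples would presumably also be perturbed with photometric noise (brightness shifts, blur, compression) so the detector learns the trigger under real-world distortions.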
Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning
This paper critically assesses the adequacy and representativeness of physical-domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize themselves as "real world." Despite this framing, however, we found that the physical or real-world testing conducted was minimal, provided few details about testing subjects, and was often conducted as an afterthought or demonstration. Adversarial ML research without representative trials or testing is an ethical, scientific, and health/safety issue that can cause real harms. We introduce the problem and our methodology, and then critique the physical-domain testing methodologies employed by papers in the field. We then explore various barriers to more inclusive physical testing in adversarial ML and offer recommendations to improve such testing notwithstanding these challenges.
Comment: Accepted to the NeurIPS 2020 Workshop on Dataset Curation and Security; also accepted at the Navigating the Broader Impacts of AI Research Workshop. All authors contributed equally. The list of authors is arranged alphabetically.
Physical Adversarial Attack meets Computer Vision: A Decade Survey
Although Deep Neural Networks (DNNs) have achieved impressive results in computer vision, their vulnerability to adversarial attacks remains a serious concern. A series of works has shown that by adding elaborate perturbations to images, DNNs can suffer catastrophic degradation in performance. This phenomenon exists not only in the digital space but also in the physical space. Therefore, assessing the security of DNN-based systems is critical for safely deploying them in the real world, especially in security-critical applications such as autonomous cars, video surveillance, and medical diagnosis. In this paper, we focus on physical adversarial attacks and provide a comprehensive survey of over 150 existing papers. We first clarify the concept of the physical adversarial attack and analyze its characteristics. Then, we define the adversarial medium, which is essential to performing attacks in the physical world. Next, we present physical adversarial attack methods in task order: classification, detection, and re-identification, and describe how they address the trilemma of effectiveness, stealthiness, and robustness. Finally, we discuss the current challenges and potential future directions.
Comment: 32 pages. Under review.
Physical Adversarial Attacks for Surveillance: A Survey
Modern automated surveillance techniques rely heavily on deep learning methods. Despite their superior performance, these learning systems are inherently vulnerable to adversarial attacks: maliciously crafted inputs designed to mislead, or trick, models into making incorrect predictions. An adversary can physically change their appearance by wearing adversarial t-shirts, glasses, or hats, or by behaving in specific ways, to potentially avoid various forms of detection, tracking, and recognition by surveillance systems, and thereby obtain unauthorized access to secure properties and assets. This poses a severe threat to the security and safety of modern surveillance systems. This paper reviews recent attempts and findings in learning and designing physical adversarial attacks for surveillance applications. In particular, we propose a framework to analyze physical adversarial attacks and, under this framework, provide a comprehensive survey of physical adversarial attacks on four key surveillance tasks: detection, identification, tracking, and action recognition. Furthermore, we review and analyze strategies to defend against physical adversarial attacks and methods for evaluating the strength of such defenses. The insights in this paper present an important step toward building resilience to physical adversarial attacks within surveillance systems.
TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World
Object detection is the foundation of various critical computer-vision tasks such as segmentation, object tracking, and event detection. To train an object detector with satisfactory accuracy, a large amount of data is required. However, due to the intensive labor involved in annotating large datasets, such data curation is often outsourced to a third party or relies on volunteers. This work reveals severe vulnerabilities in such a data curation pipeline. We propose MACAB, which crafts clean-annotated images that stealthily implant a backdoor into the object detectors trained on them, even when the data curator can manually audit the images. We observe that both the misclassification and the cloaking backdoor effects are robustly achieved in the wild when the backdoor is activated with inconspicuous, natural physical triggers. Backdooring object detection with clean annotations is more challenging than backdooring image classification with clean labels, owing to the complexity of having multiple objects within each frame, including victim and non-victim objects. The efficacy of MACAB is ensured by constructively (i) abusing the image-scaling function used by the deep learning framework, (ii) incorporating the proposed adversarial clean image replica technique, and (iii) combining poison-data selection criteria under a constrained attack budget. Extensive experiments demonstrate that MACAB exhibits more than a 90% attack success rate in various real-world scenes, including both the cloaking and misclassification backdoor effects, even when restricted to a small attack budget. The poisoned samples cannot be effectively identified by state-of-the-art detection techniques. A comprehensive video demo is available at https://youtu.be/MA7L_LpXkp4, based on a poison rate of 0.14% for the YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor.
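The "(i) abusing the image-scaling function" ingredient can be sketched with one well-known variant of the image-scaling attack: camouflage under nearest-neighbor downscaling. Only the high-resolution pixels that the resize operation actually samples are overwritten with a trigger-bearing low-resolution payload, so an auditor inspecting the full-resolution image sees an almost unmodified picture while the downscaled input the model trains on contains the payload. This is a minimal sketch of the general scaling-attack principle, not necessarily MACAB's construction; the cv2-based resizing, the empirical index-recovery trick, and the choice of nearest-neighbor interpolation are assumptions.

```python
import cv2
import numpy as np

def scaling_camouflage(clean_hr, payload_lr):
    """Hide a low-res payload inside a high-res image so that
    nearest-neighbor downscaling reveals the payload.

    clean_hr   : HxWx3 uint8 image an auditor would inspect
    payload_lr : hxwx3 uint8 image the model should actually see
    """
    H, W = clean_hr.shape[:2]
    h, w = payload_lr.shape[:2]

    # Discover empirically which source pixels INTER_NEAREST samples:
    # downscale coordinate maps and read off the surviving indices.
    rows = np.repeat(np.arange(H, dtype=np.float32)[:, None], W, axis=1)
    cols = np.repeat(np.arange(W, dtype=np.float32)[None, :], H, axis=0)
    r_idx = cv2.resize(rows, (w, h), interpolation=cv2.INTER_NEAREST).astype(int)
    c_idx = cv2.resize(cols, (w, h), interpolation=cv2.INTER_NEAREST).astype(int)

    # Overwrite only those sampled pixels; every other pixel stays clean,
    # so the full-resolution image still looks like the original.
    attacked = clean_hr.copy()
    attacked[r_idx, c_idx] = payload_lr
    return attacked

# Sanity check on the returned image: downscaling it with the same
# interpolation reproduces the payload exactly.
# small = cv2.resize(attacked, (w, h), interpolation=cv2.INTER_NEAREST)
# assert np.array_equal(small, payload_lr)
```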
Area is all you need: repeatable elements make stronger adversarial attacks
Over the last decade, deep neural networks have achieved state-of-the-art results in computer vision tasks. These models, however, are susceptible to unusual inputs, known as adversarial examples, that cause them to misclassify or otherwise fail to detect objects. Here, we provide evidence that the increasing success of adversarial attacks is primarily due to their increasing size. We then demonstrate a method for generating the largest possible adversarial patch by building an adversarial pattern out of repeatable elements. This approach achieves a new state of the art in evading detection by YOLOv2 and YOLOv3. Finally, we present an experiment that fails to replicate the prior success of several attacks published in this field, and end with some comments on testing and reproducibility.
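The core construction, tiling one small repeatable element into a large patch and optimizing the element through all of its copies, can be sketched as follows. Everything model-specific here is a stand-in: the tiny convolutional "objectness head", the 32x32 tile, the patch placement, and the optimizer settings are assumptions, and a real attack would minimize an actual detector's objectness or class confidence (e.g. YOLOv2/YOLOv3 outputs) instead.

```python
import torch

# Stand-in "objectness head": keeps the sketch self-contained and
# differentiable; it is NOT the detector attacked in the paper.
surrogate = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, stride=2, padding=1),
)

# The repeatable element: a small tile whose copies make up the whole patch,
# so the covered area can grow without adding free parameters.
tile = torch.rand(3, 32, 32, requires_grad=True)

def build_patch(tile, reps=8):
    return tile.repeat(1, reps, reps)            # 32x32 tile -> 256x256 patch

def apply_patch(scene, patch, top=80, left=80):
    patched = scene.clone()
    _, ph, pw = patch.shape
    patched[:, top:top + ph, left:left + pw] = patch
    return patched

scene = torch.rand(3, 416, 416)                  # stand-in input frame
optimizer = torch.optim.Adam([tile], lr=0.01)

for _ in range(200):                             # iteration count is arbitrary
    patched = apply_patch(scene, build_patch(tile).clamp(0, 1))
    loss = surrogate(patched.unsqueeze(0)).mean()  # proxy for a detection score
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        tile.clamp_(0, 1)                        # keep the tile printable
```

Because gradients from every copy of the tile accumulate in the same 32x32 parameters, the optimized pattern can cover an arbitrarily large area without additional degrees of freedom, which is the point of the repeatable-element design.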
Physical Adversarial Examples for Multi-Camera Systems
Neural networks form the foundation of several intelligent systems, which, however, are known to be easily fooled by adversarial examples. Recent advances have made these attacks possible even in air-gapped scenarios, where the autonomous system observes its surroundings through, e.g., a camera. We extend these ideas in our research and evaluate the robustness of multi-camera setups against such physical adversarial examples. This scenario becomes ever more important with the rising popularity of autonomous vehicles, which fuse the information of several cameras for their driving decisions. While we find that multi-camera setups provide some robustness against past attack methods, we see that this advantage shrinks when the attack is optimized over multiple perspectives at once. We propose a novel attack method, Transcender-MC, which incorporates online 3D renderings and perspective projections into the training process. Moreover, we show that certain data augmentation techniques can further facilitate the generation of successful adversarial examples. Transcender-MC is 11% more effective at attacking multi-camera setups than state-of-the-art methods. Our findings offer valuable insights into the resilience of object detection in multi-camera setups and motivate the need to develop adequate defense mechanisms against such attacks.
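The "optimizing on multiple perspectives at once" idea can be sketched as an expectation-over-transformation loop: at each step the patched scene is warped by several randomly drawn view transforms and the attack loss is averaged over them. The sketch below uses plain affine warps via torch.nn.functional.affine_grid / grid_sample as a cheap stand-in for Transcender-MC's online 3D renderings and perspective projections; the surrogate "detector" is a placeholder, and the angle, scale, and translation ranges are assumptions.

```python
import math
import random
import torch
import torch.nn.functional as F

def random_view_theta():
    """One small random affine transform standing in for a camera viewpoint.
    (The paper uses online 3D renderings and perspective projections; plain
    affine warps are a deliberate simplification here.)"""
    angle = random.uniform(-0.3, 0.3)                       # radians
    scale = random.uniform(0.9, 1.1)
    tx, ty = random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)
    return torch.tensor([[math.cos(angle) * scale, -math.sin(angle) * scale, tx],
                         [math.sin(angle) * scale,  math.cos(angle) * scale, ty]])

def multi_view_loss(patched, surrogate, n_views=4):
    """Average a surrogate detector loss over several warped views, so the
    adversarial content must survive multiple perspectives at once."""
    batch = patched.unsqueeze(0).repeat(n_views, 1, 1, 1)   # (V, 3, H, W)
    thetas = torch.stack([random_view_theta() for _ in range(n_views)])
    grid = F.affine_grid(thetas, list(batch.shape), align_corners=False)
    views = F.grid_sample(batch, grid, align_corners=False)
    return surrogate(views).mean()

# Minimal usage with stand-ins (both the scene and the "detector" are
# placeholders, not the paper's models):
surrogate = torch.nn.Sequential(torch.nn.Conv2d(3, 1, 3, padding=1))
patched_scene = torch.rand(3, 256, 256, requires_grad=True)
multi_view_loss(patched_scene, surrogate).backward()        # grads span all views
```

Averaging the loss over simulated viewpoints is what removes the partial robustness that multi-camera fusion otherwise provides: a perturbation that only works from one camera's perspective no longer minimizes the objective.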