151 research outputs found
Physical Adversarial Attack meets Computer Vision: A Decade Survey
Although Deep Neural Networks (DNNs) have achieved impressive results in
computer vision, their exposed vulnerability to adversarial attacks remains a
serious concern. A series of works has shown that adding elaborate
perturbations to images can cause catastrophic degradation in DNN performance.
This phenomenon exists not only in the digital space but also in the physical
space. Therefore, assessing the security of
these DNNs-based systems is critical for safely deploying them in the real
world, especially for security-critical applications, e.g., autonomous cars,
video surveillance, and medical diagnosis. In this paper, we focus on physical
adversarial attacks and provide a comprehensive survey of over 150 existing
papers. We first clarify the concept of the physical adversarial attack and
analyze its characteristics. Then, we define the adversarial medium, essential
to perform attacks in the physical world. Next, we present the physical
adversarial attack methods in task order: classification, detection, and
re-identification, and introduce their performance in solving the trilemma:
effectiveness, stealthiness, and robustness. In the end, we discuss the current
challenges and potential future directions. Comment: 32 pages. Under Review
Adversarial Examples in the Physical World: A Survey
Deep neural networks (DNNs) have demonstrated high vulnerability to
adversarial examples. Besides the attacks in the digital world, the practical
implications of adversarial examples in the physical world present significant
challenges and safety concerns. However, current research on physical
adversarial examples (PAEs) lacks a comprehensive understanding of their unique
characteristics, leaving their practical significance only partially understood. In this
paper, we address this gap by thoroughly examining the characteristics of PAEs
within a practical workflow encompassing training, manufacturing, and
re-sampling processes. By analyzing the links between physical adversarial
attacks, we identify manufacturing and re-sampling as the primary sources of
distinct attributes and particularities in PAEs. Leveraging this knowledge, we
develop a comprehensive analysis and classification framework for PAEs based on
their specific characteristics, covering over 100 studies on physical-world
adversarial examples. Furthermore, we investigate defense strategies against
PAEs and identify open challenges and opportunities for future research. We aim
to provide a fresh, thorough, and systematic understanding of PAEs, thereby
promoting the development of robust adversarial learning and its application in
open-world scenarios. Comment: Adversarial examples, physical-world scenarios, attacks and defenses
Towards Generic and Controllable Attacks Against Object Detection
Existing adversarial attacks against Object Detectors (ODs) suffer from two
inherent limitations. Firstly, ODs have complicated meta-structure designs,
hence most advanced attacks for ODs concentrate on attacking specific
detector-intrinsic structures, which makes it hard for them to work on other
detectors and motivates us to design a generic attack against ODs. Secondly,
most works against ODs make Adversarial Examples (AEs) by generalizing
image-level attacks from classification to detection, which introduces redundant
computation and perturbations in semantically meaningless areas (e.g.,
backgrounds), underscoring the urgent need for controllable attacks on
ODs. To this end, we propose a generic white-box attack, LGP (local
perturbations with adaptively global attacks), to blind mainstream object
detectors with controllable perturbations. For a detector-agnostic attack, LGP
tracks high-quality proposals and optimizes three heterogeneous losses
simultaneously. In this way, we can fool the crucial components of ODs with a
part of their outputs without the limitations of specific structures. Regarding
controllability, we establish an object-wise constraint that exploits
foreground-background separation adaptively to induce the attachment of
perturbations to foregrounds. Experimentally, the proposed LGP successfully
attacked sixteen state-of-the-art object detectors on MS-COCO and DOTA
datasets, achieving promising imperceptibility and transferability. Code is
publicly released at https://github.com/liguopeng0923/LGP.
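The object-wise constraint described above can be illustrated with a minimal sketch: a perturbation is clipped to an L-infinity budget and masked so it attaches only to foreground regions (here, predicted boxes). This is a simplified toy in NumPy, not the authors' implementation; the function names, the box format `(x1, y1, x2, y2)`, and the budget `eps` are assumptions for illustration.

```python
import numpy as np

def foreground_mask(shape, boxes):
    """Binary mask that is 1 inside the given boxes and 0 elsewhere."""
    mask = np.zeros(shape[:2], dtype=np.float32)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 1.0
    return mask[..., None]  # add a channel axis so the mask broadcasts

def apply_object_wise_perturbation(image, perturbation, boxes, eps=8 / 255):
    """Clip the perturbation to [-eps, eps] and confine it to foreground
    boxes, sketching an object-wise constraint: the background stays
    untouched while foreground pixels absorb the attack."""
    mask = foreground_mask(image.shape, boxes)
    delta = np.clip(perturbation, -eps, eps) * mask
    return np.clip(image + delta, 0.0, 1.0)
```

In a full attack, `delta` would be optimized iteratively against detector losses; the sketch only shows how the foreground-background separation keeps perturbations off semantically meaningless areas.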
Physical Adversarial Attacks for Surveillance: A Survey
Modern automated surveillance techniques are heavily reliant on deep learning
methods. Despite the superior performance, these learning systems are
inherently vulnerable to adversarial attacks - maliciously crafted inputs that
are designed to mislead, or trick, models into making incorrect predictions. An
adversary can physically change their appearance by wearing adversarial
t-shirts, glasses, or hats, or through specific behaviors, to evade various
forms of detection, tracking, and recognition by surveillance systems and
obtain unauthorized access to secure properties and assets. This poses a
severe threat to the security and safety of modern surveillance systems. This
paper reviews recent attempts and findings in learning and designing physical
adversarial attacks for surveillance applications. In particular, we propose a
framework to analyze physical adversarial attacks and provide a comprehensive
survey of physical adversarial attacks on four key surveillance tasks:
detection, identification, tracking, and action recognition under this
framework. Furthermore, we review and analyze strategies to defend against the
physical adversarial attacks and the methods for evaluating the strengths of
the defense. The insights in this paper present an important step in building
resilience within surveillance systems to physical adversarial attacks.
TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World
Object detection is the foundation of various critical computer-vision tasks
such as segmentation, object tracking, and event detection. To train an object
detector with satisfactory accuracy, a large amount of data is required.
However, due to the intensive labor involved in annotating large
datasets, such data curation tasks are often outsourced to a third party or
delegated to volunteers. This work reveals severe vulnerabilities in such data
curation pipelines. We propose MACAB, which crafts clean-annotated images to
stealthily implant the backdoor into the object detectors trained on them even
when the data curator can manually audit the images. We observe that the
backdoor effects of both misclassification and cloaking are robustly
achieved in the wild when the backdoor is activated with inconspicuously
natural physical triggers. Backdooring object detection with clean annotations
is more challenging than backdooring image classification with clean labels,
owing to the complexity of having multiple objects, both victim and non-victim,
within each frame.
The efficacy of MACAB is ensured by constructively (i) abusing the
image-scaling function used by the deep learning framework, (ii) incorporating
the proposed adversarial clean-image replica technique, and (iii) applying
poison-data selection criteria under a constrained attack budget. Extensive
experiments demonstrate that MACAB exhibits more than 90% attack success rate
under various real-world scenes, including both cloaking and misclassification
backdoor effects, even when restricted to a small attack budget.
The poisoned samples cannot be effectively identified by state-of-the-art
detection techniques. A comprehensive video demo is at
https://youtu.be/MA7L_LpXkp4, based on a poison rate of 0.14% for the
YOLOv4 cloaking backdoor and the Faster R-CNN misclassification backdoor.
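The image-scaling abuse mentioned in point (i) can be illustrated with a toy NumPy sketch: a trigger is planted only at the pixel positions that a nearest-neighbor downscale will sample, so the full-resolution image looks nearly clean while the resized copy seen by the training pipeline contains the trigger. This is a simplified illustration, not MACAB's code; real frameworks often use bilinear or area interpolation, which requires a more involved optimization than the nearest-neighbor case assumed here.

```python
import numpy as np

def nn_sample_positions(src_len, dst_len):
    """Source indices sampled by a simple nearest-neighbor resize."""
    return (np.arange(dst_len) * (src_len / dst_len)).astype(int)

def embed_scaling_trigger(clean, trigger):
    """Plant trigger pixels at exactly the positions a nearest-neighbor
    downscale to trigger.shape will sample. Only a sparse grid of pixels
    changes, so the full-size image still looks clean to a human auditor."""
    out = clean.copy()
    ys = nn_sample_positions(clean.shape[0], trigger.shape[0])
    xs = nn_sample_positions(clean.shape[1], trigger.shape[1])
    out[np.ix_(ys, xs)] = trigger
    return out

def nn_downscale(img, dst_h, dst_w):
    """Nearest-neighbor downscale matching the sampling rule above."""
    ys = nn_sample_positions(img.shape[0], dst_h)
    xs = nn_sample_positions(img.shape[1], dst_w)
    return img[np.ix_(ys, xs)]
```

For a 32x32 image downscaled to 4x4, only 16 of 1024 pixels are modified, yet the downscaled result is entirely attacker-controlled, which is why a manual audit of the full-size images can miss the implanted content.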
Visually Adversarial Attacks and Defenses in the Physical World: A Survey
Although Deep Neural Networks (DNNs) have been widely applied in various
real-world scenarios, they are vulnerable to adversarial examples. The current
adversarial attacks in computer vision can be divided into digital attacks and
physical attacks according to their different attack forms. Compared with
digital attacks, which generate perturbations in the digital pixels, physical
attacks are more practical in the real world. Owing to the serious security
problems caused by physical adversarial examples, many works have been
proposed to evaluate the physical adversarial robustness of DNNs in recent
years. In this paper, we present a survey of current physical adversarial
attacks and physical adversarial defenses in computer vision. To
establish a taxonomy, we organize the current physical attacks from attack
tasks, attack forms, and attack methods, respectively. Thus, readers can have a
systematic knowledge of this topic from different aspects. For the physical
defenses, we establish the taxonomy from pre-processing, in-processing, and
post-processing for the DNN models to achieve full coverage of the adversarial
defenses. Based on the above survey, we finally discuss the challenges of this
research field and offer an outlook on future directions.