Adversarial Color Projection: A Projector-Based Physical Attack to DNNs
Recent advances have shown that deep neural networks (DNNs) are susceptible
to adversarial perturbations. Therefore, it is necessary to evaluate the
robustness of advanced DNNs using adversarial attacks. However, traditional
physical attacks that use stickers as perturbations are easier to detect and defend against than
recent light-based physical attacks. In this work, we propose a projector-based
physical attack called adversarial color projection (AdvCP), which performs an
adversarial attack by manipulating the physical parameters of the projected
light. Experiments show the effectiveness of our method in both digital and
physical environments. The experimental results demonstrate that the proposed
method has excellent attack transferability, which endows AdvCP with an effective black-box attack capability. We also examine the threat that AdvCP poses to future vision-based systems and applications, and propose some ideas for light-based physical attacks.
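To make the idea concrete, the following is a minimal sketch of how a color-projection attack could be simulated digitally: a uniform colored light is blended over the image and the projection parameters are searched with a simple black-box loop against a pretrained classifier. The additive light model, the (color, intensity) parameterization, the random search, and the placeholder file name and label are illustrative assumptions, not the authors' exact formulation.

    # Hypothetical digital simulation of an adversarial color projection.
    # The uniform light model and random parameter search are assumptions
    # for illustration; they are not the authors' exact optimization.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
    normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

    def project_color(img, color, intensity):
        # Blend a uniform colored "projected light" over the whole image.
        light = color.view(3, 1, 1).expand_as(img)
        return torch.clamp((1 - intensity) * img + intensity * light, 0, 1)

    @torch.no_grad()
    def advcp_random_search(img, true_label, steps=500):
        for _ in range(steps):
            color = torch.rand(3)                    # projected RGB color
            intensity = 0.5 * torch.rand(1).item()   # projection strength
            x = project_color(img, color, intensity)
            pred = model(normalize(x).unsqueeze(0)).argmax(1).item()
            if pred != true_label:                   # misclassification found
                return color, intensity
        return None

    # Example usage (file name and label index are placeholders):
    img = preprocess(Image.open("stop_sign.jpg").convert("RGB"))
    print(advcp_random_search(img, true_label=919))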
Exploiting Human Perception for Adversarial Attacks
There has been a significant amount of recent work towards fooling deep-learning-based classifiers, particularly for images, via adversarial inputs that are perceptually similar to benign examples. However, researchers typically use minimization of the Lp-norm as a proxy for imperceptibility, an approach that oversimplifies the complexity of real-world images and human visual perception. We exploit the relationship between image features and human perception to propose a Perceptual Loss (PL) metric that better captures human imperceptibility during the generation of adversarial images. By focusing on human-perceptible distortion of image features, the metric yields adversarial images with better visual quality, as our experiments validate. Our results also demonstrate the effectiveness and efficiency of our algorithm.
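As a rough illustration of the idea, the sketch below replaces a raw pixel-space penalty with a feature-space distance computed from a pretrained VGG network, used here as a stand-in for the paper's Perceptual Loss metric; the choice of VGG features, the loss weighting, and the single-step update are assumptions, not the authors' method.

    # Hypothetical sketch: penalize perceptual (feature-space) distortion
    # instead of a raw Lp-norm when crafting an adversarial image. VGG
    # features are a stand-in for the paper's Perceptual Loss (PL) metric.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def perceptual_distance(x, x_adv):
        # Distance between intermediate feature maps tracks how different the
        # two images look to a viewer, unlike a pixel-wise Lp distance.
        return F.mse_loss(vgg(x_adv), vgg(x))

    def attack_step(model, x, x_adv, label, lam=10.0, lr=0.01):
        # One gradient step: raise the classification loss while keeping the
        # perceptual distance to the clean image small.
        x_adv = x_adv.clone().requires_grad_(True)
        loss = -F.cross_entropy(model(x_adv), label) + lam * perceptual_distance(x, x_adv)
        loss.backward()
        with torch.no_grad():
            return torch.clamp(x_adv - lr * x_adv.grad.sign(), 0, 1)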
Fooling Thermal Infrared Detectors in Physical World
Infrared imaging systems have a vast array of potential applications in
pedestrian detection and autonomous driving, and their safety performance is of
great concern. However, few studies have explored the safety of infrared
imaging systems in real-world settings. Previous research has used physical
perturbations such as small bulbs and thermal "QR codes" to attack infrared
imaging detectors, but such methods are highly visible and lack stealthiness.
Other researchers have used hot and cold blocks to deceive infrared imaging
detectors, but this method is limited in its ability to execute attacks from
various angles. To address these shortcomings, we propose a novel physical
attack called adversarial infrared blocks (AdvIB). By optimizing the physical
parameters of the adversarial infrared blocks, this method can execute a
stealthy black-box attack on thermal imaging systems from various angles. We
evaluate the proposed method based on its effectiveness, stealthiness, and
robustness. Our physical tests show that the proposed method achieves a success
rate of over 80% under most distance and angle conditions, validating its
effectiveness. For stealthiness, our method involves attaching the adversarial
infrared block to the inside of clothing, enhancing its stealthiness.
Additionally, we test the proposed method on advanced detectors, and
experimental results demonstrate an average attack success rate of 51.2%,
proving its robustness. Overall, our proposed AdvIB method offers a promising
avenue for conducting stealthy, effective, and robust black-box attacks on thermal imaging systems, with potential implications for real-world safety and security applications.
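A minimal sketch of the digital side of such an attack could look like the following: simulated warm blocks are pasted onto a thermal image and their positions and sizes are searched so that a detector's pedestrian confidence drops. The uniform-rectangle block model, the random search, and the detector_score callable are assumptions for illustration, not the authors' optimization.

    # Hypothetical sketch: place simulated infrared blocks on a thermal image
    # and search their positions/sizes to lower a detector's confidence.
    # Block model and optimizer are illustrative assumptions.
    import random
    import numpy as np

    def paste_blocks(thermal_img, blocks, value=0.9):
        # thermal_img: 2D array in [0, 1]; blocks: list of (x, y, w, h) boxes.
        img = thermal_img.copy()
        for x, y, w, h in blocks:
            img[y:y + h, x:x + w] = value   # uniform "warm" patch
        return img

    def advib_search(thermal_img, detector_score, n_blocks=4, steps=300):
        # detector_score: assumed callable returning the detector's highest
        # pedestrian confidence on an image (lower means a stronger attack).
        H, W = thermal_img.shape
        best_blocks, best_score = None, float("inf")
        for _ in range(steps):
            blocks = [(random.randrange(W - 20), random.randrange(H - 20),
                       random.randint(5, 20), random.randint(5, 20))
                      for _ in range(n_blocks)]
            score = detector_score(paste_blocks(thermal_img, blocks))
            if score < best_score:
                best_blocks, best_score = blocks, score
        return best_blocks, best_score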
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recommended
items matching the customers' preferences. Merchants on e-commerce platforms
would like their items to appear as high as possible in the top-N of these
ranked lists. In this paper, we demonstrate how unscrupulous merchants can
create item images that artificially promote their products, improving their
rankings. Recommender systems that use images to address the cold start problem
are vulnerable to this security risk. We describe a new type of attack,
Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N
recommenders: the ranking mechanism itself. Existing work on adversarial images
in recommender systems investigates the implications of conventional attacks,
which target deep learning classifiers. In contrast, our AIP attacks are
embedding attacks that seek to push feature representations in a way that
fools the ranker (not a classifier) and directly lead to item promotion. We
introduce three AIP attacks: insider attack, expert attack, and semantic attack,
which are defined with respect to three successively more realistic attack
models. Our experiments evaluate the danger of these attacks when mounted
against three representative visually-aware recommender algorithms in a
framework that uses images to address cold start. We also evaluate two common
defenses against adversarial images in the classification scenario and show
that these simple defenses do not eliminate the danger of AIP attacks. In sum,
we show that using images to address cold start opens recommender systems to
potential threats with clear practical implications. To facilitate future
research, we release an implementation of our attacks and defenses, which
allows reproduction and extension. Our code is available at https://github.com/liuzrcc/AI
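A minimal sketch of an embedding-style promotion attack, under the assumption that the ranker scores items from a CNN image embedding: the item image is perturbed so that its embedding drifts toward the average embedding of popular items. The ResNet-18 extractor, the feature-matching objective, and the epsilon budget are illustrative assumptions rather than the paper's exact insider, expert, or semantic attack.

    # Hypothetical embedding attack for item promotion: pull an item image's
    # visual embedding toward the mean embedding of popular items so a
    # visually-aware ranker scores it higher. Extractor and objective are
    # assumptions for illustration.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    extractor.fc = torch.nn.Identity()   # penultimate features as the item embedding
    extractor.eval()
    for p in extractor.parameters():
        p.requires_grad_(False)

    def promote_item(item_img, popular_embedding, steps=100, eps=8 / 255, lr=1 / 255):
        # item_img: (3, H, W) tensor in [0, 1]; popular_embedding: target vector.
        x_adv = item_img.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            emb = extractor(x_adv.unsqueeze(0)).squeeze(0)
            loss = F.mse_loss(emb, popular_embedding)   # pull toward popular items
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv - lr * x_adv.grad.sign()
                x_adv = item_img + torch.clamp(x_adv - item_img, -eps, eps)  # small change
                x_adv = torch.clamp(x_adv, 0, 1).detach()
        return x_adv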
Biologically Inspired Mechanisms for Adversarial Robustness
A convolutional neural network strongly robust to adversarial perturbations
at reasonable computational and performance cost has not yet been demonstrated.
The primate visual ventral stream seems to be robust to small perturbations in
visual stimuli, but the underlying mechanisms that give rise to this robust
perception are not understood. In this work, we investigate the role of two
biologically plausible mechanisms in adversarial robustness. We demonstrate
that the non-uniform sampling performed by the primate retina and the presence
of multiple receptive fields with a range of receptive field sizes at each
eccentricity improve the robustness of neural networks to small adversarial
perturbations. We verify that these two mechanisms do not suffer from gradient
obfuscation and study their contribution to adversarial robustness through
ablation studies.
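To illustrate the first mechanism, here is a minimal sketch of a non-uniform retinal sampling front end, approximated by blurring the image more heavily with distance from a fixation point at the image center; the ring boundaries and kernel sizes are assumptions for illustration, and the sketch does not reproduce the authors' multi-receptive-field model.

    # Hypothetical foveated preprocessing: resolution decreases with
    # eccentricity, approximated by eccentricity-dependent Gaussian blur.
    # Ring boundaries and kernel sizes are illustrative assumptions.
    import torch
    import torchvision.transforms.functional as TF

    def foveated_sampling(img, rings=((0.0, 0.25, 1), (0.25, 0.5, 5), (0.5, 1.1, 9))):
        # img: (3, H, W) tensor in [0, 1]; each ring is (inner, outer, kernel size).
        _, H, W = img.shape
        ys = torch.linspace(-1, 1, H).view(H, 1).expand(H, W)
        xs = torch.linspace(-1, 1, W).view(1, W).expand(H, W)
        ecc = torch.sqrt(xs ** 2 + ys ** 2) / (2 ** 0.5)   # eccentricity in [0, 1]
        out = torch.zeros_like(img)
        for inner, outer, k in rings:
            blurred = img if k == 1 else TF.gaussian_blur(img, kernel_size=k)
            mask = ((ecc >= inner) & (ecc < outer)).float()
            out = out + blurred * mask   # sharper near the center, blurrier outside
        return out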