A Comprehensive Study and Comparison of the Robustness of 3D Object Detectors Against Adversarial Attacks
Recent years have witnessed significant advancements in deep learning-based
3D object detection, leading to its widespread adoption in numerous
applications. As 3D object detectors become increasingly crucial for
security-critical tasks, it is imperative to understand their robustness
against adversarial attacks. This paper presents the first comprehensive
evaluation and analysis of the robustness of LiDAR-based 3D detectors under
adversarial attacks. Specifically, we extend three distinct adversarial attacks
to the 3D object detection task, benchmarking the robustness of
state-of-the-art LiDAR-based 3D object detectors against attacks on the KITTI
and Waymo datasets. We further analyze the relationship between robustness and
detector properties. Additionally, we explore the transferability of
cross-model, cross-task, and cross-data attacks. Thorough experiments on
defensive strategies for 3D detectors are conducted, demonstrating that simple
transformations like flipping provide little help in improving robustness when
the applied transformation strategy is exposed to attackers. Finally, we
propose balanced adversarial focal training, based on conventional adversarial
training, to strike a balance between accuracy and robustness. Our findings
will facilitate investigations into understanding and defending against
adversarial attacks on LiDAR-based 3D object detectors, thus advancing the
field. The source code is publicly available at
\url{https://github.com/Eaphan/Robust3DOD}.
Comment: 30 pages, 14 figures
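The proposed balanced adversarial focal training is defined in the paper itself; as a rough illustration of the general idea only, the sketch below combines a conventional PGD-style adversarial-training step with a focal-style weight on the adversarial loss term. The classifier-style model interface, hyperparameters, and the exact weighting are assumptions, not the authors' formulation.

    # Hedged sketch: conventional adversarial training with a focal-style
    # balance between clean and adversarial terms. Not the paper's exact method.
    import torch
    import torch.nn.functional as F

    def pgd_perturb(model, x, labels, eps=0.05, alpha=0.01, steps=5):
        # Standard PGD: follow the sign of the input gradient and project back
        # into an L-inf ball of radius eps around the clean input.
        adv = x.clone().detach().requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(adv), labels)
            grad, = torch.autograd.grad(loss, adv)
            adv = adv.detach() + alpha * grad.sign()
            adv = x + (adv - x).clamp(-eps, eps)
            adv = adv.detach().requires_grad_(True)
        return adv.detach()

    def balanced_focal_adv_step(model, optimizer, x, labels, gamma=2.0, lam=0.5):
        # One training step mixing a clean loss with a focal-weighted adversarial
        # loss; the weight emphasises examples still misclassified under attack.
        adv = pgd_perturb(model, x, labels)
        ce_clean = F.cross_entropy(model(x), labels)
        logits_adv = model(adv)
        ce_adv = F.cross_entropy(logits_adv, labels, reduction="none")
        p_adv = torch.softmax(logits_adv, dim=1).gather(1, labels[:, None]).squeeze(1)
        focal_w = (1.0 - p_adv) ** gamma          # focal-style modulation factor
        loss = (1.0 - lam) * ce_clean + lam * (focal_w * ce_adv).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()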
Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models
Causal Neural Network models have shown high levels of robustness to
adversarial attacks as well as an increased capacity for generalisation tasks
such as few-shot learning and rare-context classification compared to
traditional Neural Networks. This robustness is argued to stem from the
disentanglement of causal and confounder input signals. However, no
quantitative study has yet measured the level of disentanglement achieved by
these types of causal models or assessed how this relates to their adversarial
robustness.
Existing causal disentanglement metrics are not applicable to deterministic
models trained on real-world datasets. We, therefore, utilise metrics of
content/style disentanglement from the field of Computer Vision to measure
different aspects of the causal disentanglement for four state-of-the-art
causal Neural Network models. By re-implementing these models with a common
ResNet18 architecture, we are able to fairly measure their adversarial
robustness on three standard image classification benchmarking datasets under
seven common white-box attacks. We find a strong association (r=0.820, p=0.001)
between the degree to which models decorrelate causal and confounder signals
and their adversarial robustness. Additionally, we find a moderate negative
association between the pixel-level information content of the confounder
signal and adversarial robustness (r=-0.597, p=0.040).
Comment: 12 pages, 3 figures
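For readers unfamiliar with the reported statistics, the association between disentanglement and robustness is a per-model Pearson correlation; a minimal sketch with made-up per-model scores (the real values come from the paper's experiments) looks like this:

    # Illustrative only: hypothetical decorrelation scores and robust accuracies.
    from scipy.stats import pearsonr

    decorrelation_score = [0.61, 0.72, 0.55, 0.80, 0.68, 0.77, 0.49, 0.83, 0.58, 0.74, 0.66, 0.71]
    robust_accuracy     = [0.34, 0.41, 0.30, 0.47, 0.38, 0.44, 0.27, 0.49, 0.32, 0.42, 0.37, 0.40]

    r, p = pearsonr(decorrelation_score, robust_accuracy)
    print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # a strong positive association would mirror r = 0.820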
Benchmarking Adversarial Robustness of Compressed Deep Learning Models
The increasing size of Deep Neural Networks (DNNs) poses a pressing need for
model compression, particularly when employed on resource constrained devices.
Concurrently, the susceptibility of DNNs to adversarial attacks presents
another significant hurdle. Despite substantial research on both model
compression and adversarial robustness, their joint examination remains
underexplored. Our study bridges this gap, seeking to understand the effect of
adversarial inputs crafted for base models on their pruned versions. To examine
this relationship, we have developed a comprehensive benchmark across diverse
adversarial attacks and popular DNN models. We uniquely focus on models not
previously exposed to adversarial training and apply pruning schemes optimized
for accuracy and performance. Our findings reveal that while the benefits of
pruning, namely enhanced generalizability, compression, and faster inference
times, are preserved, adversarial robustness remains comparable to that of the
base model. This suggests that model compression, while offering its unique
advantages, does not undermine adversarial robustness.
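The core protocol, adversarial inputs crafted on the base model and then fed to its pruned counterpart, can be sketched roughly as below; the FGSM attack, magnitude pruning, and model handling are illustrative assumptions rather than the benchmark's exact configuration.

    # Hedged sketch: transfer of attacks from a base model to a pruned copy.
    import copy
    import torch
    import torch.nn.functional as F
    import torch.nn.utils.prune as prune

    def fgsm(model, x, y, eps=8 / 255):
        # Single-step FGSM attack crafted against `model`.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    def compare_base_vs_pruned(base_model, x, y, amount=0.5):
        pruned_model = copy.deepcopy(base_model)
        # Magnitude-prune conv/linear weights of the copy only.
        for module in pruned_model.modules():
            if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
                prune.l1_unstructured(module, name="weight", amount=amount)
        x_adv = fgsm(base_model, x, y)            # attack crafted on the base model only
        return {
            "base_clean": accuracy(base_model, x, y),
            "base_adv": accuracy(base_model, x_adv, y),
            "pruned_clean": accuracy(pruned_model, x, y),
            "pruned_adv": accuracy(pruned_model, x_adv, y),  # transferred attack
        }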
Benchmarking the Physical-world Adversarial Robustness of Vehicle Detection
Adversarial attacks in the physical world can harm the robustness of
detection models. Evaluating the robustness of detection models in the physical
world can be challenging due to the time-consuming and labor-intensive nature
of many experiments. Thus, virtual simulation experiments can provide a
solution to this challenge. However, there is no unified detection benchmark
based on a virtual simulation environment. To address this gap, we proposed
an instant-level data generation pipeline based on the CARLA simulator. Using
this pipeline, we generated the DCI dataset and conducted extensive experiments
on three detection models and three physical adversarial attacks. The dataset
covers 7 continuous and 1 discrete scenes, with over 40 angles, 20 distances,
and 20,000 positions. The results indicate that Yolo v6 had the strongest
resistance, with only a 6.59% average AP drop, and ASA was the most effective
attack algorithm with a 14.51% average AP reduction, twice that of other
algorithms. Static scenes had higher recognition AP, and results under
different weather conditions were similar. Improvements to adversarial attack
algorithms may be approaching their limit.
Comment: CVPR 2023 workshop
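The quoted numbers are averages of per-scene AP drops under attack; a small sketch of that arithmetic with placeholder AP values is shown below (whether the drop is reported in absolute percentage points or relative terms is an assumption here).

    # Illustrative arithmetic for the "average AP drop" metric; values are made up.
    clean_ap    = [0.82, 0.79, 0.85, 0.81, 0.78, 0.83, 0.80, 0.84]   # one value per scene
    attacked_ap = [0.76, 0.71, 0.80, 0.74, 0.72, 0.77, 0.73, 0.78]

    drops = [100.0 * (c - a) for c, a in zip(clean_ap, attacked_ap)]  # drop in percentage points
    avg_drop = sum(drops) / len(drops)
    print(f"average AP drop: {avg_drop:.2f} percentage points")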
OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift
Existing works have made great progress in improving adversarial robustness,
but typically test their method only on data from the same distribution as the
training data, i.e. in-distribution (ID) testing. As a result, it is unclear
how such robustness generalizes under input distribution shifts, i.e.
out-of-distribution (OOD) testing. This is a concerning omission as such
distribution shifts are unavoidable when methods are deployed in the wild. To
address this issue we propose a benchmark named OODRobustBench to
comprehensively assess OOD adversarial robustness using 23 dataset-wise shifts
(i.e. naturalistic shifts in input distribution) and 6 threat-wise shifts
(i.e., unforeseen adversarial threat models). OODRobustBench is used to assess
706 robust models using 60.7K adversarial evaluations. This large-scale
analysis shows that: 1) adversarial robustness suffers from a severe OOD
generalization issue; 2) ID robustness correlates strongly, and roughly
linearly, with OOD robustness under many distribution shifts. The latter enables
the prediction of OOD robustness from ID robustness. Based on this, we are able
to predict the upper limit of OOD robustness for existing robust training
schemes. The results suggest that achieving OOD robustness requires designing
novel methods beyond the conventional ones. Lastly, we discover that extra data,
data augmentation, advanced model architectures and particular regularization
approaches can improve OOD robustness. Notably, the discovered training
schemes, compared to the baseline, exhibit dramatically higher robustness under
threat shift while keeping high ID robustness, demonstrating new promising
solutions for robustness against both multi-attack and unforeseen attacks.
Comment: in submission
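The reported linear ID-to-OOD relationship is what makes prediction possible; a minimal sketch of such a fit, using placeholder accuracies rather than the benchmark's 706-model data, might look like this:

    # Hedged sketch: fit a line to (ID robust accuracy, OOD robust accuracy)
    # pairs and extrapolate; the numbers below are placeholders.
    import numpy as np

    id_robust  = np.array([0.45, 0.48, 0.50, 0.53, 0.55, 0.58, 0.60, 0.62])
    ood_robust = np.array([0.30, 0.32, 0.33, 0.35, 0.37, 0.39, 0.40, 0.42])

    slope, intercept = np.polyfit(id_robust, ood_robust, deg=1)
    predicted = slope * 0.70 + intercept   # hypothetical model with 70% ID robust accuracy
    print(f"OOD ~= {slope:.2f} * ID + {intercept:.2f}; prediction at ID=0.70: {predicted:.2f}")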