100 research outputs found
Adversarial Zoom Lens: A Novel Physical-World Attack to DNNs
Although deep neural networks (DNNs) are known to be fragile, no one has
studied the effects of zooming-in and zooming-out of images in the physical
world on DNN performance. In this paper, we demonstrate a novel physical
adversarial attack technique called Adversarial Zoom Lens (AdvZL), which uses a
zoom lens to zoom in and out of pictures of the physical world, fooling DNNs
without changing the characteristics of the target object. To date, the proposed method is the only adversarial attack technique that attacks DNNs without adding any physical adversarial perturbation. In the digital environment, we construct a dataset based on AdvZL to verify the adversarial effect of equal-scale enlarged images on DNNs. In the physical environment, we manipulate the zoom lens to zoom in
and out of the target object, and generate adversarial samples. The
experimental results demonstrate the effectiveness of AdvZL in both digital and
physical environments. We further analyze the adversarial effect of the proposed dataset on improved DNNs. We also provide a guideline for defending against AdvZL by means of adversarial training. Finally, we discuss the threat the proposed approach poses to future autonomous driving and outline variant attack ideas similar to AdvZL.
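As an illustration of the digital setting described above, the following minimal Python sketch (not the authors' code; the model choice, file name, and zoom factors are assumptions) simulates an equal-scale enlargement by center-cropping and rescaling an image, then checks whether a pretrained ImageNet classifier's top-1 prediction flips.

import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained classifier and standard ImageNet preprocessing (assumed stand-in for the DNNs under test).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def top1(img):
    with torch.no_grad():
        return model(preprocess(img).unsqueeze(0)).argmax(1).item()

img = Image.open("target.jpg").convert("RGB")   # hypothetical input image
baseline = top1(img)
for zoom in (1.2, 1.5, 2.0, 3.0):               # assumed enlargement factors
    w, h = img.size
    cw, ch = int(w / zoom), int(h / zoom)
    left, top = (w - cw) // 2, (h - ch) // 2
    zoomed = img.crop((left, top, left + cw, top + ch)).resize((w, h))
    if top1(zoomed) != baseline:
        print(f"zoom x{zoom}: top-1 label changed, adversarial sample found")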
Impact of Colour Variation on Robustness of Deep Neural Networks
Deep neural networks (DNNs) have shown state-of-the-art performance for computer vision applications like image classification, segmentation and object detection. However, recent advances have also shown their vulnerability to manual digital perturbations in the input data, namely adversarial attacks. The
accuracy of the networks is significantly affected by the data distribution of
their training dataset. Distortions or perturbations of the color space of input images generate out-of-distribution data, which networks are more likely to misclassify. In this work, we propose a color-variation dataset built by distorting the RGB colors of a subset of ImageNet with 27 different combinations. The aim of our work is to study the impact of color variation on
the performance of DNNs. We perform experiments on several state-of-the-art DNN
architectures on the proposed dataset, and the results show a significant
correlation between color variation and loss of accuracy. Furthermore, based on
the ResNet50 architecture, we report experiments on the performance of recently proposed robust training techniques and strategies, such as Augmix, revisit, and free normalizer, on our proposed dataset. Experimental results
indicate that these robust training techniques can improve the robustness of
deep networks to color variation.
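A minimal sketch of how such a color-variation set could be generated is given below. It is an assumption about the protocol rather than the paper's exact recipe: three offsets per RGB channel give 3^3 = 27 combinations, and the offset values and file names are hypothetical.

import itertools
import numpy as np
from PIL import Image

OFFSETS = (-64, 0, 64)   # assumed per-channel shifts; the paper's exact values may differ

def color_variants(path):
    """Yield 27 color-shifted copies of one image, one per RGB offset combination."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    for dr, dg, db in itertools.product(OFFSETS, repeat=3):
        shifted = np.clip(rgb + np.array([dr, dg, db]), 0, 255).astype(np.uint8)
        yield (dr, dg, db), Image.fromarray(shifted)

for (dr, dg, db), img in color_variants("n01440764_tench.JPEG"):   # hypothetical ImageNet sample
    img.save(f"variant_r{dr}_g{dg}_b{db}.png")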
Adversarial Color Projection: A Projector-Based Physical Attack to DNNs
Recent advances have shown that deep neural networks (DNNs) are susceptible
to adversarial perturbations. Therefore, it is necessary to evaluate the
robustness of advanced DNNs using adversarial attacks. However, traditional
physical attacks that use stickers as perturbations are more vulnerable than
recent light-based physical attacks. In this work, we propose a projector-based
physical attack called adversarial color projection (AdvCP), which performs an
adversarial attack by manipulating the physical parameters of the projected
light. Experiments show the effectiveness of our method in both digital and
physical environments. The experimental results further demonstrate that the proposed method has excellent attack transferability, which endows AdvCP with effective black-box attack capability. We discuss the threats AdvCP poses to future vision-based systems and applications and propose some ideas for light-based physical attacks.
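A minimal digital simulation of such an attack might look like the sketch below. It assumes a simple linear blend as the model of the projected light and a hypothetical predict callable; it is not the authors' optimization procedure, and the color/intensity grid is illustrative.

import itertools
import numpy as np
from PIL import Image

def project_color(img, color, intensity):
    """Blend a uniform colored 'projection' over the image (simple linear light model)."""
    light = np.array(color, dtype=np.float32)
    blended = (1.0 - intensity) * img + intensity * light
    return np.clip(blended, 0, 255).astype(np.uint8)

def advcp_search(path, predict):
    """Grid-search projection color and intensity until the classifier's label flips.

    predict: hypothetical callable mapping an HxWx3 uint8 array to a class label."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    baseline = predict(img.astype(np.uint8))
    colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 0, 255), (0, 255, 255)]
    for color, intensity in itertools.product(colors, (0.2, 0.4, 0.6)):
        candidate = project_color(img, color, intensity)
        if predict(candidate) != baseline:
            return color, intensity    # physical parameters of a successful digital attack
    return None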
Fooling Thermal Infrared Detectors in Physical World
Infrared imaging systems have a vast array of potential applications in
pedestrian detection and autonomous driving, and their safety performance is of
great concern. However, few studies have explored the safety of infrared
imaging systems in real-world settings. Previous research has used physical
perturbations such as small bulbs and thermal "QR codes" to attack infrared
imaging detectors, but such methods are highly visible and lack stealthiness.
Other researchers have used hot and cold blocks to deceive infrared imaging
detectors, but this method is limited in its ability to execute attacks from
various angles. To address these shortcomings, we propose a novel physical
attack called adversarial infrared blocks (AdvIB). By optimizing the physical
parameters of the adversarial infrared blocks, this method can execute a
stealthy black-box attack on thermal imaging systems from various angles. We
evaluate the proposed method based on its effectiveness, stealthiness, and
robustness. Our physical tests show that the proposed method achieves a success
rate of over 80% under most distance and angle conditions, validating its
effectiveness. For stealthiness, our method attaches the adversarial infrared block to the inside of clothing, keeping the perturbation out of sight.
Additionally, we test the proposed method on advanced detectors, and
experimental results demonstrate an average attack success rate of 51.2%,
proving its robustness. Overall, our proposed AdvIB method offers a promising
avenue for conducting stealthy, effective and robust black-box attacks on
thermal imaging systems, with potential implications for real-world safety and security applications.
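The sketch below illustrates one plausible black-box formulation in simulation: random search over the position and size of simulated "infrared blocks" pasted onto a thermal image, keeping the candidate that most lowers a detector's confidence. The detector_conf scoring function, block counts, and size ranges are assumptions; the actual AdvIB optimization and physical materials are not reproduced here.

import random
import numpy as np

def paste_blocks(thermal, params, temp_value=255.0):
    """Paste rectangular 'infrared blocks' onto a 2-D thermal image."""
    out = thermal.copy()
    for x, y, w, h in params:                 # each block: top-left corner and size
        out[y:y + h, x:x + w] = temp_value    # hot block; a cold block would use a low value
    return out

def black_box_attack(thermal, detector_conf, n_blocks=3, iters=500):
    """Random search minimizing the detector's confidence (hypothetical detector_conf callable)."""
    H, W = thermal.shape
    best_params, best_conf = None, detector_conf(thermal)
    for _ in range(iters):
        params = [(random.randrange(W - 20), random.randrange(H - 20),
                   random.randint(5, 20), random.randint(5, 20)) for _ in range(n_blocks)]
        conf = detector_conf(paste_blocks(thermal, params))
        if conf < best_conf:
            best_params, best_conf = params, conf
    return best_params, best_conf            # lowest detection confidence found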
Battle Against Fluctuating Quantum Noise: Compression-Aided Framework to Enable Robust Quantum Neural Network
Recently, we have been witnessing the scale-up of superconducting quantum
computers; however, the noise of quantum bits (qubits) remains an obstacle that keeps real-world applications from leveraging the power of quantum computing. Although error mitigation and error-aware designs exist for quantum applications, the inherent fluctuation of noise (a.k.a. instability) can easily collapse the performance of error-aware designs. Worse still, users may not even be aware of the performance degradation caused by the change in noise. To address both
issues, in this paper we use Quantum Neural Network (QNN) as a vehicle to
present a novel compression-aided framework, namely QuCAD, which adapts a trained QNN to fluctuating quantum noise. In addition, using historical calibration (noise) data, our framework builds a model repository offline, which significantly reduces the optimization time of the online adaptation process. Emulation results on an earthquake detection dataset show that QuCAD achieves an average accuracy gain of 14.91% over 146 days compared with a noise-aware training approach. When executed on a 7-qubit IBM quantum processor, IBM-Jakarta, QuCAD consistently achieves a 12.52% accuracy gain on earthquake detection.
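A toy sketch of the repository idea follows; the interface is assumed, not the QuCAD implementation. Each stored model is keyed by the calibration data it was adapted on, and at run time the entry closest to the current noise readings is selected. The calibration fingerprint (a vector of per-qubit error rates) is an assumption.

import numpy as np

class ModelRepository:
    def __init__(self):
        self.entries = []                     # list of (calibration_vector, model)

    def add(self, calibration, model):
        """Store a model variant together with the calibration data it was tuned on."""
        self.entries.append((np.asarray(calibration, dtype=float), model))

    def select(self, current_calibration):
        """Return the stored model whose calibration fingerprint is closest to the current noise."""
        cur = np.asarray(current_calibration, dtype=float)
        dists = [np.linalg.norm(cal - cur) for cal, _ in self.entries]
        return self.entries[int(np.argmin(dists))][1]

# Usage sketch: repo.add(day_k_error_rates, qnn_adapted_on_day_k); repo.select(todays_error_rates)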
QuMoS: A Framework for Preserving Security of Quantum Machine Learning Model
Security has always been a critical issue in machine learning (ML)
applications. Due to the high cost of model training -- such as collecting
relevant samples, labeling data, and consuming computing power --
the model-stealing attack is one of the most fundamental yet vitally important issues. When it comes to quantum computing, such a quantum machine learning (QML) model-stealing attack also exists and is even more severe, because traditional encryption methods, such as homomorphic encryption, can hardly be
directly applied to quantum computation. On the other hand, due to the limited
quantum computing resources, the monetary cost of training a QML model can be even higher than that of classical models in the near term. Therefore, a well-tuned QML
model developed by a third-party company can be delegated to a quantum cloud
provider as a service to be used by ordinary users. In this case, the QML model
will likely be leaked if the cloud provider is under attack. To address such a
problem, we propose a novel framework, namely QuMoS, to preserve model
security. We propose to divide the complete QML model into multiple parts and
distribute them to multiple physically isolated quantum cloud providers for
execution. As such, even if the adversary in a single provider can obtain a
partial model, it does not have sufficient information to retrieve the complete
model. Although promising, we observed that an arbitrary model design under
distributed settings cannot provide model security. We further developed a
reinforcement learning-based security engine, which can automatically optimize
the model design under the distributed setting, such that a good trade-off
between model performance and security can be made. Experimental results on
four datasets show that the model design proposed by QuMoS can achieve competitive performance while providing higher security than the baselines.
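As a rough illustration of the distribution idea (assumed interfaces; QuMoS itself searches the split with a reinforcement-learning engine rather than using a fixed rule), a model expressed as an ordered list of circuit blocks can be cut into contiguous slices, each handed to a different provider and executed in sequence:

from typing import Callable, List, Sequence

def partition(blocks: Sequence, n_providers: int) -> List[list]:
    """Contiguous split: each provider receives one consecutive slice of the model."""
    size = -(-len(blocks) // n_providers)          # ceiling division
    return [list(blocks[i:i + size]) for i in range(0, len(blocks), size)]

def run_distributed(state, blocks: Sequence, n_providers: int, run_on_provider: Callable):
    """Chain execution across providers so no single provider ever sees the full model.

    run_on_provider(provider_id, sub_blocks, state) is a hypothetical interface that
    runs the given sub-circuit on one quantum cloud provider and returns the new state."""
    for provider_id, sub_blocks in enumerate(partition(blocks, n_providers)):
        state = run_on_provider(provider_id, sub_blocks, state)
    return state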