Adversarial Attack on Radar-based Environment Perception Systems
Due to their robustness to degraded capturing conditions, radars are widely
used for environment perception, which is a critical task in applications like
autonomous vehicles. More specifically, Ultra-Wide Band (UWB) radars are
particularly efficient for short-range settings as they carry rich information
on the environment. Recent UWB-based systems rely on Machine Learning (ML) to
exploit the rich signature of these sensors. However, ML classifiers are
susceptible to adversarial examples, which are created from raw data to fool
the classifier such that it assigns the input to the wrong class. These attacks
represent a serious threat to system integrity, especially for safety-critical
applications. In this work, we present a new adversarial attack on UWB radars
in which an adversary injects adversarial radio noise in the wireless channel
to cause an obstacle recognition failure. First, based on signals collected in a
real-life environment, we show that conventional attacks fail to generate
robust noise under realistic conditions. To overcome these issues, we propose
a-RNA, i.e., Adversarial Radio Noise Attack. Specifically, a-RNA generates
adversarial noise that is effective without synchronization between the input
signal and the noise. Moreover, a-RNA-generated noise is, by design, robust
against pre-processing countermeasures such as filtering-based defenses.
Furthermore, in addition to meeting an undetectability objective by limiting the
noise magnitude budget, a-RNA remains effective in the presence of sophisticated
defenses in the spectral domain by introducing a frequency budget. We believe
this work should raise awareness about potentially critical implementations of
adversarial attacks on radar systems, which should be taken seriously.
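As a rough illustration of this kind of optimization (not the authors' implementation: the classifier interface, frequency band, and hyperparameters below are assumptions), a universal, shift-invariant radio noise constrained by magnitude and frequency budgets could be crafted along these lines in PyTorch:

    import torch
    import torch.nn as nn

    def craft_shift_invariant_noise(model, signals, labels,
                                    eps=0.05, band=(10, 60), steps=100, lr=1e-2):
        """Craft one noise vector that stays adversarial under random circular
        shifts (no synchronization) and whose spectrum is confined to a band of
        frequency bins (frequency budget). `model` maps (B, C, N) signals to logits."""
        n = signals.shape[-1]
        delta = torch.zeros(n, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            shift = int(torch.randint(0, n, (1,)))            # unknown arrival time
            logits = model(signals + torch.roll(delta, shifts=shift))
            loss = -loss_fn(logits, labels)                   # maximize classification loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                             # project onto both budgets
                spec = torch.fft.rfft(delta)
                mask = torch.zeros_like(spec)
                mask[band[0]:band[1]] = 1                     # keep only allowed frequency bins
                delta.copy_(torch.fft.irfft(spec * mask, n=n))
                delta.clamp_(-eps, eps)                       # magnitude (undetectability) budget
        return delta.detach()

The random circular shift stands in for the missing synchronization, and the projection step enforces both budgets after every update.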
SAAM: Stealthy Adversarial Attack on Monocular Depth Estimation
In this paper, we investigate the vulnerability of Monocular Depth Estimation
(MDE) to adversarial patches. We propose a novel Stealthy Adversarial Attack on
MDE (SAAM) that compromises MDE by either corrupting the estimated distance or
causing an object to seamlessly blend into its surroundings. Our experiments
demonstrate that the designed stealthy patch
successfully causes a DNN-based MDE to misestimate the depth of objects. In
fact, our proposed adversarial patch achieves a significant 60% depth error
with a 99% affected-region ratio. Importantly, despite its adversarial
nature, the patch maintains a naturalistic appearance, making it inconspicuous
to human observers. We believe that this work sheds light on the threat of
adversarial attacks in the context of MDE on edge devices. We hope it raises
awareness within the community about the potential real-life harm of such
attacks and encourages further research into developing more robust and
adaptive defense mechanisms.
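For a concrete picture, the following hedged PyTorch sketch shows the general shape of such a patch attack against a depth network (the model interface, patch placement, loss weights, and target depth are illustrative assumptions, not SAAM's exact recipe):

    import torch

    def total_variation(p):
        """Smoothness term that keeps the patch looking natural."""
        return (p[:, :, 1:, :] - p[:, :, :-1, :]).abs().mean() + \
               (p[:, :, :, 1:] - p[:, :, :, :-1]).abs().mean()

    def train_patch(depth_model, images, patch_size=64, target_depth=1.0,
                    steps=200, lr=0.01, tv_weight=0.1):
        """images: (B, 3, H, W) in [0, 1]; depth_model: (B, 3, H, W) -> (B, 1, H, W)."""
        patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            pasted = images.clone()
            # Paste the patch at a fixed corner (random placement / EOT omitted for brevity).
            pasted[:, :, :patch_size, :patch_size] = patch.clamp(0, 1)
            depth = depth_model(pasted)
            region = depth[:, :, :patch_size, :patch_size]
            # Pull the predicted depth of the patched region toward target_depth;
            # the TV term keeps the patch smooth and inconspicuous.
            loss = (region - target_depth).pow(2).mean() + tv_weight * total_variation(patch)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return patch.detach().clamp(0, 1)

SAAM additionally optimizes for stealthiness and robustness to placement; here only the total-variation term hints at the naturalistic-appearance constraint.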
Defensive Approximation: Securing CNNs using Approximate Computing
In the past few years, an increasing number of machine learning and deep
learning architectures, such as Convolutional Neural Networks (CNNs), have been
applied to solve a wide range of real-life problems. However, these
architectures are vulnerable to adversarial attacks. In this paper, we propose
for the first time to use hardware-supported approximate computing to improve
the robustness of machine learning classifiers. We show that our approximate
computing implementation achieves robustness across a wide range of attack
scenarios. Specifically, for black-box and grey-box attack scenarios, we show
that successful adversarial attacks against the exact classifier have poor
transferability to the approximate implementation. Surprisingly, the robustness
advantages also apply to white-box attacks where the attacker has access to the
internal implementation of the approximate classifier. We explain some of the
possible reasons for this robustness through analysis of the internal operation
of the approximate implementation. Furthermore, our approximate computing model
maintains the same level of classification accuracy, does not require
retraining, and reduces the resource utilization and energy consumption of the
CNN. We conducted extensive experiments on a set of strong adversarial attacks
and empirically show that the proposed implementation increases the robustness
of LeNet-5 and AlexNet CNNs by up to 99% and 87%, respectively, for strong
grey-box adversarial attacks along with up to 67% saving in energy consumption
due to the simpler nature of the approximate logic. We also show that a
white-box attack requires a remarkably higher noise budget to fool the
approximate classifier, causing an average 4 dB degradation in the PSNR of the
input image relative to the images that succeed in fooling the exact
classifier.
Comment: ACM International Conference on Architectural Support for Programming
Languages and Operating Systems (ASPLOS 2021)
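The defense itself relies on hardware-supported approximate multipliers; purely as a software illustration of the concept (and not the paper's implementation), the effect can be loosely emulated by injecting a small bounded error into a convolution layer's outputs at inference time:

    import torch
    import torch.nn as nn

    class ApproxConv2d(nn.Conv2d):
        """Drop-in nn.Conv2d whose outputs carry a small bounded multiplicative
        error, loosely mimicking an approximate hardware multiplier."""
        def __init__(self, *args, rel_error=0.02, **kwargs):
            super().__init__(*args, **kwargs)
            self.rel_error = rel_error

        def forward(self, x):
            out = super().forward(x)
            if self.training:                     # keep training exact
                return out
            # At inference, perturb every output value by up to +/- rel_error.
            noise = 1.0 + self.rel_error * (2.0 * torch.rand_like(out) - 1.0)
            return out * noise

Because the weights are untouched, such a layer can replace nn.Conv2d in a pretrained LeNet-5 or AlexNet without retraining; note that real approximate multipliers introduce deterministic, data-dependent error rather than the random error used in this stand-in.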
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks
Machine learning architectures, such as Convolutional Neural Networks (CNNs),
are vulnerable to adversarial attacks: inputs carefully crafted to force the
system output to a wrong label. Since machine learning is being deployed in
safety-critical and security-sensitive domains, such attacks may have
catastrophic security and safety consequences. In this paper, we propose for
the first time to use hardware-supported approximate computing to improve the
robustness of machine-learning classifiers. We show that successful adversarial
attacks against the exact classifier have poor transferability to the
approximate implementation. Surprisingly, the robustness advantages also apply
to white-box attacks where the attacker has unrestricted access to the
approximate classifier implementation: in this case, we show that substantially
higher levels of adversarial noise are needed to produce adversarial examples.
Furthermore, our approximate computing model maintains the same level of
classification accuracy, does not require retraining, and reduces the resource
utilization and energy consumption of the CNN. We conducted extensive
experiments on a set of strong adversarial attacks and empirically show that
the proposed implementation considerably increases the robustness of LeNet-5,
AlexNet, and VGG-11 CNNs, with up to 50% by-product savings in energy
consumption due to the simpler nature of the approximate logic.
Comment: arXiv admin note: substantial text overlap with arXiv:2006.0770
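The transferability claim can be checked with an experiment of the following shape (hedged sketch; FGSM stands in for the stronger attacks evaluated in the paper, and the epsilon value is an assumption):

    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps):
        """One-step attack crafted against `model` (a stand-in for stronger attacks)."""
        x = x.clone().requires_grad_(True)
        nn.functional.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    @torch.no_grad()
    def accuracy(model, x, y):
        return (model(x).argmax(dim=1) == y).float().mean().item()

    def transferability(exact_model, approx_model, x, y, eps=0.03):
        x_adv = fgsm(exact_model, x, y, eps)      # examples crafted on the exact model
        return {
            "exact_adv_acc": accuracy(exact_model, x_adv, y),
            "approx_adv_acc": accuracy(approx_model, x_adv, y),   # higher => poorer transfer
        }

A high approx_adv_acc alongside a low exact_adv_acc indicates poor transferability of the adversarial examples to the approximate implementation.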
AaN: Anti-adversarial Noise - A Novel Approach for Securing Machine Learning-based Wireless Communication Systems
Machine Learning (ML) is becoming a cornerstone enabling technology for the next
generation of wireless systems, mainly due to the high performance achieved by
these data-driven models on communication problems that are challenging to solve
with classical methods. However, ML models are known to be vulnerable to
adversarial attacks: maliciously crafted low-magnitude signals designed to
mislead them. The propagation nature of electromagnetic signals makes the
wireless domain even more critical than applications like computer vision, where
the attacker must be physically present in the victim’s immediate neighborhood
to be effective. While several works have shown the practicality of these
attacks in the wireless domain, the main countermeasure remains adversarial
training. However, this approach results in a considerable accuracy loss, which
calls the very utility of ML into question. In this paper, we address this
problem with a new approach tailored to wireless communication contexts.
Specifically, we propose a new defense that leverages the physical properties of
wireless propagation to harden ML-based wireless communication systems against
adversarial attacks. We propose Anti-adversarial Noise (AaN), where the Base
Station (BS) broadcasts a carefully crafted defensive signal designed to counter
the impact of any adversarial noise. We specifically focus on ML-based
modulation recognition; however, the proposed method is not specific to this
application and can be generalized to other ML-based communication use cases.
Our results show that our proposed defense can enhance models’ robustness by up
to 44% without losing utility.
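Purely as an illustration of the broadcast-defense idea (not the authors' procedure: the attacker model, signal shapes, and power budgets are assumptions), a defensive waveform could be optimized as follows:

    import torch
    import torch.nn as nn

    def craft_defensive_signal(model, signals, labels, atk_eps=0.05,
                               def_eps=0.10, steps=200, lr=1e-2):
        """signals: (B, 2, N) IQ samples; model: IQ samples -> modulation logits."""
        d = torch.zeros_like(signals[:1], requires_grad=True)   # one broadcast waveform
        opt = torch.optim.Adam([d], lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            # Approximate a bounded attacker with a single FGSM step (unaware of d).
            x = signals.clone().requires_grad_(True)
            nn.functional.cross_entropy(model(x), labels).backward()
            adv = (signals + atk_eps * x.grad.sign()).detach()
            # The defensive signal should restore correct labels on attacked inputs.
            loss = loss_fn(model(adv + d), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                d.clamp_(-def_eps, def_eps)                     # keep the defense low-power
        return d.detach()

In deployment, the learned waveform would be superimposed on the channel by the BS; this sketch only captures the optimization against a fixed, bounded attacker.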