Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems
The Kalman Filter (KF) is widely used in various domains to perform sequential
learning or variable estimation. In the context of autonomous vehicles, KF
constitutes the core component of many Advanced Driver Assistance Systems
(ADAS), such as Forward Collision Warning (FCW). It tracks the states
(distance, velocity, etc.) of relevant traffic objects based on sensor
measurements. The tracking output of KF is often fed into downstream logic to
produce alerts, which will then be used by human drivers to make driving
decisions in near-collision scenarios. In this paper, we study adversarial
attacks on KF as part of the more complex machine-human hybrid system of
Forward Collision Warning. Our attack goal is to negatively affect human
braking decisions by causing KF to output incorrect state estimations that lead
to false or delayed alerts. We accomplish this by sequentially manipulating
measurements fed into the KF, and propose a novel Model Predictive Control
(MPC) approach to compute the optimal manipulation. Via experiments conducted
in a simulated driving environment, we show that the attacker is able to
successfully change FCW alert signals through planned manipulation over
measurements prior to the desired target time. These results demonstrate that
our attack can stealthily mislead a distracted human driver and cause vehicle
collisions.
Comment: Accepted by AAAI2
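The measurement-manipulation idea above can be illustrated with a toy example. The sketch below is a minimal 1D constant-velocity Kalman filter with made-up noise covariances and a naive constant measurement bias; the paper's actual FCW pipeline and its MPC-optimized attack sequence are far more involved. It only shows the core mechanic: a spoofed distance measurement stream pulls the KF state estimate away from the true track, which is what would delay or suppress a downstream alert.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [distance, velocity]
H = np.array([[1.0, 0.0]])              # only distance is measured
Q = np.eye(2) * 1e-3                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)

def kf_track(measurements):
    """Run a standard predict/update Kalman filter over a distance series."""
    x = np.array([[50.0], [0.0]])       # initial guess: 50 m away, stationary
    P = np.eye(2)
    for z in measurements:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        y = np.array([[z]]) - H @ x                 # innovation
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x

true_dist = [50.0 - 0.5 * k for k in range(20)]     # lead car closing in
benign = kf_track(true_dist)
spoofed = kf_track([d + 3.0 for d in true_dist])    # attacker inflates distance

# The spoofed estimate reads farther away than the benign one, which could
# delay a forward-collision alert for a distracted driver.
print(float(benign[0, 0]), float(spoofed[0, 0]))
```

The paper's contribution is choosing the per-step manipulation optimally (and stealthily) via MPC rather than the constant offset used here.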
Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems
Mis- and disinformation are a substantial global threat to our security and
safety. To cope with the scale of online misinformation, researchers have been
working on automating fact-checking by retrieving and verifying against
relevant evidence. However, despite many advances, a comprehensive evaluation
of the possible attack vectors against such systems is still lacking.
Particularly, the automated fact-verification process might be vulnerable to
the exact disinformation campaigns it is trying to combat. In this work, we
assume an adversary that automatically tampers with the online evidence in
order to disrupt the fact-checking model via camouflaging the relevant evidence
or planting a misleading one. We first propose an exploratory taxonomy that
spans these two targets and the different threat model dimensions. Guided by
this, we design and propose several potential attack methods. We show that it
is possible to subtly modify claim-salient snippets in the evidence and
generate diverse and claim-aligned evidence. As a result, we severely degrade
fact-checking performance under many different permutations of the taxonomy's
dimensions. The attacks are also robust against post-hoc modifications of the
claim. Our analysis further hints at potential limitations in models' inference
when faced with contradicting evidence. We emphasize that these attacks can
have harmful implications on the inspectable and human-in-the-loop usage
scenarios of such models, and conclude by discussing challenges and directions
for future defenses.
Adversarial robustness of deep learning enabled industry 4.0 prognostics
The advent of Industry 4.0 in automation and data exchange is driving a constant evolution of smart manufacturing environments, including extensive use of the Internet of Things (IoT) and Deep Learning (DL). In particular, state-of-the-art Prognostics and Health Management (PHM) has delivered a competitive edge in Industry 4.0 by reducing maintenance cost and downtime and by increasing productivity through data-driven, informed decisions. These state-of-the-art PHM systems employ IoT device data and DL algorithms to predict Remaining Useful Life (RUL). Unfortunately, both IoT sensors and DL algorithms are prone to cyber-attacks; deep learning algorithms, for instance, are known to be susceptible to adversarial examples. Such adversarial attacks have been studied extensively in the computer vision domain, yet, surprisingly, their impact on the PHM domain remains unexplored. This leaves modern data-driven intelligent PHM systems exposed to significant risk in safety- and cost-critical applications. To address this, this thesis proposes a methodology for designing adversarially robust PHM systems by analyzing the effect of different types of adversarial attacks on several DL-enabled PHM models. More specifically, we craft adversarial attacks using the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM) and, using the proposed methodology, evaluate their impact on Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), Bi-directional LSTM, and Multi-Layer Perceptron (MLP) based PHM models. The results obtained on NASA's turbofan engine dataset and a well-known battery PHM dataset show that these systems are vulnerable to adversarial attacks, which can cause serious defects in RUL prediction. We also analyze the impact of adversarial training, applied through the proposed methodology, on the adversarial robustness of the PHM systems.
The obtained results show that adversarial training significantly improves the robustness of these PHM models.
Includes bibliographical references (pages 80-98)
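The FGSM attack named in the abstract has simple mechanics, sketched below on a toy linear RUL regressor with a made-up weight vector and target value; the thesis targets real LSTM/GRU/CNN models with gradients from backpropagation. The attack perturbs the input in the direction of the sign of the loss gradient, scaled by a budget epsilon, to push the model's prediction away from the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)            # hypothetical linear model weights
x = rng.normal(size=4)            # one sensor-window input (illustrative)
y_true = 25.0                     # true remaining useful life (arbitrary units)

def predict(x):
    return w @ x                  # linear stand-in for a DL-based RUL model

# Squared-error loss L = (w.x - y)^2, so dL/dx = 2 * (w.x - y) * w.
grad = 2.0 * (predict(x) - y_true) * w

eps = 0.1
x_adv = x + eps * np.sign(grad)   # FGSM: one signed-gradient step to raise loss

# The adversarial input's RUL prediction is farther from the true value.
print(abs(predict(x) - y_true), abs(predict(x_adv) - y_true))
```

BIM, the other attack evaluated, simply applies this signed-gradient step iteratively with a smaller step size, clipping the total perturbation to stay within the epsilon budget.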