Adversarial Driving: Attacking End-to-End Autonomous Driving Systems
As research in deep neural networks advances, deep convolutional networks
have become feasible for automated driving, and there is an emerging trend of
employing end-to-end models for driving tasks. However, previous research has
shown that deep neural networks are vulnerable to adversarial attacks in
classification tasks, whereas for regression tasks such as autonomous driving
the effect of these attacks remains largely unexplored. In
this research, we devise two white-box targeted attacks against end-to-end
autonomous driving systems. The driving model takes an image as input and
outputs the steering angle. Our attacks can manipulate the behaviour of the
autonomous driving system only by perturbing the input image. Both attacks can
be initiated in real-time on CPUs without employing GPUs. This demo aims to
raise concerns over applications of end-to-end models in safety-critical
systems.

Comment: 3 pages, 2 figures
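As an illustration of the kind of attack the abstract describes, below is a minimal sketch of a one-step, white-box targeted attack on a steering-angle regressor, in the spirit of FGSM. The model interface, target angle, and step size `epsilon` are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch (assumptions): a single-step, FGSM-style targeted attack on a
# regression model mapping an image to a steering angle. `model`,
# `target_angle`, and `epsilon` are hypothetical; the paper's attacks may differ.
import torch
import torch.nn.functional as F

def targeted_steering_attack(model, image, target_angle, epsilon=0.01):
    """Perturb `image` so the predicted steering angle moves toward `target_angle`."""
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)                           # predicted steering angle
    target = torch.full_like(pred, target_angle)  # attacker's desired angle
    loss = F.mse_loss(pred, target)               # distance to the attacker's goal
    loss.backward()
    # Step *against* the gradient to pull the prediction toward the target.
    adv_image = image - epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

A single gradient computation per frame keeps the perturbation cheap, which is consistent with the abstract's claim that the attacks can run in real time on CPUs.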
Targeted Adversarial Attacks on Wind Power Forecasts
In recent years, researchers proposed a variety of deep learning models for
wind power forecasting. These models predict the wind power generation of wind
farms or entire regions more accurately than traditional machine learning
algorithms or physical models. However, recent research has shown that deep
learning models can often be manipulated by adversarial attacks. Since wind
power forecasts are essential for the stability of modern power systems, it is
important to protect them from this threat. In this work, we investigate the
vulnerability of two different forecasting models to targeted, semi-targeted,
and untargeted adversarial attacks. We consider a Long Short-Term Memory (LSTM)
network for predicting the power generation of a wind farm and a Convolutional
Neural Network (CNN) for forecasting the wind power generation throughout
Germany. Moreover, we propose the Total Adversarial Robustness Score (TARS), an
evaluation metric for quantifying the robustness of regression models to
targeted and semi-targeted adversarial attacks. It assesses the impact of
attacks on the model's performance, as well as the extent to which the
attacker's goal was achieved, by assigning a score between 0 (very vulnerable)
and 1 (very robust). In our experiments, the LSTM forecasting model was fairly
robust and achieved a TARS value of over 0.81 for all adversarial attacks
investigated. The CNN forecasting model only achieved TARS values below 0.06
when trained ordinarily, and was thus very vulnerable. Yet, its robustness
could be significantly improved by adversarial training, which always resulted
in a TARS value above 0.46.

Comment: 20 pages, including appendix, 12 figures
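The abstract does not give the TARS formula; the sketch below only illustrates the structure it describes, combining a performance-degradation term with an attacker-goal-achievement term into a score in [0, 1]. The function name, inputs, and the way the two terms are combined are all assumptions; consult the paper for the actual definition.

```python
# Illustrative sketch only: the paper defines TARS precisely; this guesses at
# the structure the abstract describes (attack impact on model performance,
# combined with how far the attacker's goal was achieved, scored in [0, 1]).
def tars_sketch(clean_error: float, adv_error: float, goal_achievement: float) -> float:
    """
    clean_error:      forecast error without attack (e.g. RMSE)
    adv_error:        forecast error under the adversarial attack
    goal_achievement: fraction in [0, 1] of the attacker's target that was reached
    Returns a score in [0, 1]: 1 = very robust, 0 = very vulnerable.
    """
    # Performance term: shrinks as the attack degrades forecast accuracy.
    degradation = max(adv_error - clean_error, 0.0)
    performance = 1.0 / (1.0 + degradation)
    # Goal term: shrinks as the attacker gets closer to its target.
    goal_resistance = 1.0 - min(max(goal_achievement, 0.0), 1.0)
    return performance * goal_resistance
```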