Detection of Physical Adversarial Attacks on Traffic Signs for Autonomous Vehicles

Abstract

Current vision-based detection models within autonomous vehicles can be susceptible to changes in the physical environment, which can cause unexpected failures. Physical attacks on traffic signs, whether malicious or naturally occurring, can cause a traffic sign to be misidentified and thereby drastically alter the behaviour of the autonomous vehicle. We propose two novel deep learning architectures that serve as a detection and mitigation strategy for such environmental attacks. The first is an autoencoder that detects anomalies within a given traffic sign, and the second is a reconstruction model that generates a clean traffic sign free of anomalies. Because the anomaly detection model is trained only on normal images, any abnormality yields a high reconstruction error, indicating an abnormal traffic sign. The reconstruction model is a Generative Adversarial Network (GAN) consisting of two networks: a generator and a discriminator. These map the input traffic sign image into a meta representation as the output. By using the anomaly detection and reconstruction models as mitigation strategies, we show that the performance of downstream models in the pipeline, such as traffic sign recognition models, can be significantly improved. To evaluate our models, several types of attack scenarios were designed; on average, the anomaly detection model achieved 0.84 accuracy with a 0.82 F1-score on real datasets, while the reconstruction model improved the average F1-score of the traffic sign recognition model from 0.41 to 0.641.
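To make the anomaly-detection mechanism concrete, the sketch below illustrates the core idea described in the abstract: an autoencoder trained only on clean traffic signs reconstructs normal inputs well, so a high per-image reconstruction error flags an anomalous (possibly attacked) sign. This is a minimal illustration, not the paper's implementation; the framework (PyTorch), layer sizes, input resolution, and the threshold value are all assumptions.

```python
# Minimal sketch of reconstruction-error anomaly detection.
# Assumptions (not from the paper): PyTorch, 3x32x32 sign crops,
# MSE as the error metric, and an illustrative threshold.
import torch
import torch.nn as nn


class SignAutoencoder(nn.Module):
    """Small convolutional autoencoder for traffic-sign crops."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 8x8 -> 16x16
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_error(model, x):
    """Per-image mean squared error between input and reconstruction."""
    with torch.no_grad():
        x_hat = model(x)
    return ((x - x_hat) ** 2).flatten(1).mean(dim=1)


# Usage: after training on clean signs only, errors above a threshold
# (e.g. calibrated as a high percentile of errors on clean validation
# images) mark the sign as anomalous.
model = SignAutoencoder().eval()
batch = torch.rand(4, 3, 32, 32)   # stand-in for normalised sign crops
threshold = 0.05                   # hypothetical value, set from clean data
is_anomalous = reconstruction_error(model, batch) > threshold
print(is_anomalous)
```

The key design point, as the abstract notes, is that the detector never sees attacked signs during training: it learns only the manifold of normal signs, so anything off that manifold reconstructs poorly and is caught by the threshold.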