8,682 research outputs found
Robustness of Adversarial Attacks in Sound Event Classification
An adversarial attack is a method to generate perturbations to the input of a machine learning model in order to make the output of the model incorrect. The perturbed inputs are known as adversarial examples. In this paper, we investigate the robustness of adversarial examples to simple input transformations such as mp3 compression, resampling, white noise and reverb in the task of sound event classification. By performing this analysis, we aim to provide insights on strengths and weaknesses in current adversarial attack algorithms as well as provide a baseline for defenses against adversarial attacks. Our work shows that adversarial attacks are not robust to simple input transformations. White noise is the most consistent method to defend against adversarial attacks, with a success rate of 73.72% averaged across all models and attack algorithms.
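The white-noise defense described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of a 20 dB signal-to-noise ratio, and the sine tone standing in for an adversarial example are all assumptions for demonstration.

```python
import numpy as np

def white_noise_defense(audio, snr_db=20.0, seed=0):
    # Add white Gaussian noise at a target SNR (illustrative parameter;
    # the paper does not fix a specific noise level here) so that small
    # adversarial perturbations are disrupted before classification.
    rng = np.random.default_rng(seed)
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

# Hypothetical stand-in for an adversarial example: a 1-second 440 Hz tone at 16 kHz.
t = np.linspace(0, 1, 16000, endpoint=False)
adversarial_input = np.sin(2 * np.pi * 440 * t)
defended = white_noise_defense(adversarial_input, snr_db=20.0)
```

The defended waveform would then be fed to the classifier in place of the raw (possibly perturbed) input.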
A Study on the Transferability of Adversarial Attacks in Sound Event Classification
An adversarial attack is an algorithm that perturbs the input of a machine learning model in an intelligent way in order to change the output of the model. An important property of adversarial attacks is transferability. According to this property, it is possible to generate adversarial perturbations on one model and apply them to the input of a different model in order to fool its output. Our work focuses on studying the transferability of adversarial attacks in sound event classification. We are able to demonstrate differences in transferability properties from those observed in computer vision. We show that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and we show that techniques such as knowledge distillation do not increase the transferability of attacks.
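The z-score normalization mentioned above can be sketched as below. This is a generic illustration of the technique, not the paper's pipeline; the function name, epsilon guard, and toy feature matrix are assumptions.

```python
import numpy as np

def zscore_normalize(features, eps=1e-8):
    # Per-feature z-score normalization over the dataset axis:
    # subtract the mean and divide by the standard deviation,
    # yielding zero-mean, unit-variance features.
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / (std + eps)

# Toy dataset: 3 examples, 2 features (illustrative values).
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
Xn = zscore_normalize(X)
```

The abstract's finding is that applying (or omitting) such normalization does not change how well adversarial perturbations transfer between models.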
A Graphical Adversarial Risk Analysis Model for Oil and Gas Drilling Cybersecurity
Oil and gas drilling is based, increasingly, on operational technology, whose
cybersecurity is complicated by several challenges. We propose a graphical
model for cybersecurity risk assessment based on Adversarial Risk Analysis to
face those challenges. We also provide an example of the model in the context
of an offshore drilling rig. The proposed model provides a more formal and
comprehensive analysis of risks, still using the standard business language
based on decisions, risks, and value.
Comment: In Proceedings GraMSec 2014, arXiv:1404.163