Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Transferability captures the ability of an attack against a machine-learning
model to be effective against a different, potentially unknown, model.
Empirical evidence for transferability has been shown in previous work, but the
underlying reasons why an attack transfers or not are not yet well understood.
In this paper, we present a comprehensive analysis aimed at investigating the
transferability of both test-time evasion and training-time poisoning attacks.
We provide a unifying optimization framework for evasion and poisoning attacks,
and a formal definition of transferability of such attacks. We highlight two
main factors contributing to attack transferability: the intrinsic adversarial
vulnerability of the target model, and the complexity of the surrogate model
used to optimize the attack. Based on these insights, we define three metrics
that impact an attack's transferability. Interestingly, our results derived
from theoretical analysis hold for both evasion and poisoning attacks, and are
confirmed experimentally using a wide range of linear and non-linear
classifiers and datasets.
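The surrogate-based view of evasion sketched in this abstract can be illustrated with a toy example: craft a perturbation against a low-complexity surrogate and check whether it also fools a separately trained target. Everything below (the ridge-trained linear models, the regularization values, the single L-infinity step) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and two linear models: a heavily regularized (low-complexity)
# surrogate and a lightly regularized target. All values are illustrative.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

def fit_linear(X, y, lam):
    # Ridge regression as a stand-in classifier: w = (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_surrogate = fit_linear(X, y, lam=10.0)  # attacker-trained surrogate
w_target = fit_linear(X, y, lam=0.1)      # the unknown target model

def evade(x, w, eps):
    # One-step L-infinity evasion: push x against the decision gradient of
    # the surrogate, shrinking its signed margin by eps * ||w||_1.
    return x - eps * np.sign(w) * np.sign(x @ w)

x = X[0]
x_adv = evade(x, w_surrogate, eps=0.5)
# The attack is crafted on the surrogate alone; transfer succeeds when the
# target's prediction flips as well.
print(np.sign(x_adv @ w_surrogate), np.sign(x_adv @ w_target))
```

The heavier regularization on the surrogate mirrors the abstract's observation that surrogate complexity is one of the factors driving transferability.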
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems
The proliferation and application of machine learning based Intrusion
Detection Systems (IDS) have allowed for more flexibility and efficiency in the
automated detection of cyber attacks in Industrial Control Systems (ICS).
However, the introduction of such IDSs has also created an additional attack
vector: the learning models themselves may be subjected to cyber attacks, a
threat known as Adversarial Machine Learning (AML). Such attacks may have
severe consequences in ICS environments, as adversaries could potentially
bypass the IDS.
This could lead to delayed attack detection which may result in infrastructure
damages, financial loss, and even loss of life. This paper explores how
adversarial learning can be used to target supervised models by generating
adversarial samples using the Jacobian-based Saliency Map attack and exploring
classification behaviours. The analysis also explores how such samples can be
used to strengthen the robustness of supervised models through adversarial
training. An authentic power system dataset was used to support the experiments
presented herein. Overall, the classification performance of two widely used
classifiers, Random Forest and J48, decreased by 16 and 20 percentage points
when adversarial samples were present. Their performances improved following
adversarial training, demonstrating their robustness towards such attacks.Comment: 9 pages. 7 figures. 7 tables. 46 references. Submitted to a special
issue Journal of Information Security and Applications, Machine Learning
Techniques for Cyber Security: Challenges and Future Trends, Elsevie
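The saliency rule behind the Jacobian-based Saliency Map attack mentioned above can be sketched for a toy linear model. The weight matrix, class count, and unit-step perturbation here are illustrative assumptions; the paper applies the attack to IDS classifiers trained on real power-system data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: logits = W @ x. Weights are random placeholders.
n_classes, n_features = 3, 8
W = rng.normal(size=(n_classes, n_features))

def saliency_map(target):
    # For a linear model, the Jacobian of the logits w.r.t. the input is W.
    dt = W[target]                   # effect of each feature on the target logit
    do = W.sum(axis=0) - W[target]   # combined effect on all other logits
    # JSMA-style rule: a feature is salient only if increasing it raises the
    # target logit while lowering the rest; its score is dt * |do|.
    return np.where((dt > 0) & (do < 0), dt * np.abs(do), 0.0)

x = rng.normal(size=n_features)
S = saliency_map(target=0)
x_adv = x.copy()
if S.max() > 0:
    x_adv[int(np.argmax(S))] += 1.0  # perturb the most salient feature
```

Iterating this feature-at-a-time perturbation until the predicted class changes is what yields the small, targeted modifications that degraded the Random Forest and J48 classifiers in the experiments.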
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition
Neural models enjoy widespread use across a variety of tasks and have grown
to become crucial components of many industrial systems. Despite their
effectiveness and extensive popularity, they are not without their exploitable
flaws. Initially applied to computer vision systems, the generation of
adversarial examples is a process in which seemingly imperceptible
perturbations are made to an image, with the purpose of inducing a deep
learning based classifier to misclassify the image. Due to recent trends in
speech processing, this has become a noticeable issue in speech recognition
models. In late 2017, an attack was shown to be quite effective against the
Speech Commands classification model. Limited-vocabulary speech classifiers,
such as the Speech Commands model, are used quite frequently in a variety of
applications, particularly in managing automated attendants in telephony
contexts. As such, adversarial examples produced by this attack could have
real-world consequences. While previous work in defending against these
adversarial examples has investigated using audio preprocessing to reduce or
distort adversarial noise, this work explores the idea of flooding particular
frequency bands of an audio signal with random noise in order to detect
adversarial examples. This technique of flooding, which does not require
retraining or modifying the model, is inspired by work done in computer vision
and builds on the idea that speech classifiers are relatively robust to natural
noise. A combined defense incorporating 5 different frequency bands for
flooding the signal with noise outperformed other existing defenses in the
audio space, detecting adversarial examples with 91.8% precision and 93.5%
recall.
Comment: Orally presented at the 18th IEEE International Symposium on Signal
Processing and Information Technology (ISSPIT) in Louisville, Kentucky, USA,
December 2018. 5 pages, 2 figures.
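The flooding defense described above can be sketched in a few lines: inject random noise into chosen frequency bands and flag the input if the model's prediction becomes unstable. The band edges, noise strength, two-band setup, and the threshold "classifier" below are all stand-ins (the paper's combined defense uses 5 bands and a real Speech Commands model):

```python
import numpy as np

rng = np.random.default_rng(2)

def flood_band(signal, sample_rate, low_hz, high_hz, strength):
    # Add random noise only within [low_hz, high_hz] by working in the
    # frequency domain; the band edges and strength are illustrative.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    noise = strength * (rng.normal(size=band.sum()) + 1j * rng.normal(size=band.sum()))
    spectrum[band] += noise
    return np.fft.irfft(spectrum, n=len(signal))

def looks_adversarial(signal, classify, sample_rate, bands, strength=0.05):
    # Flag the input if flooding any band changes the model's prediction;
    # `classify` is a stand-in for the speech classifier under test.
    original = classify(signal)
    return any(classify(flood_band(signal, sample_rate, lo, hi, strength)) != original
               for lo, hi in bands)

# Toy demo: a dummy "classifier" that thresholds low-frequency energy, and a
# clean 440 Hz tone whose prediction is stable under flooding.
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)
dummy_classify = lambda s: int(np.abs(np.fft.rfft(s)[:100]).sum() > 1.0)
print(looks_adversarial(clean, dummy_classify, sr,
                        bands=[(2000, 3000), (5000, 6000)]))  # prints False
```

No retraining or model modification is needed, matching the defense's design goal: the classifier is only queried, never changed.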
How Unique is Your .onion? An Analysis of the Fingerprintability of Tor Onion Services
Recent studies have shown that Tor onion (hidden) service websites are
particularly vulnerable to website fingerprinting attacks due to their limited
number and sensitive nature. In this work we present a multi-level feature
analysis of onion site fingerprintability, considering three state-of-the-art
website fingerprinting methods and 482 Tor onion services, making this the
largest analysis of this kind completed on onion services to date.
Prior studies typically report average performance results for a given
website fingerprinting method or countermeasure. We investigate which sites are
more or less vulnerable to fingerprinting and which features make them so. We
find that there is a high variability in the rate at which sites are classified
(and misclassified) by these attacks, implying that average performance figures
may not be informative of the risks that website fingerprinting attacks pose to
particular sites.
We analyze the features exploited by the different website fingerprinting
methods and discuss what makes onion service sites more or less easily
identifiable, both in terms of their traffic traces as well as their webpage
design. We study misclassifications to understand how onion service sites can
be redesigned to be less vulnerable to website fingerprinting attacks. Our
results also inform the design of website fingerprinting countermeasures and
their evaluation considering disparate impact across sites.
Comment: Accepted by ACM CCS 201
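The per-site feature analysis described above starts from summaries of traffic traces. A minimal sketch, assuming a trace is a list of (timestamp, signed size) pairs with positive sizes outgoing and negative incoming; this representation and feature set are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def trace_features(trace):
    # Summarize one traffic trace into simple fingerprinting features.
    times = np.array([t for t, _ in trace])
    sizes = np.array([s for _, s in trace])
    outgoing = sizes > 0
    return {
        "total_packets": len(sizes),
        "total_bytes": int(np.abs(sizes).sum()),
        "fraction_outgoing": float(outgoing.mean()),
        "duration": float(times[-1] - times[0]),
        "mean_incoming_size": float(np.abs(sizes[~outgoing]).mean()),
    }

# Hypothetical trace: a request, two large responses, then a second exchange.
toy_trace = [(0.00, 565), (0.05, -1460), (0.06, -1460), (0.30, 565), (0.31, -700)]
print(trace_features(toy_trace))
```

Sites whose feature vectors sit far from all others in this space are the easily fingerprinted ones; sites with near-duplicate vectors account for the misclassifications the study analyzes.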