Security Evaluation of Support Vector Machines in Adversarial Environments
Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector Machine Applications'
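As a concrete illustration of the evasion setting described above, here is a minimal sketch in Python (using scikit-learn) of a gradient-based evasion attack on a linear SVM. The synthetic data, step size, and stopping rule are illustrative assumptions, not the chapter's actual experimental setup; the authors' own code and datasets are in their public repository.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class problem: +1 = "malicious", -1 = "benign".
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),   # benign samples
               rng.normal(+1.0, 0.5, size=(100, 2))])  # malicious samples
y = np.hstack([-np.ones(100), np.ones(100)])

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_[0]  # for a linear kernel, the gradient of f(x) = w.x + b is just w

def evade(x, step=0.05, max_iter=500):
    """Move the sample down the decision function until it crosses the boundary."""
    x = x.copy()
    for _ in range(max_iter):
        if clf.decision_function(x[None, :])[0] < 0:  # now scored as benign
            return x
        x -= step * w / np.linalg.norm(w)
    return x

x0 = X[150]                          # a malicious sample
x_adv = evade(x0)
print(clf.predict(x0[None, :]))      # [1.]  -> detected
print(clf.predict(x_adv[None, :]))   # [-1.] -> evades detection
```

For a linear kernel the decision function is f(x) = w·x + b, so its gradient is the constant vector w; for non-linear kernels the same loop works with the kernel-dependent gradient instead.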
Exploiting Machine Learning to Subvert Your Spam Filter
Using statistical machine learning for making security decisions introduces new vulnerabilities in large scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless, even if the adversary's access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.
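The focused attack lends itself to a small worked example. Below is a minimal sketch in Python of the idea against a toy naive-Bayes-style token filter; the corpus, the scoring rule, and the amount of poison are invented for illustration and do not reproduce SpamBayes or the paper's experiments.

```python
from collections import Counter
import math

ham  = ["meeting agenda attached", "lunch tomorrow at noon", "project status report"]
spam = ["win money now", "cheap pills online", "claim your prize money"]

def train(ham_msgs, spam_msgs):
    ham_counts, spam_counts = Counter(), Counter()
    for m in ham_msgs:
        ham_counts.update(m.split())
    for m in spam_msgs:
        spam_counts.update(m.split())
    return ham_counts, spam_counts

def spam_score(msg, ham_counts, spam_counts):
    """Sum of per-token log-likelihood ratios with add-one smoothing;
    a positive score means the message is classified as spam."""
    h_total, s_total = sum(ham_counts.values()), sum(spam_counts.values())
    score = 0.0
    for tok in msg.split():
        p_s = (spam_counts[tok] + 1) / (s_total + 1)
        p_h = (ham_counts[tok] + 1) / (h_total + 1)
        score += math.log(p_s / p_h)
    return score

victim = "lunch meeting tomorrow"        # legitimate mail the attacker targets

hc, sc = train(ham, spam)
print(spam_score(victim, hc, sc))        # negative: delivered as ham

# Focused attack: a few spam-labelled messages containing the target's likely
# words enter the training set, inflating those tokens' spam likelihoods.
poison = ["meeting lunch tomorrow"] * 5
hc, sc = train(ham, spam + poison)
print(spam_score(victim, hc, sc))        # positive: the victim's mail is now filtered
```

Because the victim's everyday words now carry positive spam evidence, the targeted legitimate message is filtered; the paper's dictionary variant broadens the same idea to many common words at once to degrade the filter globally.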
Keyed Non-Parametric Hypothesis Tests
The recent popularity of machine learning calls for a deeper understanding of AI security. Among the numerous AI threats published so far, poisoning attacks currently attract considerable attention. In a poisoning attack, the opponent partially tampers with the dataset used for learning in order to mislead the classifier during the testing phase.
This paper proposes a new protection strategy against poisoning attacks. The technique relies on a new primitive called keyed non-parametric hypothesis tests, which makes it possible to evaluate, under adversarial conditions, the training input's conformance with a previously learned distribution. To do so we use a secret key unknown to the opponent.
Keyed non-parametric hypothesis tests differ from classical tests in that the secrecy of the key prevents the opponent from misleading the keyed test into concluding that a (significantly) tampered dataset belongs to the learned distribution.
Comment: Paper published in NSS 201
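To make the primitive concrete, here is a minimal sketch in Python of one way to key a non-parametric test: a secret key seeds a random projection, and a classical two-sample Kolmogorov-Smirnov test is run on the projected data. This keying construction is an assumption made for illustration, not necessarily the one defined in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def keyed_ks_test(reference, batch, key):
    """Project both samples onto a secret key-derived direction, then run a
    classical two-sample Kolmogorov-Smirnov test on the projections."""
    key_rng = np.random.default_rng(key)      # seeded by the secret key
    direction = key_rng.normal(size=reference.shape[1])
    direction /= np.linalg.norm(direction)
    return ks_2samp(reference @ direction, batch @ direction)

rng = np.random.default_rng(42)
reference = rng.normal(size=(1000, 10))           # sample of the learned distribution

clean    = rng.normal(size=(200, 10))             # conforming training batch
poisoned = rng.normal(scale=2.0, size=(200, 10))  # tampered training batch

key = 0xC0FFEE                                    # kept secret from the adversary
print(keyed_ks_test(reference, clean, key).pvalue)     # expect large: accept batch
print(keyed_ks_test(reference, poisoned, key).pvalue)  # expect small: flag poisoning
```

An opponent who knows the test family but not the key cannot tell which one-dimensional statistic will be checked, and so cannot shape a tampered dataset to pass it.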