Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, has been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep learning
algorithms, up to more recent work aimed at understanding the security
properties of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.
Comment: Accepted for publication in Pattern Recognition, 201
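The test-time attacks surveyed above can be illustrated with a minimal sketch. The toy linear classifier, its weights, and the perturbation budget below are all hypothetical, not taken from the paper; the step mimics a gradient-sign (FGSM-style) evasion, which for a linear score reduces to stepping against the sign of the weight vector:

```python
import numpy as np

# Toy linear classifier: score(x) = w.x + b, predict class 1 if score > 0.
# All weights and the input below are illustrative, not from the paper.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([2.0, 0.5, 0.0])      # clean input, classified as class 1
assert predict(x) == 1

# Gradient-sign perturbation: for a linear score, the gradient w.r.t. x
# is just w, so we step against sign(w) to push the score of class 1 down.
eps = 0.7                          # perturbation budget (illustrative)
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 0
```

A small, bounded change to each feature is enough to flip the decision, which is the core phenomenon the survey refers to as "wild patterns".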
Security Evaluation of Support Vector Machines in Adversarial Environments
Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector
Machine Applications'
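An evasion attack against a linear SVM, of the kind the chapter evaluates, can be sketched as follows. The trained weights and the sample are hypothetical; for a linear decision function the minimal-norm evasion simply moves the sample along the negative weight direction until it crosses the hyperplane:

```python
import numpy as np

# Hypothetical trained linear SVM: f(x) = w.x + b (weights illustrative).
w = np.array([0.8, -0.6])
b = -0.2

def decision(x):
    return w @ x + b

# A malicious sample currently detected (f(x) > 0).
x = np.array([1.5, 0.4])
assert decision(x) > 0

# Minimal-norm evasion for a linear classifier: move against w just far
# enough to cross the decision boundary, plus a small safety margin.
dist = decision(x) / np.linalg.norm(w)           # distance to hyperplane
x_adv = x - (dist + 1e-3) * w / np.linalg.norm(w)

print(decision(x) > 0, decision(x_adv) > 0)      # True False
```

Real feature spaces add constraints the chapter discusses (e.g., features that an attacker cannot modify freely), which this unconstrained sketch omits.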
On traffic analysis attacks and countermeasures
Security and privacy have gained more and more attention with the rapid growth and
public acceptance of the Internet as a means of communication and information
dissemination. Security and privacy of a computing or network system may be
compromised by a variety of well-crafted attacks.
In this dissertation, we address issues related to security and privacy in computer
network systems. Specifically, we model and analyze a special group of network attacks,
known as traffic analysis attacks, and develop and evaluate their countermeasures.
Traffic analysis attacks aim to derive critical information by analyzing traffic over a
network. We focus our study on two classes of traffic analysis attacks: link-load analysis
attacks and flow-connectivity analysis attacks.
Our research has reached the following conclusions:
1. We have found that an adversary may effectively discover link load by passively
analyzing selected statistics of packet inter-arrival times of traffic flows on a
network link. This is true even if some commonly used countermeasures (e.g.,
link padding) have been deployed. We proposed an alternative countermeasure
against this passive traffic analysis attack, and our extensive experimental
results indicated that it is effective.
2. Our newly proposed countermeasure may not be effective against active traffic
analysis attacks, which an adversary may also use to discover the link load. We
developed methodologies in countering these kinds of active attacks.
3. To detect the connectivity of a flow, an adversary may embed a recognizable
pattern of marks into traffic flows by interference. We have proposed new
countermeasures based on the digital filtering technology. Experimental results
have demonstrated the effectiveness of our method.
From our research, it is clear that traffic analysis attacks present a serious
challenge to the design of a secure computer network system. It is the objective of this
study to develop robust yet cost-effective solutions to counter link-load analysis attacks
and flow-connectivity analysis attacks. It is our belief that our methodology can provide
a solid foundation for studying the entire spectrum of traffic analysis attacks and their
countermeasures.
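The digital-filtering countermeasure in conclusion 3 can be sketched in spirit: an attacker embeds a recognizable timing pattern into a flow's inter-arrival times, and a low-pass filter suppresses it. The flow model, the square-wave mark, and the moving-average filter below are all illustrative stand-ins; the dissertation's actual filter design is not specified here:

```python
import numpy as np

# Illustrative flow of packet inter-arrival times (seconds): a constant
# base rate plus a square-wave "mark" embedded by an active attacker.
rng = np.random.default_rng(0)
n = 200
base = np.full(n, 0.010)
mark = 0.004 * (np.arange(n) // 10 % 2)   # alternating-block watermark
noisy = base + mark + rng.normal(0, 0.0005, n)

# Simple FIR low-pass (moving average) as a stand-in for the digital
# filtering countermeasure; window matches one full watermark period.
def smooth(x, k=20):
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="valid")

filtered = smooth(noisy)
# The filter removes most of the watermark's variance, leaving little
# timing structure for the adversary to correlate across links.
print(noisy.var() > 4 * filtered.var())   # True
```

Because the averaging window spans exactly one period of the embedded mark, the watermark contribution collapses to a constant, which is the intuition behind filtering-based defenses against flow-connectivity analysis.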
A deception based framework for the application of deceptive countermeasures in 802.11b wireless networks
The advance of 802.11b wireless networking has been beset by inherent and in-built security problems. Network security tools that are freely available may intercept network transmissions readily and stealthily, making organisations highly vulnerable to attack. Therefore, it is incumbent upon defending organisations to take the initiative and implement proactive defences against common network attacks. Deception is an essential element of effective security that has been widely used in networks to understand attack methods and intrusions. However, little thought has been given to the type and the effectiveness of the deception. Deceptions deployed in nature, the military and in cyberspace were investigated to provide an understanding of how deception may be used in network security. Deceptive network countermeasures and attacks may then be tested on a wireless honeypot as an investigation into the effectiveness of deceptions used in network security. A structured framework, describing the type of deception and its modus operandi, was utilised to deploy existing honeypot technologies for intrusion detection. Network countermeasures and attacks were mapped to deception types in the framework. This enabled the honeypot to appear as a realistic network and deceive targets in varying deceptive conditions. The investigation was to determine whether particular deceptive countermeasures may reduce the effectiveness of particular attacks. The effectiveness of deceptions was measured, and determined by the honeypot's ability to fool the attacking tools used. This was done using brute force network attacks on the wireless honeypot. The attack tools provided quantifiable forensic data from network sniffing, scans, and probes of the wireless honeypot. The aim was to deceive the attack tools into believing a wireless network existed, and contained vulnerabilities that may be further exploited by the naive attacker.
OnionBots: Subverting Privacy Infrastructure for Cyber Attacks
Over the last decade botnets survived by adopting a sequence of increasingly
sophisticated strategies to evade detection and take overs, and to monetize
their infrastructure. At the same time, the success of privacy infrastructures
such as Tor opened the door to illegal activities, including botnets,
ransomware, and a marketplace for drugs and contraband. We contend that the
next waves of botnets will extensively subvert privacy infrastructure and
cryptographic mechanisms. In this work we propose to preemptively investigate
the design and mitigation of such botnets. We first introduce OnionBots, which
we believe will be the next generation of resilient, stealthy botnets.
OnionBots use privacy infrastructures for cyber attacks by completely
decoupling their operation from the infected host IP address and by carrying
traffic that does not leak information about its source, destination, and
nature. Such bots live symbiotically within the privacy infrastructures to
evade detection, measurement, scale estimation, observation, and, in general,
all current IP-based mitigation techniques. Furthermore, we show that with an
adequate self-healing network maintenance scheme that is simple to implement,
OnionBots achieve a low diameter and a low degree and are robust to
partitioning under node deletions. We developed a mitigation technique, called
SOAP, that neutralizes the nodes of the basic OnionBots. We also outline and
discuss a set of techniques that can enable subsequent waves of Super
OnionBots. In light of the potential of such botnets, we believe that the
research community should proactively develop detection and mitigation methods
to thwart OnionBots, potentially making adjustments to privacy infrastructure.
Comment: 12 pages, 8 figures
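The self-healing maintenance idea can be sketched with a toy overlay. This is not the OnionBot protocol itself, only an illustrative rule under simplifying assumptions: each bot keeps a small neighbor set, and when a node disappears its former neighbors patch around it, preserving connectivity and bounded degree:

```python
# Sketch of a self-healing peer-maintenance rule in the spirit of the
# paper's scheme (the actual protocol differs; this is illustrative).
# Each bot keeps a small neighbor set; when a neighbor disappears, the
# survivors link to the departed node's other neighbors.

def remove_and_heal(adj, dead):
    """Delete `dead` and reconnect its former neighbors in a ring."""
    orphans = sorted(adj.pop(dead))
    for n in orphans:
        adj[n].discard(dead)
    # Linking orphans in a ring adds at most 2 edges per orphan,
    # so node degree stays bounded after repeated deletions.
    for i, n in enumerate(orphans):
        m = orphans[(i + 1) % len(orphans)]
        if m != n:
            adj[n].add(m)
            adj[m].add(n)
    return adj

def connected(adj):
    seen, stack = set(), [next(iter(adj))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return len(seen) == len(adj)

# 6-node ring overlay: 0-1-2-3-4-5-0; delete node 3 and heal.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
remove_and_heal(adj, 3)
print(connected(adj))  # True
```

After the deletion the overlay is still a single ring with every node at degree 2, which mirrors the paper's claim of low degree and robustness to partitioning under node deletions.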