An Evasion and Counter-Evasion Study in Malicious Websites Detection
Malicious websites are a major cyber-attack vector, and detecting them
effectively is an important cyber-defense task. The prevailing defense paradigm
is that the defender uses machine learning algorithms to
train a detection model, which is then used to classify websites in question.
Unlike other settings, the following issue is inherent to the problem of
malicious websites detection: the attacker essentially has access to the same
data that the defender uses to train its detection models. This 'symmetry' can
be exploited by the attacker, at least in principle, to evade the defender's
detection models. In this paper, we present a framework for characterizing the
evasion and counter-evasion interactions between the attacker and the defender,
where the attacker attempts to evade the defender's detection models by taking
advantage of this symmetry. Within this framework, we show that an adaptive
attacker can make malicious websites evade powerful detection models, but
proactive training can be an effective counter-evasion defense mechanism. The
framework is geared toward the popular decision-tree detection model, but
can be adapted to accommodate other classifiers.
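The evade-then-retrain interaction described in this abstract can be sketched in a few lines. This is a toy illustration, not the paper's actual algorithm: the two synthetic features, the greedy attack that walks a malicious sample toward the benign cluster, and the single-sample retraining step are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic "website features": malicious sites score high on both features.
X_benign = rng.normal(0.0, 1.0, size=(200, 2))
X_malicious = rng.normal(3.0, 1.0, size=(200, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 200 + [1] * 200)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def evade(model, x, step=0.5, max_iter=50):
    """Greedy evasion: because the attacker can train the same model on the
    same data (the 'symmetry'), it can keep nudging features toward the
    benign cluster until the sample is classified benign."""
    x = x.copy()
    for _ in range(max_iter):
        if model.predict(x.reshape(1, -1))[0] == 0:
            break
        x -= step  # move all features toward the benign region
    return x

x_attack = np.array([3.5, 3.5])      # classified malicious by clf
x_evaded = evade(clf, x_attack)      # classified benign by clf

# Proactive training (schematic): anticipate evasion variants, keep the
# malicious label, and retrain the detector on the augmented data.
X_aug = np.vstack([X, x_evaded])
y_aug = np.append(y, 1)
clf_proactive = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_aug, y_aug)
```

In practice the attacker would exploit full knowledge of the tree's split thresholds rather than taking blind steps, and proactive training would inject many anticipated evasion variants, not one.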
Unsupervised Anomaly-based Malware Detection using Hardware Features
Recent works have shown promise in using microarchitectural execution
patterns to detect malware programs. These belong to the class of
signature-based detectors: they catch malware by comparing a program's
execution pattern (signature) to the execution patterns of known malware
programs. In this work, we propose a new class of detectors -
anomaly-based hardware malware detectors - that do not require signatures for
malware detection, and thus can catch a wider range of malware including
potentially novel ones. We use unsupervised machine learning to build profiles
of normal program execution based on data from performance counters, and use
these profiles to detect significant deviations in program behavior that occur
as a result of malware exploitation. We show that real-world exploitation of
popular programs such as IE and Adobe PDF Reader on a Windows/x86 platform can
be detected with nearly perfect certainty. We also examine the limits and
challenges of implementing this approach in the face of a sophisticated
adversary attempting to evade anomaly-based detection. The proposed detector is
complementary to previously proposed signature-based detectors and can be used
together to improve security.

Comment: 1 page, LaTeX; added description for feature selection in Section 4, results unchanged
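The anomaly-detection idea can be illustrated with a one-class model fitted to performance-counter-style features of normal runs. The counter names, scales, and the choice of IsolationForest are assumptions for illustration; the paper's actual hardware features and unsupervised models are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Simulated per-epoch counter vectors for normal runs of one program,
# e.g. normalized [branch_misses, cache_misses, instructions_retired]
# (hypothetical feature layout).
X_normal = rng.normal(loc=[0.02, 0.05, 1.0], scale=0.005, size=(500, 3))

# Build a profile of normal behavior from unlabeled data only.
model = IsolationForest(contamination=0.01, random_state=1).fit(X_normal)

# An exploit payload shifts the microarchitectural profile sharply.
x_exploit = np.array([[0.20, 0.30, 0.40]])
print(model.predict(x_exploit))  # -1 = anomaly (significant deviation)
```

Because the profile is built without signatures, any sufficiently large deviation is flagged, including behavior caused by previously unseen malware; the flip side, as the abstract notes, is an adversary who mimics normal counter statistics.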
DeltaPhish: Detecting Phishing Webpages in Compromised Websites
The large-scale deployment of modern phishing attacks relies on the automatic
exploitation of vulnerable websites in the wild, to maximize profit while
hindering attack traceability, detection and blacklisting. To the best of our
knowledge, this is the first work that specifically leverages this adversarial
behavior for detection purposes. We show that phishing webpages can be
accurately detected by highlighting HTML code and visual differences with
respect to other (legitimate) pages hosted within a compromised website. Our
system, named DeltaPhish, can be installed as part of a web application
firewall, to detect the presence of anomalous content on a website after
compromise, and potentially block access to it. DeltaPhish is also robust
against adversarial attempts in which the HTML code of the phishing page is
carefully manipulated to evade detection. We empirically evaluate it on more
than 5,500 webpages collected in the wild from compromised websites, showing
that it is capable of detecting more than 99% of phishing webpages, while
misclassifying less than 1% of legitimate pages. We further show that the
detection rate remains higher than 70% even under very sophisticated attacks
carefully designed to evade our system.

Comment: Preprint version of the work accepted at ESORICS 201
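The core "delta" intuition, that a phishing page injected into a compromised site diverges from the site's own pages, can be sketched with a toy tag-set comparison. The Jaccard score over HTML tag names below is purely illustrative; DeltaPhish's real features combine richer HTML and visual cues and feed a trained classifier.

```python
import re

def tag_set(html: str) -> set:
    """Extract the set of HTML tag names used in a page."""
    return set(re.findall(r"<\s*([a-zA-Z][a-zA-Z0-9]*)", html))

def delta_score(candidate: str, site_pages: list) -> float:
    """Max Jaccard similarity between the candidate page and the site's
    legitimate pages; low values mean the page looks unlike the site."""
    cand = tag_set(candidate)
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0
    return max(jaccard(cand, tag_set(p)) for p in site_pages)

# Hypothetical compromised site: two legitimate pages plus an injected
# phishing login form.
site = [
    "<html><body><h1>Blog</h1><p>post</p><a href=x>link</a></body></html>",
    "<html><body><h1>About</h1><p>text</p></body></html>",
]
phish = ("<html><body><form action=steal><input type=password>"
         "<button>Login</button></form></body></html>")

print(delta_score(site[0], site))  # high: legitimate page matches the site
print(delta_score(phish, site))    # low: injected phishing page diverges
```

A web application firewall could compute such deltas for newly served pages and flag or block those that fall below a threshold learned from benign traffic.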
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Learning-based pattern classifiers, including deep networks, have shown
impressive performance in several application domains, ranging from computer
vision to cybersecurity. However, it has also been shown that adversarial input
perturbations carefully crafted either at training or at test time can easily
subvert their predictions. The vulnerability of machine learning to such wild
patterns (also referred to as adversarial examples), along with the design of
suitable countermeasures, have been investigated in the research field of
adversarial machine learning. In this work, we provide a thorough overview of
the evolution of this research area over the last ten years and beyond,
starting from pioneering, earlier work on the security of non-deep learning
algorithms up to more recent work aimed at understanding the security properties
of deep learning algorithms, in the context of computer vision and
cybersecurity tasks. We report interesting connections between these
apparently different lines of work, highlighting common misconceptions related
to the security evaluation of machine-learning algorithms. We review the main
threat models and attacks defined to this end, and discuss the main limitations
of current work, along with the corresponding future challenges towards the
design of more secure learning algorithms.

Comment: Accepted for publication in Pattern Recognition, 201
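A minimal illustration of the test-time perturbations this survey covers is the fast gradient sign method on a linear model, where the gradient of the score with respect to the input is simply the weight vector. The weights, input, and epsilon below are illustrative assumptions, not values from any real classifier.

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])  # assumed trained weights of a linear model
b = 0.1

def predict(x):
    """Linear classifier: class 1 if the score is positive."""
    return 1 if x @ w + b > 0 else 0

def fgsm(x, eps):
    """Perturb x by eps against the sign of the score's gradient w.r.t. x
    (for a linear model, that gradient is just w)."""
    return x - eps * np.sign(w)

x = np.array([2.0, 0.5, 1.0])   # score 1.6 > 0, so classified as class 1
x_adv = fgsm(x, eps=0.8)        # small L-infinity perturbation of 0.8
print(predict(x), predict(x_adv))  # prints "1 0"
```

The same one-step construction, applied to the input gradient of a deep network's loss, yields the "wild patterns" the title refers to: inputs visually close to the original that nonetheless flip the prediction.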