An Analysis of Rogue AV Campaigns
Rogue antivirus software has recently received extensive attention, justified by the diffusion and efficacy of its propagation. We present a longitudinal analysis of the rogue antivirus threat ecosystem, focusing on the structure and dynamics of this threat and its economics. To that end, we compiled and mined a large dataset of characteristics of rogue antivirus domains and of the servers that host them. The contributions of this paper are threefold. Firstly, we offer the first, to our knowledge, broad analysis of the infrastructure underpinning the distribution of rogue security software by tracking 6,500 malicious domains. Secondly, we show how to apply attack attribution methodologies to correlate campaigns likely to be associated with the same individuals or groups. By using these techniques, we identify 127 rogue security software campaigns comprising 4,549 domains. Finally, we contextualize our findings by comparing them to a different threat ecosystem, that of browser exploits. We underline the profound difference in the structure of the two threats, and we investigate the root causes of this difference by analyzing the economic balance of the rogue antivirus ecosystem. We track 372,096 victims over a period of 2 months and we take advantage of this information to retrieve monetization insights. While applied to a specific threat type, the methodology and the lessons learned from this work are of general applicability and can help develop a better understanding of threat economies.
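The attribution step, grouping domains that share infrastructure or registration features into campaigns, can be sketched as a connected-components computation. The feature set below (shared hosting IPs and registrant emails) is a hypothetical simplification of the richer characteristics the paper mines:

```python
from collections import defaultdict

def cluster_campaigns(domains):
    """Group domains into campaigns: any two domains sharing a feature
    value (hosting IP or registrant email, here) land in one cluster.
    Union-find keeps the transitive closure cheap."""
    parent = {d["name"]: d["name"] for d in domains}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Index domains by each feature value, then link co-occurring domains.
    by_feature = defaultdict(list)
    for d in domains:
        for ip in d["ips"]:
            by_feature[("ip", ip)].append(d["name"])
        by_feature[("registrant", d["registrant"])].append(d["name"])
    for names in by_feature.values():
        for other in names[1:]:
            union(names[0], other)

    campaigns = defaultdict(set)
    for d in domains:
        campaigns[find(d["name"])].add(d["name"])
    return list(campaigns.values())
```

Any transitive chain of shared features merges domains into a single campaign, which is one common way to realize multi-criteria attribution; the real methodology weighs many more signals than this sketch.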
Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences
In this survey, we first briefly review the current state of cyber attacks,
highlighting significant recent changes in how and why such attacks are
performed. We then investigate the mechanics of malware command and control
(C2) establishment: we provide a comprehensive review of the techniques used by
attackers to set up such a channel and to hide its presence from the attacked
parties and the security tools they use. We then switch to the defensive side
of the problem, and review approaches that have been proposed for the detection
and disruption of C2 channels. We also map such techniques to widely-adopted
security controls, emphasizing gaps or limitations (and success stories) in
current best practices.
Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from the version appearing in the report.
Survey of Approaches and Features for the Identification of HTTP-Based Botnet Traffic
Botnet use is on the rise, with a growing number of botmasters now switching to HTTP-based C&C infrastructure. This offers them more stealth by allowing them to blend in with benign web traffic. Several works have aimed at characterising or detecting HTTP-based bots, many of which use network communication features as identifiers of botnet behaviour. In this paper, we present a survey of these approaches and the network features they use, in order to highlight how botnet traffic is currently differentiated from normal traffic. We classify papers by traffic type and provide a breakdown of features by protocol. In doing so, we hope to highlight the relationships between features at the application, transport and network layers.
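As an illustration of the kind of network communication features such surveys cover, the sketch below derives two per-host features often cited for HTTP bot detection, timing regularity and URI diversity; the thresholds and feature choice are invented for illustration and not taken from any specific surveyed system:

```python
import statistics

def flow_features(timestamps, uris):
    """Per-host features (needs at least two request timestamps):
    regularity of inter-request timing and diversity of requested URIs."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    # Low coefficient of variation => machine-like, periodic beaconing.
    gap_cv = statistics.pstdev(gaps) / mean_gap if mean_gap else 0.0
    uri_diversity = len(set(uris)) / len(uris)
    return {"gap_cv": gap_cv, "uri_diversity": uri_diversity}

def looks_like_beaconing(feats, cv_max=0.1, diversity_max=0.3):
    """Hypothetical fixed cut-offs; surveyed systems typically feed such
    features into a trained classifier instead."""
    return feats["gap_cv"] < cv_max and feats["uri_diversity"] < diversity_max
```

A bot polling its C&C every 60 seconds for the same path scores near-zero on both features, while human browsing produces irregular gaps and varied URIs.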
Hiding in Plain Sight: A Longitudinal Study of Combosquatting Abuse
Domain squatting is a common adversarial practice where attackers register
domain names that are purposefully similar to popular domains. In this work, we
study a specific type of domain squatting called "combosquatting," in which
attackers register domains that combine a popular trademark with one or more
phrases (e.g., betterfacebook[.]com, youtube-live[.]com). We perform the first
large-scale, empirical study of combosquatting by analyzing more than 468
billion DNS records---collected from passive and active DNS data sources over
almost six years. We find that almost 60% of abusive combosquatting domains
live for more than 1,000 days, and even worse, we observe increased activity
associated with combosquatting year over year. Moreover, we show that
combosquatting is used to perform a spectrum of different types of abuse
including phishing, social engineering, affiliate abuse, trademark abuse, and
even advanced persistent threats. Our results suggest that combosquatting is a
real problem that requires increased scrutiny by the security community.
Comment: ACM CCS 1
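A minimal check for the combosquatting pattern itself, a domain that embeds a trademark verbatim alongside extra tokens, might look like the following; the naive suffix handling is an assumption (production code would consult the Public Suffix List), and the trademark list is whatever the analyst supplies:

```python
def is_combosquat(domain, trademarks):
    """Return the embedded trademark if `domain` combines it with extra
    characters (e.g. betterfacebook.com), else None. This is a sketch of
    the definition, not the paper's DNS measurement pipeline."""
    # Naively strip the last label as the public suffix; real code
    # would use the Public Suffix List to handle e.g. co.uk correctly.
    name = domain.lower().rstrip(".").rsplit(".", 1)[0]
    for tm in trademarks:
        if tm in name and name != tm:
            return tm
    return None
```

Note the exact-substring requirement: the legitimate domain itself is not flagged, and typo variants (a different squatting category) are deliberately out of scope.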
An Evasion and Counter-Evasion Study in Malicious Websites Detection
Malicious websites are a major cyber attack vector, and effective detection
of them is an important cyber defense task. The main defense paradigm in this
regard is that the defender uses some kind of machine learning algorithms to
train a detection model, which is then used to classify websites in question.
Unlike other settings, the following issue is inherent to the problem of
malicious websites detection: the attacker essentially has access to the same
data that the defender uses to train its detection models. This 'symmetry' can
be exploited by the attacker, at least in principle, to evade the defender's
detection models. In this paper, we present a framework for characterizing the
evasion and counter-evasion interactions between the attacker and the defender,
where the attacker attempts to evade the defender's detection models by taking
advantage of this symmetry. Within this framework, we show that an adaptive
attacker can make malicious websites evade powerful detection models, but
proactive training can be an effective counter-evasion defense mechanism. The
framework is geared toward the popular decision-tree detection model, but
can be adapted to accommodate other classifiers.
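The symmetry argument can be made concrete with a toy one-feature model. The decision stump, evasion budget, and feature values below are invented stand-ins for the paper's decision trees and website features; the point is only that training on anticipated perturbations (proactive training) recovers detections that a model trained on clean data loses:

```python
def train_stump(benign, malicious):
    """Midpoint threshold separating benign (low) from malicious (high)
    feature values -- a one-node stand-in for a decision tree."""
    return (max(benign) + min(malicious)) / 2

def evade(sample, threshold, budget):
    """The attacker knows the model (the 'symmetry') and lowers its
    feature value by at most `budget` to slip under the threshold."""
    if sample >= threshold and sample - budget < threshold:
        return threshold - 1e-9  # just under the decision boundary
    return sample

benign = [1, 2, 3]       # e.g. count of obfuscated scripts per page (invented)
malicious = [8, 9, 10]
budget = 4               # attacker's maximum feature perturbation

# Reactive defender: trained on clean data, then attacked.
t = train_stump(benign, malicious)                       # 5.5
caught_reactive = sum(evade(m, t, budget) >= t for m in malicious)

# Proactive defender: trained on samples perturbed by the anticipated budget.
t_pro = train_stump(benign, [m - budget for m in malicious])  # 3.5
caught_proactive = sum(evade(m, t_pro, budget) >= t_pro for m in malicious)
```

In this toy run the reactive model catches only one of the three adapted samples, while the proactively trained model catches all three, at the price of a threshold sitting closer to the benign data.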
Anatomy of an internet hijack and interception attack: A global and educational perspective
The Internet’s underlying vulnerable protocol infrastructure is a rich target for cyber crime, cyber espionage and cyber warfare operations. The stability and security of the Internet infrastructure are important to the function of global matters of state, critical infrastructure, global e-commerce and election systems. There are global approaches to tackling Internet security challenges that include governance, law, educational and technical perspectives. This paper reviews a number of approaches to these challenges, the increasingly surgical attacks that target the underlying vulnerable protocol infrastructure of the Internet, and the extant cyber security education curricula; we find that the majority of predominant cyber security education frameworks do not address security for the Internet’s critical communication system, the Border Gateway Protocol (BGP). Finally, we present a case study as an anatomy of such an attack. The case study can be implemented ethically and safely for educational purposes.
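A core building block of such an educational case study is recognizing the announcement patterns used in hijack-and-interception attacks. The sketch below flags the two classic cases, a more-specific (sub-prefix) announcement and an origin-AS mismatch, against a hypothetical expected-origin table; a real monitor would draw its expectations from RPKI or IRR data rather than a hard-coded dict:

```python
import ipaddress

# Hypothetical expected-origin table: monitored prefix -> legitimate origin AS.
EXPECTED = {ipaddress.ip_network("203.0.113.0/24"): 64500}

def classify_announcement(prefix, origin_as):
    """Classify a BGP announcement against the monitored prefixes.
    A more-specific of a monitored prefix wins best-path by longest
    match, which is what makes sub-prefix hijacks so effective."""
    pfx = ipaddress.ip_network(prefix)
    for monitored, asn in EXPECTED.items():
        if pfx.subnet_of(monitored) and pfx.prefixlen > monitored.prefixlen:
            return "sub-prefix hijack suspected"
        if pfx == monitored and origin_as != asn:
            return "origin hijack suspected"
    return "ok"
```

Because routers prefer the longest matching prefix, the sub-prefix case attracts traffic even when the legitimate /24 announcement is still present, the key ingredient of interception attacks.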
Cost-effective Detection of Drive-by-Download Attacks with Hybrid Client Honeypots
With the increasing connectivity of and reliance on computers and networks,
important aspects of computer systems are under constant threat.
In particular, drive-by-download attacks have emerged as a new threat to
the integrity of computer systems. Drive-by-download attacks are client-side
attacks that originate from web servers that are visited by web browsers.
As a vulnerable web browser retrieves a malicious web page, the malicious
web server can push malware to a user's machine that can be executed
without their notice or consent.
The detection of malicious web pages that exist on the Internet is prohibitively
expensive. It is estimated that approximately 150 million malicious
web pages that launch drive-by-download attacks exist today. So-called
high-interaction client honeypots are devices that are able to detect
these malicious web pages, but they are slow and known to miss attacks.
Detection of malicious web pages in these quantities with client honeypots
would cost millions of US dollars.
Therefore, we have designed a more scalable system called a hybrid
client honeypot. It consists of lightweight client honeypots, the so-called
low-interaction client honeypots, and traditional high-interaction client
honeypots. The lightweight low-interaction client honeypots inspect web
pages at high speed and forward only likely malicious web pages to the
high-interaction client honeypot for a final classification.
For the comparison of client honeypots and evaluation of the hybrid
client honeypot system, we have chosen a cost-based evaluation method:
the true positive cost curve (TPCC). It allows us to evaluate client honeypots
against their primary purpose of identification of malicious web
pages. We show that costs of identifying malicious web pages with the
developed hybrid client honeypot systems are reduced by a factor of nine
compared to traditional high-interaction client honeypots.
The five main contributions of our work are:
High-Interaction Client Honeypot - The first main contribution of
our work is the design and implementation of a high-interaction
client honeypot Capture-HPC. It is an open-source, publicly available
client honeypot research platform, which allows researchers and
security professionals to conduct research on malicious web pages
and client honeypots. Based on our client honeypot implementation
and analysis of existing client honeypots, we developed a component
model of client honeypots. This model allows researchers to agree on
the object of study, supports focus on specific areas within it, and
provides a framework for communication of research around client
honeypots.
True Positive Cost Curve - As mentioned above, we have chosen a
cost-based evaluation method to compare and evaluate client honeypots
against their primary purpose of identification of malicious web
pages: the true positive cost curve. It takes into account the unique
characteristics of client honeypots (speed, detection accuracy, and
resource cost) and provides a simple, cost-based mechanism to evaluate
and compare client honeypots in an operating environment. As
such, the TPCC provides a foundation for improving client honeypot
technology. The TPCC is the second main contribution of our work.
Mitigation of Risks to the Experimental Design with HAZOP - Mitigation
of risks to internal and external validity in the experimental
design using a hazard and operability (HAZOP) study is the third
main contribution. This methodology addresses risks to intent (internal
validity) as well as generalizability of results beyond the experimental
setting (external validity) in a systematic and thorough
manner.
Low-Interaction Client Honeypots - Malicious web pages are usually
part of a malware distribution network consisting of several servers,
each involved in a stage of the drive-by-download attack. The
development and evaluation of classification methods that assess
whether a web page is part of a malware distribution network form the
fourth main contribution.
Hybrid Client Honeypot System - The fifth main contribution is the
hybrid client honeypot system. It incorporates the mentioned classification
methods in the form of a low-interaction client honeypot
and a high-interaction client honeypot into a hybrid client honeypot
system that is capable of identifying malicious web pages in a
cost-effective way on a large scale. The hybrid client honeypot system
outperforms a high-interaction client honeypot with identical
resources and an identical false positive rate.
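The economics behind the cost-reduction claim can be illustrated with a simplified cost-per-detection model in the spirit of the TPCC; the formulas and all numbers in the example are invented for illustration and are not the thesis's actual measurements:

```python
def cost_per_detection(pages, malicious_rate, speed_pph, tpr, host_cost_ph):
    """Cost to identify one malicious page with a standalone honeypot:
    total machine-hours times hourly cost, divided by true positives."""
    hours = pages / speed_pph
    detected = pages * malicious_rate * tpr
    return hours * host_cost_ph / detected

def hybrid_cost(pages, malicious_rate, li_speed, li_fp_rate, li_tpr,
                hi_speed, hi_tpr, host_cost_ph):
    """Two-stage pipeline: the fast low-interaction stage scans every
    page, and only pages it flags (true hits plus false positives) are
    forwarded to the slow high-interaction stage for confirmation."""
    li_hours = pages / li_speed
    forwarded = pages * ((1 - malicious_rate) * li_fp_rate
                         + malicious_rate * li_tpr)
    hi_hours = forwarded / hi_speed
    detected = pages * malicious_rate * li_tpr * hi_tpr
    return (li_hours + hi_hours) * host_cost_ph / detected
```

With, for example, a low-interaction stage 100x faster than the high-interaction honeypot and a 1% forwarding rate for benign pages, the pipeline's cost per detected malicious page falls far below the standalone high-interaction cost, because the expensive stage only ever sees pre-filtered pages.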
Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments
Decentralized systems are a subset of distributed systems where multiple
authorities control different components and no authority is fully trusted by
all. This implies that any component in a decentralized system is potentially
adversarial. We revise fifteen years of research on decentralization and
privacy, and provide an overview of key systems, as well as key insights for
designers of future systems. We show that decentralized designs can enhance
privacy, integrity, and availability but also require careful trade-offs in
terms of system complexity, properties provided, and degree of
decentralization. These trade-offs need to be understood and navigated by
designers. We argue that a combination of insights from cryptography,
distributed systems, and mechanism design, aligned with the development of
adequate incentives, is necessary to build scalable and successful
privacy-preserving decentralized systems.