Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection
The distribution of malicious hosts over the IP address space is far from uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming so-called Bad Neighborhoods. This phenomenon has previously been exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be used interchangeably to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
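The question of how disjoint two third-party blacklists are can be quantified with a simple set-overlap measure. A minimal sketch, assuming the Jaccard index over network prefixes (the prefixes below are illustrative documentation addresses, not data from the paper):

```python
def jaccard(a, b):
    """Jaccard index between two sets of blacklist entries (1.0 = identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical Bad Neighborhood /24 prefixes from two different sources.
bl_source1 = {"203.0.113.0/24", "198.51.100.0/24", "192.0.2.0/24"}
bl_source2 = {"203.0.113.0/24", "198.51.100.0/24", "198.18.0.0/24"}

overlap = jaccard(bl_source1, bl_source2)  # 2 shared out of 4 distinct -> 0.5
```

A low overlap between two sources would suggest the blacklists are source-specific rather than interchangeable.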
Automatic Detection of Malware-Generated Domains with Recurrent Neural Models
Modern malware families often rely on domain-generation algorithms (DGAs) to
determine rendezvous points to their command-and-control server. Traditional
defence strategies (such as blacklisting domains or IP addresses) are
inadequate against such techniques due to the large and continuously changing
list of domains produced by these algorithms. This paper demonstrates that a
machine learning approach based on recurrent neural networks is able to detect
domain names generated by DGAs with high precision. The neural models are
estimated on a large training set of domains generated by various malware
families. Experimental results show that this data-driven approach can detect
malware-generated domain names with an F_1 score of 0.971. To put it
differently, the model can automatically detect 93% of malware-generated
domain names at a false positive rate of 1:100.
Comment: Submitted to NISK 201
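The paper's models are recurrent neural networks; as a much simpler runnable stand-in for character-level DGA features, per-label character entropy is a classic crude signal (this heuristic is my illustration, not the paper's method, and the example domains are hypothetical):

```python
import math
from collections import Counter

def char_entropy(domain):
    """Shannon entropy (bits per character) of a domain's first label."""
    label = domain.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Algorithmically generated labels tend to score higher than
# dictionary-like benign labels.
print(char_entropy("google.com"))      # ~1.92 bits/char
print(char_entropy("xqzvbnrkpw.com"))  # ~3.32 bits/char
```

An RNN learns far richer sequential patterns than entropy alone, which is why a threshold on this feature would perform far below the reported F_1 of 0.971.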
Malware in the Future? Forecasting of Analyst Detection of Cyber Events
There have been extensive efforts in government, academia, and industry to
anticipate, forecast, and mitigate cyber attacks. A common approach is
time-series forecasting of cyber attacks based on data from network telescopes,
honeypots, and automated intrusion detection/prevention systems. This research
has uncovered key insights such as systematicity in cyber attacks. Here, we
propose an alternate perspective of this problem by performing forecasting of
attacks that are analyst-detected and -verified occurrences of malware. We call
these instances of malware cyber event data. Specifically, our dataset was
analyst-detected incidents from a large operational Computer Security Service
Provider (CSSP) for the U.S. Department of Defense, which rarely relies only on
automated systems. Our dataset consists of weekly counts of cyber events over
approximately seven years. Since all cyber events were validated by analysts,
our dataset is unlikely to contain the false positives that are endemic in
other sources of data. Further, this higher-quality data could be used for a
number of purposes, such as resource allocation, estimation of security resources, and the
development of effective risk-management strategies. We used a Bayesian State
Space Model for forecasting and found that events one week ahead could be
predicted. To quantify bursts, we used a Markov model. Our findings of
systematicity in analyst-detected cyber attacks are consistent with previous
work using other sources. The advanced information provided by a forecast may
help with threat awareness by providing a probable value and range for future
cyber events one week ahead. Other potential applications for cyber event
forecasting include proactive allocation of resources and capabilities for
cyber defense (e.g., analyst staffing and sensor configuration) in CSSPs.
Enhanced threat awareness may improve cybersecurity.
Comment: Revised version resubmitted to journal
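The abstract does not specify the Bayesian state space model in detail; a minimal sketch of one-step-ahead forecasting with a local-level (random walk plus noise) model and a scalar Kalman filter, where both noise variances are arbitrary tuning values of mine, not the paper's:

```python
def local_level_forecast(counts, q=1.0, r=10.0):
    """One-step-ahead forecast from a local-level state space model.

    counts: observed weekly event counts (time-ordered)
    q: level (state) noise variance, r: observation noise variance
    (both hypothetical). Returns the forecast for the week after the
    last observation.
    """
    level, p = counts[0], 1.0  # initial state mean and variance
    for y in counts[1:]:
        p += q                 # predict: random-walk level, variance grows
        k = p / (p + r)        # Kalman gain for the scalar model
        level += k * (y - level)  # update toward the new observation
        p *= (1 - k)
    return level

weekly = [12, 15, 11, 14, 13, 16, 12, 15]
print(round(local_level_forecast(weekly), 1))  # probable value one week ahead
```

A full Bayesian treatment would also report a credible interval around this point forecast, matching the abstract's "probable value and range."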
ATTACK2VEC: Leveraging Temporal Word Embeddings to Understand the Evolution of Cyberattacks
Despite the fact that cyberattacks are constantly growing in complexity, the
research community still lacks effective tools to easily monitor and understand
them. In particular, there is a need for techniques that are able to not only
track how prominently certain malicious actions, such as the exploitation of
specific vulnerabilities, are exploited in the wild, but also (and more
importantly) how these malicious actions factor in as attack steps in more
complex cyberattacks. In this paper we present ATTACK2VEC, a system that uses
temporal word embeddings to model how attack steps are exploited in the wild,
and track how they evolve. We test ATTACK2VEC on a dataset of billions of
security events collected from the customers of a commercial Intrusion
Prevention System over a period of two years, and show that our approach is
effective in monitoring the emergence of new attack strategies in the wild and
in flagging which attack steps are often used together by attackers (e.g.,
vulnerabilities that are frequently exploited together). ATTACK2VEC provides a
useful tool for researchers and practitioners to better understand cyberattacks
and their evolution, and use this knowledge to improve situational awareness
and develop proactive defenses.
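ATTACK2VEC itself trains temporal word embeddings over event streams; as a simplified, runnable stand-in for its "attack steps used together" analysis, here is a windowed co-occurrence count over per-host event sequences (the window size and event names are assumptions of mine):

```python
from collections import Counter

def cooccurrence(sequences, window=3):
    """Count how often two attack steps occur within `window` positions
    of each other in the same event sequence."""
    counts = Counter()
    for seq in sequences:
        for i, step in enumerate(seq):
            for other in seq[i + 1 : i + window]:
                if other != step:
                    counts[frozenset((step, other))] += 1
    return counts

# Hypothetical per-host security-event sequences.
sequences = [
    ["scan", "CVE-A", "CVE-B", "exfil"],
    ["scan", "CVE-A", "CVE-B"],
    ["scan", "exfil"],
]
pairs = cooccurrence(sequences)
print(pairs[frozenset(("CVE-A", "CVE-B"))])  # the two CVEs co-occur twice
```

Embeddings improve on raw co-occurrence by placing steps with similar contexts close together, and re-training per time slice is what lets the system track how those relationships drift.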
Network entity characterization and attack prediction
The devastating effects of cyber-attacks highlight the need for novel attack
detection and prevention techniques. Over recent years, considerable work has
been done in the areas of attack detection as well as in collaborative defense.
However, an analysis of the state of the art suggests that many challenges
exist in prioritizing alert data and in studying the relation between a
recently discovered attack and the probability of it occurring again. In this
article, we propose a system that is intended for characterizing network
entities and the likelihood that they will behave maliciously in the future.
Our system, namely Network Entity Reputation Database System (NERDS), takes
into account all the available information regarding a network entity (e.g., IP
address) to calculate the probability that it will act maliciously. The latter
part is achieved via the utilization of machine learning. Our experimental
results show that it is indeed possible to precisely estimate the probability
of future attacks from each entity using information about its previous
malicious behavior and other characteristics. Ranking the entities by this
probability has practical applications in alert prioritization, assembly of
highly effective blacklists of limited length, and other use cases.
Comment: 30 pages, 8 figures
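NERDS's machine learning internals are not given in the abstract; a hedged sketch of its final step, ranking entities by an estimated maliciousness probability and keeping the top k as a limited-length blacklist (the probabilities below stand in for the output of an upstream model, which is not implemented here):

```python
def top_k_blacklist(scores, k):
    """Return at most k entities, highest predicted maliciousness first.

    scores: mapping of entity (e.g. IP address) to an assumed model-assigned
    probability of future malicious behaviour.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [entity for entity, _ in ranked[:k]]

# Hypothetical per-IP probabilities (documentation addresses, made-up values).
scores = {"198.51.100.7": 0.91, "203.0.113.4": 0.35,
          "192.0.2.10": 0.78, "198.18.0.9": 0.05}
print(top_k_blacklist(scores, k=2))  # ['198.51.100.7', '192.0.2.10']
```

The same ranking serves alert prioritization: alerts involving higher-scored entities are handled first.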
Predictive Methods in Cyber Defense: Current Experience and Research Challenges
Predictive analysis allows next-generation cyber defense that is more proactive than current approaches based on intrusion detection. In this paper, we discuss various aspects of predictive methods in cyber defense and illustrate them on three examples of recent approaches. The first approach uses data mining to extract frequent attack scenarios and uses them to project ongoing cyberattacks. The second approach uses a dynamic network entity reputation score to predict malicious actors. The third approach uses time series analysis to forecast attack rates in the network. This paper presents a unique evaluation of the three distinct methods in a common environment of an intrusion detection alert sharing platform, which allows for a comparison of the approaches and illustrates the capabilities of predictive analysis for current and future research and cybersecurity operations. Our experiments show that all three methods achieved a sufficient technology readiness level for experimental deployment in an operational setting, with promising accuracy and usability. Notably, the prediction and projection methods, despite their differences, are both well suited to predictive blacklisting; the first provides more detailed output, while the second is more extensible. Network security situation forecasting is lightweight and displays very high accuracy, but does not provide details on predicted events.
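The first approach, projecting ongoing attacks from mined frequent scenarios, can be sketched with consecutive alert-type pairs standing in for full attack scenarios (the mining granularity, support threshold, and alert names are my assumptions, not the paper's):

```python
from collections import Counter

def frequent_bigrams(alert_sequences, min_support=2):
    """Mine frequent consecutive alert pairs from shared alert sequences
    (a stand-in for the paper's attack-scenario mining)."""
    counts = Counter()
    for seq in alert_sequences:
        counts.update(zip(seq, seq[1:]))
    return {pair: c for pair, c in counts.items() if c >= min_support}

def project(current_alert, patterns):
    """Project the most likely next alert given the mined patterns."""
    candidates = {b: c for (a, b), c in patterns.items() if a == current_alert}
    return max(candidates, key=candidates.get) if candidates else None

# Hypothetical alert-type sequences from an alert sharing platform.
history = [["scan", "bruteforce", "login"],
           ["scan", "bruteforce", "exfil"],
           ["scan", "bruteforce", "login"]]
patterns = frequent_bigrams(history)
print(project("bruteforce", patterns))  # 'login' (seen twice vs. once)
```

The projection output is what makes this method more detailed than a bare blacklist: it names the expected next step, not just the expected actor.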
Adversarial behaviours knowledge area
The technological advancements witnessed by our society in recent decades have brought
improvements in our quality of life, but they have also created a number of opportunities for
attackers to cause harm. Before the Internet revolution, most crime and malicious activity
generally required a victim and a perpetrator to come into physical contact, and this limited
the reach that malicious parties had. Technology has removed the need for physical contact
to perform many types of crime, and now attackers can reach victims anywhere in the world, as long as they are connected to the Internet. This has revolutionised the characteristics of crime and warfare, allowing operations that would not have been possible before. In this document, we provide an overview of the malicious operations that are happening on the Internet today. We first provide a taxonomy of malicious activities based on the attacker’s motivations and capabilities, and then move on to the technological and human elements that adversaries require to run a successful operation. We then discuss a number of frameworks that have been proposed to model malicious operations. Since adversarial behaviours are not a purely technical topic, we draw from research in a number of fields (computer science, criminology, war studies). While doing this, we discuss how these frameworks can be used by researchers and practitioners to develop effective mitigations against malicious online operations.
Published version
Neural Network Models for TCP - SYN Flood Denial of Service Attack Detection With Source and Destination Anonymization
The Internet has become a necessity in today's digital world, and making it secure is a pressing concern. Hackers are investing ever-increasing efforts to compromise Internet nodes with novel techniques. According to Forbes, every minute, $2,900,000 is lost to cybercrime. A common cyber-attack is the Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack, which may bring a network to a standstill, and unless mitigated, network services could be halted for an extended period. The attack can occur at any layer of the OSI model. This thesis focuses on SYN Flood DoS/DDoS attacks, also known as TCP Flood attacks, and studies the use of artificial neural networks to detect them. The specific neural network models used in this thesis are Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and a semi-supervised model based on label propagation. All neural network models detect attacks by analyzing the individual hexadecimal values in the packet header. A novelty of the approach followed in this thesis is that the neural networks do not consider the lexical values of the network packet (MAC addresses, IP addresses, and port numbers) as input features in their traffic analysis. Instead, the neural network models are designed and trained to detect malicious traffic based on the time pattern of TCP flags, basing their analysis on time-sequenced patterns. An important hyperparameter discussed in this thesis is the size of the lookup window, that is, the number of past packets the model can access to predict the next packet. Evaluation results based on the datasets presented in this thesis show that the accuracies of the GRU, CNN/LSTM, and label propagation models are 81%, 93%, and 96%, respectively.
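The lookup-window hyperparameter described above amounts to a sliding-window preprocessing step that turns a time-ordered stream of TCP flag values into (window, next-flag) training pairs for a sequence model. A minimal sketch (the flag encoding and window size are my assumptions, not the thesis's exact pipeline):

```python
def make_windows(flag_seq, lookup=4):
    """Build (window, next_flag) pairs from a time-ordered sequence of
    TCP flag byte values; `lookup` is the lookup-window size, i.e. how
    many past packets the model sees when predicting the next one."""
    pairs = []
    for i in range(len(flag_seq) - lookup):
        pairs.append((flag_seq[i : i + lookup], flag_seq[i + lookup]))
    return pairs

# Hypothetical flag bytes: 0x02=SYN, 0x12=SYN-ACK, 0x10=ACK, 0x18=PSH-ACK,
# 0x11=FIN-ACK. A SYN flood would instead show long runs of bare SYNs
# with no completing ACKs.
benign = [0x02, 0x12, 0x10, 0x18, 0x10, 0x11]
pairs = make_windows(benign, lookup=4)
print(pairs[0])  # first window of 4 flags, plus the flag that follows it
```

A GRU or LSTM trained on such pairs learns the normal temporal pattern of flags, so sequences that it predicts poorly (e.g. unbroken SYN runs) stand out as attack traffic.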