21 research outputs found

    Towards automated distributed containment of zero-day network worms

    Get PDF
    Worms are a serious threat to computer network security. Their high propagation speed and ability to self-replicate make them highly infectious. Zero-day worms represent a particularly challenging class of such malware, with the cost of a single worm outbreak estimated to be as high as US$2.6 billion. In this paper, we present a distributed automated worm detection and containment scheme based on the correlation of Domain Name System (DNS) queries with the destination IP addresses of outgoing TCP SYN and UDP datagrams leaving the network boundary. The scheme also uses cooperation between communicating scheme members via a custom protocol, which we term Friends. The absence of a DNS lookup prior to an outgoing TCP SYN or UDP datagram to a new destination IP address serves as a behavioral signature for a rate-limiting mechanism, while the Friends protocol spreads reports of the event to potentially vulnerable, uninfected peer networks within the scheme. To our knowledge, this is the first implementation of such a scheme. We conducted empirical experiments across six class C networks, using a Slammer-like pseudo-worm to evaluate the performance of the proposed scheme. The results show a significant reduction in worm infection when the countermeasure scheme is invoked.
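    The detection signature described above (an outgoing TCP SYN or UDP datagram to an address that no recent DNS lookup resolved) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class name, TTL, and rate-limit threshold are all assumptions.

    ```python
    import time

    class DnsCorrelationDetector:
        """Sketch of the DNS-correlation signature (hypothetical API).

        Outgoing packets to an IP address that was never returned by a
        recent DNS lookup are treated as suspicious and rate limited."""

        def __init__(self, ttl=300.0, max_unresolved_per_minute=5):
            self.ttl = ttl              # how long a DNS answer stays valid
            self.resolved = {}          # ip -> expiry timestamp
            self.suspicious = []        # timestamps of unresolved contacts
            self.limit = max_unresolved_per_minute

        def observe_dns_answer(self, ip, now=None):
            now = time.time() if now is None else now
            self.resolved[ip] = now + self.ttl

        def allow_outgoing(self, dst_ip, now=None):
            """Return True if the packet may leave the network boundary."""
            now = time.time() if now is None else now
            if self.resolved.get(dst_ip, 0.0) > now:
                return True             # preceded by a DNS lookup: normal
            # No prior lookup: record it and rate-limit unresolved contacts.
            self.suspicious = [t for t in self.suspicious if now - t < 60.0]
            self.suspicious.append(now)
            return len(self.suspicious) <= self.limit

    d = DnsCorrelationDetector()
    d.observe_dns_answer("192.0.2.10", now=0.0)
    print(d.allow_outgoing("192.0.2.10", now=1.0))    # True: lookup preceded it
    print(d.allow_outgoing("198.51.100.7", now=2.0))  # allowed, but counted
    ```

    A worm scanning random addresses generates many lookup-free contacts per minute and quickly exceeds the limit, while normal hosts almost always resolve a name first.
    
    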

    'Cooperative Automated worm Response and Detection ImmuNe ALgorithm (CARDINAL) inspired by T-cell Immunity and Tolerance'

    Get PDF
    The role of T-cells within the immune system is to confirm and assess anomalous situations and then either respond to or tolerate the source of the effect. To illustrate how these mechanisms can be harnessed to solve real-world problems, we present the blueprint of a T-cell-inspired algorithm for worm detection in computer security. We show how the three central T-cell processes, namely T-cell maturation, differentiation, and proliferation, map naturally into this domain, and further illustrate how such an algorithm fits into a complete immune-inspired computer security system and framework.

    The Framework for Simulation of Bioinspired Security Mechanisms against Network Infrastructure Attacks

    Get PDF
    The paper outlines a bioinspired approach named the "network nervous system" and methods for simulating infrastructure attacks and protection mechanisms based on this approach. The protection mechanisms consist of distributed procedures for information collection and processing, which coordinate the activities of the main devices of a computer network, identify attacks, and determine the necessary countermeasures. Attacks and protection mechanisms are specified as structural models using a set-theoretic approach. An environment for simulating protection mechanisms based on this biological metaphor is considered, and experiments demonstrating the effectiveness of the protection mechanisms are described.

    Flexible Network Monitoring with FLAME

    Get PDF
    Increases in the scale, complexity, interdependency, and security requirements of networks have motivated increased automation of activities such as network monitoring. We have employed technology derived from active networking research to develop a series of network monitoring systems but, unlike most previous work, made application needs the priority over infrastructure properties. This choice has produced the following results: (1) the techniques for general infrastructure are both applicable and portable to specific applications such as network monitoring; (2) tradeoffs can benefit our applications while preserving considerable flexibility; and (3) careful engineering allows applications with open architectures to perform competitively with custom-built static implementations. These results are demonstrated via measurements of the lightweight active measurement environment (LAME), its successor, flexible LAME (FLAME), and their application to monitoring for performance and security.

    When Gossip is Good: Distributed Probabilistic Inference for Detection of Slow Network Intrusions

    Get PDF
    Abstract. Intrusion attempts due to self-propagating code are becoming an increasingly urgent problem, in part due to the homogeneous makeup of the Internet. Recent advances in anomaly-based intrusion detection systems (IDSs) have made use of the quickly spreading nature of these attacks to identify them with high sensitivity and at low false positive (FP) rates. However, slowly propagating attacks are much more difficult to detect because they are cloaked under the veil of normal network traffic, yet can be just as dangerous due to their exponential spread pattern. We extend the idea of using collaborative IDSs to corroborate the likelihood of attack by imbuing end hosts with probabilistic graphical models and using random messaging to gossip state among peer detectors. We show that such a system is able to boost a weak anomaly detector D to detect an order-of-magnitude slower worm, at false positive rates of less than a few per week, than would be possible using D alone at the end-host or on a network aggregation point. We show that this general architecture is scalable in the sense that a fixed absolute false positive rate can be achieved as the network size grows, spreads communication bandwidth uniformly throughout the network, and makes use of the increased computation power of a distributed system. We argue that using probabilistic models provides more robust detections than previous collaborative counting schemes and allows the system to account for heterogeneous detectors in a principled fashion.

    Intrusion Detection

    Worms pose an increasingly serious threat to network security. With known worms estimated to reach peak speeds of 23K connections per second, and theoretical analyses citing higher speeds, the entire Internet risks infection within tens of minutes. As the methods to detect worms become increasingly sophisticated, worm designers react by making worms harder to detect and stop. Worms released over the past year have tended to the extremes: either much faster, to allow rapid spread, or much slower, to prevent detection. The latter approach places an increasing burden on detection methods to effectively pick out and isolate worm traffic from the baseline of normal traffic seen at a host. While the slower rate does offer some respite to network operators (if detected, such worms can be contained with relatively little collateral damage), detection is extremely challenging because slow worms can hide under the veil of normal traffic. Although locally a worm may be propagating very slowly, if it manages to reproduce more than once before being detected on a local host, it will still grow at an exponential rate. Yet another challenge in dealing with worms is that individual entities see only a partial picture of the larger network-wide behavior of the worm(s). IDSs deployed in select networks might not see any worm traffic for a long time, and perhaps only when it is too late. Collaboration is seen as a way to remedy this; systems that allow multiple IDSs to share information have been shown to provide greater "coverage" in detection.

    In this paper, we describe an approach to host-based IDS using distributed probabilistic inference. The starting point in our work is a set of weak host-based IDSs, referred to as local detectors (LDs), distributed throughout the network. We allow the hosts to collaborate and combine their weak information in a novel way to mitigate the effect of the high false positive rate. LDs raise alarms at a relatively high frequency whenever they detect even a remotely plausible anomaly. Alarms spreading in the network are aggregated by global detectors (GDs) to determine whether the network, as a whole, is in an anomalous state, e.g., under attack. A similar system of distributed Bayesian-network-based intrusion detection has been presented in prior work.

    Our main contribution in this paper is a probabilistic framework that aggregates (local) beliefs to perform network-wide inference. Our primary findings are:
    • We can detect an order-of-magnitude slower worm than could be detected using LDs alone, at a FP rate of one per week. (By contrast, the Intel network operations center typically investigates 2-3 false positives each day.)
    • Our framework shows good scalability properties in the sense that we achieve a fixed false positive rate for the system, independent of the network size.
    • Our probabilistic model outperforms previous collaborative counting schemes and allows the system to account for heterogeneous detectors in a principled fashion. (For instance, a system may employ a range of detectors, or some detectors may be more trusted than others.)

    While the methods we describe are quite general and applicable in a wide variety of network settings, our empirical results operate over a subset of the Intel enterprise network. In the following sections, we describe the architecture of our system, discuss the advantages and disadvantages of the many design points, and present empirical results that demonstrate several of the advantages of the system we propose.

    Architectural Model

    In answer to the challenges posed in the previous section, we propose a system composed of three primary subcomponents. The LDs live at the end-hosts and are designed to be weak but general classifiers which collect information and make "noisy" conclusions about anomalies at the host level. This design serves several purposes:
    1. Analysis of network traffic at the host level compares the weak signal to a much smaller background noise level, and so can boost the signal-to-noise ratio by orders of magnitude compared to an IDS that operates within the network.
    2. Host-based detectors can make use of a richer set of data, possibly using application data from the host as input into the local classifier.
    3. The system adds computational power to the detection problem by massively distributing computations across the end-hosts.

    An important design decision is where to place the GDs in the network, and this decision goes hand in hand with the design of the ISS. There are at least two possibilities, one of which is centralized placement. The ISS uses a network protocol to communicate state between the LDs and the GDs. For the purposes of this paper, we assume that each LD communicates its state by beaconing to a random set of GDs at regularly spaced epochs. There are many important and interesting research questions about what an ideal ISS should look like; for example, messages could be aggregated from host to host to allow exponential spreading of information. However, it is beyond the scope of this paper to treat this issue in depth. Here we assume that no message aggregation takes place: each LD relays its own state to M hosts, chosen at random each epoch. In the following sections, we examine in detail the LDs and GDs used by our system.

    The Local Detectors

    For the purposes of this paper, we define a LD as a binary classifier that sits on the local host and uses information about the local state or local traffic patterns to classify the state of the host as normal or abnormal. We assume that the LDs are weak in the sense that they may have a high false-positive rate, but general, in that they are likely to fire for a broad range of anomalous behavior. In the context of intrusion-detection systems, because of the high volume of traffic in modern networks, what may appear to be a relatively small FP rate could by itself result in an unacceptable level of interruptions, and so could be classified as weak. The LD implementation we use in this paper is a heuristic-based detector that analyzes outgoing traffic and counts the number of new outgoing connections to unique destination addresses and outgoing ports; alerts are raised when this number crosses a configured threshold.
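    The heuristic LD and its per-epoch beaconing to M random GDs can be sketched as follows. This is an illustrative toy, not the paper's code; the class names, threshold, and the list-based stand-in for a network send are all assumptions.

    ```python
    import random

    class LocalDetector:
        """Toy sketch of the heuristic local detector (hypothetical names).

        Counts distinct (destination address, port) pairs contacted during
        the current epoch; alarms when the count crosses a threshold."""

        def __init__(self, threshold=20):
            self.threshold = threshold
            self.new_destinations = set()

        def observe_connection(self, dst_ip, dst_port):
            self.new_destinations.add((dst_ip, dst_port))

        def alarm(self):
            return len(self.new_destinations) > self.threshold

        def end_epoch(self, global_detectors, m=3):
            """Beacon this host's state to M randomly chosen GDs, then reset."""
            state = {"alarm": self.alarm(),
                     "count": len(self.new_destinations)}
            chosen = random.sample(global_detectors,
                                   min(m, len(global_detectors)))
            for gd in chosen:
                gd.append(state)      # stand-in for sending over the network
            self.new_destinations.clear()
            return state
    ```

    A GD would then aggregate the beaconed states from many LDs (in the paper, via probabilistic inference) rather than trusting any single noisy alarm.
    
    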

    Simulation of computer network protection mechanisms against infrastructure attacks based on the "network nervous system" approach

    Get PDF
    The paper presents an analysis of a protection mechanism against infrastructure attacks based on the bio-inspired "network nervous system" approach. We propose using network packet-level simulation to investigate this protection mechanism. The paper describes the architecture of the protection system implementing the mechanism, the algorithms of its operation, and the results of the experiments. Based on the experimental data, we analyze the effectiveness of the proposed protection mechanism.

    Detecting worm mutations using machine learning

    Get PDF
    Worms are malicious programs that spread over the Internet without human intervention. Since worms generally spread faster than humans can respond, the only viable defence is to automate their detection. Network intrusion detection systems typically detect worms by examining packet or flow logs for known signatures. Not only does this approach mean that new worms cannot be detected until the corresponding signatures are created, but mutations of known worms will also remain undetected, because each mutation will usually have a different signature. The intuitive and seemingly most effective solution is to write more generic signatures, but this has been found to increase false alarm rates and is thus impractical. This dissertation investigates the feasibility of using machine learning to automatically detect mutations of known worms. First, it investigates whether Support Vector Machines can detect mutations of known worms. Support Vector Machines have been shown to be well suited to pattern recognition tasks such as text categorisation and hand-written digit recognition. Since detecting worms is effectively a pattern recognition problem, this work investigates how well Support Vector Machines perform at this task. The second part of this dissertation compares Support Vector Machines to other machine learning techniques in detecting worm mutations. Gaussian Processes, unlike Support Vector Machines, automatically return confidence values as part of their result. Since confidence values can be used to reduce false alarm rates, this dissertation determines how Gaussian Processes compare to Support Vector Machines in terms of detection accuracy. For further comparison, this work also compares Support Vector Machines to K-nearest neighbours, known for its simplicity and solid results in other domains. The third part of this dissertation investigates the automatic generation of training data.
    Classifier accuracy depends on good quality training data -- the wider the training data spectrum, the higher the classifier's accuracy. This dissertation describes the design and implementation of a worm mutation generator whose output is fed to the machine learning techniques as training data. This dissertation then evaluates whether the training data can be used to train classifiers of sufficiently high quality to detect worm mutations. The findings of this work demonstrate that Support Vector Machines can be used to detect worm mutations, and that the optimal configuration for detection of worm mutations is a linear kernel with unnormalised bi-gram frequency counts. Moreover, the results show that Gaussian Processes and Support Vector Machines exhibit similar accuracy on average in detecting worm mutations, while K-nearest neighbours consistently produces lower-quality predictions. The generated worm mutations are shown to be of sufficiently high quality to serve as training data. Combined, the results demonstrate that machine learning is capable of accurately detecting mutations of known worms.
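    The feature representation the dissertation found optimal (unnormalised byte-bigram frequency counts feeding a linear decision boundary) can be illustrated as follows. To stay self-contained, a simple perceptron stands in for the linear-kernel SVM here: both learn a linear separator over the same features, but this is explicitly not the dissertation's classifier, and the toy payloads are invented for illustration.

    ```python
    from collections import Counter

    def bigram_counts(payload: bytes) -> Counter:
        """Unnormalised byte-bigram frequency counts of a payload."""
        return Counter(zip(payload, payload[1:]))

    def dot(w, x):
        """Sparse dot product between a weight dict and a feature Counter."""
        return sum(w.get(k, 0.0) * v for k, v in x.items())

    def train_perceptron(samples, labels, epochs=10):
        """Tiny linear classifier standing in for the linear-kernel SVM."""
        w = {}
        for _ in range(epochs):
            for x, y in zip(samples, labels):   # y in {-1, +1}
                if y * dot(w, x) <= 0:          # misclassified: update weights
                    for k, v in x.items():
                        w[k] = w.get(k, 0.0) + y * v
        return w

    # Toy data: "worm" payloads share a repeated byte pattern; benign do not.
    worms = [b"\x90\x90ABAB" * 8, b"ABABAB\x90" * 8]
    benign = [b"GET /index.html", b"HELO mail.example.com"]
    X = [bigram_counts(p) for p in worms + benign]
    y = [+1, +1, -1, -1]
    w = train_perceptron(X, y)
    print(dot(w, bigram_counts(b"ABAB" * 10)) > 0)  # True: scored as worm-like
    ```

    Leaving the counts unnormalised preserves absolute pattern frequency, which, per the dissertation's findings, is informative for distinguishing worm mutations under a linear kernel.
    
    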