
    Performance Metrics for Network Intrusion Systems

    Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance, focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates are used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed by analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint, with current challenges also shown to stimulate further research. Integrity in the presence of mixed trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. Sensitivity is introduced to define basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance for the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
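    The thesis's exact formulas for sensitivity and selectivity are not given in this abstract; the dB scale only suggests a logarithmic ratio. The sketch below is a purely hypothetical illustration of collapsing the usual four confusion-matrix counts (TP, FP, TN, FN) into a single log-ratio score, and should not be read as the definition used in the work above.

```python
import math

def sensitivity_db(tp, fp, tn, fn):
    """Hypothetical single-parameter detectability score in dB.

    Collapses the four confusion-matrix counts into one value by taking
    the log-ratio of true-positive rate to false-positive rate. This is
    an illustrative assumption, not the thesis's actual definition.
    """
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    if tpr == 0.0:
        return float("-inf")   # attack never detected
    if fpr == 0.0:
        return float("inf")    # no false alarms at all
    return 10.0 * math.log10(tpr / fpr)

# Example: an attack detected 90% of the time with a 1% false alarm rate.
print(f"{sensitivity_db(tp=90, fn=10, fp=10, tn=990):.1f} dB")  # ~19.5 dB
```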

    Dueling-HMM Analysis on Masquerade Detection

    Masquerade detection is the ability to detect attackers, known as masqueraders, who intrude on another user’s system and pose as legitimate users. Once a masquerader obtains access to a user’s system, the masquerader has free rein over whatever data is on that system. In this research, we focus on masquerade detection and user classification using the following approaches: the heavy hitter approach and two different approaches based on hidden Markov models (HMMs), the dueling-HMM and threshold-HMM strategies. The heavy hitter approach computes the frequent elements seen in the training and test data sequences and measures the distance between them to decide whether the test data sequence is masqueraded or not. The results show very misleading classifications, suggesting that the approach is not viable for masquerade detection. A hidden Markov model is a tool for representing probability distributions over sequences of observations [9]. Previous research has shown that a threshold-based hidden Markov model (HMM) approach is successful in a variety of categories: malware detection, intrusion detection, pattern recognition, etc. We have verified that using a threshold-based HMM approach produces high accuracy with a low false positive rate. Using the dueling-HMM approach, which utilizes multiple training HMMs, we obtain an overall accuracy of 81.96%. With the introduction of a bias in the dueling-HMM approach, we produce results similar to those obtained with the threshold-based HMM approach, where many non-masqueraded data are detected while many masqueraded data avoid detection, yet the result is still a high overall accuracy.
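    As a rough illustration of the two HMM strategies described above, the sketch below scores an encoded command sequence with the scaled forward algorithm: the threshold strategy flags the sequence when its per-symbol log-likelihood under a model of the legitimate user falls below a cut-off, and the dueling strategy compares likelihoods under competing models. All model parameters, the threshold value and the symbol encoding are made-up toy values, not those used in the paper.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm). pi: initial state probabilities,
    A: state transition matrix, B: emission matrix (states x symbols)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Toy parameters for two HMMs (assumed already trained, e.g. with Baum-Welch):
# one on the legitimate user's command history, one on masquerade data.
pi = np.array([0.6, 0.4])
A_user = np.array([[0.8, 0.2], [0.3, 0.7]])
B_user = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
A_masq = np.array([[0.5, 0.5], [0.5, 0.5]])
B_masq = np.array([[0.2, 0.3, 0.5], [0.4, 0.4, 0.2]])

test_seq = [0, 0, 1, 2, 2, 2, 1]  # command sequence encoded as integer symbols

# Threshold strategy: flag if the per-symbol log-likelihood under the
# legitimate user's model is too low (-1.2 is an arbitrary cut-off).
score = forward_log_likelihood(test_seq, pi, A_user, B_user) / len(test_seq)
print("masquerade" if score < -1.2 else "normal")

# Dueling strategy: compare likelihoods under the competing models.
ll_user = forward_log_likelihood(test_seq, pi, A_user, B_user)
ll_masq = forward_log_likelihood(test_seq, pi, A_masq, B_masq)
print("masquerade" if ll_masq > ll_user else "normal")
```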

    A Neural Network Approach for Intrusion Detection Systems

    Intrusion detection systems, alongside firewalls and gateways, represent the first line of defense against computer network attacks. There are various commercial and open source intrusion detection systems on the market; nevertheless, they do not perform well in various situations, including novel attacks and user activity detection, in some cases generating false positive or negative alerts. The reason behind such performance is probably the reliance on merely signature-based checks and a high degree of dependence on human interaction. On the other hand, a neural network approach might be the right one to tackle these issues. Neural networks have already been applied successfully to solve many problems related to pattern recognition, data mining and data compression, and research is still underway with regard to intrusion detection systems. Unsupervised learning and fast network convergence are some features that can be integrated into an IDS using neural networks. The networks can be designed to process a variety of data, although there are some constraints regarding input formatting. For this reason, data encoding represents a challenging task in the integration process, since it needs to be optimised for the IDS domain. This paper will discuss the integration of IDS and neural networks, including data encoding and performance issues.
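    As an illustration of the encoding issue raised above, the sketch below follows a common (not paper-specific) pattern: one-hot encode categorical connection fields, scale numeric ones, and feed the result to a small multilayer perceptron. The field names and records are hypothetical.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

# Hypothetical connection records: categorical fields must be encoded,
# numeric fields scaled, before a neural network can use them.
records = pd.DataFrame({
    "protocol": ["tcp", "udp", "tcp", "icmp"],
    "service":  ["http", "dns", "ftp", "echo"],
    "duration": [1.2, 0.1, 30.5, 0.0],
    "bytes":    [2400, 80, 150000, 64],
})
labels = [0, 0, 1, 1]  # 0 = normal traffic, 1 = attack

encode = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["protocol", "service"]),
    ("num", StandardScaler(), ["duration", "bytes"]),
])

model = Pipeline([
    ("encode", encode),
    ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)),
])
model.fit(records, labels)
print(model.predict(records))
```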

    Predicting Network Attacks Using Ontology-Driven Inference

    Graph knowledge models and ontologies are very powerful modeling and reasoning tools. We propose an effective approach to modeling network attacks and attack prediction, which play important roles in security management. The goals of this study are twofold: first, we model network attacks, their prerequisites and consequences using knowledge representation methods in order to provide description logic reasoning and inference over attack domain concepts; secondly, we propose an ontology-based system which predicts potential attacks using inference over information provided by sensory inputs. We generate our ontology and evaluate the corresponding methods using the CAPEC, CWE, and CVE hierarchical datasets. Results from experiments show significant capability improvements compared to traditional hierarchical and relational models. The proposed method also reduces false alarms and improves intrusion detection effectiveness.
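    The paper's actual system uses description logic reasoning over an ontology built from CAPEC, CWE and CVE; the toy sketch below only illustrates the underlying idea of forward-chaining over attack prerequisites and consequences from observed facts. The attack names and relations are invented for the example.

```python
# Toy prerequisite/consequence knowledge base (illustrative only; not drawn
# from the paper's CAPEC/CWE/CVE ontology).
attacks = {
    "port_scan":         {"requires": {"network_access"},               "yields": {"open_ports_known"}},
    "sql_injection":     {"requires": {"open_ports_known", "web_form"}, "yields": {"db_read"}},
    "data_exfiltration": {"requires": {"db_read"},                      "yields": {"data_leaked"}},
}

def predict_attacks(observed_facts):
    """Repeatedly fire any attack whose prerequisites are satisfied and add
    its consequences to the fact base, until a fixed point is reached."""
    facts = set(observed_facts)
    predicted = []
    changed = True
    while changed:
        changed = False
        for name, rule in attacks.items():
            if name not in predicted and rule["requires"] <= facts:
                predicted.append(name)
                facts |= rule["yields"]
                changed = True
    return predicted

# Sensor observations: the host is reachable and exposes a web form.
print(predict_attacks({"network_access", "web_form"}))
# ['port_scan', 'sql_injection', 'data_exfiltration']
```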

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual, temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, `programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule or state based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
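    As a minimal illustration of the rule-based event correlation discussed above, the sketch below fires a rule when all of its required event types occur within a sliding time window across a stream of time-stamped events. The rule and event names are hypothetical, not taken from the report.

```python
from collections import deque

class CorrelationRule:
    """Fires when every required event type is seen within a time window."""

    def __init__(self, name, required_events, window_seconds):
        self.name = name
        self.required = set(required_events)
        self.window = window_seconds
        self.recent = deque()  # (timestamp, event_type) pairs inside the window

    def observe(self, timestamp, event_type):
        """Return the rule name if all required events co-occur in the window."""
        if event_type in self.required:
            self.recent.append((timestamp, event_type))
        # Drop events that have slid out of the correlation window.
        while self.recent and timestamp - self.recent[0][0] > self.window:
            self.recent.popleft()
        if self.required <= {ev for _, ev in self.recent}:
            return self.name
        return None

rule = CorrelationRule("possible_toll_fraud",
                       {"failed_auth", "config_change", "high_call_volume"},
                       window_seconds=300)

stream = [(0, "failed_auth"), (40, "config_change"),
          (900, "failed_auth"), (950, "high_call_volume"), (1000, "config_change")]

for ts, ev in stream:
    alert = rule.observe(ts, ev)
    if alert:
        print(f"t={ts}s: correlated misuse pattern -> {alert}")
```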