
    NeuDetect: A neural network data mining system for wireless network intrusion detection

    This thesis proposes an Intrusion Detection System, NeuDetect, which applies a neural network technique to wireless network packets captured through hardware sensors for real-time detection of anomalous packets. To address the high false alarm rate of current wireless intrusion detection systems, the thesis presents a method of applying artificial neural networks to wireless network intrusion detection. The proposed approach finds normal and anomalous patterns in preprocessed wireless packet records by comparing them with training data using the back-propagation algorithm. An anomaly score is assigned to each packet by calculating the difference between the output error and a threshold. If the anomaly score is positive, the wireless packet is flagged as anomalous; if it is negative, the packet is flagged as normal. If the anomaly score is zero or close to zero, the packet is flagged as an unknown attack and is sent back to the training process for re-evaluation.
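
    A minimal sketch of the scoring rule this abstract describes (the thesis does not publish code; the threshold and the near-zero tolerance below are illustrative assumptions):

    # Label a packet from its anomaly score = output error - threshold.
    def label_packet(output_error, threshold, tolerance=1e-3):
        score = output_error - threshold
        if abs(score) <= tolerance:
            return "unknown"          # near zero: returned to the training process
        return "anomalous" if score > 0 else "normal"

    print(label_packet(output_error=0.42, threshold=0.30))  # -> anomalous
    print(label_packet(output_error=0.10, threshold=0.30))  # -> normal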

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detects when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
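
    A toy sketch of the hybrid idea the report favours: a rule base screens events for known misuses, while a model of normal behaviour flags unseen ones. The rules, the feature encoding and the IsolationForest detector are illustrative assumptions, not the report's own system.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    KNOWN_MISUSE_RULES = [
        lambda e: e["failed_logins"] > 10,                              # e.g. brute-force signature
        lambda e: e["call_duration"] == 0 and e["calls_per_min"] > 50,  # e.g. call-flooding signature
    ]

    def detect(event, normal_model):
        if any(rule(event) for rule in KNOWN_MISUSE_RULES):
            return "known misuse"
        x = np.array([[event["failed_logins"], event["call_duration"], event["calls_per_min"]]])
        return "possible new misuse" if normal_model.predict(x)[0] == -1 else "normal"

    # Train the anomaly component on (synthetic) normal behaviour only.
    rng = np.random.default_rng(0)
    normal = np.column_stack([rng.poisson(1, 500), rng.exponential(120, 500), rng.poisson(2, 500)])
    model = IsolationForest(random_state=0).fit(normal)
    print(detect({"failed_logins": 30, "call_duration": 90, "calls_per_min": 3}, model))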

    Intrusion Detection System: A Survey Using Data Mining and Learning Methods

    Despite the rapid growth of information systems, security has remained a hard problem for computers and networks alike. In information protection, an Intrusion Detection System (IDS) is used to safeguard data confidentiality, data integrity and system availability against various types of attacks. Data mining is an effective technique applied to intrusion detection to discover new patterns in massive network data, and it reduces the strain of manually compiling normal and abnormal behaviour patterns. An IDS is an essential means of protecting network security from incoming online threats, and machine learning enables automated classification of network patterns. This article reviews the present state of data mining techniques and compares various techniques used to implement intrusion detection systems, such as Support Vector Machines, Genetic Algorithms, neural networks, fuzzy logic, Bayesian classifiers, k-Nearest Neighbour and decision tree algorithms, highlighting the advantages and disadvantages of each. The paper also reviews the learning and detection methods used in IDSs, discusses the problems with existing intrusion detection systems, and reviews data reduction techniques used in IDSs to deal with huge volumes of audit data. Finally, conclusions and recommendations are given. Keywords: Classification, Data Mining, Intrusion Detection System, Security, Anomaly Detection, Types of Attacks, Machine Learning Technique
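
    A brief sketch of the kind of comparison this survey describes: several of the named classifiers evaluated on labelled connection records. Synthetic data stands in for a real audit set such as NSL-KDD, and the feature layout is an assumption.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 10))                   # stand-in connection features
    y = (X[:, 0] + X[:, 3] > 1).astype(int)          # stand-in "attack" label

    models = {
        "SVM": SVC(),
        "k-NN": KNeighborsClassifier(),
        "Decision tree": DecisionTreeClassifier(),
        "Naive Bayes": GaussianNB(),
        "Neural network": MLPClassifier(max_iter=500),
    }
    for name, clf in models.items():
        print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))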

    New Methods for Network Traffic Anomaly Detection

    In this thesis we examine the efficacy of applying outlier detection techniques to understand the behaviour of anomalies in communication network traffic, and we identify several shortcomings. Our most significant finding is that known techniques focus on characterizing either the spatial or the temporal behaviour of traffic, but rarely both. For example, DoS attacks are anomalies which violate temporal patterns, while port scans violate the spatial equilibrium of network traffic. To address this weakness we have designed a new method for outlier detection based on spectral decomposition of the Hankel matrix. The Hankel matrix is a spatio-temporal correlation matrix and has been used in many other domains, including climate data analysis and econometrics. Using our approach we can seamlessly integrate the discovery of both spatial and temporal anomalies, and comparison with other state-of-the-art methods in the networks community confirms that our approach can discover both DoS and port scan attacks. The spectral decomposition of the Hankel matrix is closely tied to the problem of inference in Linear Dynamical Systems (LDS). We introduce a new problem, the Online Selective Anomaly Detection (OSAD) problem, to model the situation where the objective is to report new anomalies in the system and suppress known faults. For example, in the network setting an operator may be interested in triggering an alarm for malicious attacks but not for faults caused by equipment failure. To solve OSAD we combine techniques from machine learning and control theory in a unique fashion: machine learning ideas are used to learn the parameters of an underlying data-generating system, while control theory techniques are used to model the feedback and modify the residual generated by the data-generating state model. Experiments on synthetic and real data sets confirm that the OSAD problem captures a general scenario and tightly integrates machine learning and control theory to solve a practical problem.
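
    A minimal sketch of anomaly scoring via spectral decomposition of a Hankel matrix, in the spirit of the method this abstract describes; the window length, rank and residual score below are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def hankel(series, window):
        # Stack lagged windows of the series as rows of a Hankel matrix.
        n = len(series) - window + 1
        return np.array([series[i:i + window] for i in range(n)])

    rng = np.random.default_rng(0)
    traffic = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)
    traffic[700:705] += 4.0                               # injected DoS-like burst

    H = hankel(traffic, window=50)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    k = 3                                                 # keep the dominant spectral components
    H_hat = (U[:, :k] * s[:k]) @ Vt[:k]                   # low-rank reconstruction of normal dynamics
    residual = np.linalg.norm(H - H_hat, axis=1)          # per-window anomaly score
    print("most anomalous window starts at index", int(residual.argmax()))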

    Performance Evaluation of Network Anomaly Detection Systems

    Nowadays, there is a huge and growing concern about security in information and communication technology (ICT) among the scientific community, because any attack or anomaly in the network can greatly affect many domains such as national security, private data storage, social welfare and economic issues. Anomaly detection is therefore a broad research area, and many different techniques and approaches for this purpose have emerged through the years. Attacks, problems and internal failures, when not detected early, may badly harm an entire network system. This thesis presents an autonomous profile-based anomaly detection system based on the statistical method Principal Component Analysis (PCADS-AD). The approach creates a network profile, called the Digital Signature of Network Segment using Flow Analysis (DSNSF), that denotes the predicted normal behaviour of network traffic activity derived from historical data analysis. That digital signature is used as a threshold for volume anomaly detection, identifying disparities from the normal traffic trend. The proposed system uses seven traffic flow attributes: bits, packets and number of flows to detect problems, and source and destination IP addresses and ports to provide the network administrator with the information necessary to solve them. Through evaluation metrics, the addition of a different anomaly detection approach, and comparisons with other methods using real network traffic data, the results showed good traffic prediction by the DSNSF and encouraging false alarm generation and detection accuracy. The observed results seek to contribute to the advance of the state of the art in methods and strategies for anomaly detection, aiming to surpass some of the challenges that emerge from the constant growth in complexity, speed and size of today's large-scale networks; the low complexity and agility of the proposed system also allow it to be applied to detection in real time.
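
    A short sketch in the spirit of the PCA-based profile described above: a normal-traffic signature is built from historical flow volumes and used with a tolerance band for volume anomaly detection. The synthetic data, the 3-sigma band and the restriction to three volume attributes are illustrative assumptions, not the thesis's DSNSF implementation.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    # Synthetic history: bits/packets/flows per 5-minute interval (288 per day) over 30 days.
    base = np.vstack([1.5 + np.sin(np.linspace(0, 2 * np.pi, 288))] * 3).T * [1e9, 1e6, 1e3]
    history = np.stack([base * rng.normal(1.0, 0.05, size=(288, 3)) for _ in range(30)])

    # Normal-behaviour signature: PCA reconstruction of the per-interval historical mean.
    pca = PCA(n_components=2).fit(history.reshape(-1, 3))
    signature = pca.inverse_transform(pca.transform(history.mean(axis=0)))
    band = 3 * history.std(axis=0)                      # 3-sigma tolerance around the signature

    today = base * rng.normal(1.0, 0.05, size=(288, 3))
    today[100] *= 5                                     # injected volume anomaly
    alarms = np.any(np.abs(today - signature) > band, axis=1)
    print("anomalous intervals:", np.where(alarms)[0])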

    IMPROVE - Innovative Modelling Approaches for Production Systems to Raise Validatable Efficiency

    This open access work presents selected results from the European research and innovation project IMPROVE, which yielded novel data-based solutions to enhance machine reliability and efficiency in the fields of simulation and optimization, condition monitoring, alarm management, and quality prediction.

    Detection of Stock Price Manipulation Using Kernel Based Principal Component Analysis and Multivariate Density Estimation

    Stock price manipulation uses illegitimate means to artificially influence the market prices of stocks. It causes massive losses and undermines investors' confidence and the integrity of the stock market. Several existing research works have focused on detecting a specific manipulation scheme using supervised learning, but they lack the adaptive capability to capture different manipulative strategies and require assuming model parameter values specific to the underlying manipulation scheme. In addition, supervised learning requires labelled data, which is difficult to acquire due to the confidentiality and proprietary nature of trading data. The proposed research establishes a detection model based on unsupervised learning, using Kernel Principal Component Analysis (KPCA) and exploiting the increased variance of selected latent features in higher dimensions. A proposed Multidimensional Kernel Density Estimation (MKDE) clustering is then applied to the selected components to identify abnormal patterns of manipulation in the data. This research has an advantage over existing methods in avoiding the ambiguity of assuming values for several parameters, and in reducing the high dimensionality obtained from conventional KPCA and thereby the computational complexity. The robustness of the detection model has also been evaluated when two or more manipulative activities occur within a short duration of each other, and by varying the window length of the dataset fed to the model. The results show a comprehensive assessment of the model on multiple datasets, with a significant performance enhancement in terms of F-measure and a significant reduction in false alarm rate (FAR).
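
    A compact sketch of the pipeline this abstract outlines: Kernel PCA projects the trading features, and a multivariate kernel density estimate over the retained components flags low-density (potentially manipulative) points. The synthetic data, bandwidth and density cut-off are illustrative assumptions, not the paper's exact MKDE model.

    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(3)
    normal_trades = rng.normal(size=(500, 6))            # stand-in price/volume features
    manipulated = rng.normal(loc=4.0, size=(5, 6))       # stand-in manipulative bursts
    X = np.vstack([normal_trades, manipulated])

    Z = KernelPCA(n_components=3, kernel="rbf").fit_transform(X)
    kde = KernelDensity(bandwidth=0.5).fit(Z)
    log_density = kde.score_samples(Z)
    threshold = np.quantile(log_density, 0.01)           # lowest-density 1% flagged
    print("flagged indices:", np.where(log_density < threshold)[0])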

    Supporting Telecommunication Alarm Management System with Trouble Ticket Prediction

    Fault alarm data emanating from heterogeneous telecommunication network services and infrastructures are exploding with network expansion. Managing and tracking alarms with Trouble Tickets using manual or expert rule-based methods has become challenging due to the increasing complexity of Alarm Management Systems and the demand for highly trained experts. As the size and complexity of networks grow, identifying semantically identical alarms generated by heterogeneous network elements from diverse vendors with data-driven methodologies has become imperative to enhance efficiency. In this paper, data-driven Trouble Ticket prediction models are proposed to support Alarm Management Systems. To improve performance, feature extraction from related historical alarm streams, using a sliding time window and feature engineering, is also introduced. The models were trained and validated with a data set provided by the largest telecommunication provider in Italy. The experimental results showed the promising efficacy of the proposed approach in suppressing false positive alarms with Trouble Ticket prediction.
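
    A small sketch of sliding time-window feature extraction and ticket prediction of the kind this abstract describes. The alarm fields, window length, toy labels and gradient-boosting classifier are illustrative assumptions; the paper's models and the operator's data set are not public.

    from datetime import datetime, timedelta
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    def window_features(alarms, at, window=timedelta(minutes=30)):
        # Aggregate one element's alarm history in the sliding window ending at `at`.
        w = alarms[(alarms["timestamp"] > at - window) & (alarms["timestamp"] <= at)]
        return [len(w), w["alarm_type"].nunique(), w["severity"].max() if len(w) else 0]

    t0 = datetime(2024, 1, 1)
    alarms = pd.DataFrame({
        "timestamp": [t0 + timedelta(minutes=m) for m in (1, 5, 6, 40, 42, 43, 44, 45)],
        "alarm_type": ["LOS", "LOS", "BER", "LOS", "LOS", "BER", "SYNC", "LOS"],
        "severity": [2, 2, 3, 2, 3, 3, 4, 3],
    })
    # One feature row per decision time; labels say whether a Trouble Ticket followed.
    times = [t0 + timedelta(minutes=m) for m in (10, 20, 30, 46)]
    X = [window_features(alarms, t) for t in times]
    y = [0, 0, 0, 1]                                      # illustrative ticket labels
    model = GradientBoostingClassifier().fit(X, y)
    print(model.predict([window_features(alarms, t0 + timedelta(minutes=50))]))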