
    Intrusion Detection System with Data Mining Approach: A Review

    Despite the widespread growth of information technology, security has remained a challenging area for computers and networks. Recently, many researchers have focused on intrusion detection systems based on data mining techniques as an efficient strategy. The main problem in intrusion detection is accurately detecting new attacks; therefore, unsupervised methods should be applied. On the other hand, intrusions must be recognized in real time, although intrusion detection systems are also helpful offline for removing weaknesses in a network's security. Data mining techniques can lead us to discover hidden information in a network's log data. In this survey, we try to clarify: first, the different problem definitions with regard to network intrusion detection generally; second, the specific difficulties encountered in this field of research; third, the varying assumptions, heuristics, and intuitions forming the basis of different approaches; and finally, how several prominent solutions tackle different problems.
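    As a concrete illustration of the kind of unsupervised detection this survey discusses, the following is a minimal sketch: an Isolation Forest flagging anomalous network flow records without labelled attack data. It assumes scikit-learn and pandas; the file name and feature columns are hypothetical placeholders, not part of the survey.

```python
# A minimal sketch of unsupervised anomaly detection on network flow
# records. Assumes scikit-learn/pandas; "network_flows.csv" and its
# columns are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import IsolationForest

# Hypothetical numeric features extracted from network logs
# (e.g. duration, bytes transferred, packet counts).
flows = pd.read_csv("network_flows.csv")
X = StandardScaler().fit_transform(flows.select_dtypes("number"))

# Isolation Forest marks points that are easy to isolate as anomalies;
# no labelled attack data is required (unsupervised).
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X)  # -1 = anomaly, 1 = normal
print("suspected intrusions:", (labels == -1).sum())
```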

    Unsupervised Intrusion Detection with Cross-Domain Artificial Intelligence Methods

    Cybercrime is a major concern for corporations, business owners, governments and citizens, and it continues to grow in spite of increasing investments in security and fraud prevention. The main challenges in this research field are being able to detect unknown attacks and reducing the false positive ratio. The aim of this research work was to target both problems by leveraging four artificial intelligence techniques. The first technique is a novel unsupervised learning method based on skip-gram modeling. It was designed, developed and tested against a public dataset with popular intrusion patterns. High accuracy and a low false positive rate were achieved without prior knowledge of attack patterns. The second technique is a novel unsupervised learning method based on topic modeling. It was applied to three related domains (network attacks, payments fraud, IoT malware traffic). High accuracy was achieved in all three scenarios, even though the malicious activity differs significantly from one domain to the other. The third technique is a novel unsupervised learning method based on deep autoencoders, with feature selection performed by a supervised method, random forest. The results showed that this technique can outperform other similar techniques. The fourth technique is based on an MLP neural network and is applied to alert reduction in fraud prevention. This method automates manual reviews previously done by human experts, without significantly impacting accuracy.
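    The third technique (deep autoencoder with random-forest feature selection) lends itself to a short sketch. The version below is an assumption-laden illustration using scikit-learn only: the layer sizes, the number of features kept, and the helper names are illustrative choices, not the thesis's implementation.

```python
# Hedged sketch: supervised feature selection with a random forest,
# then an autoencoder trained on benign traffic only. All sizes and
# function names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

def select_features(X, y, keep=10):
    """Rank features by random-forest importance; keep the top ones."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return np.argsort(rf.feature_importances_)[::-1][:keep]

def train_autoencoder(X_benign):
    """A small bottleneck network trained to reconstruct its input."""
    ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=500,
                      random_state=0)
    return ae.fit(X_benign, X_benign)

def anomaly_scores(ae, X):
    """Reconstruction error: high error suggests traffic unlike training data."""
    return np.mean((ae.predict(X) - X) ** 2, axis=1)
```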

    Feature Subset Selection in Intrusion Detection Using Soft Computing Techniques

    Intrusions on computer network systems are a major security issue these days, so it is of utmost importance to prevent them. Prevention depends entirely on detection, which is the core function of security tools such as Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), Adaptive Security Appliances (ASA), checkpoints, and firewalls. Accurate detection of network attacks is therefore imperative. A variety of intrusion detection approaches are available, but the main problem is their performance, which can be enhanced by increasing detection rates and reducing false positives. Such weaknesses of the existing techniques have motivated the research presented in this thesis. One weakness of existing intrusion detection approaches is that they classify on a raw dataset, whose redundancy can confuse the classifier and degrade classification. To overcome this issue, Principal Component Analysis (PCA) has been employed to transform raw features into principal feature space and to select features based on their sensitivity, as determined by the eigenvalues. Recent approaches use PCA to project the feature space onto the principal feature space and select the features corresponding to the highest eigenvalues, but these features may not have optimal sensitivity for the classifier, since many sensitive features are ignored. Instead of the traditional approach of selecting the features with the highest eigenvalues, this research applied a Genetic Algorithm (GA) to search the principal feature space for a subset of features with optimal sensitivity and the highest discriminatory power. Classification is then performed on the selected features. A Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used for classification due to their proven ability. This research uses the Knowledge Discovery and Data mining (KDD) cup dataset, which is considered a benchmark for evaluating security detection mechanisms. The performance of this approach was analyzed and compared with existing approaches. The results show that the proposed method provides an optimal intrusion detection mechanism that outperforms existing approaches, minimizing the number of features while maximizing detection rates.
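    A compact sketch of the idea, under stated assumptions: PCA projects the raw features into principal-component space, and a simple genetic algorithm searches for the component subset that maximises SVM cross-validation accuracy. The GA operators and parameters below (truncation selection, one-point crossover, bit-flip mutation, population sizes) are illustrative choices, not necessarily the thesis's.

```python
# GA search over principal components, with SVM CV accuracy as fitness.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, Z, y):
    """Mean 3-fold CV accuracy of an SVM on the selected components."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), Z[:, mask], y, cv=3).mean()

def ga_select(Z, y, pop=20, gens=15, p_mut=0.05):
    """Evolve binary masks over the columns of Z (principal components)."""
    masks = rng.random((pop, Z.shape[1])) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m, Z, y) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, Z.shape[1])              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(Z.shape[1]) < p_mut        # bit-flip mutation
            children.append(child)
        masks = np.vstack([parents, children])
    scores = np.array([fitness(m, Z, y) for m in masks])
    return masks[scores.argmax()]

# Usage: Z = PCA(n_components=20).fit_transform(X); best = ga_select(Z, y)
```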

    Anomaly-based Correlation of IDS Alarms

    An Intrusion Detection System (IDS) is one of the major techniques for securing information systems and keeping pace with current and potential threats and vulnerabilities in computing systems. It is an indisputable fact that the art of detecting intrusions is still far from perfect, and IDSs tend to generate a large number of false alarms, which a human analyst must inevitably validate before any action can be taken. As IT infrastructures become larger and more complicated, the number of alarms to review can escalate rapidly, making this task very difficult to manage. The need for an automated correlation and reduction system is therefore evident. In addition, alarm correlation is valuable in providing operators with a more condensed view of potential security issues within the network infrastructure. The thesis embraces a comprehensive evaluation of the problem of false alarms and a proposal for an automated alarm correlation system. A critical analysis of existing alarm correlation systems is presented, along with a description of the need for an enhanced correlation system. The study concludes that whilst a large body of work has been carried out on improving correlation techniques, none of it is perfect: existing systems either require an extensive level of domain knowledge from human experts to run effectively or are unable to provide high-level information about false alerts for future tuning. The overall objective of the research has therefore been to establish an alarm correlation framework and system that enables the administrator to effectively group alerts from the same attack instance and subsequently reduce the volume of false alarms without the need for domain knowledge. This aim has been achieved through an attribute-based approach, used as the foundation for systematically developing an unsupervised two-stage correlation technique. From this formation, a novel SOM K-Means Alarm Reduction Tool (SMART) architecture has been modelled as the framework from which a time- and attribute-based aggregation technique is offered. The thesis describes the design and features of the proposed architecture, focusing on the key components forming the underlying architecture, the alert attributes, and the way they are processed and applied to correlate alerts. The architecture is strengthened by the development of a statistical tool, which offers a means to perform result and alert analysis and comparison. The main concepts of the novel architecture are validated through the implementation of a prototype system. A series of experiments was conducted to assess the effectiveness of SMART in reducing false alarms, aiming to prove the viability of implementing the system in a practical environment and to show that the study has provided an appropriate contribution to knowledge in this field.
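    The two-stage SOM-then-K-Means idea can be sketched briefly. The version below uses the third-party MiniSom package together with scikit-learn; the grid size, cluster count, and attribute encoding are illustrative assumptions, and this is a sketch of the general technique rather than the SMART implementation itself.

```python
# Two-stage correlation sketch: a self-organising map quantises alert
# attribute vectors, then K-Means groups the SOM units so alerts from
# the same attack instance land in one cluster. Parameters are assumed.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

def correlate_alerts(alerts, grid=8, n_clusters=10):
    """alerts: (n, d) array of numeric IDS alert attributes
    (e.g. encoded source/destination address, port, signature, time)."""
    som = MiniSom(grid, grid, alerts.shape[1], sigma=1.0,
                  learning_rate=0.5, random_seed=0)
    som.train_random(alerts, num_iteration=1000)

    # Stage 2: cluster the SOM codebook vectors with K-Means.
    codebook = som.get_weights().reshape(grid * grid, alerts.shape[1])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(codebook)

    # Each alert inherits the cluster of its best-matching SOM unit.
    units = np.array([som.winner(a) for a in alerts])
    return km.labels_[units[:, 0] * grid + units[:, 1]]
```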

    Towards a Holistic Efficient Stacking Ensemble Intrusion Detection System Using Newly Generated Heterogeneous Datasets

    With the exponential growth of network-based applications globally, organizations' business models have been transformed. Furthermore, the falling cost of both computational devices and the internet has made people more technology dependent. Consequently, the inordinate use of computer networks has created new risks, making it crucial to improve the speed and accuracy of security mechanisms. Although abundant new security tools have been developed, the rapid growth of malicious activities continues to be a pressing issue, as ever-evolving attacks create severe threats to network security. Classical security techniques, for instance firewalls, are used as a first line of defense against security problems but remain unable to detect internal intrusions or adequately provide security countermeasures. Thus, network administrators tend to rely predominantly on Intrusion Detection Systems to detect such intrusive network activities. Machine Learning is one of the practical approaches to intrusion detection, learning from data to differentiate between normal and malicious traffic. Although Machine Learning approaches are used frequently, an in-depth analysis of Machine Learning algorithms in the context of intrusion detection has received less attention in the literature. Moreover, adequate datasets are necessary to train and evaluate anomaly-based network intrusion detection systems. A number of such datasets, such as DARPA, KDDCUP, and NSL-KDD, have been widely adopted by researchers to train and evaluate the performance of their proposed intrusion detection approaches. Based on several studies, many such datasets are outdated and unreliable. Furthermore, some of these datasets suffer from a lack of traffic diversity and volume, do not cover the variety of attacks, have anonymized packet information and payload that cannot reflect current trends, or lack a feature set and metadata. This thesis provides a comprehensive analysis of some of the existing Machine Learning approaches for identifying network intrusions. Specifically, it analyzes the algorithms along various dimensions, namely feature selection, sensitivity to hyper-parameter selection, and class imbalance problems, that are inherent to intrusion detection. It also produces a new, reliable dataset labeled Game Theory and Cyber Security (GTCS) that matches real-world criteria, contains normal traffic and different classes of attacks, and reflects current network traffic trends. The GTCS dataset is used to evaluate the performance of the different approaches, and a detailed experimental evaluation summarizing the effectiveness of each approach is presented. Finally, the thesis proposes an ensemble classifier model composed of multiple classifiers with different learning paradigms to address the issues of detection accuracy and false alarm rate in intrusion detection systems.
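    A stacking ensemble of heterogeneous learners is straightforward to express with scikit-learn. The sketch below follows the spirit of the thesis, but the particular base classifiers and meta-learner are assumptions for illustration; the thesis's own choices may differ.

```python
# Minimal stacking-ensemble sketch: base learners with different
# learning paradigms (trees, instance-based, probabilistic), combined
# by a logistic-regression meta-learner trained on out-of-fold
# predictions. Classifier choices are illustrative assumptions.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("knn", KNeighborsClassifier()),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
# Usage (hypothetical GTCS-style feature matrix X and labels y):
# stack.fit(X_train, y_train); y_pred = stack.predict(X_test)
```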

    Data Mining in Support of Telecommunications Network Log Analysis

    Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data is monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making: Comprehensive Log Compression (CLC) for log data summarisation and Queryable Log Compression (QLC) for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.

    Data mining methods are used to analyse large volumes of data collected, for example, from retail customers, telecommunications network devices, or process-industry production plants, or extracted from genes and other objects of study. The methods efficiently detect relationships between things, such as behavioural and operational patterns and deviations from them. The information they produce is used in business and industry to make operations more efficient, and in science to search for new research results. The problem with data mining methods is their complexity and difficulty of use: to apply them, one must master their theoretical foundations and be able to set correctly several dozen input parameters that affect the results. This is difficult in practical tasks, such as telecommunications network monitoring, where the monitored data volumes are enormous and the time available for decision making is short: minutes rather than hours. What should data mining methods be like so that they could be integrated, for example, into a telecommunications network operator's tools? To determine the requirements placed on data mining methods, this thesis examines decision making at the different phases and levels of telecommunications network operation and maintenance. I create a model of decision making and examine the data mining tasks that support it and the input data they require. I describe the expertise, resources, and tools available in an industrial environment for applying data mining methods, and derive a list of requirements for data mining methods used in support of decision making. The thesis presents two methods for analysing large log databases of events. Without prior training or definitions, the CLC method creates a summary of a given large set of events by detecting and describing events and event chains that recur frequently in similar form; singular and rarely occurring events are left in the log for the expert to inspect. The QLC method, in turn, is used for compact storage of logs: in some cases it can store logs in a third less space than commonly used compression methods, and queries can be run against the compressed log files without first decompressing them. Both the CLC and QLC methods satisfy the identified requirements for data mining methods well. Finally, I present a process model for continuous data mining in support of industrial decision making and outline how the data mining methods and process can be integrated into a company's information systems. Telecommunications network maintenance has served as the research environment, but both the identified requirements and the developed methods are applicable in other similar environments where a continuous stream of log events is monitored and analysed. Common to these environments is the need to continuously make decisions that cannot be automated because of the rarity or ambiguity of the events and process states; examples include security logs, monitoring of network service usage, maintenance of industrial processes, and monitoring of large logistics services.
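    As a toy sketch of the CLC idea only: frequent, near-identical events are folded into one summary line with a count, while rare events are kept verbatim for the expert. Real CLC works on frequent sets of log field values; the simplification below uses whole-line templates and an assumed support threshold, so it illustrates the principle rather than the thesis's algorithm.

```python
# Toy log summarisation in the spirit of CLC: frequent recurring events
# are summarised; rare events are left for manual inspection.
import re
from collections import Counter

def summarise_log(lines, min_support=10):
    # Template: mask digits/timestamps so recurring events match.
    template = lambda line: re.sub(r"\d+", "<N>", line.strip())
    counts = Counter(template(l) for l in lines)

    seen, summary, residual = set(), [], []
    for line in lines:
        t = template(line)
        if counts[t] >= min_support:
            if t not in seen:            # one summary row per frequent pattern
                seen.add(t)
                summary.append((t, counts[t]))
        else:
            residual.append(line.strip())  # rare: keep verbatim for the expert
    return summary, residual
```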