7 research outputs found

    Reinforced Intrusion Detection Using Pursuit Reinforcement Competitive Learning

    Today, information technology is growing rapidly, and all information can be obtained much more easily. This raises new problems, one of which is unauthorized access to systems. A reliable network security system that is resistant to a variety of attacks is therefore needed, and an Intrusion Detection System (IDS) is required to counter intrusions. Much research has been done on intrusion detection using classification methods. Classification methods have high precision, but it takes effort to determine a classification model appropriate to the problem. In this paper, we propose a new reinforced approach to detect intrusions with on-line clustering using Reinforcement Learning. Reinforcement Learning is a paradigm in machine learning that involves interaction with the environment; it works with a reward and punishment mechanism to reach a solution. We apply Reinforcement Learning to the intrusion detection problem, incorporating competitive learning through Pursuit Reinforcement Competitive Learning (PRCL). Based on the experimental results, PRCL can detect intrusions in real time with high accuracy (99.816% for DoS, 95.015% for Probe, 94.731% for R2L and 99.373% for U2R) and high speed (44 ms). The proposed approach can help network administrators detect intrusions, making the computer network security system more reliable.
    Keywords: Intrusion Detection System, On-Line Clustering, Reinforcement Learning, Unsupervised Learning
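The pursuit idea can be sketched as an online clustering loop: a cluster is chosen according to selection probabilities, rewarded (prototype moved toward the sample, probability pursued upward) when the choice agrees with the nearest prototype, and punished otherwise. The sketch below is a minimal illustration of this reward/punishment mechanism; the update rules, rates, and synthetic data are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def prcl_step(prototypes, probs, x, lr=0.1, pursuit=0.05):
    """One online update: pick a cluster by its selection probability and
    reward the choice when it is also the nearest prototype."""
    k = len(prototypes)
    choice = rng.choice(k, p=probs)
    nearest = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    if choice == nearest:
        # Reward: move the winning prototype toward the sample and pursue
        # its selection probability toward 1.
        prototypes[choice] += lr * (x - prototypes[choice])
        target = np.eye(k)[choice]
    else:
        # Punishment: back the prototype off and shift probability mass
        # to the other clusters.
        prototypes[choice] -= lr * (x - prototypes[choice])
        target = (1.0 - np.eye(k)[choice]) / (k - 1)
    probs += pursuit * (target - probs)
    probs /= probs.sum()
    return prototypes, probs

# Two clusters in a 2-D traffic-feature space (synthetic stand-in data).
protos = rng.normal(size=(2, 2))
probs = np.full(2, 0.5)
data = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)),
                  rng.normal(3.0, 0.1, size=(50, 2))])
for x in rng.permutation(data):
    protos, probs = prcl_step(protos, probs, x)
```

Because the update is strictly one sample at a time, the same loop works on a live alert stream, which is what makes the approach suitable for real-time detection.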

    An anomaly-based IDS framework using centroid-based classification

    Botnets are an urgent problem that reduces the security and availability of networks. When the bot master launches an attack on a victim, the infected machines awaken and attack according to the bot master's commands. A DDoS attack launched via a botnet aims to paralyze the victim's services. Among all kinds of DDoS, the SYN flood remains a problem that reduces security and availability. To enhance the security of the Internet, an IDS is proposed to detect attacks and protect the server. In this paper, the concept of centroid-based classification is used to enhance the performance of the framework. An anomaly-based IDS framework that combines K-means and KNN is proposed to detect SYN floods. Dimension reduction is designed to achieve visualization, and weights adjust the occupancy ratio of each sub-feature. Therefore, this framework is also suitable for use on modern symmetric or asymmetric information system architectures. With the proposed framework, the detection rate is 96.8 percent, the accuracy rate is 97.3 percent, and the false alarm rate is 1.37 percent.
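The centroid-based idea can be sketched as follows: cluster training flows with k-means, give each centroid the majority label of its members, and classify a new flow by its nearest centroid. The flow features (SYN rate, SYN/ACK ratio), the deterministic initialisation, and all numbers below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def kmeans(X, init_idx, iters=20):
    """Plain Lloyd's k-means with explicit (reproducible) initial points."""
    C = X[init_idx].astype(float).copy()
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        for j in range(len(C)):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(axis=0)
    return C, assign

# Synthetic flows: [SYN rate, SYN/ACK ratio]; second batch mimics a SYN flood.
rng = np.random.default_rng(1)
normal = rng.normal([10.0, 1.0], 0.5, size=(100, 2))
flood = rng.normal([500.0, 30.0], 20.0, size=(100, 2))
X = np.vstack([normal, flood])
y = np.array([0] * 100 + [1] * 100)        # 1 = SYN flood

# One initial centroid from each batch keeps the demo reproducible.
C, assign = kmeans(X, init_idx=[0, 100])
# Each centroid takes the majority label of its cluster members.
centroid_label = np.array([np.bincount(y[assign == j]).argmax()
                           for j in range(len(C))])

def classify(x):
    """Nearest-centroid (1-NN over centroids) decision for a new flow."""
    return int(centroid_label[np.argmin(((C - x) ** 2).sum(-1))])
```

Classifying against a handful of centroids instead of all training points is what keeps the per-flow decision cheap enough for online use.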

    A hybrid approach for alarm verification using stream processing, machine learning and text analytics

    False alarms triggered by security sensors incur high costs for all parties involved. According to police reports, the large majority of alarms are false. Recent advances in machine learning make it possible to classify alarms automatically. However, building a scalable alarm verification system is a challenge, since the system needs to: (1) process thousands of alarms in real time, (2) classify false alarms with high accuracy, and (3) perform historic data analysis to give human operators better insight into the results. This requires a mix of machine learning, stream and batch processing – technologies which are typically optimized independently. We combine all three into a single, real-world application. This paper describes the implementation and evaluation of an alarm verification system we developed jointly with Sitasys, the market leader in alarm transmission in central Europe. Our system can process around 30K alarms per second with a verification accuracy above 90%.
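The stream-plus-batch combination described above can be sketched as a queue-fed worker that classifies alarms in real time while also retaining every alarm for later batch analysis. The trivial threshold "model" below is a placeholder assumption; the paper's actual classifier and infrastructure are not reproduced here.

```python
import queue
import threading

alarms = queue.Queue()   # stream layer: alarms arrive here in real time
history = []             # batch layer: everything is retained for analysis

def classify(alarm):
    # Placeholder model: treat low sensor confidence as a false alarm.
    return "false" if alarm["confidence"] < 0.5 else "real"

def worker():
    while True:
        alarm = alarms.get()
        if alarm is None:            # sentinel ends the stream
            break
        alarm["verdict"] = classify(alarm)
        history.append(alarm)        # feed the batch layer

t = threading.Thread(target=worker)
t.start()
for conf in (0.9, 0.2, 0.7):
    alarms.put({"confidence": conf})
alarms.put(None)
t.join()
```

Decoupling ingestion from classification via the queue is the standard way to absorb bursts while keeping per-alarm latency bounded.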

    Towards Efficient Unsupervised Feature Selection Methods for High-Dimensional Data

    With the proliferation of data, the dimensionality of data has increased significantly, producing what is known as high-dimensional data. This increase in dimensionality results in redundant and non-representative features, which pose challenges to existing machine learning algorithms. Firstly, they add extra processing time and therefore negatively affect the algorithms' running time. Secondly, they reduce the accuracy of the learning algorithms by overfitting the data with redundant and non-representative features. Lastly, they require greater storage capacity. This thesis is concerned with reducing the data dimensionality for machine learning algorithms in order to improve their accuracy and running time. The reduction is carried out by selecting a reduced set of representative and non-redundant features from the original feature space so that it approximates the original feature space. Three research issues have been addressed to achieve the main aim of this thesis. The first research task addresses the accurate selection of representative features from high-dimensional data. An efficient and accurate similarity-based unsupervised feature selection method (called AUFS) is proposed to tackle the high dimensionality of data by selecting representative features without the need for data class labels. The proposed AUFS method extends the k-means clustering algorithm to partition the features into k clusters based on different similarity measures, and a centroid-based feature selection method is then used to accurately select the representative features. The second research task is to select representative features in streaming-features applications, where the number of features increases while the number of instances remains fixed. Such applications pose challenges for feature selection methods and have the following characteristics: a) features are sequentially generated and processed one by one upon arrival while the number of instances/points remains fixed; and b) the complete feature space is not known in advance. A new method, Unsupervised Feature Selection for Streaming Features (UFSSF), is proposed to select representative features under these conditions. UFSSF further extends the k-means clustering algorithm to incrementally decide whether to add a newly arrived feature to the existing set of representative features; features that are not representative are discarded. The last research task involves reducing the dimensionality of multi-view data, where both the number of features and the number of instances can increase over time. Multi-view learning provides complementary information for machine learning algorithms, but it results in high dimensionality as the data is considered from different views, since each extra view adds extra dimensions. In particular, existing solutions assume that the number of views is static, which is not realistic in real applications where new views can be added. Therefore, Online Unsupervised Feature Selection for Dynamic Views (OUDVFS) is proposed. As we are targeting unsupervised learning, we propose a new clustering-based feature selection method that incrementally clusters the views; the set of selected representative features is updated at each clustering step.
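The centroid-based selection step behind AUFS can be sketched by clustering the features (columns) rather than the instances and keeping, for each cluster, the feature nearest its centroid. Euclidean distance, the deterministic farthest-first initialisation, and the toy data are assumptions for illustration; the thesis considers several similarity measures.

```python
import numpy as np

def farthest_first(F, k):
    """Deterministic, spread-out initial centroids for a reproducible demo."""
    idx = [0]
    while len(idx) < k:
        d = np.min(((F[:, None] - F[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    return F[idx].astype(float).copy()

def select_features(X, k):
    F = X.T                                    # one row per feature
    C = farthest_first(F, k)
    for _ in range(20):                        # plain k-means over features
        assign = np.argmin(((F[:, None] - C) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = F[assign == j].mean(axis=0)
    keep = []                                  # representative per cluster:
    for j in range(k):                         # the feature nearest its centroid
        members = np.flatnonzero(assign == j)
        if members.size:
            d = ((F[members] - C[j]) ** 2).sum(-1)
            keep.append(int(members[np.argmin(d)]))
    return sorted(keep)

rng = np.random.default_rng(2)
base = rng.normal(size=(100, 2))
noise = 0.01 * rng.normal(size=(100, 2))
# Four features: columns 0/1 and 2/3 are near-duplicates of each other.
X = np.column_stack([base[:, 0], base[:, 0] + noise[:, 0],
                     base[:, 1], base[:, 1] + noise[:, 1]])
picked = select_features(X, k=2)   # one representative per duplicate pair
```

Redundant features land in the same cluster, so keeping only the centroid-nearest member of each cluster discards the duplicates without ever consulting class labels.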

    Anomaly-based Correlation of IDS Alarms

    An Intrusion Detection System (IDS) is one of the major techniques for securing information systems and keeping pace with current and potential threats and vulnerabilities in computing systems. It is an indisputable fact that the art of detecting intrusions is still far from perfect, and IDSs tend to generate a large number of false alarms. Hence humans inevitably have to validate these alarms before any action can be taken. As IT infrastructures become larger and more complicated, the number of alarms to be reviewed can escalate rapidly, making this task very difficult to manage. The need for an automated correlation and reduction system is therefore very much evident. In addition, alarm correlation is valuable in providing operators with a more condensed view of potential security issues within the network infrastructure. The thesis embraces a comprehensive evaluation of the problem of false alarms and a proposal for an automated alarm correlation system. A critical analysis of existing alarm correlation systems is presented, along with a description of the need for an enhanced correlation system. The study concludes that whilst a large amount of work has been carried out on improving correlation techniques, none of it is perfect: existing systems either require an extensive level of domain knowledge from human experts to run effectively, or are unable to provide high-level information about false alerts for future tuning. The overall objective of the research has therefore been to establish an alarm correlation framework and system which enables the administrator to effectively group alerts from the same attack instance and subsequently reduce the volume of false alarms without the need for domain knowledge. The achievement of this aim has comprised the proposal of an attribute-based approach, which is used as a foundation to systematically develop an unsupervised two-stage correlation technique. From this foundation, a novel SOM K-Means Alarm Reduction Tool (SMART) architecture has been modelled as the framework from which a time- and attribute-based aggregation technique is offered. The thesis describes the design and features of the proposed architecture, focusing on the key components forming the underlying architecture, the alert attributes, and the way they are processed and applied to correlate alerts. The architecture is strengthened by the development of a statistical tool, which offers a means to analyse and compare results and alerts. The main concepts of the novel architecture are validated through the implementation of a prototype system. A series of experiments was conducted to assess the effectiveness of SMART in reducing false alarms, aiming to prove the viability of implementing the system in a practical environment and to show that the study has provided an appropriate contribution to knowledge in this field.
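The two-stage SOM-then-K-means idea can be sketched as follows: a small self-organising map first quantises alert attribute vectors onto a grid of units, then k-means groups the units so that alerts from the same attack instance inherit a common cluster. Grid size, learning rates, and the two-attribute alert encoding are assumptions, not SMART's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_som(X, units=6, epochs=30, lr=0.5, sigma=1.5):
    """1-D self-organising map; units start on a line spanning the data."""
    W = np.linspace(X.min(axis=0), X.max(axis=0), units)
    pos = np.arange(units)
    for e in range(epochs):
        a = lr * (1 - e / epochs)              # decaying learning rate
        s = sigma * (1 - e / epochs) + 0.1     # shrinking neighbourhood
        for x in rng.permutation(X):
            bmu = int(np.argmin(((W - x) ** 2).sum(-1)))
            h = np.exp(-((pos - bmu) ** 2) / (2 * s ** 2))
            W += a * h[:, None] * (x - W)
    return W

def cluster_units(P, k, iters=20):
    """Stage 2: k-means over the SOM unit weights (farthest-first init)."""
    idx = [0]
    while len(idx) < k:
        d = np.min(((P[:, None] - P[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    C = P[idx].copy()
    for _ in range(iters):
        assign = np.argmin(((P[:, None] - C) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = P[assign == j].mean(axis=0)
    return assign

# Alerts as [normalised time, attribute score]; two attack instances.
alerts = np.vstack([rng.normal([0.2, 0.1], 0.02, size=(40, 2)),
                    rng.normal([0.8, 0.9], 0.02, size=(40, 2))])
W = train_som(alerts)
unit_cluster = cluster_units(W, k=2)
# An alert inherits the cluster of its best-matching SOM unit.
bmus = np.argmin(((alerts[:, None] - W) ** 2).sum(-1), axis=1)
alert_cluster = unit_cluster[bmus]
```

The SOM stage compresses many alerts into a few units, so the second-stage clustering (and any operator review) works on a far smaller object set than the raw alert log.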

    System Architecture for Intrusion and Extrusion Detection in Encrypted Environments

    The Internet has integrated into everyday life at an unprecedented speed and is now established in numerous areas of it; purchase orders, financial transactions, and traditional services such as telephony or television are only a few examples of what is carried over the network. The financial values moved over the Internet attract criminals: attacks can be executed from a safe distance, and differing national IT laws additionally hamper cross-border criminal prosecution. As a result, an underground market worth billions has been established on the Internet over the past years. To protect systems and networks against attacks, intrusion detection techniques have been under research for more than 30 years. Numerous systems are available on the market and are integral security components of every larger network today. Despite these efforts, the number of security incidents is not decreasing but continues to rise. Today's systems are unable to cope with challenges such as targeted attacks, encrypted connections, or insider threats. The contribution of this dissertation is the design and development of an architecture for intrusion and extrusion detection in encrypted environments. The architecture comprises components for the detection of externally executed attacks as well as for the identification of insiders. Statistical methods based on behaviour-based detection are used, so the data traffic does not need to be decrypted. In contrast to existing approaches, the system requires no learning phase. Starting from a scenario of the IT infrastructure of today's companies, the requirements for an intrusion and extrusion detection system are defined. Because detection requires knowledge of suitable, measurable starting points, a detailed analysis of how an attack is carried out follows. On this basis, and from an examination of the state of the art in intrusion and extrusion detection, it is identified why today's systems are unable to detect sophisticated attacks. An examination of which parameters of encrypted connections remain available for evaluation is used to develop ways of detecting malicious behaviour. Building on this, a new architecture with several sub-modules for intrusion and extrusion detection in encrypted environments is presented. A subsequent evaluation demonstrates the effectiveness of the proposed architecture.
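The core observation, that flow metadata such as volumes and timings remains measurable even when payloads are encrypted, can be illustrated with a simple behaviour-based test that needs no learning phase: score each flow's upload volume against a rolling window of recent flows. This detector is an illustration of the principle only, not the thesis's method.

```python
import math
from collections import deque

class FlowScorer:
    """Flag flows whose upload volume deviates strongly from recent peers.

    Works purely on observable flow metadata (bytes sent), so encrypted
    payloads never need to be inspected, and the rolling statistics mean
    there is no separate training phase."""

    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def score(self, bytes_out):
        if len(self.history) >= 10:
            mean = sum(self.history) / len(self.history)
            var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
            z = (bytes_out - mean) / (math.sqrt(var) + 1e-9)
        else:
            z = 0.0                       # not enough context yet
        self.history.append(bytes_out)
        return z

scorer = FlowScorer()
# Ordinary HTTPS-sized flows, then one exfiltration-sized upload.
flows = [1200 + 10 * (i % 7) for i in range(40)] + [250000]
alerts = [i for i, b in enumerate(flows) if scorer.score(b) > scorer.threshold]
```

Here only the final, outsized flow exceeds the z-score threshold; in practice one would combine several such metadata features (packet sizes, inter-arrival times, direction ratios) rather than volume alone.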