80 research outputs found

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is surveyed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour, and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
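
    As a concrete illustration of the rule-based event correlation discussed above, the following Python sketch (not taken from the report; the event names, rule and time window are hypothetical) flags a possible misuse when a prescribed sequence of event types, possibly from different data streams, occurs within a sliding time window.

from collections import deque

class RuleCorrelator:
    """Flags misuse when a rule's event sequence occurs within a time window."""

    def __init__(self, pattern, window_seconds):
        self.pattern = pattern        # ordered event types the rule looks for
        self.window = window_seconds  # maximum time span for the whole pattern
        self.events = deque()         # buffer of (timestamp, event_type)

    def observe(self, timestamp, event_type):
        self.events.append((timestamp, event_type))
        # discard events that have fallen outside the correlation window
        while self.events and timestamp - self.events[0][0] > self.window:
            self.events.popleft()
        return self._pattern_present()

    def _pattern_present(self):
        # check whether the buffered events contain the rule's pattern as a subsequence
        idx = 0
        for _, etype in self.events:
            if etype == self.pattern[idx]:
                idx += 1
                if idx == len(self.pattern):
                    return True
        return False

correlator = RuleCorrelator(["login_fail", "login_fail", "config_change"], window_seconds=60)
for t, e in [(0, "login_fail"), (10, "login_fail"), (25, "config_change")]:
    if correlator.observe(t, e):
        print("possible misuse detected at t =", t)

    The rule base is easy to extend with new patterns but, as the report observes, it cannot flag misuse sequences it was never programmed with.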

    Improving intrusion detection systems using data mining techniques

    Recent surveys and studies have shown that cyber-attacks have caused a lot of damage to organisations, governments, and individuals around the world. Although developments are constantly occurring in the computer security field, cyber-attacks still cause damage as they are developed and evolved by hackers. This research looked at some industrial challenges in the intrusion detection area and identified two main challenges. The first is that signature-based intrusion detection systems such as SNORT lack the capability of detecting attacks with new signatures without human intervention. The other challenge relates to multi-stage attack detection, where signature-based detection has been found to be inefficient. The novelty in this research is presented through developing methodologies that tackle these challenges. The first challenge was handled by developing a multi-layer classification methodology. The first layer is based on a decision tree, while the second layer is a hybrid module that uses two data mining techniques: a neural network and fuzzy logic. The second layer tries to detect new attacks in case the first one fails to detect them. This system detects attacks with new signatures and then updates the SNORT signature holder automatically, without any human intervention. The obtained results have shown a high detection rate for attacks with new signatures; however, the false positive rate needs to be lowered. The second challenge was approached by evaluating IP information using fuzzy logic. This approach looks at the identity of participants in the traffic, rather than the sequence and contents of the traffic. The results have shown that this approach can help in predicting attacks at very early stages in some scenarios. However, it has been found that combining this approach with a different approach that looks at the sequence and contents of the traffic, such as event correlation, will achieve a better performance than each approach individually.
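
    The first-challenge methodology can be outlined as follows; this is an illustrative sketch only, using synthetic feature vectors, a scikit-learn decision tree as the first layer and a neural network standing in for the hybrid second layer (the fuzzy-logic component and the automatic SNORT signature update are omitted).

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 10))
y_train = rng.integers(0, 2, size=400)   # 0 = normal, 1 = attack (synthetic labels)
X_new = rng.normal(size=(50, 10))

layer1 = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
layer2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_train, y_train)

# records the first layer is confident about are decided there; the rest
# are deferred to the second layer, mirroring the layered design above
proba1 = layer1.predict_proba(X_new)[:, 1]
decisions = np.where(proba1 > 0.8, 1,
             np.where(proba1 < 0.2, 0,
                      layer2.predict(X_new)))
print("records flagged as attacks:", int(decisions.sum()))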

    A new unified intrusion anomaly detection in identifying unseen web attacks

    The global usage of more sophisticated web-based application systems is growing very rapidly. Major usage includes the storing and transporting of sensitive data over the Internet. This growth has consequently opened up a serious need for more secure network and application security protection devices. Security experts normally equip their databases with a large number of signatures to help in the detection of known web-based threats. In reality, it is almost impossible to keep updating the database with newly identified web vulnerabilities; as such, new attacks remain invisible to signature-based systems. This research presents a novel approach to Intrusion Detection Systems (IDS) for detecting unknown attacks on web servers using the Unified Intrusion Anomaly Detection (UIAD) approach. The unified approach consists of three components (preprocessing, statistical analysis, and classification). Initially, the process starts with the removal of irrelevant and redundant features using a novel hybrid feature selection method. Thereafter, the process continues with the application of a statistical approach to identifying traffic abnormality. We performed Relative Percentage Ratio (RPR) coupled with Euclidean Distance Analysis (EDA) and the Chebyshev Inequality Theorem (CIT) to calculate the normality score and generate an optimal threshold. Finally, Logitboost (LB) is employed alongside Random Forest (RF) as a weak classifier, with the aim of minimising the final false alarm rate. The experiments have demonstrated that our approach successfully identified unknown attacks with greater than a 95% detection rate and less than a 1% false alarm rate on both the DARPA 1999 and the ISCX 2012 datasets.
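
    The statistical-analysis stage can be illustrated by a minimal sketch that scores records by Euclidean distance from a normal-traffic profile and derives a threshold from the Chebyshev inequality; the synthetic data and the 1% false-alarm bound are assumptions, not the paper's configuration.

import numpy as np

rng = np.random.default_rng(1)
normal_traffic = rng.normal(0, 1, size=(1000, 8))          # records of known-normal traffic
test_traffic = np.vstack([rng.normal(0, 1, size=(95, 8)),  # mostly normal test records
                          rng.normal(4, 1, size=(5, 8))])  # a few injected anomalies

profile = normal_traffic.mean(axis=0)                      # the "normal" profile
dist = np.linalg.norm(normal_traffic - profile, axis=1)    # distances of normal data
mu, sigma = dist.mean(), dist.std()

alpha = 0.01                 # tolerate flagging at most ~1% of genuinely normal records
k = 1 / np.sqrt(alpha)       # Chebyshev: P(|d - mu| >= k*sigma) <= 1/k^2 = alpha
threshold = mu + k * sigma   # one-sided use of the bound as an anomaly cut-off

test_dist = np.linalg.norm(test_traffic - profile, axis=1)
print("records flagged as anomalous:", int((test_dist > threshold).sum()))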

    Cybersecurity knowledge graphs

    Cybersecurity knowledge graphs, which represent cyber-knowledge with a graph-based data model, provide holistic approaches for processing massive volumes of complex cybersecurity data derived from diverse sources. They can assist security analysts in obtaining cyberthreat intelligence, achieving a high level of cyber-situational awareness, discovering new cyber-knowledge, visualizing networks, data flows, and attack paths, and understanding data correlations by aggregating and fusing data. This paper reviews the most prominent graph-based data models used in this domain, along with knowledge organization systems that define the concepts and properties utilized in formal cyber-knowledge representation, for both background knowledge and specific expert knowledge about an actual system or attack. It also discusses how cybersecurity knowledge graphs enable machine learning and facilitate automated reasoning over cyber-knowledge.
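
    A toy sketch of such a graph-based data model, built with networkx and populated with hypothetical entities and relations rather than any standard ontology, shows how simple reasoning (here, reachability) can be run over the graph.

import networkx as nx

kg = nx.MultiDiGraph()
# triples of the form (subject, relation, object), added as labelled edges
kg.add_edge("webserver01", "CVE-2021-44228", key="has_vulnerability")
kg.add_edge("CVE-2021-44228", "remote_code_execution", key="enables")
kg.add_edge("webserver01", "db01", key="connects_to")
kg.add_edge("remote_code_execution", "data_exfiltration", key="may_lead_to")

# simple automated reasoning over the graph: everything an attacker could
# reach starting from the vulnerable, internet-facing host
reachable = nx.descendants(kg, "webserver01")
print("nodes reachable from webserver01:", sorted(reachable))

    In practice such graphs are typically backed by formal ontologies or graph databases rather than an in-memory structure, but the query pattern is the same.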

    Cloud intrusion detection systems: fuzzy logic and classifications

    Cloud Computing (CC), as defined by the National Institute of Standards and Technology (NIST), is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources such as networks, servers, storage, applications, and services that can be rapidly provisioned and released with minimal management effort or service-provider interaction. CC is a fast-growing field; yet, there are major concerns regarding the detection of security threats, which in turn have urged experts to explore solutions to improve its security performance through conventional approaches such as Intrusion Detection Systems (IDS). In the literature, the two most successful current IDS tools used worldwide are Snort and Suricata; however, these tools are not flexible to the uncertainty of intrusions. The aim of this study is to explore novel approaches to uplift CC security performance by using the Type-1 fuzzy logic (T1FL) technique with IDS, compared to IDS alone. All experiments in this thesis were performed within a virtual cloud built within an experimental environment. By combining the fuzzy logic technique (FL System) with the IDSs SnortIDS and SuricataIDS, each IDS was used twice (with and without FL) to create four detection systems (FL-SnortIDS, FL-SuricataIDS, SnortIDS, and SuricataIDS), evaluated on the Intrusion Detection Evaluation Dataset (ISCX). ISCX comprises two types of traffic (normal and threats); the latter was classified into four classes: Denial of Service, User-to-Root, Root-to-Local, and Probing. Sensitivity, specificity, accuracy, false alarms and detection rate were compared among the four detection systems. A Fuzzy Intrusion Detection System model for CC (FIDSCC) was then designed based on the results of the aforementioned four detection systems. The FIDSCC model comprises two individual systems: pre- and post-threat detection systems (pre-TDS and post-TDS). The pre-TDS was designed based on the number of threats in the aforementioned classes to assess the detection rate (DR). Based on the output of this DR and the false positives of the four detection systems, the post-TDS was designed in order to assess CC security performance. To ensure the validity of the results, classifier algorithms (CAs) were applied to each of the four detection systems and four threat classes for further comparison. The classifier algorithms were OneR, Naive Bayes, Decision Tree (DT), and K-nearest neighbour. The comparison was made based on specific measures including accuracy, incorrectly classified instances, mean absolute error, false positive rate, precision, recall, and ROC area. The empirical results showed that FL-SnortIDS was superior to FL-SuricataIDS, SnortIDS, and SuricataIDS in terms of sensitivity. However, no significant difference was found in specificity, false alarms or accuracy among the four detection systems. Furthermore, among the four CAs, the combination of FL-SnortIDS and DT was shown to be the best detection method. The results of these studies showed that the FIDSCC model can provide a better alternative for detecting threats and reducing false positive rates than the other conventional approaches.
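
    The role of the Type-1 fuzzy layer can be illustrated with a minimal, self-contained sketch that rescores raw IDS alerts; the membership functions, input scales and rules below are illustrative assumptions, not the thesis implementation.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_threat_score(alert_priority, packets_per_sec):
    # fuzzify the two crisp inputs (scales are assumptions for this sketch:
    # priority on 1..3 with 3 treated as most severe, rate in packets/second)
    prio_high = tri(alert_priority, 1, 3, 3.01)
    rate_high = tri(packets_per_sec, 100, 1000, 1000.01)
    rate_low = tri(packets_per_sec, -0.01, 0, 200)
    # two simple rules combined by a weighted average as a crude defuzzification
    rule_attack = min(prio_high, rate_high)   # IF priority high AND rate high THEN attack
    rule_benign = rate_low                    # IF rate low THEN benign
    return rule_attack / (rule_attack + rule_benign + 1e-9)

print(fuzzy_threat_score(alert_priority=3, packets_per_sec=900))  # close to 1.0
print(fuzzy_threat_score(alert_priority=1, packets_per_sec=50))   # close to 0.0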

    TOWARDS A HOLISTIC EFFICIENT STACKING ENSEMBLE INTRUSION DETECTION SYSTEM USING NEWLY GENERATED HETEROGENEOUS DATASETS

    With the exponential growth of network-based applications globally, there has been a transformation in organizations' business models. Furthermore, cost reduction of both computational devices and the internet has led people to become more technology dependent. Consequently, due to inordinate use of computer networks, new risks have emerged. Therefore, the process of improving the speed and accuracy of security mechanisms has become crucial. Although abundant new security tools have been developed, the rapid growth of malicious activities continues to be a pressing issue, as their ever-evolving attacks continue to create severe threats to network security. Classical security techniques, for instance firewalls, are used as a first line of defense against security problems but remain unable to detect internal intrusions or adequately provide security countermeasures. Thus, network administrators tend to rely predominantly on Intrusion Detection Systems to detect such network intrusive activities. Machine Learning is one of the practical approaches to intrusion detection that learns from data to differentiate between normal and malicious traffic. Although Machine Learning approaches are used frequently, an in-depth analysis of Machine Learning algorithms in the context of intrusion detection has received less attention in the literature. Moreover, adequate datasets are necessary to train and evaluate anomaly-based network intrusion detection systems. There exist a number of such datasets, such as DARPA, KDDCUP, and NSL-KDD, that have been widely adopted by researchers to train and evaluate the performance of their proposed intrusion detection approaches. Based on several studies, many such datasets are outworn and unreliable to use. Furthermore, some of these datasets suffer from a lack of traffic diversity and volume, do not cover the variety of attacks, have anonymized packet information and payload that cannot reflect current trends, or lack feature set and metadata. This thesis provides a comprehensive analysis of some of the existing Machine Learning approaches for identifying network intrusions. Specifically, it analyzes the algorithms along various dimensions, namely feature selection, sensitivity to hyper-parameter selection, and class imbalance problems, that are inherent to intrusion detection. It also produces a new reliable dataset labeled Game Theory and Cyber Security (GTCS) that matches real-world criteria, contains normal and different classes of attacks, and reflects current network traffic trends. The GTCS dataset is used to evaluate the performance of the different approaches, and a detailed experimental evaluation to summarize the effectiveness of each approach is presented. Finally, the thesis proposes an ensemble classifier model composed of multiple classifiers with different learning paradigms to address the issue of detection accuracy and false alarm rate in intrusion detection systems.
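
    The ensemble idea can be sketched with scikit-learn's stacking classifier over heterogeneous base learners; the synthetic, imbalanced data below stands in for the GTCS dataset and the particular learners are illustrative choices, not the thesis configuration.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# imbalanced synthetic data standing in for network traffic (90% normal, 10% attack)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# heterogeneous base learners feeding a meta-learner, i.e. a stacking ensemble
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))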

    Using Botnet Technologies to Counteract Network Traffic Analysis

    Botnets have been problematic for over a decade. They are used to launch malicious activities including DDoS (Distributed Denial of Service) attacks, spamming, identity theft, unauthorized bitcoin mining and malware distribution. A nation-wide DDoS attack caused by the Mirai botnet on 10/21/2016, involving tens of millions of IP addresses, took down Twitter, Spotify, Reddit, The New York Times, Pinterest, PayPal and other major websites. In response to take-down campaigns by security personnel, botmasters have developed technologies to evade detection. The most widely used evasion technique is DNS fast-flux, where the botmaster frequently changes the mapping between domain names and IP addresses of the C&C server so that it becomes too late or too costly to trace the C&C server locations. Domain names generated with Domain Generation Algorithms (DGAs) are used as the 'rendezvous' points between botmasters and bots. This work focuses on how to apply botnet technologies (fast-flux and DGA) to counteract network traffic analysis, thereby protecting user privacy. A better understanding of botnet technologies also helps us be proactive in defending against botnets. First, we proposed two new DGAs using hidden Markov models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) which can evade current detection methods and systems. We also developed two HMM-based DGA detection methods that can detect botnet DGA-generated domain names with or without training sets. This helps security personnel understand the botnet phenomenon and develop proactive tools to detect botnets. Second, we developed a distributed proxy system using fast-flux to evade national censorship and surveillance. The goal is to help journalists, human rights advocates and NGOs in West Africa have secure and free Internet access. We then developed a covert data transport protocol that transforms arbitrary messages into real DNS traffic. We encode the message into benign-looking domain names generated by an HMM, which represents the statistical features of legitimate domain names. This can be used to evade Deep Packet Inspection (DPI) and protect user privacy in two-way communication. Both applications serve as examples of applying botnet technologies to legitimate use. Finally, we proposed a new protocol obfuscation technique that transforms an arbitrary network protocol into another (the Network Time Protocol and the video game protocol of Minecraft as examples) in terms of packet syntax and side-channel features (inter-packet delay and packet size). This research uses botnet technologies to help ordinary users have secure and private communications over the Internet. From our botnet research, we conclude that network traffic is a malleable and artificial construct. Although existing patterns are easy to detect and characterize, they are also subject to modification and mimicry. This means that we can construct transducers to make any communication pattern look like any other communication pattern. This is neither bad nor good for security. It is a fact that we need to accept and use as best we can.
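
    To give a flavour of the DGA work, the following sketch trains a first-order character-level Markov chain (a simplification of the HMM described above) on a few hypothetical benign-looking names and samples new domain names that mimic their statistics.

import random
from collections import defaultdict

# hypothetical benign-looking seed names used to learn character transitions
seed_domains = ["weather", "mailbox", "cloudsync", "newsfeed", "photoshare"]

transitions = defaultdict(list)
for name in seed_domains:
    padded = "^" + name + "$"                  # ^ marks the start, $ the end
    for a, b in zip(padded, padded[1:]):
        transitions[a].append(b)

def generate_domain(rng, max_len=12):
    # walk the learned transition table until the end marker or the length cap
    ch, out = "^", []
    while len(out) < max_len:
        ch = rng.choice(transitions[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out) + ".com"

rng = random.Random(7)
print([generate_domain(rng) for _ in range(5)])

    A detector along the lines described above would, conversely, score candidate domain names by their likelihood under a model trained on legitimate names.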

    Security Technologies and Methods for Advanced Cyber Threat Intelligence, Detection and Mitigation

    The rapid growth of Internet interconnectivity and the complexity of communication systems have led to a significant growth in cyberattacks globally, often with severe and disastrous consequences. The swift development of more innovative and effective (cyber)security solutions and approaches that can detect, mitigate and prevent these serious consequences is vital. Cybersecurity is gaining momentum and is scaling up in very many areas. This book builds on the experience of the Cyber-Trust EU project’s methods, use cases, technology development, testing and validation, and extends into broader science, the leading IT industry market and applied research with practical cases. It offers new perspectives on advanced (cyber)security innovation (eco)systems, covering key different perspectives. The book provides insights on new security technologies and methods for advanced cyber threat intelligence, detection and mitigation. We cover topics such as cybersecurity and AI, cyber-threat intelligence, digital forensics, moving target defense, intrusion detection systems, post-quantum security, privacy and data protection, security visualization, smart contracts security, software security, blockchain, security architectures, system and data integrity, trust management systems, distributed systems security, dynamic risk management, and privacy and ethics.