
    A machine learning approach with verification of predictions and assisted supervision for a rule-based network intrusion detection system

    Network security is a branch of network management in which network intrusion detection systems provide attack detection by monitoring traffic data. Rule-based misuse detection systems use a set of rules or signatures to detect attacks that exploit a particular vulnerability. These rules have to be hand-coded by experts to properly identify vulnerabilities, which leaves misuse detection systems with limited extensibility. This paper proposes a machine learning layer on top of a rule-based misuse detection system that provides automatic generation of detection rules, prediction verification, and assisted classification of new data. Our system offers good overall performance while adding a heuristic and adaptive approach to existing rule-based misuse detection systems.
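    One common way to generate detection rules automatically is to fit an interpretable learner to labeled traffic and read rules off its structure. The sketch below is illustrative, not the paper's actual method: it trains a decision tree on toy flow features (names and values are assumptions) and emits signature-style conditions for the attack leaves.

```python
# Hypothetical sketch: learn candidate detection rules from labeled flows
# with a decision tree, then emit them as signature-style conditions.
# Feature names, values, and thresholds are illustrative, not from the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, _tree

FEATURES = ["duration", "src_bytes", "dst_bytes", "failed_logins"]

# Toy labeled traffic: rows are flows, label 1 = attack, 0 = normal.
X = np.array([[0.1, 200, 50, 0],
              [0.2, 180, 40, 0],
              [5.0, 10_000, 10, 4],
              [4.2, 9_500, 5, 5]])
y = np.array([0, 0, 1, 1])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

def extract_rules(t, names):
    """Walk the fitted tree and yield one rule string per 'attack' leaf."""
    tr = t.tree_
    def walk(node, conds):
        if tr.children_left[node] == _tree.TREE_LEAF:
            if tr.value[node][0].argmax() == 1:      # majority class = attack
                yield " AND ".join(conds) or "TRUE"
            return
        name, thr = names[tr.feature[node]], tr.threshold[node]
        yield from walk(tr.children_left[node],  conds + [f"{name} <= {thr:.2f}"])
        yield from walk(tr.children_right[node], conds + [f"{name} > {thr:.2f}"])
    yield from walk(0, [])

for rule in extract_rules(tree, FEATURES):
    print("alert if", rule)
```

    In a setup like the paper's, such machine-generated rules would feed the existing rule engine, with prediction verification acting as a confidence check before a rule or classification is accepted.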

    Biologically Inspired Approaches to Automated Feature Extraction and Target Recognition

    Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from labels that are individually incorrect or mutually inconsistent. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/techlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution. Air Force Office of Scientific Research (F40620-01-1-0423); National Geospatial-Intelligence Agency (NMA 201-001-1-2016); National Science Foundation (SBE-0354378, BCS-0235298); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency and National Science Foundation fellowship of Siegfried Martens (NMA 501-03-1-2030, DGE-0221680); Department of Homeland Security graduate fellowship
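    The "bottom-up activation combined with top-down learned expectations" pattern is characteristic of Adaptive Resonance Theory (ART), developed at the same lab. As a rough illustration only (parameters, encoding, and the fuzzy-ART variant are my assumptions, not the specific BU models above), one resonance step looks like this:

```python
# Illustrative fuzzy-ART-style resonance step: bottom-up category choice
# followed by a top-down vigilance (match) test. All parameters here are
# assumptions for illustration, not taken from the models described above.
import numpy as np

def fuzzy_and(a, b):
    return np.minimum(a, b)

def art_step(inp, weights, rho=0.75, alpha=0.001, beta=1.0):
    """Return the index of the resonating category, learning in place.

    inp     : complement-coded input vector in [0, 1]
    weights : list of category weight vectors (same length as inp)
    rho     : vigilance -- how well the top-down expectation must match
    """
    # Bottom-up activation: choice function T_j = |I ^ w_j| / (alpha + |w_j|).
    T = [fuzzy_and(inp, w).sum() / (alpha + w.sum()) for w in weights]
    for j in np.argsort(T)[::-1]:                 # best category first
        match = fuzzy_and(inp, weights[j]).sum() / inp.sum()
        if match >= rho:                          # resonance: expectation fits
            weights[j] = beta * fuzzy_and(inp, weights[j]) + (1 - beta) * weights[j]
            return j
    weights.append(inp.copy())                    # no match: recruit new category
    return len(weights) - 1

# Complement coding I = (a, 1 - a) keeps |I| constant, a standard ART trick.
a = np.array([0.8, 0.2])
I = np.concatenate([a, 1 - a])
cats = [np.ones(4)]          # one uncommitted category
print(art_step(I, cats))     # resonates with category 0
```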

    Self-organizing maps in computer security


    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques that eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. (Comment: accepted for publication in IEEE Communications Surveys and Tutorials)
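    One representative technique in the survey's scope is unsupervised clustering of sensor nodes to elect cluster heads and shorten radio links. The sketch below is a generic k-means illustration with made-up coordinates and cluster count, not an algorithm from the survey itself:

```python
# Illustrative WSN use of machine learning: k-means clustering of node
# positions to group nodes around cluster heads. Field size, node count,
# and k are made-up assumptions for the example.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(50, 2))   # 50 sensors in a 100 x 100 m field

def kmeans(points, k=4, iters=20):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each node to its nearest cluster head.
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each head to the centroid of its cluster (keep it if empty).
        new = []
        for j in range(k):
            members = points[labels == j]
            new.append(members.mean(axis=0) if len(members) else centers[j])
        centers = np.array(new)
    return centers, labels

heads, membership = kmeans(nodes)
print("cluster-head positions:\n", heads)
```

    The design intuition is energy-driven: nodes transmit to a nearby head instead of the distant sink, and radio cost grows superlinearly with distance, so tighter clusters prolong network lifetime.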

    Pruning GHSOM to create an explainable intrusion detection system

    Intrusion Detection Systems (IDS) that provide high detection rates but are black boxes lead to models whose predictions a security analyst cannot understand. Self-Organizing Maps (SOMs) have been used to predict intrusion to a network while also explaining predictions through visualization and identifying significant features. However, they have not been able to compete with the detection rates of black box models. Growing Hierarchical Self-Organizing Maps (GHSOMs) have been used to obtain high detection rates on the NSL-KDD and CIC-IDS-2017 network traffic datasets, but they neglect creating explanations or visualizations, which results in another black box model. This paper offers a high-accuracy Explainable Artificial Intelligence (XAI) system based on GHSOMs. One obstacle to creating a white box hierarchical model is the model growing too large and complex to understand. Another contribution this paper makes is a pruning method used to cut down on the size of the GHSOM, which yields a model that can provide insights and explanations while maintaining a high detection rate.
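    The abstract does not specify the pruning rule, so the following is only a hypothetical sketch of one plausible criterion: drop a child map when it reduces its parent's mean quantization error (MQE) by less than a threshold tau. The node structure and threshold are assumptions for illustration.

```python
# Hypothetical GHSOM pruning sketch: remove a child map whose relative
# MQE improvement over its parent is below tau. This is an assumed rule
# for illustration, not the paper's actual pruning method.
from dataclasses import dataclass, field

@dataclass
class MapNode:
    mqe: float                       # mean quantization error of this map
    children: list = field(default_factory=list)

def prune(node, tau=0.05):
    """Recursively keep only child maps whose relative MQE gain is >= tau."""
    kept = []
    for child in node.children:
        gain = (node.mqe - child.mqe) / node.mqe   # relative error reduction
        if gain >= tau:
            prune(child, tau)
            kept.append(child)
    node.children = kept
    return node

root = MapNode(mqe=1.0, children=[
    MapNode(mqe=0.4, children=[MapNode(mqe=0.39)]),  # grandchild: tiny gain
    MapNode(mqe=0.97),                               # tiny gain -> pruned
])
prune(root)
print(len(root.children))   # 1: only the sub-map that earns its keep survives
```

    Whatever the exact criterion, the effect is the same: a smaller hierarchy that an analyst can actually inspect, at little cost in detection rate.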

    Tracking English and Translated Arabic News using GHSOM


    Explainable Intrusion Detection Systems using white box techniques

    Artificial Intelligence (AI) has found increasing application in various domains, revolutionizing problem-solving and data analysis. However, in decision-sensitive areas like Intrusion Detection Systems (IDS), trust and reliability are vital, posing challenges for traditional black box AI systems. These black box IDS, while accurate, lack transparency, making it difficult to understand the reasons behind their decisions. This dissertation explores the concept of eXplainable Intrusion Detection Systems (X-IDS) and addresses the issue of trust in X-IDS. It examines the limitations of common black box IDS and the complexities of explainability methods, leading to the fundamental question of whether explanations generated by black box explainer modules can be trusted. To address these challenges, this dissertation presents the concept of white box explanations, which are innately explainable. While white box algorithms are typically simpler and more interpretable, they often sacrifice accuracy. However, this work utilizes white box Competitive Learning (CL), which can achieve accuracy competitive with black box IDS. We introduce Rule Extraction (RE) as another white box technique that can be applied to explain black box IDS. It involves training decision trees on the inputs, weights, and outputs of black box models, resulting in human-readable rulesets that serve as global model explanations. These white box techniques offer the benefits of accuracy and trustworthiness, which are challenging to achieve simultaneously. This work aims to address gaps in the existing literature, including the need for highly accurate white box IDS, a methodology for understanding explanations, small testing datasets, and comparisons between white box and black box models. To achieve these goals, the study employs CL and eclectic RE algorithms. CL models offer innate explainability and high accuracy in IDS applications, while eclectic RE enhances trustworthiness. The contributions of this dissertation include a novel X-IDS architecture featuring Self-Organizing Map (SOM) models that adhere to DARPA's guidelines for explainable systems, an extended X-IDS architecture incorporating three CL-based algorithms, and a hybrid X-IDS architecture combining a Deep Neural Network (DNN) predictor with a white box eclectic RE explainer. These architectures create more explainable, trustworthy, and accurate X-IDS systems, paving the way for enhanced AI solutions in decision-sensitive domains.
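    The rule-extraction idea described above (a decision tree trained on a black box's inputs and outputs) is easy to demonstrate in miniature. In this sketch the MLP merely stands in for the dissertation's DNN predictor, and the data, feature names, and sizes are toy assumptions:

```python
# Minimal rule-extraction sketch: fit a decision tree to the black box's
# *predictions* (not the true labels), then read the tree off as a global,
# human-readable ruleset. Data, feature names, and model sizes are toys.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))                       # toy flow features
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)            # toy ground truth

black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=1).fit(X, y)

# The surrogate mimics the black box, so its rules explain the model's
# behavior rather than the data directly.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate,
                  feature_names=["duration", "src_bytes", "dst_bytes", "flags"]))
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```

    The fidelity score (agreement between surrogate and black box) is the usual sanity check on such explanations: a ruleset that disagrees with the model it claims to explain is not trustworthy, which is precisely the concern the dissertation raises about black box explainer modules.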