5 research outputs found

    Creating an Explainable Intrusion Detection System Using Self Organizing Maps

    Full text link
Modern Artificial Intelligence (AI) enabled Intrusion Detection Systems (IDS) are complex black boxes. This means that a security analyst will have little to no explanation or clarification on why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a Self-Organizing Map (SOM) based X-IDS system that is capable of producing explanatory visualizations. We leverage the SOM's explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions. Local explanations are generated for individual data points to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and CIC-IDS-2017 datasets.
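    As a rough illustration of how a SOM can yield both kinds of explanation, the sketch below trains a small SOM in plain NumPy and derives a U-matrix as a global explanation and a per-sample best-matching-unit (BMU) report as a local explanation. The grid size, learning schedule, and stand-in features are illustrative assumptions, not the paper's actual configuration.

    ```python
    import numpy as np

    def train_som(data, rows=10, cols=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Minimal SOM training loop (illustrative, not the paper's implementation)."""
        rng = np.random.default_rng(seed)
        n, d = data.shape
        weights = rng.random((rows, cols, d))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
        steps, t = epochs * n, 0
        for _ in range(epochs):
            for x in data[rng.permutation(n)]:
                lr = lr0 * np.exp(-t / steps)
                sigma = sigma0 * np.exp(-t / steps)
                # Best-matching unit (BMU): node whose weight vector is closest to x.
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # Gaussian neighborhood around the BMU pulls nearby nodes toward x.
                g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
                t += 1
        return weights

    def u_matrix(weights):
        """Global explanation: mean distance of each node to its grid neighbors."""
        rows, cols, _ = weights.shape
        um = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                neigh = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= i + di < rows and 0 <= j + dj < cols]
                um[i, j] = np.mean([np.linalg.norm(weights[i, j] - weights[a, b]) for a, b in neigh])
        return um

    def local_explanation(weights, x):
        """Local explanation: the BMU and the per-feature gap between x and its weights."""
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        return bmu, x - weights[bmu]

    # Toy usage with random 'flow features' standing in for NSL-KDD records.
    X = np.random.default_rng(1).random((500, 8))
    W = train_som(X)
    print("U-matrix (global view of cluster boundaries):\n", np.round(u_matrix(W), 3))
    bmu, gaps = local_explanation(W, X[0])
    print("sample 0 maps to node", bmu, "feature gaps:", np.round(gaps, 3))
    ```

    In this reading, high-valued U-matrix cells mark boundaries between clusters (e.g., benign vs. attack regions), while the BMU and feature gaps for a single record indicate where on the map it landed and which features drove that placement.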

    Improving the Performance of Student Clustering Based on Learning Activities in Digital Learning Media Using the Adaptive Moving Self-Organizing Maps Method

    Get PDF
    The digitization of the learning process makes it possible to record every student activity during learning. The resulting records can be used to group students based on the patterns of their learning process, and the grouping results can in turn be used to adjust the learning components or learning methods for students. One of the most frequently used clustering methods is the Self-Organizing Map (SOM), an artificial neural network method that preserves the topology of multidimensional input data when it is mapped to lower-dimensional output data. SOM neurons in the input dimension are updated throughout the training process, while the neurons in the output dimension are not updated at all, so the neuron structure chosen at initialization remains unchanged until the end of the clustering process. This study instead uses the Adaptive Moving Self-Organizing Maps (AMSOM) method, which has a more flexible neuron structure that allows neurons to be moved, added, and deleted, applied to data from 12 assignments in the MONSAKUN learning media. The results show a statistically significant difference between the quantization error and topographic error of the AMSOM algorithm and those of the SOM algorithm: on average, AMSOM produces a quantization error 27 times smaller and a topographic error 54 times smaller than SOM.
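    The two quality measures compared here have simple standard definitions: quantization error is the mean distance from each sample to its best-matching unit, and topographic error is the fraction of samples whose first and second best-matching units are not adjacent on the map grid. The sketch below computes both for a SOM weight grid; it is a generic illustration of the metrics, not the AMSOM implementation evaluated in the paper.

    ```python
    import numpy as np

    def quantization_error(weights, data):
        """Mean Euclidean distance from each sample to its best-matching unit (BMU)."""
        flat = weights.reshape(-1, weights.shape[-1])            # (rows*cols, d)
        dists = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
        return dists.min(axis=1).mean()

    def topographic_error(weights, data):
        """Fraction of samples whose BMU and second BMU are not grid neighbors."""
        rows, cols, d = weights.shape
        flat = weights.reshape(-1, d)
        coords = np.array([(i, j) for i in range(rows) for j in range(cols)])
        dists = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
        order = np.argsort(dists, axis=1)
        bmu1, bmu2 = coords[order[:, 0]], coords[order[:, 1]]
        # 'Adjacent' here means Chebyshev distance 1 on the map grid (8-neighborhood).
        not_adjacent = np.abs(bmu1 - bmu2).max(axis=1) > 1
        return not_adjacent.mean()

    # Toy usage: random data and a random 8x8 'trained' grid just to exercise the metrics.
    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    W = rng.random((8, 8, 5))
    print("QE:", round(quantization_error(W, X), 4), "TE:", round(topographic_error(W, X), 4))
    ```

    Lower values of both metrics indicate a map that represents the data more faithfully, which is the basis of the AMSOM-versus-SOM comparison reported above.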

    Explainable Intrusion Detection Systems using white box techniques

    Get PDF
    Artificial Intelligence (AI) has found increasing application in various domains, revolutionizing problem-solving and data analysis. However, in decision-sensitive areas like Intrusion Detection Systems (IDS), trust and reliability are vital, posing challenges for traditional black box AI systems. These black box IDS, while accurate, lack transparency, making it difficult to understand the reasons behind their decisions. This dissertation explores the concept of eXplainable Intrusion Detection Systems (X-IDS) and addresses the issue of trust in X-IDS. It examines the limitations of common black box IDS and the complexities of explainability methods, leading to the fundamental question of whether explanations generated by black box explainer modules can be trusted. To address these challenges, this dissertation presents the concept of white box explanations, which are innately explainable. While white box algorithms are typically simpler and more interpretable, they often sacrifice accuracy. However, this work uses white box Competitive Learning (CL), which can achieve accuracy competitive with black box IDS. We introduce Rule Extraction (RE) as another white box technique that can be applied to explain black box IDS: decision trees are trained on the inputs, weights, and outputs of black box models, resulting in human-readable rulesets that serve as global model explanations. These white box techniques offer both accuracy and trustworthiness, which are challenging to achieve simultaneously. This work aims to address gaps in the existing literature, including the need for highly accurate white box IDS, a methodology for understanding explanations, small testing datasets, and a lack of comparisons between white box and black box models. To achieve these goals, the study employs CL and eclectic RE algorithms. CL models offer innate explainability and high accuracy in IDS applications, while eclectic RE enhances trustworthiness. The contributions of this dissertation include a novel X-IDS architecture featuring Self-Organizing Map (SOM) models that adhere to DARPA's guidelines for explainable systems, an extended X-IDS architecture incorporating three CL-based algorithms, and a hybrid X-IDS architecture combining a Deep Neural Network (DNN) predictor with a white box eclectic RE explainer. These architectures create more explainable, trustworthy, and accurate X-IDS systems, paving the way for enhanced AI solutions in decision-sensitive domains.
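    To make the rule-extraction idea concrete, the sketch below trains a surrogate decision tree on a black box model's own predictions and prints the resulting ruleset as a global explanation. This is a minimal pedagogical-style sketch of surrogate rule extraction, not the dissertation's eclectic RE algorithm (which also uses the black box's internal weights); the dataset, features, and model sizes are stand-ins.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Stand-in data: random 'flow features' with binary attack/benign labels.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 6))
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

    # Black box predictor (a small DNN).
    black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                              random_state=0).fit(X, y)

    # Surrogate: train a decision tree on the black box's own predictions, so the
    # tree approximates the model's decision surface rather than the raw labels.
    surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Human-readable ruleset acting as a global explanation of the black box.
    feature_names = [f"feat_{i}" for i in range(X.shape[1])]
    print(export_text(surrogate, feature_names=feature_names))
    print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
    ```

    The fidelity score (agreement between surrogate and black box) is one simple way to judge how faithfully the extracted rules describe the model being explained.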

    Self-organizing map convergence

    No full text
    Self-organizing maps are artificial neural networks designed for unsupervised machine learning. In this article, the authors introduce a new quality measure called the convergence index. The convergence index is a linear combination of map embedding accuracy and estimated topographic accuracy, and since it reports a single statistically meaningful number it is perhaps more intuitive to use than other quality measures. The convergence index is studied in the context of the clustering problems proposed by Ultsch as part of his fundamental clustering problem suite, as well as on real-world datasets. The authors first demonstrate that the convergence index captures the notion that a SOM has learned the multivariate distribution of a training data set by looking at the convergence of the marginals. The convergence index is then used to study the convergence of SOMs with respect to the different parameters that govern self-organizing map learning. One result is that the constant neighborhood function produces better self-organizing map models than the popular Gaussian neighborhood function.
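    A minimal sketch of such an index is shown below, under the assumptions that embedding accuracy is the fraction of features whose marginal distribution in the codebook is statistically indistinguishable from the training data (simplified here to a two-sample t-test on the means), that topographic accuracy is the fraction of samples whose first and second BMUs are grid neighbors, and that the two terms are weighted equally. These are illustrative simplifications, not the exact published procedure.

    ```python
    import numpy as np
    from scipy import stats

    def embedding_accuracy(weights, data, alpha=0.05):
        """Fraction of features whose marginal mean in the codebook is statistically
        indistinguishable from the training data (simplified: t-test on means only)."""
        codebook = weights.reshape(-1, weights.shape[-1])
        embedded = 0
        for f in range(data.shape[1]):
            _, p = stats.ttest_ind(data[:, f], codebook[:, f], equal_var=False)
            embedded += p > alpha      # feature counts as 'converged' if means are not distinguishable
        return embedded / data.shape[1]

    def topographic_accuracy(weights, data):
        """Fraction of samples whose first and second BMUs are adjacent on the grid."""
        rows, cols, d = weights.shape
        codebook = weights.reshape(-1, d)
        coords = np.array([(i, j) for i in range(rows) for j in range(cols)])
        dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=-1)
        order = np.argsort(dists, axis=1)
        bmu1, bmu2 = coords[order[:, 0]], coords[order[:, 1]]
        return (np.abs(bmu1 - bmu2).max(axis=1) <= 1).mean()

    def convergence_index(weights, data):
        # Assumed equal weighting of the two accuracies; the published index may weight them differently.
        return 0.5 * embedding_accuracy(weights, data) + 0.5 * topographic_accuracy(weights, data)

    # Toy usage with a random codebook; a well-trained SOM would score much higher.
    rng = np.random.default_rng(0)
    X = rng.random((300, 4))
    W = rng.random((8, 8, 4))
    print("convergence index:", round(convergence_index(W, X), 3))
    ```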