    An Adversarial Approach for Explainable AI in Intrusion Detection Systems

    Despite the growing popularity of modern machine learning techniques (e.g., Deep Neural Networks) in cyber-security applications, most of these models are perceived as a black box by the user. Adversarial machine learning offers an approach to increase our understanding of these models. In this paper we present an approach for generating explanations of the incorrect classifications made by data-driven Intrusion Detection Systems (IDSs). An adversarial approach is used to find the minimum modifications of the input features required to correctly classify a given set of misclassified samples. The magnitude of these modifications is used to visualize the features that are most relevant to explaining the misclassification. The presented methodology generated satisfactory explanations that describe the reasoning behind the misclassifications, with descriptions that match expert knowledge. Its advantages are that it: 1) is applicable to any classifier with defined gradients; 2) does not require any modification of the classifier model; and 3) can be extended to perform further diagnosis (e.g., vulnerability assessment) and gain further understanding of the system. Experimental evaluation was conducted on the NSL-KDD benchmark dataset using linear and multilayer perceptron classifiers. The results are presented using intuitive visualizations to improve their interpretability.
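
    The core mechanism described, gradient descent on the input until a misclassified sample crosses into its true class, with the per-feature modification magnitudes read off as the explanation, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the model interface, step count, and the L1 penalty weight are assumptions.

```python
import torch

def minimal_correction(model, x, true_label, lr=0.01, steps=200):
    """Find a small input modification that makes `model` assign `x`
    its true label; |delta| per feature then indicates which features
    explain the original misclassification."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    target = torch.tensor([true_label])
    for _ in range(steps):
        logits = model(x_adv.unsqueeze(0))
        if logits.argmax(dim=1).item() == true_label:
            break  # sample is now correctly classified
        # The classification loss pulls x_adv toward the true class;
        # the L1 term keeps the modification minimal (weight assumed).
        loss = loss_fn(logits, target) + 0.1 * (x_adv - x).abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_adv - x).detach()  # per-feature modifications (delta)
```

    Visualizing the largest |delta| entries, e.g., as a bar chart over the NSL-KDD features, yields the kind of feature-relevance picture the paper reports.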

    Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research

    This survey presents a comprehensive review of the current literature on Explainable Artificial Intelligence (XAI) methods for cyber security applications. Due to the rapid development of Internet-connected systems and Artificial Intelligence in recent years, Artificial Intelligence, including Machine Learning and Deep Learning, has been widely utilized in cyber security fields such as intrusion detection, malware detection, and spam filtering. However, although Artificial Intelligence-based approaches for the detection and defense of cyber attacks and threats are more advanced and efficient than conventional signature-based and rule-based cyber security strategies, most Machine Learning-based and Deep Learning-based techniques are deployed in a “black-box” manner, meaning that security experts and customers cannot explain how such procedures reach particular conclusions. The lack of transparency and interpretability in existing Artificial Intelligence techniques reduces human users’ confidence in the models used for defense against cyber attacks, especially as cyber attacks become increasingly diverse and complicated. Therefore, it is essential to apply XAI in the establishment of cyber security models to create more explainable models while maintaining high accuracy, allowing human users to comprehend, trust, and manage the next generation of cyber defense mechanisms. Although there are papers reviewing Artificial Intelligence applications in cyber security and a vast literature on applying XAI in fields such as healthcare, financial services, and criminal justice, there are, surprisingly, currently no survey articles that concentrate on XAI applications in cyber security. The motivation behind this survey is therefore to bridge that research gap by presenting a detailed and up-to-date review of XAI approaches applicable to issues in the cyber security field. Our work is the first to propose a clear roadmap for navigating the XAI literature in the context of cyber security applications.

    Explainable Neural Networks based Anomaly Detection for Cyber-Physical Systems

    Cyber-Physical Systems (CPSs) are the core of modern critical infrastructure (e.g., power grids), and securing them is of paramount importance. Anomaly detection in data is crucial for CPS security. While Artificial Neural Networks (ANNs) are strong candidates for the task, they are seldom deployed in safety-critical domains due to the perception that ANNs are black boxes. Therefore, to leverage ANNs in CPSs, cracking open the black box through explanation is essential. The main objective of this dissertation is to develop explainable ANN-based Anomaly Detection Systems for Cyber-Physical Systems (CP-ADSs). This objective was broken down into three sub-objectives: 1) identifying the key requirements that an explainable CP-ADS should satisfy, 2) developing supervised ANN-based explainable CP-ADSs, and 3) developing unsupervised ANN-based explainable CP-ADSs. In achieving those objectives, this dissertation provides the following contributions: 1) a set of key requirements that an explainable CP-ADS should satisfy, 2) a methodology for deriving summaries of the knowledge of a trained supervised CP-ADS, 3) a methodology for validating the derived summaries, 4) an unsupervised neural network methodology for learning cyber-physical (CP) behavior, and 5) a methodology for visually and linguistically explaining the learned CP behavior. All the methods were evaluated on real-world and benchmark datasets. The set of key requirements presented in the first contribution was used to evaluate the performance of the presented methods; their successes and limitations were identified, and steps for overcoming the limitations were proposed. This dissertation thus takes several necessary steps toward explainable ANN-based CP-ADSs and serves as a framework that can be expanded to develop trustworthy ANN-based CP-ADSs.
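
    A common way to realize the unsupervised, explainable CP-ADS idea (a sketch of the general technique, not necessarily the dissertation's exact method) is an autoencoder trained only on normal sensor data: the mean reconstruction error flags anomalies, and the per-sensor errors point to the physical signals responsible. The layer sizes and the threshold below are assumptions.

```python
import torch
import torch.nn as nn

class CPAutoencoder(nn.Module):
    """Autoencoder over n_sensors process variables; trained only on
    normal operating data, so anomalous states reconstruct poorly."""
    def __init__(self, n_sensors, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_sensors, 32), nn.ReLU(),
                                 nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_sensors))

    def forward(self, x):
        return self.dec(self.enc(x))

def detect(model, x, threshold):
    """Flag a sample and attribute the anomaly to individual sensors."""
    with torch.no_grad():
        err = (model(x) - x) ** 2          # per-sensor squared error
    return err.mean().item() > threshold, err
```

    The per-sensor error vector is what makes such a detector explainable: it can be rendered visually (e.g., a heat map over sensors) or linguistically ("tank-level sensor deviates from learned behavior"), in the spirit of the visual and linguistic explanations described above.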

    eXplainable and Reliable Against Adversarial Machine Learning in Data Analytics

    Machine learning (ML) algorithms are nowadays widely adopted in different contexts to perform autonomous decisions and predictions. Thanks to the high volume of data shared in recent years, ML algorithms have become more accurate and reliable, since the training and testing phases can draw on far more data. An important concern when designing ML systems is adversarial machine learning attacks, which craft manipulated inputs to mislead an ML algorithm's decisions. In this work, we propose new approaches able to detect and mitigate malicious adversarial machine learning attacks against an ML system. In particular, we investigate the Carlini-Wagner (CW) attack, the Fast Gradient Sign Method (FGSM), and the Jacobian-based Saliency Map Attack (JSMA). The aim of this work is to exploit detection algorithms as countermeasures to these attacks. Initially, we performed tests using canonical ML algorithms with hyperparameter optimization to improve the metrics. We then adopted original reliable AI algorithms, based either on eXplainable AI (the Logic Learning Machine) or on Support Vector Data Description (SVDD). The obtained results show how the classical algorithms may fail to identify an adversarial attack, while the reliable AI methodologies are better able to correctly detect a possible adversarial machine learning attack. The proposed methodology was evaluated in terms of the balance between the false positive rate (FPR) and the false negative rate (FNR) on real-world application datasets: Domain Name System (DNS) tunneling, Vehicle Platooning, and Remaining Useful Life (RUL). In addition, a statistical analysis was performed to improve the robustness of the trained models, including evaluating their performance in terms of runtime and memory consumption.
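
    For reference, FGSM, the simplest of the three attacks studied, perturbs each input by a single step in the direction of the sign of the loss gradient. A minimal PyTorch sketch (the model and the perturbation budget eps are placeholders, not the paper's setup):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """Fast Gradient Sign Method: a one-step perturbation of size eps
    in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```

    CW and JSMA follow the same template but optimize the perturbation iteratively (CW) or concentrate it on the most salient input features (JSMA); a detector such as SVDD then attempts to flag such perturbed inputs as out-of-distribution.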

    Explainable AI-based Intrusion Detection in the Internet of Things

    The revolution of Artificial Intelligence (AI) has brought about a significant evolution in the landscape of cyberattacks. In particular, with the increasing power and capabilities of AI, cyberattackers can automate tasks, analyze vast amounts of data, and identify vulnerabilities with greater precision. On the other hand, despite the multiple benefits of the Internet of Things (IoT), it raises severe security issues. It is therefore evident that efficient intrusion detection mechanisms are critical. Although Machine Learning (ML)- and Deep Learning (DL)-based IDSs have already demonstrated their detection efficiency, they still suffer from false alarms and explainability issues that prevent security administrators from trusting them as fully as conventional signature/specification-based IDSs. In light of these remarks, in this paper we introduce an AI-powered IDS with explainability functions for the IoT. The proposed IDS relies on ML and DL methods, while the SHapley Additive exPlanations (SHAP) method is used to explain its decision-making. The evaluation results demonstrate the efficiency of the proposed IDS in terms of both detection performance and explainability (XAI).
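
    As a rough illustration of the SHAP step (the classifier, dataset, and feature set here are synthetic stand-ins, not the paper's setup), explanations for a tabular IDS typically look like this:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for labeled IoT traffic features (the real features come
# from the IDS dataset, which the abstract does not name).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)         # exact SHAP values for tree models
shap_values = explainer.shap_values(X[:100])

# Global view: which traffic features drive the attack/benign decisions.
shap.summary_plot(shap_values, X[:100])
```

    Per-sample SHAP plots then tell an administrator why a specific flow was flagged, which is the kind of trust-building explainability function the paper describes.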

    Impacts and Risk of Generative AI Technology on Cyber Defense

    Generative Artificial Intelligence (GenAI) has emerged as a powerful technology capable of autonomously producing highly realistic content in various domains, such as text, images, audio, and video. With its potential for positive applications in the creative arts, content generation, virtual assistants, and data synthesis, GenAI has garnered significant attention and adoption. However, its increasing adoption raises concerns about potential misuse: crafting convincing phishing emails, generating disinformation through deepfake videos, and spreading misinformation via authentic-looking social media posts, posing a new set of challenges and risks in the realm of cybersecurity. To combat the threats posed by GenAI, we propose leveraging the Cyber Kill Chain (CKC), a foundational model for cyber defense, to understand the lifecycle of cyberattacks. This paper provides a comprehensive analysis of the risk areas introduced by the offensive use of GenAI techniques in each phase of the CKC framework. We also analyze the strategies employed by threat actors and examine their utilization throughout the different phases of the CKC, highlighting the implications for cyber defense. Additionally, we propose GenAI-enabled defense strategies that are both attack-aware and adaptive. These strategies encompass techniques such as detection, deception, and adversarial training, aiming to effectively mitigate the risks posed by GenAI-induced cyber threats.
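
    Of the defense strategies named, adversarial training is the most directly codable. A minimal sketch of the general technique (the detector model, optimizer, and perturbation budget are assumptions; this is not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def adversarial_train_step(model, opt, x, y, eps=0.05):
    """One step of adversarial training: fit a detector on a 50/50 mix
    of clean and FGSM-perturbed inputs to harden it against evasion."""
    # Craft adversarial versions of the current batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

    # Train on both clean and adversarial samples.
    opt.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```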