
    Creating an Explainable Intrusion Detection System Using Self Organizing Maps

    Modern Artificial Intelligence (AI) enabled Intrusion Detection Systems (IDS) are complex black boxes. This means that a security analyst will have little to no explanation or clarification on why an IDS model made a particular prediction. A potential solution to this problem is to research and develop Explainable Intrusion Detection Systems (X-IDS) based on current capabilities in Explainable Artificial Intelligence (XAI). In this paper, we create a Self-Organizing Map (SOM) based X-IDS system that is capable of producing explanatory visualizations. We leverage the SOM's explainability to create both global and local explanations. An analyst can use global explanations to get a general idea of how a particular IDS model computes predictions. Local explanations are generated for individual data points to explain why a certain prediction value was computed. Furthermore, our SOM-based X-IDS was evaluated on both explanation generation and traditional accuracy tests using the NSL-KDD and CIC-IDS-2017 datasets.
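    The paper's SOM implementation is not reproduced here, but the global/local explanation idea can be sketched in a few lines. Below is a minimal NumPy SOM over hypothetical preprocessed flow features: the U-matrix serves as the global view of cluster boundaries, and a sample's best-matching unit (BMU) plus its per-feature deviation serves as a local explanation. Grid size, feature count, and training schedule are illustrative assumptions, not the authors' configuration.

```python
# Minimal SOM sketch illustrating global (U-matrix) and local (BMU) explanations.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=10, cols=10, iters=2000, lr0=0.5, sigma0=3.0):
    """Train a rows x cols SOM on `data` (n_samples x n_features)."""
    n_features = data.shape[1]
    weights = rng.random((rows, cols, n_features))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU): node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Decay learning rate and neighborhood radius over time.
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        # Gaussian neighborhood pulls nodes near the BMU toward the sample.
        g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def u_matrix(weights):
    """Global explanation: mean distance of each node to its grid neighbors."""
    rows, cols, _ = weights.shape
    um = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            neigh = [weights[a, b] for a, b in ((i-1,j), (i+1,j), (i,j-1), (i,j+1))
                     if 0 <= a < rows and 0 <= b < cols]
            um[i, j] = np.mean([np.linalg.norm(weights[i, j] - w) for w in neigh])
    return um

def explain_sample(weights, x):
    """Local explanation: BMU location plus per-feature deviation from it."""
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dists.argmin(), dists.shape)
    return bmu, x - weights[bmu]  # large deviations flag suspicious features

# Toy stand-in for scaled flow features (e.g., NSL-KDD columns after preprocessing).
flows = rng.random((500, 8))
som = train_som(flows)
print(u_matrix(som))                  # cluster-boundary map for the analyst
print(explain_sample(som, flows[0]))  # why this one data point landed where it did
```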

    Intrusion Prevention in Healthcare: An Explainable AI Approach

    Intrusion prevention is a critical aspect of maintaining the security of healthcare systems, especially in the context of sensitive patient data. Explainable AI can provide a way to improve the effectiveness of intrusion prevention by using machine learning algorithms to detect and prevent security breaches in healthcare systems. This approach not only helps ensure the confidentiality, integrity, and availability of patient data but also supports regulatory compliance. By providing clear and interpretable explanations for its decisions, explainable AI can enable healthcare professionals to understand the reasoning behind the intrusion detection system's alerts and take appropriate action. This paper explores the application of explainable AI for intrusion prevention in healthcare and its potential benefits for maintaining the security of healthcare systems.
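    The abstract does not name a specific model, so the following is only a hedged illustration of what "clear and interpretable explanations" can look like: a shallow decision tree over synthetic healthcare-audit features, whose full rule set (global) and per-alert decision path (local) can be shown to staff. All feature names and data below are hypothetical.

```python
# Hedged illustration only: the paper does not specify its algorithm. A shallow
# decision tree is one way to give a healthcare security team human-readable
# rules behind each alert. Features and labels are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["failed_logins", "records_accessed", "off_hours", "bytes_out"]
X = rng.random((400, 4))
# Toy labeling rule: many failed logins plus large exfiltration => intrusion.
y = ((X[:, 0] > 0.7) & (X[:, 3] > 0.6)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: the complete rule set the model uses.
print(export_text(clf, feature_names=feature_names))

# Local explanation: the path a single flagged event takes through the tree.
event = X[:1]
node_path = clf.decision_path(event).indices
print("alert:", bool(clf.predict(event)[0]), "via nodes", node_path.tolist())
```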

    Cyber Sentinel: Exploring Conversational Agents in Streamlining Security Tasks with GPT-4

    In an era where cyberspace is both a battleground and a backbone of modern society, the urgency of safeguarding digital assets against ever-evolving threats is paramount. This paper introduces Cyber Sentinel, an innovative task-oriented cybersecurity dialogue system capable of managing two core functions: explaining potential cyber threats within an organization to the user, and taking proactive or reactive security actions when instructed by the user. Cyber Sentinel embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries. This article delves into the process of creating such a system and how it can interact with other components typically found in cybersecurity organizations. Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering across all sub-tasks. We also highlight its pivotal role in enhancing cybersecurity communication and interaction, concluding that this framework not only enhances the system's transparency (Explainable AI) but also streamlines decision-making and threat response (Actionable AI), marking a significant advancement in the realm of cybersecurity communication.
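    As a rough sketch of what chaining GPT-4 calls across sub-tasks might look like (the prompts, the two-stage split, and the model name below are assumptions, not the authors' actual pipeline), one stage can explain an alert and a second stage can turn that explanation into a proposed action:

```python
# Minimal prompt-chaining sketch in the spirit of Cyber Sentinel: stage 1
# explains an alert, stage 2 turns that explanation into a proposed action.
# Prompts, stage split, and alert text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

alert = "IDS flagged repeated SSH login failures from 203.0.113.7 against 10 hosts."

# Stage 1: explain the potential threat to the analyst (Explainable AI).
explanation = ask("You are a SOC analyst. Explain alerts in plain language.", alert)

# Stage 2: chain the explanation into a concrete response (Actionable AI).
action = ask("You recommend one proportionate security action.", explanation)

print(explanation, "\n---\n", action)
```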

    Explainable Neural Networks based Anomaly Detection for Cyber-Physical Systems

    Cyber-Physical Systems (CPSs) are the core of modern critical infrastructure (e.g. power grids) and securing them is of paramount importance. Anomaly detection in data is crucial for CPS security. While Artificial Neural Networks (ANNs) are strong candidates for the task, they are seldom deployed in safety-critical domains due to the perception that ANNs are black boxes. Therefore, to leverage ANNs in CPSs, cracking open the black box through explanation is essential. The main objective of this dissertation is developing explainable ANN-based Anomaly Detection Systems for Cyber-Physical Systems (CP-ADS). The main objective was broken down into three sub-objectives: 1) identifying key requirements that an explainable CP-ADS should satisfy, 2) developing supervised ANN-based explainable CP-ADSs, 3) developing unsupervised ANN-based explainable CP-ADSs. In achieving those objectives, this dissertation provides the following contributions: 1) a set of key requirements that an explainable CP-ADS should satisfy, 2) a methodology for deriving summaries of the knowledge of a trained supervised CP-ADS, 3) a methodology for validating derived summaries, 4) an unsupervised neural network methodology for learning cyber-physical (CP) behavior, 5) a methodology for visually and linguistically explaining the learned CP behavior. All the methods were implemented on real-world and benchmark datasets. The set of key requirements presented in the first contribution was used to evaluate the performance of the presented methods. The successes and limitations of the presented methods were identified, and steps that can be taken to overcome the limitations were proposed. Therefore, this dissertation takes several necessary steps toward developing explainable ANN-based CP-ADSs and serves as a framework that can be expanded to develop trustworthy ANN-based CP-ADSs.
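    As a hedged sketch of the unsupervised direction only (the dissertation's own methodology is not reproduced here), an autoencoder trained on normal sensor traces can flag anomalies by reconstruction error, and the per-feature error gives a rudimentary explanation of which signals look wrong. Layer sizes, threshold, and data below are illustrative, not the dissertation's setup.

```python
# Hedged sketch of unsupervised ANN-based anomaly detection for CPS data:
# an autoencoder learns normal sensor behavior; per-feature reconstruction
# error doubles as a simple explanation of *which* signals look anomalous.
import torch
from torch import nn

torch.manual_seed(0)
n_sensors = 12
model = nn.Sequential(
    nn.Linear(n_sensors, 6), nn.ReLU(),
    nn.Linear(6, 3), nn.ReLU(),          # compressed code of normal behavior
    nn.Linear(3, 6), nn.ReLU(),
    nn.Linear(6, n_sensors),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal = torch.rand(1000, n_sensors)     # stand-in for benign sensor readings
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    # Inject a large shift into sensor 0 to simulate a compromised signal.
    sample = torch.rand(1, n_sensors) + torch.tensor([[2.0] + [0.0] * (n_sensors - 1)])
    per_feature_err = (model(sample) - sample).pow(2).squeeze()
    print("anomaly" if per_feature_err.mean() > 0.1 else "normal")  # toy threshold
    print("most suspicious sensor:", int(per_feature_err.argmax()))
```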

    Intelligent data processing

    Seminar held at the U & P U Patel Department of Computer Engineering, Chandubhai S. Patel Institute of Technology, Charotar University of Science and Technology (CHARUSAT), Changa-388421, Gujarat, India, 2021. In recent years, disruptive technologies have emerged and revolutionized our communication capabilities over the internet. One of those technologies is Deep Learning, which fits under the broader branch of Artificial Intelligence known as Machine Learning.

    A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions

    The recent O-RAN specifications promote the evolution of RAN architecture by function disaggregation, adoption of open interfaces, and instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controller (RIC) entities. This paves the way for novel data-driven network management approaches based on programmable logic. Aided by Artificial Intelligence (AI) and Machine Learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised. Nevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, affecting their trust in these novel tools. eXplainable AI (XAI) aims at solving this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes, reducing the human-to-machine barrier. In this survey, we provide a summary of XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture along with its main building blocks. We then present various use cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review some projects and standards that tackle this area. Finally, we identify different challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.
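    As one concrete, model-agnostic example of the kind of XAI method such a survey covers (our illustration, not a technique the survey prescribes), permutation feature importance can rank which RAN measurements drive a KPI predictor. The feature names, the random-forest choice, and the toy KPI formula below are all hypothetical.

```python
# Hedged example of a model-agnostic XAI method: permutation feature
# importance on a toy RAN throughput regressor. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
features = ["prb_utilization", "cqi", "num_ues", "interference"]
X = rng.random((600, 4))
# Toy KPI: throughput rises with CQI, falls with load and interference.
y = 2 * X[:, 1] - X[:, 0] - 0.5 * X[:, 3] + 0.05 * rng.standard_normal(600)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")  # larger score drop = more influential input
```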

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, there is a need to address many challenges in the scope of complex, heterogeneous, dynamic and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above-listed challenges, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques and systems for trusted, ethical, and secure AI in manufacturing. The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open-access publication, making it broadly and freely available to the AI and smart manufacturing communities.