
    Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions

    Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and their support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), the Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest implementation choices for XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future work. Moreover, we present cutting-edge developments in edge XAI architectures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored to the demands of future IoT use cases.
    Comment: 29 pages, 7 figures, 2 tables. IEEE Open Journal of the Communications Society (2022)

    A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions

    The recent O-RAN specifications promote the evolution of the RAN architecture through function disaggregation, the adoption of open interfaces, and the instantiation of a hierarchical closed-loop control architecture managed by RAN Intelligent Controller (RIC) entities. This paves the way for novel data-driven network management approaches based on programmable logic. Aided by Artificial Intelligence (AI) and Machine Learning (ML), novel solutions targeting traditionally unsolved RAN management issues can be devised. Nevertheless, the adoption of such smart and autonomous systems is limited by the current inability of human operators to understand the decision process of such AI/ML solutions, which affects their trust in these novel tools. eXplainable AI (XAI) aims to solve this issue, enabling human users to better understand and effectively manage the emerging generation of artificially intelligent schemes and reducing the human-to-machine barrier. In this survey, we provide a summary of XAI methods and metrics before studying their deployment over the O-RAN Alliance RAN architecture and its main building blocks. We then present various use cases and discuss the automation of XAI pipelines for O-RAN as well as the underlying security aspects. We also review projects and standards that tackle this area. Finally, we identify the challenges and research directions that may arise from the heavy adoption of AI/ML decision entities in this context, focusing on how XAI can help to interpret, understand, and improve trust in O-RAN operational networks.
    Comment: 33 pages, 13 figures

    Accountability in Managing Artificial Intelligence: State of the Art and a way forward for Information Systems Research

    Establishing accountability for Artificial Intelligence (AI) systems is challenging due to the distribution of responsibilities among the multiple actors involved in their development, deployment, and use. Nonetheless, AI accountability is crucial. As AI can affect all aspects of private and professional life, the actors involved in AI lifecycles need to take responsibility for their decisions and actions, be ready to respond to interrogations by those affected by AI, and be held liable when AI works in unacceptable ways. Despite the significance of AI accountability, the Information Systems research community has not engaged much with the topic and lacks a systematic understanding of existing approaches to it. This paper presents the results of a comprehensive conceptual literature review that synthesizes current knowledge on AI accountability. The paper contributes to the IS literature by providing (i) conceptual clarification mapping different accountability conceptualizations; (ii) a comprehensive framework for AI accountability challenges and actionable responses at three levels: system, process, and data; and (iii) a framing of AI accountability as a socio-technical and organizational problem that IS researchers are well equipped to study, highlighting the need to balance instrumental and humanistic outcomes.