Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions
Explainable Artificial Intelligence (XAI) is transforming the field of
Artificial Intelligence (AI) by enhancing the trust of end-users in machines.
As the number of connected devices keeps on growing, the Internet of Things
(IoT) market needs to be trustworthy for the end-users. However, existing
literature still lacks a systematic and comprehensive survey work on the use of
XAI for IoT. To bridge this gap, in this paper we review XAI
frameworks with a focus on their characteristics and their support for IoT. We
illustrate the widely-used XAI services for IoT applications, such as security
enhancement, Internet of Medical Things (IoMT), Industrial IoT (IIoT), and
Internet of City Things (IoCT). We also suggest the implementation choice of
XAI models over IoT systems in these applications with appropriate examples and
summarize the key inferences for future works. Moreover, we present the
cutting-edge development in edge XAI structures and the support of
sixth-generation (6G) communication services for IoT applications, along with
key inferences. In a nutshell, this paper constitutes the first holistic
compilation on the development of XAI-based frameworks tailored for the demands
of future IoT use cases.

Comment: 29 pages, 7 figures, 2 tables. IEEE Open Journal of the Communications Society (2022).
A Survey on Explainable AI for 6G O-RAN: Architecture, Use Cases, Challenges and Research Directions
The recent O-RAN specifications promote the evolution of RAN architecture by
function disaggregation, adoption of open interfaces, and instantiation of a
hierarchical closed-loop control architecture managed by RAN Intelligent
Controllers (RICs) entities. This paves the road to novel data-driven network
management approaches based on programmable logic. Aided by Artificial
Intelligence (AI) and Machine Learning (ML), novel solutions targeting
traditionally unsolved RAN management issues can be devised. Nevertheless, the
adoption of such smart and autonomous systems is limited by the current
inability of human operators to understand the decision process of AI/ML
solutions, which affects their trust in these novel tools. eXplainable AI (XAI) aims
at solving this issue, enabling human users to better understand and
effectively manage the emerging generation of artificially intelligent schemes,
reducing the human-to-machine barrier. In this survey, we provide a summary of
the XAI methods and metrics before studying their deployment over the O-RAN
Alliance RAN architecture along with its main building blocks. We then present
various use cases and discuss the automation of XAI pipelines for O-RAN as well
as the underlying security aspects. We also review some projects/standards that
tackle this area. Finally, we identify different challenges and research
directions that may arise from the heavy adoption of AI/ML decision entities in
this context, focusing on how XAI can help to interpret, understand, and
improve trust in O-RAN operational networks.

Comment: 33 pages, 13 figures.
Studying RFID adoption by SMEs in the Taiwanese IT industry
With the advent of Radio Frequency Identification (RFID), organisations have the opportunity to rethink how they will operate and integrate in the supply chain. Especially for Small to Medium Sized Enterprises (SMEs), which have limited resources, adopting such an innovative technology can be daunting. The literature indicates that SMEs undertaking implementation have so far had only a few guidelines regarding specific opportunities and risks. This research therefore aims to fill this gap by employing Exploratory Factor Analysis (EFA) and a questionnaire survey to explore the factors that affect SMEs’ RFID adoption in the Taiwanese Information Technology (IT) manufacturing industry. Using EFA, the adoption factors are classified into three adopter categories: ready adopters (cost and management), initiator adopters (competitiveness and process efficiency), and unprepared adopters (IT management difficulties, IT implementation difficulties, and cost of implementation). An SME RFID adoption model is then proposed. It is anticipated that the findings of this research will not only enhance research on RFID adoption in SMEs, but will also act as a reference for practitioners in industry and researchers in academia.
Accountability in Managing Artificial Intelligence: State of the Art and a way forward for Information Systems Research
Establishing accountability for Artificial Intelligence (AI) systems is challenging due to the distribution of responsibilities among the multiple actors involved in their development, deployment, and use. Nonetheless, AI accountability is crucial. As AI can affect all aspects of private and professional life, the actors involved in AI lifecycles need to take responsibility for their decisions and actions, be ready to respond to interrogation by those affected by AI, and be held liable when AI works in unacceptable ways. Despite the significance of AI accountability, the Information Systems (IS) research community has not engaged much with the topic and lacks a systematic understanding of existing approaches to it. This paper presents the results of a comprehensive conceptual literature review that synthesizes current knowledge on AI accountability. The paper contributes to the IS literature by providing (i) conceptual clarification, mapping different accountability conceptualizations; (ii) a comprehensive framework of AI accountability challenges and actionable responses at three different levels: system, process, and data; and (iii) a framing of AI accountability as a socio-technical and organizational problem that IS researchers are well equipped to study, highlighting the need to balance instrumental and humanistic outcomes.