Clarity in Complexity: Advancing AI Explainability through Sensemaking

Abstract

This paper examines Explainable Artificial Intelligence (XAI) through a sensemaking lens, addressing the complexity of the extant literature and offering a comprehensive account of the process of explainability. Through a thorough review, analysis, and theoretical synthesis of relevant research, we develop a novel framework that highlights the dynamic interactions between AI systems and users in the co-construction of explanations. The framework shows how explainability emerges as a shared process between humans and machines rather than a one-sided output. It offers valuable insights for enhancing human-AI interaction and contributes to the theoretical foundation of XAI. The findings open avenues for future research, with implications for both academic investigation and the practical design of more transparent and effective AI systems.


Licence: https://creativecommons.org/licenses/by-nc-nd/4.0/