INVESTIGATING THE EFFECTS OF GENERATIVE-AI RESPONSES ON USER EXPERIENCE AFTER AI HALLUCINATION

Abstract

The integration of generative artificial intelligence (GenAI) systems into daily life has given rise to the phenomenon of "AI hallucination," in which an AI produces convincing yet incorrect information, undermining both user experience and system credibility. This study investigates the impact of an AI's responses to its own errors, specifically expressions of appreciation and apology, on user perception and trust. Drawing on attribution theory, we explore whether users prefer AI systems that attribute errors internally or externally, and how these attributions affect user satisfaction. We employed a qualitative methodology based on interviews with individuals aged 20 to 30 who have experience with conversational AI. Respondents preferred that the AI apologize in hallucination situations and attribute responsibility for the error to external causes. The results show that transparent error communication, supported by detailed explanations, is essential for maintaining user trust. The research contributes to an understanding of how politeness and attribution strategies can influence user engagement with AI, and it has significant implications for AI development, emphasizing the need for error communication strategies that balance transparency and user experience.
