In the user's eyes we find trust: Using gaze data as a predictor of trust in an artificial intelligence

Abstract

Trust is essential for our interactions with others, but also with systems based on artificial intelligence (AI). To understand whether a user trusts an AI, researchers need reliable measurement tools. However, currently discussed markers mostly rely on expensive and invasive sensors, such as electroencephalograms, which may cause discomfort. The analysis of gaze data has been suggested as a convenient alternative for trust assessment. However, the relationship between trust and several aspects of gaze behaviour is not yet fully understood. To provide more insight into this relationship, we propose an exploratory study in virtual reality in which participants perform a sorting task together with a simulated AI, embodied in a simulated robotic arm, embedded in a game. We discuss the potential benefits of this approach and outline our study design in this submission.

Comment: Workshop submission of a proposed research project at TRAIT 2023 (held at CHI2023 in Hamburg)
