5 research outputs found

    MIRIAM: A Multimodal Chat-Based Interface for Autonomous Systems

    Full text link
    We present MIRIAM (Multimodal Intelligent inteRactIon for Autonomous systeMs), a multimodal interface that supports situation awareness of autonomous vehicles through chat-based interaction. The user can chat about the vehicle's plan, objectives, previous activities and mission progress. The system is mixed-initiative in that it proactively sends messages about key events, such as fault warnings. We will demonstrate MIRIAM using SeeByte's SeeTrack command and control interface and Neptune autonomy simulator.
    Comment: 2 pages, ICMI'17, 19th ACM International Conference on Multimodal Interaction, November 13-17 2017, Glasgow, UK

    Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots

    Full text link
    Autonomous systems in remote locations operate with a high degree of autonomy, and there is a need to explain what they are doing and why in order to increase transparency and maintain trust. Here, we describe a natural language chat interface that enables vehicle behaviour to be queried by the user. We obtain an interpretable model of autonomy by having an expert 'speak out loud' and provide explanations during a mission. This approach is agnostic to the type of autonomy model, and as the expert and operator are from the same user group, we predict that these explanations will align well with the operator's mental model, increase transparency and assist with operator training.
    Comment: 2 pages. Peer-reviewed position paper accepted in the Explainable Robotic Systems Workshop, ACM Human-Robot Interaction conference, March 2018, Chicago, IL, USA

    Assessing the relationship between subjective trust, confidence measurements, and mouse trajectory characteristics in an online task

    Full text link
    Trust is essential for our interactions with others, but also with artificial intelligence (AI)-based systems. To understand whether a user trusts an AI, researchers need reliable measurement tools. However, currently discussed markers mostly rely on expensive and invasive sensors, such as electroencephalograms, which may cause discomfort. The analysis of mouse trajectories has been suggested as a convenient tool for trust assessment. However, the relationship between trust, confidence and mouse trajectory is not yet fully understood. To provide more insight into this relationship, we asked participants (n = 146) to rate whether several tweets were offensive while an AI suggested its assessment. Our results reveal which aspects of the mouse trajectory are affected by users' subjective trust and confidence ratings, yet they indicate that these measures may not explain enough of the variance to be used on their own. This work examines a potential low-cost trust assessment for AI systems.
    Comment: Submitted to CHI 2023 and rejected

    Trust Triggers for Multimodal Command and Control Interfaces

    No full text