13 research outputs found

    The ORCA Hub: Explainable Offshore Robotics through Intelligent Interfaces

    We present the UK Robotics and Artificial Intelligence Hub for Offshore Robotics for Certification of Assets (ORCA Hub), a 3.5-year, EPSRC-funded, multi-site project. The ORCA Hub vision is to use teams of robots and autonomous intelligent systems (AIS) to work on offshore energy platforms to enable cheaper, safer and more efficient working practices. The ORCA Hub will research, integrate, validate and deploy remote AIS solutions that can operate with existing and future offshore energy assets and sensors, interacting safely in autonomous or semi-autonomous modes in complex and cluttered environments and cooperating with remote operators. The goal is that the use of such robotic systems offshore will reduce the need for on-site personnel. To enable this, the remote operator will need a high level of situation awareness, and key to this is transparency about what the autonomous systems are doing and why. This increased transparency will facilitate a trusting relationship, which is particularly important in high-stakes, hazardous situations. (Comment: 2 pages. Peer-reviewed position paper accepted at the Explainable Robotic Systems Workshop, ACM Human-Robot Interaction conference, March 2018, Chicago, IL, USA.)

    The Impact of Explanations on AI Competency Prediction in VQA

    Explainability is one of the key elements for building trust in AI systems. Among the numerous attempts to make AI explainable, quantifying the effect of explanations on human-AI collaborative tasks remains a challenge. Beyond the ability to predict the overall behavior of an AI system, in many applications users need to understand an AI agent's competency in different aspects of the task domain. In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA). We quantify users' understanding of competency based on the correlation between the actual system performance and user rankings. We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model. Each group of users sees only one kind of explanation when ranking the competencies of the VQA model. The proposed model is evaluated through between-subject experiments to probe the explanations' impact on the user's perception of competency. The comparison between two VQA models shows that BERT-based explanations and the use of object features improve the user's prediction of the model's competencies. (Comment: Submitted to HCCAI 202)
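
    As an illustration of the competency-prediction measure described above (the correlation between actual system performance and user rankings), the following minimal Python sketch computes a Spearman rank correlation between a model's per-category accuracy and one participant's competency ranking. It is not the authors' code; the category names and numbers are hypothetical placeholders.

        # Hedged sketch: score how well a user's competency ranking matches the
        # model's actual per-category performance, via Spearman rank correlation.
        from scipy.stats import spearmanr

        # Hypothetical per-category VQA accuracy of the model (its true competency).
        actual_accuracy = {
            "counting": 0.41,
            "color": 0.78,
            "spatial_relations": 0.55,
            "object_recognition": 0.83,
        }

        # Hypothetical ranking from one participant (1 = judged most competent).
        user_ranking = {
            "counting": 4,
            "color": 2,
            "spatial_relations": 3,
            "object_recognition": 1,
        }

        categories = sorted(actual_accuracy)
        # Higher accuracy should pair with a better (numerically smaller) rank,
        # so the user's ranks are negated before correlating.
        rho, p_value = spearmanr(
            [actual_accuracy[c] for c in categories],
            [-user_ranking[c] for c in categories],
        )
        print(f"competency-prediction score (Spearman rho): {rho:.2f}, p = {p_value:.3f}")

    A rho near 1 would indicate that the participant's mental model of the system's strengths and weaknesses closely tracks its measured performance; values near 0 indicate no such alignment.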

    The challenges of first- and second-order belief reasoning in explainable human-robot interaction

    Get PDF
    Current approaches to implementing eXplainable Autonomous Robots (XAR) are predominantly based on Reinforcement Learning (RL), which is suitable for modelling and correcting people's first-order mental state attributions to robots. Our recent findings show that people also rely on attributing second-order beliefs (i.e., beliefs about beliefs) to robots to interpret their behavior. However, robots arguably form and act primarily on first-order beliefs and desires (about things in the environment) and do not have a functional "theory of mind". Moreover, RL models may be incapable of appropriately addressing second-order belief attribution errors. This paper aims to open a discussion of what our recent findings on second-order mental state attribution to robots imply for current approaches to XAR.
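
    To make the distinction discussed above concrete, the following small Python sketch represents a first-order belief (a belief about the world) and a second-order belief (a belief about another agent's belief) as nested data structures. This is purely illustrative and not from the paper; all names are hypothetical.

        # Hedged sketch: first-order vs. second-order beliefs as nested structures.
        from dataclasses import dataclass
        from typing import Union

        @dataclass
        class Belief:
            holder: str                     # who holds the belief, e.g. "robot" or "human"
            content: Union[str, "Belief"]   # a fact about the world, or another Belief

        # First-order: the robot believes the valve is closed.
        first_order = Belief(holder="robot", content="valve_is_closed")

        # Second-order: the human believes that the robot believes the valve is open.
        second_order = Belief(
            holder="human",
            content=Belief(holder="robot", content="valve_is_open"),
        )

        def order(belief: Belief) -> int:
            """Nesting depth: 1 for a belief about the world, 2 for a belief about a belief."""
            return 1 + (order(belief.content) if isinstance(belief.content, Belief) else 0)

        print(order(first_order), order(second_order))  # -> 1 2

    An explanation strategy that reasons only about first-order beliefs has no representation for the second-order case, which is the kind of attribution error the abstract argues current RL-based approaches may fail to address.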