
    Challenges in Collaborative HRI for Remote Robot Teams

    Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations such as off-shore energy platforms. For these teams of robots to be truly beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a 'mediator robot' that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study that investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher trust overall if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result here, as well as other challenges and interaction techniques for human-robot collaboration.
    Comment: 9 pages. Peer-reviewed position paper accepted at the CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems, May 2019, Glasgow, UK.

    Testing the Error Recovery Capabilities of Robotic Speech

    Trust in Human-Robot Interaction is a widely studied subject, and yet few studies have examined how a robot's ability to speak impacts trust towards it. Errors can have a negative impact on the perceived trustworthiness of a robot. However, there seem to be mitigating effects: a humanoid robot, for example, has been shown to be perceived as more trustworthy at a high error rate than a more mechanical robot with the same error rate. We want to use a humanoid robot to test whether speech can increase anthropomorphism and mitigate the effects of errors on trust. For this purpose, we are planning an experiment where participants solve a sequence completion task, with the robot giving suggestions (either verbal or non-verbal) for the solution. In addition, we want to measure whether the degree of error (slight error vs. severe error) has an impact on the participants' behaviour and the robot's perceived trustworthiness, since making a severe error should affect trust more than a slight error. Participants will be assigned to three groups, in which we will vary the degree of accuracy of the robot's answers (correct vs. almost right vs. obviously wrong). They will complete ten series of a sequence completion task and rate the trustworthiness and general perception (Godspeed Questionnaire) of the robot. We also present our thoughts on the implications of potential results.

    Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload

    Explanations given by automation are often used to promote automation adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants, using four conditions: (1) no explanation, (2) an explanation given before the AV acted, (3) an explanation given after the AV acted, and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We examined four outcomes: trust, preference for the AV, anxiety and mental workload. Results suggest that explanations provided before the AV acted were associated with higher trust in and preference for the AV, but there was no difference in anxiety or workload. These results have important implications for the adoption of AVs.
    Comment: 42 pages, 5 figures, 3 tables.

    Potential measures for detecting trust changes

    It is challenging to quantitatively measure a user's trust in a robot system using traditional survey methods, due to their invasiveness and tendency to disrupt the flow of operation. Therefore, we analyzed data from an existing experiment to identify measures which (1) have face validity for measuring trust and (2) align with the collected post-run trust measures. Two measures are promising as real-time indications of a drop in trust. The first is the time between the most recent warning and when the participant reduces the robot's autonomy level. The second is the number of warnings prior to the reduction of the autonomy level.
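
    A rough sketch of how these two candidate measures might be computed from a timestamped interaction log is shown below, in Python. The event names, log format and helper function are illustrative assumptions made for this summary, not the authors' implementation.

    from dataclasses import dataclass

    @dataclass
    class Event:
        t: float    # seconds since the start of the run (assumed log format)
        kind: str   # "warning" or "autonomy_reduced" (hypothetical event names)

    def trust_drop_measures(events: list[Event]) -> list[dict]:
        # Hypothetical helper: for each autonomy reduction, report
        # (1) the time since the most recent warning and (2) the number
        # of warnings seen so far -- the two measures described above.
        warnings_seen = 0
        last_warning_t = None
        measures = []
        for ev in sorted(events, key=lambda e: e.t):
            if ev.kind == "warning":
                warnings_seen += 1
                last_warning_t = ev.t
            elif ev.kind == "autonomy_reduced":
                measures.append({
                    "time_since_last_warning":
                        None if last_warning_t is None else ev.t - last_warning_t,
                    "warnings_before_reduction": warnings_seen,
                })
        return measures

    # Example: two warnings, then the operator lowers autonomy 4 s later.
    log = [Event(10.0, "warning"), Event(25.0, "warning"),
           Event(29.0, "autonomy_reduced")]
    print(trust_drop_measures(log))
    # [{'time_since_last_warning': 4.0, 'warnings_before_reduction': 2}]

    Because both quantities are simple running counters, the same logic could in principle be updated online during a run, flagging a possible drop in trust the moment the operator lowers the autonomy level.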

    Trust in Human-Robot Interaction Within Healthcare Services: A Review Study

    There has always been a dilemma over the extent to which humans can rely on machines in different activities of daily living, ranging from riding in a self-driving car to having an iRobot vacuum clean the living room. However, when it comes to healthcare settings, where robots are intended to work next to humans, decision-making becomes difficult because the repercussions may jeopardize people's lives. That has led scientists and engineers to take a step back and think outside the box. Placing the concept of trust under scrutiny, this study helps decipher complex human-robot interaction (HRI) attributes. Screening the essential constituents of what shapes trust in the human mind while working with a robot provides deeper insight into how to build and consolidate that trust. In the physiotherapeutic realm, this feeds into improved safety protocols and levels of comfort, as well as increased efficacy of robot-assisted physical therapy and rehabilitation. This paper provides a comprehensive framework for measuring trust by introducing several scenarios that are prevalent in rehabilitation environments. The proposed framework highlights the importance of clear communication from physicians about how they expect the robot to intervene in a human-centered task. In addition, it reflects on patients' perception of robot assistance. Ultimately, recommendations are made to maximize the trust earned from patients, which in turn feeds into enhancing the efficacy of the therapy. This is an ongoing study; the authors are working with a local hospital to implement this know-how in a real-world application.