9 research outputs found

    Using Trust in Automation to Enhance Driver-(Semi)Autonomous Vehicle Interaction and Improve Team Performance

    Full text link
    Trust in robots has been gathering attention from multiple directions, as it has special relevance in theoretical descriptions of human-robot interaction. It is essential for reaching high acceptance and usage rates of robotic technologies in society, as well as for enabling effective human-robot teaming. Researchers have been trying to model the development of trust in robots to improve the overall “rapport” between humans and robots. Unfortunately, miscalibration of trust in automation is a common issue that jeopardizes the effectiveness of automation use. It happens when a user’s trust levels are not appropriate to the capabilities of the automation being used. Users can be under-trusting the automation, when a “lack of trust” leads them not to use functionalities the machine can perform correctly, or over-trusting the automation, when an “excess of trust” leads them to use the machine in situations where its capabilities are not adequate. The main objective of this work is to examine drivers’ trust development in the automated driving system (ADS). We aim to model how risk factors (e.g., false alarms and misses from the ADS) and the short-term interactions associated with these risk factors influence the dynamics of drivers’ trust in the ADS. The driving context facilitates the instrumentation needed to measure trusting behaviors, such as drivers’ eye movements and usage time of the automated features. Our findings indicate that a reliable characterization of drivers’ trusting behaviors, and a consequent estimation of trust levels, is possible. We expect that these techniques will permit the design of ADSs able to adapt their behaviors to adjust drivers’ trust levels. This capability could avoid under- and over-trusting, which could harm drivers’ safety or performance.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/167861/1/ISTDM-2021-Extended-Abstract-0118.pdf
    Description of ISTDM-2021-Extended-Abstract-0118.pdf : Paper
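    To make the trust-calibration idea above concrete, here is a minimal, illustrative Python sketch of a discrete-time trust update driven by the risk events the abstract names (false alarms and misses), followed by a comparison of trust against automation capability. The update rule, weights, thresholds, and function names are assumptions chosen for illustration, not the model from the paper.

    # Illustrative sketch only: the update weights and the calibration
    # tolerance are assumed values, not parameters from the paper.
    def update_trust(trust, event, gain=0.05, fa_penalty=0.15, miss_penalty=0.30):
        """Return the driver's trust level in [0, 1] after one interaction."""
        if event == "correct":        # ADS behaved as expected
            trust += gain
        elif event == "false_alarm":  # ADS alerted without cause
            trust -= fa_penalty
        elif event == "miss":         # ADS failed to alert; typically costlier
            trust -= miss_penalty
        return min(max(trust, 0.0), 1.0)

    def calibration(trust, capability, tolerance=0.1):
        """Classify trust relative to automation capability (both in [0, 1])."""
        if trust > capability + tolerance:
            return "over-trusting"
        if trust < capability - tolerance:
            return "under-trusting"
        return "calibrated"

    # Example: two misses erode an initially high trust level well below
    # what a 0.7-capable ADS would warrant.
    t = 0.8
    for e in ["correct", "miss", "miss"]:
        t = update_trust(t, e)
    print(round(t, 2), calibration(t, capability=0.7))  # -> 0.25 under-trusting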

    Human-AI Collaboration in Healthcare: A Review and Research Agenda

    Get PDF
    Advances in Artificial Intelligence (AI) have led to the rise of human-AI collaboration. In healthcare, such collaboration could mitigate the shortage of qualified healthcare workers, assist overworked medical professionals, and improve the quality of healthcare. However, many challenges remain, such as biases in clinical decision-making, a lack of trust in AI, and adoption issues. While there is a growing number of studies on the topic, they are in disparate fields, and we lack a summary understanding of this research. To address this issue, this study conducts a literature review to examine prior research, identify gaps, and propose future research directions. Our findings indicate that there are limited studies about the evolving and interactive collaboration process in healthcare, the complementarity of humans and AI, the adoption and perception of AI, and the long-term impact on individuals and healthcare organizations. Additionally, more theory-driven research is needed to inform the design, implementation, and use of collaborative AI for healthcare and to realize its benefits.

    Exploring Factors Affecting User Trust Across Different Human-Robot Interaction Settings and Cultures

    Get PDF
    Trust is one of the necessary factors for building a successful human-robot interaction (HRI). This paper investigated how human trust in robots differs across HRI scenarios in two cultures. We conducted two studies in two countries: Saudi Arabia (study 1) and the United Kingdom (study 2). Each study presented three HRI scenarios: a dog robot guiding people with sight impairments, a teleoperated robot in healthcare, and a manufacturing robot. Study 1 shows that participants' trust perception score (TPS) differed significantly across the three scenarios. However, Study 2 results show only a marginally significant variation in TPS across the scenarios. We also found that the relevance of trust for a given task is an indicator of a participant's trust. Furthermore, the findings showed that trust scores, and the factors affecting users' trust, vary across cultures. The findings identified novel factors that might affect human trust, such as controllability, usability, and risk. The findings direct the HRI community to consider a dynamic and evolving design for modelling human-robot trust, because the factors affecting humans' trust evolve and will vary across different settings and cultures.

    Robot Capability and Intention in Trust-based Decisions across Tasks

    No full text
    DOI: 10.1109/HRI.2019.8673084. 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2019-March, 22 March 2019, pp. 39-4