Using Trust in Automation to Enhance Driver-(Semi)Autonomous Vehicle Interaction and Improve Team Performance
Trust in robots has been gathering attention from multiple directions, as it has special relevance in theoretical descriptions of human-robot interaction. It is essential for reaching high acceptance and usage rates of robotic technologies in society, as well as for enabling effective human-robot teaming. Researchers have been trying to model the development of trust in robots to improve the overall “rapport” between humans and robots. Unfortunately, miscalibration of trust in automation is a common issue that jeopardizes the effectiveness of automation use. It occurs when a user’s trust level is not appropriate to the capabilities of the automation being used. Users can under-trust the automation—failing to use functionalities the machine can perform correctly because of a “lack of trust”—or over-trust it, using the machine, due to an “excess of trust”, in situations where its capabilities are not adequate. The main objective of this work is to examine drivers’ trust development in an automated driving system (ADS). We aim to model how risk factors (e.g., false alarms and misses from the ADS) and the short-term interactions associated with these risk factors influence the dynamics of drivers’ trust in the ADS. The driving context facilitates the instrumentation needed to measure trusting behaviors, such as drivers’ eye movements and usage time of the automated features. Our findings indicate that a reliable characterization of drivers’ trusting behaviors, and a consequent estimation of trust levels, is possible. We expect these techniques to permit the design of ADSs able to adapt their behavior to adjust drivers’ trust levels. This capability could avoid under- and over-trust, which can harm drivers’ safety and performance.
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/167861/1/ISTDM-2021-Extended-Abstract-0118.pdf
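The trust-calibration notions in this abstract (a trust estimate that decays after false alarms and misses, and a comparison of trust against automation capability to flag under- or over-trust) can be sketched as follows. All function names, update weights, and thresholds here are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of trust dynamics and calibration checking.
# Weights and the tolerance band are made-up illustration values.

def update_trust(trust, event, gain=0.05, fa_penalty=0.15, miss_penalty=0.25):
    """Update a 0..1 trust estimate after one interaction event."""
    if event == "correct":          # ADS behaved as expected
        trust += gain
    elif event == "false_alarm":    # ADS alerted without cause
        trust -= fa_penalty
    elif event == "miss":           # ADS failed to alert on a real hazard
        trust -= miss_penalty
    return min(1.0, max(0.0, trust))

def calibration(trust, capability, tol=0.15):
    """Compare trust to automation capability (both on a 0..1 scale)."""
    if trust > capability + tol:
        return "over-trust"
    if trust < capability - tol:
        return "under-trust"
    return "calibrated"

trust = 0.8
for event in ["miss", "false_alarm", "correct"]:
    trust = update_trust(trust, event)
print(calibration(trust, capability=0.9))   # trust dropped well below capability
```

An adaptive ADS of the kind the abstract envisions would run a loop like this online, feeding behavioral measurements (eye movements, feature usage time) into the trust estimate rather than discrete event labels.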
Human-AI Collaboration in Healthcare: A Review and Research Agenda
Advances in Artificial Intelligence (AI) have led to the rise of human-AI collaboration. In healthcare, such collaboration could mitigate the shortage of qualified healthcare workers, assist overworked medical professionals, and improve the quality of healthcare. However, many challenges remain, such as investigating biases in clinical decision-making, the lack of trust in AI, and adoption issues. While there is a growing number of studies on the topic, they are spread across disparate fields, and we lack a summary understanding of this research. To address this issue, this study conducts a literature review to examine prior research, identify gaps, and propose future research directions. Our findings indicate that there are limited studies about the evolving and interactive collaboration process in healthcare, the complementarity of humans and AI, the adoption and perception of AI, and the long-term impact on individuals and healthcare organizations. Additionally, more theory-driven research is needed to inform the design, implementation, and use of collaborative AI for healthcare and to realize its benefits.
Exploring Factors Affecting User Trust Across Different Human-Robot Interaction Settings and Cultures
Trust is one of the necessary factors for building a successful human-robot interaction (HRI). This paper investigated how human trust in robots differs across HRI scenarios in two cultures. We conducted two studies in two countries: Saudi Arabia (Study 1) and the United Kingdom (Study 2). Each study presented three HRI scenarios: a dog robot guiding people with sight impairments, a teleoperated robot in healthcare, and a manufacturing robot. Study 1 showed that participants' trust perception score (TPS) differed significantly across the three scenarios, whereas Study 2 showed only a marginally significant variation in TPS across the scenarios. We also found that the relevance of trust for a given task is an indicator of a participant's trust. Furthermore, the findings showed that trust scores and the factors affecting users' trust vary across cultures. The findings identified novel factors that might affect human trust, such as controllability, usability, and risk. They direct the HRI community to consider a dynamic and evolving design for modelling human-robot trust, because the factors affecting humans' trust are evolving and will vary across different settings and cultures.
Human and AI Trust: Trust Attitude Measurement Instrument Development
With the current progress of Artificial Intelligence (AI) technology and its increasingly broader applications, trust is seen as a required criterion for AI usage, acceptance, and deployment. A robust measurement instrument is essential to correctly evaluate trust from a human-centered perspective. This paper describes the development and validation process of a trust measurement instrument, which follows psychometric principles and consists of a 16-item trust scale. The instrument was built explicitly for research in human-AI interaction, to measure trust attitudes towards AI systems from a layperson's (non-expert) perspective, in the context of AI medical support systems (specifically cancer/health prediction). The results of the six-stage evaluation show that the proposed trust measurement instrument is empirically reliable and valid for systematically measuring and comparing non-experts' trust in AI medical support systems.
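Psychometric validation of a scale like this typically includes an internal-consistency check such as Cronbach's alpha. The sketch below scores a 16-item Likert-style instrument and computes alpha from scratch; the 16-item structure matches the paper, but the response data and function names are fabricated for illustration:

```python
# Hedged sketch: scoring a 16-item trust scale and checking internal
# consistency with Cronbach's alpha. Response data is made up.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """responses: one list per respondent, one rating per item."""
    k = len(responses[0])                       # number of items (16 here)
    items = list(zip(*responses))               # transpose to per-item columns
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - item_var / total_var)

# Three hypothetical respondents on a 5-point scale, 16 items each.
data = [
    [4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4],
    [2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2, 3, 2],
    [5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5],
]
print(round(cronbach_alpha(data), 3))   # close to 1.0: items move together
```

A real six-stage validation would of course involve far more respondents plus factor analysis and convergent/discriminant validity checks; this only illustrates the reliability arithmetic.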
Meaningful Explanation Effect on User's Trust in an AI Medical System: Designing Explanations for Non-Expert Users
Whereas most research in AI system explanation for healthcare applications looks at developing algorithmic explanations targeted at AI experts or medical professionals, the questions we raise are: How do we build meaningful explanations for laypeople? And how does a meaningful explanation affect users' trust perceptions? Our research investigates how the key factors affecting human-AI trust change in the light of human expertise, and how to design explanations specifically targeted at non-experts.
By means of a stage-based design method, we map the ways laypeople understand AI explanations in a User Explanation Model. We also map both medical professionals' and AI experts' practice in an Expert Explanation Model. A Target Explanation Model is then proposed, which represents how experts' practice and laypeople's understanding can be combined to design meaningful explanations. Design guidelines for meaningful AI explanations are proposed, and a prototype of an AI system explanation for non-expert users in a breast cancer scenario is presented and assessed on how it affects users' trust perceptions.
Robot Capability and Intention in Trust-based Decisions across Tasks
14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 22 March 2019, pp. 39-4. DOI: 10.1109/HRI.2019.8673084