Evaluating people's perceptions of trust in a robot in a repeated interactions study
Authors
A Rossi
AY Lee
DH McKnight
EJ de Visser
J Lee
K Dautenhahn
MM de Graaf
MP Haselhuhn
N Ambady
O Schilke
P Chekroun
R Ht
RC Mayer
SC Voelpel
SD Gosling
T Kanda
T Nomura
Publication date
6 November 2021
Publisher
Springer Science and Business Media LLC
Abstract
Funding Information: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642667 (Safety Enables Cooperation in Uncertain Robotic Environments - SECURE). KD acknowledges funding from the Canada 150 Research Chairs Program.

Publisher Copyright: © 2020, Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of an article published as 'Rossi A., Dautenhahn K., Koay K.L., Walters M.L., Holthaus P. (2020) Evaluating People’s Perceptions of Trust in a Robot in a Repeated Interactions Study. In: Wagner A.R. et al. (eds) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_38'.

Trust has been established to be a key factor in fostering human-robot interactions. However, trust can change over time according to different factors, including a breach of trust due to a robot’s error. In this exploratory study, we observed people’s interactions with a companion robot in a real house, adapted for human-robot interaction experimentation, over three weeks. The interactions happened in six scenarios in which a robot performed different tasks under two different conditions. Each condition included fourteen tasks performed by the robot, either correctly, or with errors with severe consequences on the first or last day of interaction. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their trust in the robot. We evaluated participants’ trust in the robot by observing their decision to trust the robot during the emergency scenario, and by collecting their views through questionnaires. We concluded that there is a correlation between the timing of an error with severe consequences performed by the robot and the corresponding loss of trust of the human in the robot.
In particular, people’s trust is subject to the initial mental formation.
Available Versions
University of Hertfordshire Research Archive
oai:uhra.herts.ac.uk:2299/2378...
Last updated on 30/01/2021
Crossref
Last updated on 11/08/2021
University of Hertfordshire Research Archive
oai:uhra.herts.ac.uk:2299/2376...
Last updated on 30/01/2021