
    The use of UTAUT and Post Acceptance models to investigate the attitude towards a telepresence robot in an educational setting

    (1) Background: Over the last decade, research in robotics has created several opportunities for innovation in student education. However, despite scientific evidence, strong scepticism still surrounds the use of robots in some social fields, such as personal care and education; (2) Methods: In this research, we present a new tool, the HANCON model, developed by merging and extending the constructs of two solid, proven models: the Unified Theory of Acceptance and Use of Technology (UTAUT) model, used to examine the factors that may influence the decision to use a telepresence robot as an instrument in educational practice, and the Post Acceptance Model, used to evaluate acceptability after actual use of a telepresence robot. The new tool is implemented and used to study the acceptance of a Double telepresence robot by 112 pre-service teachers in an educational setting; (3) Results: Analysis of the experimental results predicts and demonstrates a positive attitude towards the use of a telepresence robot in a school setting and confirms the applicability of the model in an educational context; (4) Conclusions: The constructs of the HANCON model could predict and explain the acceptance of social telepresence robots in social contexts.

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

    Does a Loss of Social Credibility Impact Robot Safety?

    This position paper discusses the safety-related functions performed by assistive robots and explores the relationship between trust and effective safety risk mitigation. We identify a measure of the robot’s social effectiveness, termed social credibility, and discuss how social credibility may be gained and lost. This paper’s contribution is the identification of a link between social credibility and safety-related performance. Accordingly, we draw on analyses of existing systems to demonstrate how an assistive robot’s safety-critical functionality can be impaired by a loss of social credibility. In addition, we discuss some of the consequences of prioritising either safety-related functionality or social engagement. We propose identifying a mixed-criticality scheduling algorithm in order to maximise both safety-related performance and social engagement.
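The mixed-criticality idea in the abstract above can be illustrated with a minimal greedy sketch: safety-critical tasks always run (ordered by deadline), while social-engagement tasks are admitted only in the remaining slack of a time budget. This is an illustrative assumption, not the algorithm from the paper; the `Task` fields, criticality labels, and budget model are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                       # tasks sort by deadline only
    name: str = field(compare=False)
    criticality: str = field(compare=False)  # "safety" or "social" (assumed labels)
    duration: float = field(compare=False)

def schedule(tasks, time_budget):
    """Greedy mixed-criticality ordering: safety tasks are unconditional;
    social tasks run only if they still fit within the time budget."""
    safety = sorted(t for t in tasks if t.criticality == "safety")
    social = sorted(t for t in tasks if t.criticality == "social")
    order, used = [], 0.0
    for t in safety:                      # safety work is never dropped
        order.append(t.name)
        used += t.duration
    for t in social:                      # social work fills leftover slack
        if used + t.duration <= time_budget:
            order.append(t.name)
            used += t.duration
    return order
```

Under this toy model, a social task that would overrun the budget is simply skipped, which mirrors the trade-off the paper raises: deprioritising social engagement preserves safety margins but erodes social credibility.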

    Getting to know Pepper: Effects of people’s awareness of a robot’s capabilities on their trust in the robot

    © 2018 Association for Computing Machinery. This work investigates how human awareness of a social robot’s capabilities relates to trusting this robot to handle different tasks. We present a user study that relates knowledge at different quality levels to participants’ ratings of trust. Secondary school pupils were asked to rate their trust in the robot after three types of exposure: a video demonstration, a live interaction, and a programming task. The study revealed that the pupils’ trust increased across different domains after each session, indicating that the more awareness human users have of a robot, the more they trust it.