8 research outputs found

    Meeting the Gaze of the Robot: A Phenomenological Analysis on Human-Robot Empathy

    This paper discusses the possibility of the phenomenon of empathy between humans and robots, starting from what happens during their eye contact. First, it draws on the most relevant results of HRI studies on this matter to show the most important effects of the robot's gaze on human emotions and behaviour. Secondly, these effects are compared to what happens during the phenomenon of empathy between humans, taking inspiration from the studies of Edmund Husserl and Edith Stein. Finally, similarities and differences between human-human and human-robot empathy are conceptualized through Merleau-Ponty's idea of flesh, the extended bodily element of the world. If there is a common concept of body, one that includes both machine-bodies and living bodies, then a transcorporeal analogy takes place, explaining why the phenomenon of empathy occurs in both human-human and human-robot interactions.

    Can children take advantage of Nao gaze-based hints during gameplay?

    This paper presents a study that analyzes the effects of a robot's gaze hints on children's performance in a card-matching game. We conducted a within-subjects study in which children played the card game "Memory" in the presence of a robot tutor in two sessions. In one session, the robot gave hints to help the child find matching cards by looking at the correct match; in the other session, the robot only looked at the child and gave no help. Our findings show that the gaze hints (help condition) made the matching task significantly easier and that children needed significantly fewer tries than without help. This study provides guidelines on how to design interactive behaviors for robots taking the role of tutors to elicit help-seeking behavior in children.
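
    The paper reports a study rather than an implementation, but the help condition amounts to a simple condition-dependent gaze policy. A minimal Python sketch of such a policy follows; the Card type and all function and parameter names are illustrative assumptions, not code from the study.

from dataclasses import dataclass

@dataclass
class Card:
    position: int  # slot on the board
    value: str     # picture on the card face

def gaze_target(condition, flipped_card, board, child_position):
    """Return where the robot tutor should look on this turn.

    condition      -- "help" or "no_help", the two study sessions
    flipped_card   -- the Card the child has just turned over, or None
    board          -- list of face-down Cards still in play
    child_position -- where the child's face is, for social gaze
    """
    if condition == "help" and flipped_card is not None:
        # Help condition: look at the position of the matching card.
        for card in board:
            if card.value == flipped_card.value and card.position != flipped_card.position:
                return card.position
    # No-help condition (or nothing to hint at): look at the child.
    return child_position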

    Gaze-based interaction for effective tutoring with social robots

    Humanoid-based protocols to study social cognition

    Social cognition is broadly defined as the way humans understand and process their interactions with other humans. In recent years, humans have become more and more used to interacting with non-human agents, such as technological artifacts. Although these interactions have so far been restricted to human-controlled artifacts, they will soon include interactions with embodied and autonomous mechanical agents, i.e., robots. This challenge has motivated an area of research devoted to investigating human reactions towards robots, widely referred to as Human-Robot Interaction (HRI). Classical HRI protocols often rely on explicit measures, e.g., subjective reports, and therefore cannot quantify the crucial implicit social cognitive processes that are evoked during an interaction. This thesis aims to develop a link between cognitive neuroscience and HRI to study social cognition. This approach overcomes methodological constraints of both fields, allowing researchers to trigger and capture the mechanisms of real-life social interactions while ensuring high experimental control. The present PhD work demonstrates this through the systematic study of the effect of online eye contact on gaze-mediated orienting of attention.

    The study presented in Publication I adapts the gaze-cueing paradigm from cognitive science to an objective neuroscientific HRI protocol and investigates whether gaze-mediated orienting of attention is sensitive to the establishment of eye contact. The study replicates classic screen-based findings of gaze-mediated attentional orienting at both behavioral and neural levels, highlighting the feasibility and the scientific value of adding neuroscientific methods to HRI protocols.

    The study presented in Publication II examines whether and how real-time eye contact affects the dual-component model of joint attention orienting. To this end, cue validity and stimulus onset asynchrony are also manipulated. The results show an interactive effect of the strategic (cue validity) and social (eye contact) top-down components on the bottom-up reflexive component of gaze-mediated orienting of attention.

    The study presented in Publication III examines subjective engagement and the attribution of human likeness to the robot depending on whether eye contact is established during a joint attention task. Subjective reports show that eye contact increases human-likeness attribution and feelings of engagement with the robot compared to a no-eye-contact condition.

    The study presented in Publication IV investigates whether eye contact established by a humanoid robot affects objective measures of engagement (i.e., joint attention and fixation durations) and subjective feelings of engagement with the robot during a joint attention task. Results show that eye contact modulates attentional engagement, with longer fixations on the robot's face and a cueing effect when the robot establishes eye contact. In contrast, subjective reports show that the feeling of being engaged with the robot in an HRI protocol is not modulated by real-time eye contact. This study further supports the necessity of adding objective methods to HRI.

    Overall, this PhD work shows that embodied artificial agents can advance the theoretical knowledge of social cognitive mechanisms by serving as sophisticated interactive stimuli of high ecological validity and excellent experimental control. Moreover, humanoid-based protocols grounded in cognitive science can advance the HRI community by informing it about the exact cognitive mechanisms that are present during HRI.
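
    The abstract contains no code, but the gaze-cueing paradigm it builds on has a well-known trial structure: a gaze cue toward one side, a target appearing at the cued (valid) or uncued (invalid) location after a stimulus onset asynchrony (SOA), and a response time that is typically faster on valid trials. A minimal Python sketch of one such trial, including the eye-contact manipulation mentioned above, follows; the stub functions, SOA values, and validity proportion are illustrative assumptions rather than the thesis protocol.

import random
import time

SOAS = [0.25, 0.50]   # candidate SOAs in seconds (assumed values)
CUE_VALIDITY = 0.5    # proportion of trials where the cue points at the target

# Stubs standing in for the robot, display, and response APIs a real protocol would use.
def robot_look_at(target):
    print(f"[robot gaze -> {target}]")

def show_target(side):
    print(f"[target appears on the {side}]")

def wait_for_keypress():
    return input("left/right? ").strip()

def run_trial(eye_contact: bool) -> dict:
    cued_side = random.choice(["left", "right"])
    valid = random.random() < CUE_VALIDITY
    target_side = cued_side if valid else ("right" if cued_side == "left" else "left")
    soa = random.choice(SOAS)

    if eye_contact:
        robot_look_at("participant")  # establish eye contact before cueing
    robot_look_at(cued_side)          # the gaze cue
    time.sleep(soa)                   # wait out the stimulus onset asynchrony
    t0 = time.time()
    show_target(target_side)
    response = wait_for_keypress()    # participant localizes the target
    return {"eye_contact": eye_contact, "valid": valid, "soa": soa,
            "rt": time.time() - t0, "correct": response == target_side}

    The cueing effect is then the mean response time on invalid trials minus the mean on valid trials, computed separately for the eye-contact and no-eye-contact conditions.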

    High Social Acceptance of Head Gaze Loosely Synchronized with Speech for Social Robots

    This research demonstrates that robots can achieve socially acceptable interactions, using loosely synchronized head gaze-speech, without understanding the semantics of the dialog. Prior approaches used tightly synchronized head gaze-speech, which requires significant human effort and time to manually annotate synchronization events in advance, restricting interactive dialog and requiring the operator to act as a puppeteer. This approach has two novel aspects. First, it uses affordances in the sentence structure, time delays, and typing to achieve autonomous synchronization of head gaze and speech. Second, it is implemented within a behavioral robotics framework derived from 32 previous implementations. The efficacy of the loosely synchronized approach was validated through a 93-participant 1 × 3 (loosely synchronized head gaze-speech, tightly synchronized head gaze-speech, no head gaze-speech) between-subjects experiment using the "Survivor Buddy" rescue robot in a victim management scenario. The results indicated that the social acceptance of loosely synchronized head gaze-speech is similar to that of tightly synchronized head gaze-speech (manual annotation) and preferred over the no-head-gaze-speech case. These findings contribute to the study of social robotics in three ways. First, the research contributes to a fundamental understanding of the role of social head gaze in social acceptance, and of the production of social head gaze. Second, it shows that autonomously generated head gaze-speech coordination is both possible and acceptable. Third, the behavioral robotics framework simplifies the creation, analysis, and comparison of implementations.
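
    The abstract describes the approach only at a high level. A minimal Python sketch of what "loose" synchronization from sentence structure alone might look like follows: no semantics, just punctuation-derived clause boundaries and an estimated speaking rate. The schedule format, gaze targets, and the speaking-rate constant are illustrative assumptions, not the paper's framework.

import re

WORDS_PER_SECOND = 2.5  # assumed average speaking rate

def gaze_schedule(utterance):
    """Yield (time_offset_seconds, gaze_target) events for one utterance.

    No semantic understanding is used: clause boundaries come from
    punctuation alone, and timing from a fixed speaking-rate estimate.
    """
    t = 0.0
    clauses = [c for c in re.split(r"[,.;:?!]+", utterance) if c.strip()]
    for i, clause in enumerate(clauses):
        # Avert the head while "formulating" each clause, and return gaze
        # to the listener for the final clause -- a common human pattern.
        yield (round(t, 2), "away" if i < len(clauses) - 1 else "listener")
        t += len(clause.split()) / WORDS_PER_SECOND

for event in gaze_schedule("Stay calm; help is on the way, okay?"):
    print(event)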

    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors

    In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and gain the opportunity to help people more and more. To play this role successfully, intelligent robots must not only interact effectively with humans while being taught, but humans must also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues to display their understanding of and interest in the material; for example, they sometimes nod, make eye contact, or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided the cues are legible to human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first work investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements with various speeds and different kinds of pauses, displayed by a student robot when practising a physical task, impact teachers' understanding of the robot's attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., when those nonverbal factors are systematically adjusted.

    Human students sometimes make mistakes while practising a task, but teachers may be forgiving of them. Intelligent robots are machines and may therefore behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes made when practising a recently taught task if the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot's errors in a household task (i.e., preparing food) are perceived to be. We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust than smaller ones, even a small error can significantly damage trust in a trainee robot. This effect is also correlated with participants' personality traits. The present work contributes by extending HRI knowledge concerning human teachers' understanding of robots, in a specific teaching scenario where teachers observe behaviours whose primary goal is accomplishing a physical task.