
    Methodology and themes of human-robot interaction: a growing research field

    Original article can be found at: http://www.intechweb.org/journal.php?id=3 (distributed under the Creative Commons Attribution License). This article discusses the challenges of Human-Robot Interaction, which is a highly inter- and multidisciplinary area. Themes that are important in current research in this lively and growing field are identified, and selected work relevant to these themes is discussed. Peer reviewed.

    Humanoid-based protocols to study social cognition

    Social cognition is broadly defined as the way humans understand and process their interactions with other humans. In recent years, humans have become more and more used to interacting with non-human agents, such as technological artifacts. Although these interactions have so far been restricted to human-controlled artifacts, they will soon include interactions with embodied and autonomous mechanical agents, i.e., robots. This challenge has motivated an area of research dedicated to investigating human reactions towards robots, widely referred to as Human-Robot Interaction (HRI). Classical HRI protocols often rely on explicit measures, e.g., subjective reports, and therefore cannot quantify the crucial implicit social cognitive processes that are evoked during an interaction. This thesis aims to develop a link between cognitive neuroscience and HRI to study social cognition. This approach overcomes methodological constraints of both fields, allowing researchers to trigger and capture the mechanisms of real-life social interactions while ensuring high experimental control. The present PhD work demonstrates this through the systematic study of the effect of online eye contact on gaze-mediated orienting of attention. The study presented in Publication I adapts the gaze-cueing paradigm from cognitive science to an objective neuroscientific HRI protocol and investigates whether gaze-mediated orienting of attention is sensitive to the establishment of eye contact. The study replicates classic screen-based findings of gaze-mediated attentional orienting at both the behavioral and neural levels, highlighting the feasibility and the scientific value of adding neuroscientific methods to HRI protocols. The study presented in Publication II examines whether and how real-time eye contact affects the dual-component model of joint attention orienting; to this end, cue validity and stimulus onset asynchrony are also manipulated. The results show an interactive effect of strategic (cue validity) and social (eye contact) top-down components on the bottom-up reflexive component of gaze-mediated orienting of attention. The study presented in Publication III examines subjective engagement and the attribution of human likeness to the robot depending on whether eye contact is established during a joint attention task. Subjective reports show that eye contact increases human-likeness attribution and feelings of engagement with the robot compared to a no-eye-contact condition. The study presented in Publication IV investigates whether eye contact established by a humanoid robot affects objective measures of engagement (i.e., joint attention and fixation durations) and subjective feelings of engagement with the robot during a joint attention task. Results show that eye contact modulates attentional engagement, with longer fixations on the robot's face and a cueing effect when the robot establishes eye contact. In contrast, subjective reports show that the feeling of being engaged with the robot in an HRI protocol is not modulated by real-time eye contact. This study further supports the necessity of adding objective methods to HRI. Overall, this PhD work shows that embodied artificial agents can advance the theoretical knowledge of social cognitive mechanisms by serving as sophisticated interactive stimuli with high ecological validity and excellent experimental control. Moreover, humanoid-based protocols grounded in cognitive science can advance the HRI community by informing it about the exact cognitive mechanisms that are present during HRI.
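
To make the Publication II manipulation concrete, here is a minimal sketch, assuming synthetic trial data and hypothetical column names (not the thesis code), of how a gaze-cueing effect can be computed from reaction times split by eye contact, cue validity, and stimulus onset asynchrony:

```python
# Minimal sketch: computing a gaze-cueing (validity) effect from trial-level
# reaction times. All data and column names below are invented for illustration.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
n = 400
trials = pd.DataFrame({
    "eye_contact": rng.choice(["contact", "no_contact"], n),
    "soa_ms":      rng.choice([250, 500], n),
    "validity":    rng.choice(["valid", "invalid"], n),
})
# Simulate slightly faster responses on validly cued trials (a reflexive cueing effect).
trials["rt_ms"] = (350
                   + np.where(trials["validity"] == "valid", -15, 15)
                   + rng.normal(0, 30, n))

# Cueing effect = mean RT(invalid) - mean RT(valid), per eye-contact x SOA cell.
means = (trials.groupby(["eye_contact", "soa_ms", "validity"])["rt_ms"]
               .mean()
               .unstack("validity"))
means["cueing_effect_ms"] = means["invalid"] - means["valid"]
print(means[["cueing_effect_ms"]])
```

A positive cueing effect (invalid minus valid reaction times) indicates reflexive orienting toward the cued location; the question examined in Publication II is how that effect changes with eye contact and SOA.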

    Humans Can’t Resist Robot Eyes – Reflexive Cueing With Pseudo-Social Stimuli

    Joint attention is a key mechanism for humans to coordinate their social behavior. Whether and how this mechanism can benefit the interaction with pseudo-social partners such as robots is not well understood. To investigate the potential of robot eyes as pseudo-social cues that ease attentional shifts, we conducted an online study using a modified spatial cueing paradigm. The cue was either a non-social (arrow), a pseudo-social (two versions of an abstract robot eye), or a social stimulus (photographed human eyes) that was presented either paired (e.g., two eyes) or single (e.g., one eye). The latter was varied to separate two assumed triggers of joint attention: the social nature of the stimulus, and the additional spatial information that is conveyed only by paired stimuli. The results support the assumption that pseudo-social stimuli, in our case abstract robot eyes, have the potential to facilitate human-robot interaction, as they trigger reflexive cueing. To our surprise, actual social cues did not evoke reflexive shifts in attention. We suspect that the robot eyes elicited the desired effects because they were human-like enough while at the same time being much easier to perceive than human eyes, thanks to a design with strong contrasts and clean lines. Moreover, the results indicate that for reflexive cueing it does not seem to make a difference whether the stimulus is presented singly or paired. This might be a first indicator that joint attention depends on the stimulus's social nature or familiarity rather than on its spatial expressiveness. Overall, the study suggests that using paired abstract robot eyes might be a good design practice for fostering a positive perception of a robot and for facilitating joint attention as a precursor of coordinated behavior. Peer reviewed.
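
As an illustration of the design just described, here is a minimal sketch, under stated assumptions (the repetition count and factor labels are hypothetical, not the authors' materials), of how a balanced trial list crossing cue type, pairing, and cue validity could be generated:

```python
# Minimal sketch of the cue-type x pairing x validity factorial design.
import itertools
import random

# Factor levels taken from the study description; labels are assumptions.
cue_types  = ["arrow", "robot_eye_v1", "robot_eye_v2", "human_eyes"]
pairings   = ["single", "paired"]
validities = ["valid", "invalid"]
reps = 10  # repetitions per cell, an arbitrary choice for illustration

trials = [
    {"cue": cue, "pairing": pairing, "validity": validity}
    for cue, pairing, validity in itertools.product(cue_types, pairings, validities)
    for _ in range(reps)
]
random.shuffle(trials)  # randomize presentation order across the session
print(len(trials), "trials; first trial:", trials[0])
```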

    Confirmation Report: Modelling Interlocutor Confusion in Situated Human Robot Interaction

    Human-Robot Interaction (HRI) is an important but challenging field focused on improving the interaction between humans and robots so as to make the interaction more intelligent and effective. However, building natural conversational HRI is an interdisciplinary challenge for scholars, engineers, and designers. It is generally assumed that the pinnacle of human-robot interaction will be fluid, naturalistic conversational interaction that in important ways mimics how humans interact with each other. This is, of course, challenging at a number of levels, and in particular there are considerable difficulties when it comes to naturally monitoring and responding to the user's mental state. On the topic of mental states, one area that has received little attention to date is monitoring the user for possible confusion states. Confusion is a non-trivial mental state which can be seen as having at least two substates, associated with positive and negative emotions respectively. In the former, productive confusion, people are motivated to resolve their current difficulties. In the latter, unproductive confusion, people may lose their engagement and motivation to overcome those difficulties, which in turn may even lead them to drop the current conversation. While there has been some research on confusion monitoring and detection, it has been limited, with most work focused on evaluating confusion states in online learning tasks. The central hypothesis of this research is that monitoring and detecting confusion states in users is essential to fluid task-centric HRI, and that it should be possible to detect such confusion and adjust interaction policies to mitigate it. In this report, I expand on this hypothesis and set out several research questions. I also provide a comprehensive literature review before outlining the work done to date towards my research hypothesis, and I set out plans for future experimental work.
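
As an illustrative sketch only (the thresholds, signal names, and policy actions below are assumptions, not the report's model), the distinction between productive and unproductive confusion could drive a simple mitigation policy like this:

```python
# Toy confusion-state classifier and mitigation policy; all values are assumed.
from dataclasses import dataclass

@dataclass
class UserSignals:
    confusion_score: float   # 0..1, e.g. estimated from facial and vocal cues
    engagement_score: float  # 0..1, e.g. estimated from gaze and response latency

def confusion_state(s: UserSignals) -> str:
    """Map raw signals to one of the states discussed above (thresholds are assumed)."""
    if s.confusion_score < 0.4:
        return "no_confusion"
    # Confused but still engaged -> productive; confused and disengaging -> unproductive.
    return "productive" if s.engagement_score >= 0.5 else "unproductive"

def mitigation_policy(state: str) -> str:
    """Pick a hypothetical dialogue action for each detected state."""
    return {
        "no_confusion": "continue_task",
        "productive":   "offer_hint",             # support the user's own effort
        "unproductive": "simplify_and_reengage",  # rephrase, slow down, check in
    }[state]

print(mitigation_policy(confusion_state(UserSignals(confusion_score=0.7, engagement_score=0.3))))
```

The point of the sketch is the branching: a confused but still-engaged user gets support for their own effort, while a disengaging user triggers a re-engagement strategy, mirroring the hypothesis that detected confusion should adjust interaction policies.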

    From social brains to social robots: applying neurocognitive insights to human-robot interaction

    Amidst the fourth industrial revolution, social robots are resolutely moving from fiction to reality. With sophisticated artificial agents becoming ever more ubiquitous in daily life, researchers across different fields are grappling with questions concerning how humans perceive and interact with these agents and the extent to which the human brain incorporates intelligent machines into our social milieu. This theme issue surveys and discusses the latest findings, current challenges and future directions in neuroscience- and psychology-inspired human–robot interaction (HRI). Critical questions are explored from a transdisciplinary perspective centred around four core topics in HRI: technical solutions for HRI, development and learning for HRI, robots as a tool to study social cognition, and the moral and ethical implications of HRI. Integrating findings from diverse but complementary research fields, including the social and cognitive neurosciences, psychology, artificial intelligence and robotics, the contributions showcase ways in which research from disciplines spanning the biological sciences, social sciences and technology deepens our understanding of the potential and limits of robotic agents in human social life.

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Automatic Context-Driven Inference of Engagement in HMI: A Survey

    An integral part of seamless human-human communication is engagement, the process by which two or more participants establish, maintain, and end their perceived connection. To develop successful human-centered human-machine interaction applications, automatic engagement inference is therefore one of the tasks required to achieve engaging interactions between humans and machines and to make machines attuned to their users, thereby enhancing user satisfaction and technology acceptance. Several factors contribute to engagement state inference, including the interaction context and the interactants' behaviours and identities. Indeed, engagement is a multi-faceted and multi-modal construct that requires highly accurate analysis and interpretation of contextual, verbal and non-verbal cues. Thus, developing an automated and intelligent system that accomplishes this task has so far proven challenging. This paper presents a comprehensive survey of previous work on engagement inference for human-machine interaction, covering interdisciplinary definitions, engagement components and factors, publicly available datasets, ground-truth assessment, and the most commonly used features and methods, serving as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement inference capability. An in-depth review across embodied and disembodied interaction modes, and an emphasis on the interaction context in which engagement perception modules are integrated, set the presented survey apart from existing surveys.
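
To ground the idea of fusing contextual, verbal, and non-verbal cues into a single engagement estimate, here is a minimal sketch under stated assumptions (synthetic data, hypothetical feature choices, and a plain logistic-regression classifier rather than any model from the surveyed work):

```python
# Toy engagement classifier over a fused multimodal feature vector; all data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
# One fused feature vector per interaction window; the three features stand in for
# contextual, verbal, and non-verbal cues (names and values are assumptions).
X = np.column_stack([
    rng.random(n),   # contextual: e.g. normalized task difficulty
    rng.random(n),   # verbal: e.g. normalized speech rate
    rng.random(n),   # non-verbal: e.g. proportion of gaze directed at the agent
])
# Toy ground truth: engagement driven mostly by gaze and speech activity.
y = ((0.6 * X[:, 2] + 0.3 * X[:, 1] + 0.1 * rng.random(n)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```

In practice, as the survey emphasizes, the feature set and ground truth would come from annotated multimodal interaction data rather than the toy labels used here.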