
    Visual Attention and Eye Gaze During Multiparty Conversations with Distractions

    Our objective is to develop a computational model that predicts visual attention behavior for an embodied conversational agent. During interpersonal interaction, gaze provides feedback signals and directs the flow of conversation; simultaneously, in a dynamic environment, gaze directs attention to peripheral movements. An embodied conversational agent should therefore not only employ social gaze for interpersonal interaction but also possess human attention attributes, so that its eyes and facial expression portray and convey appropriate distraction and engagement behaviors.
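    The trade-off this abstract describes, between socially motivated gaze and distraction by peripheral movement, can be pictured as a scoring problem over candidate gaze targets. Below is a minimal, hypothetical Python sketch of that idea; the weights, the habituation term, and the target names are illustrative assumptions, not the paper's model.

```python
# A toy gaze-target selector: social relevance competes with peripheral
# motion saliency, and a habituation term discourages staring. All numbers
# here are illustrative assumptions, not parameters from the paper.
from dataclasses import dataclass

@dataclass
class GazeTarget:
    name: str
    conversational_relevance: float  # e.g. 1.0 for the current speaker
    motion_saliency: float           # strength of peripheral movement, 0..1
    habituation: float = 0.0         # grows while fixated; suppresses re-fixation

def select_gaze_target(targets, w_social=0.7, w_distract=0.3):
    """Pick the target with the highest combined attention score."""
    def score(t):
        return (w_social * t.conversational_relevance
                + w_distract * t.motion_saliency
                - t.habituation)
    return max(targets, key=score)

targets = [
    GazeTarget("speaker", conversational_relevance=1.0, motion_saliency=0.1),
    GazeTarget("listener", conversational_relevance=0.4, motion_saliency=0.0),
    GazeTarget("doorway", conversational_relevance=0.0, motion_saliency=0.9),
]
print(select_gaze_target(targets).name)  # "speaker" under these weights
```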

    Believing in BERT: Using Expressive Communication to Enhance Trust and Counteract Operational Error in Physical Human-Robot Interaction

    Strategies are necessary to mitigate the impact of unexpected behavior in collaborative robotics, and research to develop solutions is lacking. Our aim here was to explore the benefits of an affective interaction, as opposed to a more efficient, less error-prone, but non-communicative one. The experiment took the form of an omelet-making task, with a wide range of participants interacting directly with BERT2, a humanoid robot assistant. The results, which have significant implications for design, suggest that efficiency is not the most important aspect of performance for users: a personable, expressive robot was found to be preferable over a more efficient one, despite a considerable trade-off in the time taken to perform the task. Our findings also suggest that a robot exhibiting human-like characteristics may make users reluctant to ‘hurt its feelings’; they may even lie in order to avoid this.

    A Pilot Study with a Novel Setup for Collaborative Play of the Humanoid Robot KASPAR with Children with Autism

    This article describes a pilot study in which a novel experimental setup, involving an autonomous humanoid robot, KASPAR, participating in a collaborative, dyadic video game, was implemented and tested with children with autism, all of whom had impairments in playing socially and communicating with others. The children alternated between playing the collaborative video game with a neurotypical adult and playing the same game with the humanoid robot, being exposed to each condition twice. The equipment and experimental setup were designed to observe whether the children would engage in more collaborative behaviours while playing the video game and interacting with the adult than while performing the same activities with the humanoid robot. The article describes the development of the experimental setup and its first evaluation in a small-scale exploratory pilot study. The purpose of the study was to gain experience with the operational limits of the robot and the dyadic video game, to determine what changes should be made to the systems, and to gain experience with analysing the data from this study in order to conduct a more extensive evaluation in the future. Based on our observations of the children's experiences in playing the cooperative game, we determined that while the children enjoyed both playing the game and interacting with the robot, the game should be made simpler to play and more explicitly collaborative in its mechanics, and the robot should be more explicit in its speech and more structured in its interactions. Results show that the children found the activity more entertaining, appeared more engaged in playing, and displayed better collaborative behaviours with their partners in the second sessions of playing with human adults than during their first sessions. (For the purposes of this article, ‘partner’ refers to the human or robotic agent that interacts with the children with autism; we are not using the term's other meanings, which refer to specific relationships or emotional involvement between two individuals.) One way of explaining these findings is that the children's intermediary play session with the humanoid robot influenced their subsequent play session with the human adult. However, a longer and more thorough study would have to be conducted in order to better interpret these findings. Furthermore, although the children with autism were more interested in and entertained by the robotic partner, they showed more examples of collaborative play and cooperation while playing with the human adult.

    Multi-Sensors Engagement Detection with a Robot Companion in a Home Environment

    Workshop FW1 "Assistance and Service Robotics in a Human Environment", Session 3: Behavioral Modeling and Human/Robot Interaction. Recognition of intentions is an unconscious cognitive process vital to human communication. This skill enables anticipation and increases the quality of interactive exchanges between humans. In the context of engagement, i.e. the intention to interact, non-verbal signals are used to communicate this intention to the partner. In this paper, we investigate methods to detect these signals so that a robot can know when it is about to be addressed. Classically, the human's position and speed and the human-robot distance are used to detect engagement. Our hypothesis is that this is not enough in the context of a home environment. The chosen approach integrates multimodal features gathered by a robot equipped with a Kinect. The evaluation of this new detection method on a corpus collected in spontaneous conditions highlights its robustness and validates the use of such techniques in real environments. Experimental validation shows that using multimodal sensors gives better precision and recall than a detector using only spatial and speed features. We also demonstrate that seven multimodal features are sufficient to provide a good engagement detection score.
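    The comparison this abstract reports, spatial-only versus multimodal features evaluated by precision and recall, can be reproduced in miniature. The Python sketch below trains two classifiers on synthetic data; the seven feature columns and the random-forest model are illustrative assumptions, not the paper's feature set or method.

```python
# Engagement detection with spatial features only vs. a fuller multimodal
# feature set, scored by precision and recall. Data is synthetic: the
# ground-truth label depends on both spatial and non-spatial columns, so
# the multimodal model should score higher.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 1000
# Columns 0-2: spatial features (position, speed, distance);
# columns 3-6: extra modalities (e.g. head pose, gaze, speech activity).
X = rng.normal(size=(n, 7))
y = ((X[:, 2] < 0) & (X[:, 4] + X[:, 5] > 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, cols in [("spatial only", slice(0, 3)), ("multimodal ", slice(0, 7))]:
    clf = RandomForestClassifier(random_state=0).fit(X_tr[:, cols], y_tr)
    pred = clf.predict(X_te[:, cols])
    print(name, "precision=%.2f recall=%.2f"
          % (precision_score(y_te, pred), recall_score(y_te, pred)))
```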

    Seeking Attention: Testing a Model of Initiating Service Interactions

    Loth S, Huth K, de Ruiter J. Seeking Attention: Testing a Model of Initiating Service Interactions. In: Hernández-López M de la O, Fernández Amaya L, eds. A Multidisciplinary Approach to Service Encounters. Studies in Pragmatics. Vol 14. Amsterdam: Brill; 2015: 229-247.

    Robot navigation in dense human crowds: Statistical models and experimental studies of human–robot cooperation

    We consider the problem of navigating a mobile robot through dense human crowds. We begin by exploring a fundamental impediment to classical motion planning algorithms, the “freezing robot problem”: once the environment surpasses a certain level of dynamic complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. We argue that this problem can be avoided if the robot anticipates human cooperation, and accordingly we develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a “multiple goal” extension that models the goal-driven nature of human decision making. We validate this model with an empirical study of robot navigation in dense human crowds (488 runs), specifically testing how cooperation models affect navigation performance. The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators at crowd densities nearing 0.8 people/m^2, while a state-of-the-art non-cooperative planner exhibits unsafe behavior more than three times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m^2. We also show that our non-cooperative and reactive planners capture the salient characteristics of nearly any dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
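    The core idea, re-weighting jointly sampled robot and human trajectories by a cooperative collision-avoidance potential, can be illustrated in a few lines of numpy. This is a sketch of the interaction-potential principle under simplified assumptions (noisy straight-line trajectory samples, a hand-picked Gaussian repulsion term), not the paper's interacting-Gaussian-process implementation.

```python
# Sample candidate robot and human paths, down-weight joint samples in
# which the two come too close (both agents are assumed to cooperate in
# avoiding collision), then execute the highest-weight robot path.
import numpy as np

rng = np.random.default_rng(1)
T, n_samples = 10, 200
robot_goal, human_goal = np.array([5.0, 0.0]), np.array([0.0, 5.0])

def sample_paths(start, goal, n):
    """Noisy straight-line trajectories from start towards goal."""
    line = start + np.linspace(0, 1, T)[:, None] * (goal - start)
    return line[None] + rng.normal(scale=0.3, size=(n, T, 2))

robot = sample_paths(np.zeros(2), robot_goal, n_samples)
human = sample_paths(np.array([5.0, 5.0]), human_goal, n_samples)

# Interaction potential: joint samples with small robot-human distances
# receive exponentially small weight.
dists = np.linalg.norm(robot - human, axis=2)           # (n_samples, T)
weights = np.exp(-np.sum(np.exp(-dists**2 / 0.5), axis=1))
best = np.argmax(weights)                               # plan to execute
print("min robot-human distance on chosen plan: %.2f m"
      % np.linalg.norm(robot[best] - human[best], axis=1).min())
```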

    On the impact of different types of errors on trust in human-robot interaction: Are laboratory-based HRI experiments trustworthy?

    Trust is a key dimension of human-robot interaction (HRI) and has often been studied in the HRI community. A common challenge arises from the difficulty of assessing trust levels in ecologically invalid environments. We present in this paper two independent laboratory studies, totalling 160 participants, in which we investigate the impact of different types of errors on resulting trust, using both behavioural and subjective measures of trust. While we found a (weak) general effect of errors on the reported and observed level of trust, no significant differences between the types of errors were found in either of our studies. We discuss this negative result in light of our experimental protocols, and argue for the community to move towards alternative methodologies to assess trust.

    Automatic Context-Driven Inference of Engagement in HMI: A Survey

    An integral part of seamless human-human communication is engagement, the process by which two or more participants establish, maintain, and end their perceived connection. To develop successful human-centered human-machine interaction applications, automatic engagement inference is therefore one of the tasks required to achieve engaging interactions between humans and machines, and to make machines attuned to their users, thereby enhancing user satisfaction and technology acceptance. Several factors contribute to engagement state inference, including the interaction context and the interactants' behaviours and identities. Indeed, engagement is a multi-faceted, multi-modal construct that requires highly accurate analysis and interpretation of contextual, verbal, and non-verbal cues; developing an automated, intelligent system that accomplishes this task has thus proven challenging. This paper presents a comprehensive survey of previous work on engagement inference for human-machine interaction, covering interdisciplinary definitions, engagement components and factors, publicly available datasets, ground-truth assessment, and the most commonly used features and methods, serving as a guide for the development of future human-machine interaction interfaces with reliable context-aware engagement inference capability. An in-depth review across embodied and disembodied interaction modes, and an emphasis on the interaction context in which engagement perception modules are integrated, set the presented survey apart from existing surveys.
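    The survey's central claim, that the same behavioural cue can imply different engagement levels depending on the interaction context, is easy to illustrate. The toy Python sketch below conditions cue weights on context; the contexts, cue names, and weights are invented for illustration and are not taken from the survey.

```python
# Context-conditioned engagement inference: identical observed cues yield
# different engagement scores under different interaction contexts.
from typing import Dict

# Per-context weights for how strongly each cue signals engagement
# (illustrative assumptions only).
CONTEXT_WEIGHTS: Dict[str, Dict[str, float]] = {
    # In open conversation, gaze on the partner signals engagement...
    "conversation": {"gaze_on_partner": 0.6, "speech_activity": 0.3,
                     "lean_forward": 0.1},
    # ...but during a shared task, gaze on the task matters instead.
    "shared_task": {"gaze_on_partner": 0.1, "gaze_on_task": 0.6,
                    "lean_forward": 0.3},
}

def infer_engagement(context: str, cues: Dict[str, float]) -> float:
    """Weighted sum of observed cues (each in 0..1) under a given context."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(w * cues.get(cue, 0.0) for cue, w in weights.items())

cues = {"gaze_on_partner": 0.0, "gaze_on_task": 1.0, "lean_forward": 0.8}
print(infer_engagement("conversation", cues))  # low:  0.08
print(infer_engagement("shared_task", cues))   # high: 0.84
```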