
    Controlling the Gaze of Conversational Agents

    We report on a pilot experiment that investigated the effects of different eye gaze behaviours of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like gaze behaviour with two other versions: one in which shifts in gaze were kept minimal, and one in which shifts occurred at random. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the agent that behaved according to the human-like patterns appreciated the agent more than participants who conversed with the other agents. Conversations with this version also proceeded more efficiently: participants needed less time to complete their task.

    A single case study of a family-centred intervention with a young girl with cerebral palsy who is a multimodal communicator

    Background - This paper describes the impact of a family-centred intervention that used video to enhance communication in a young girl with cerebral palsy. This single case study describes how the video-based intervention worked in the context of multimodal communication, which included use of a high-tech augmentative and alternative communication (AAC) device. The paper also includes the family's perspective on the video intervention and its impact on their family. Methods - The study was based on the premise that the video interaction guidance intervention would increase attentiveness between participants during communication. It tests the hypothesis that eye gaze is a fundamental prerequisite for all of the child's communicative initiatives, regardless of modality. Multimodality is defined as the range of communicative behaviours used by the child, coded as AAC communication, vocalizations (intelligible and unintelligible), sign communication, nodding and pointing. Change was analysed over time with multiple tests both pre- and post-intervention. Data were analysed in INTERACT, a software package for analysing behavioural observation data. Behaviours were analysed for frequency and duration, contingency and co-occurrence. Results - Results indicated increased duration of the mother's and the girl's eye gaze, increased frequency and duration of the girl's AAC communication, and significant change in the frequency [χ²(5, n = 1) = 13.25, P < 0.05] and duration [χ²(5, n = 1) = 12.57, P < 0.05] of the girl's multimodal communicative behaviours. Contingency and co-occurrence analysis indicated that the mother's eye gaze followed by AAC communication was the most prominent change between the pre- and post-intervention assessments. Conclusions - Following the video intervention, there was a trend for increased eye gaze in both mother and girl and increased AAC communication in the girl. The family's perspective concurs with these results.
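
    As a hedged illustration of the kind of chi-square comparison reported above, the sketch below runs a goodness-of-fit test over six coded behaviour categories (AAC, intelligible and unintelligible vocalizations, sign, nodding, pointing), which yields df = 5 as in the abstract. The counts are invented for illustration and are not the study's data.

        # Chi-square goodness-of-fit over six coded behaviour categories.
        # All counts are hypothetical, not the study's data.
        from scipy.stats import chisquare

        # Order: AAC, intelligible voc., unintelligible voc., sign, nod, point
        pre  = [12, 30, 25, 8, 15, 10]   # pre-intervention counts (hypothetical)
        post = [28, 26, 18, 9, 12, 7]    # post-intervention counts (hypothetical)

        # Scale the pre-intervention distribution to the post-intervention total
        # so observed and expected frequencies sum to the same value (df = 6 - 1 = 5).
        expected = [c / sum(pre) * sum(post) for c in pre]

        stat, p = chisquare(f_obs=post, f_exp=expected)
        print(f"chi2(5) = {stat:.2f}, p = {p:.4f}")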

    Explorations in engagement for humans and robots

    This paper explores the concept of engagement, the process by which individuals in an interaction start, maintain and end their perceived connection to one another. The paper reports on one aspect of engagement among human interactors: the effect of tracking faces during an interaction. It also describes the architecture of a robot that can participate in conversational, collaborative interactions with engagement gestures. Finally, the paper reports findings of experiments with human participants who interacted with a robot that either performed or did not perform engagement gestures. Results of the human-robot studies indicate that people become engaged with robots: they direct their attention to the robot more often in interactions where engagement gestures are present, and they find interactions more appropriate when engagement gestures are present than when they are not.

    Analyzing Nonverbal Listener Responses using Parallel Recordings of Multiple Listeners

    In this paper we study nonverbal listener responses in a corpus with multiple listeners recorded in parallel. These listeners were led to believe that they were the sole listener, while in fact three persons were listening to the same speaker. The speaker could only see one of the listeners. We analyze the impact of this particular setup on the behavior and perception of the two types of listeners: those the speaker could see and those the speaker could not. Furthermore, we compare the timing and form of the three listeners' nonverbal listening behaviors with each other, and we correlate these behaviors with behaviors of the speaker, such as pauses and whether the speaker is looking at the listeners or not.
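
    As a hedged sketch of the timing analysis described above, the snippet below cross-correlates a listener's nod events with speaker pauses as binary time series. The 10 Hz frame rate, the annotation scheme and the data are all illustrative assumptions, not the corpus's actual format.

        # Correlate listener nods with speaker pauses at a range of lags.
        # Synthetic 10 Hz binary annotations; real corpus format is assumed.
        import numpy as np

        rng = np.random.default_rng(0)
        speaker_pause = (rng.random(600) < 0.1).astype(float)   # 60 s at 10 Hz
        # Make nods tend to follow pauses by ~300 ms (3 frames), plus noise.
        listener_nod = np.roll(speaker_pause, 3)
        listener_nod = np.clip(listener_nod + (rng.random(600) < 0.05), 0, 1)

        # Pearson correlation at lags of -1 s .. +1 s.
        def corr_at(lag):
            return np.corrcoef(speaker_pause, np.roll(listener_nod, -lag))[0, 1]

        best = max(range(-10, 11), key=corr_at)
        print(f"peak correlation at lag {best * 100} ms")  # expect ~ +300 ms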

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation for fluid human-robot communication, ten desiderata are proposed that provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Gaze Behavior, Believability, Likability and the iCat

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research on whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour serves several functions during social interaction, such as mediating conversation flow, communicating emotional information and avoiding distraction by restricting visual input. Several types of eye and head movements are necessary for realizing these functions. We designed and evaluated a gaze behaviour system for the iCat robot that implements realistic models of the major types of eye and head movements found in living beings: vergence, the vestibulo-ocular reflex, smooth pursuit movements and gaze shifts. We discuss how these models are integrated into the software environment of the iCat and can be used to create complex interaction scenarios. We report on some user tests and draw conclusions for future evaluation scenarios.
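
    To make two of the named eye-movement models concrete, the sketch below implements a minimal vestibulo-ocular reflex (counter-rotating the eye to cancel head motion) and a simple proportional smooth-pursuit controller. The one-dimensional state, function names and gains are illustrative assumptions; the iCat's actual software interfaces are not described in the abstract.

        # Minimal 1-D sketch of VOR and smooth pursuit (illustrative only).
        from dataclasses import dataclass

        @dataclass
        class GazeState:
            eye_angle: float = 0.0   # eye-in-head angle, degrees
            head_angle: float = 0.0  # head-in-world angle, degrees

        def vor(state: GazeState, head_velocity: float, dt: float) -> None:
            """Vestibulo-ocular reflex: counter-rotate the eye (gain ~ 1)
            so gaze stays stable in the world while the head moves."""
            state.head_angle += head_velocity * dt
            state.eye_angle -= head_velocity * dt

        def smooth_pursuit(state: GazeState, target: float, dt: float,
                           gain: float = 0.9) -> None:
            """Simple proportional pursuit: drive eye velocity with the
            retinal error between the target and the current gaze angle."""
            gaze = state.head_angle + state.eye_angle
            state.eye_angle += gain * (target - gaze) * dt

        # Example: the head sweeps at 10 deg/s while pursuing a target at 5 deg.
        s = GazeState()
        for _ in range(500):
            vor(s, head_velocity=10.0, dt=0.01)
            smooth_pursuit(s, target=5.0, dt=0.01)
        print(f"gaze = {s.head_angle + s.eye_angle:.2f} deg (target 5.00)")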