
    Where To Look? Automating Attending Behaviors of Virtual Human Characters

    This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions, and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations from psychology, human factors and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Implementing this framework yields several insights: a defined set of parameters that affect the observable effects of attention, a defined vocabulary of looking behaviors for particular motor and cognitive activities, a defined hierarchy of three levels of eye behavior (endogenous, exogenous and idling), and a proposed account of how these levels interact.
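    The competition the abstract describes between deliberate looks, involuntary capture, and idling can be sketched as a simple arbitration rule. This is an illustrative assumption, not the AVA's actual implementation; the `GazeRequest` type, priority scale, and `capture_threshold` value are all hypothetical.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GazeRequest:
        target: str
        priority: float  # salience or task urgency, assumed in [0, 1]

    def select_gaze(endogenous: Optional[GazeRequest],
                    exogenous: Optional[GazeRequest],
                    capture_threshold: float = 0.7) -> str:
        """Pick a gaze target: a strong exogenous event captures attention,
        otherwise a pending deliberate look wins, otherwise the agent idles."""
        if exogenous and exogenous.priority >= capture_threshold:
            return exogenous.target      # involuntary attention capture
        if endogenous:
            return endogenous.target     # deliberate, task-driven look
        return "idle"                    # lapse into idling / free viewing

    # A loud peripheral event overrides the task-driven scanpath...
    print(select_gaze(GazeRequest("workpiece", 0.5), GazeRequest("door-slam", 0.9)))
    # ...while a weak distractor does not.
    print(select_gaze(GazeRequest("workpiece", 0.5), GazeRequest("flicker", 0.2)))
    ```

    In this sketch the three levels form the hierarchy the abstract names: exogenous events can preempt endogenous behavior, and idling is the fallback when neither is active.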

    Visual Attention and Eye Gaze During Multiparty Conversations with Distractions

    Our objective is to develop a computational model to predict visual attention behavior for an embodied conversational agent. During interpersonal interaction, gaze provides signal feedback and directs conversation flow. Simultaneously, in a dynamic environment, gaze also directs attention to peripheral movements. An embodied conversational agent should therefore employ social gaze not only for interpersonal interaction but should also exhibit human-like attention, so that its eyes and facial expression convey appropriate distraction and engagement behaviors.

    Experimenting with the Gaze of a Conversational Agent

    We have carried out a pilot experiment to investigate the effects of different eye gaze behaviors of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like behavior, which we called the optimal version, with two other versions. In one of the other versions the shifts in gaze were kept minimal, and in the other version the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the optimal agent appreciated the agent more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.

    Controlling the Gaze of Conversational Agents

    We report on a pilot experiment that investigated the effects of different eye gaze behaviours of a cartoon-like talking face on the quality of human-agent dialogues. We compared a version of the talking face that roughly implements some patterns of human-like behaviour with two other versions. In one of the other versions the shifts in gaze were kept minimal, and in the other version the shifts occurred randomly. The talking face has a number of restrictions. There is no speech recognition, so questions and replies have to be typed in by the users of the system. Despite this restriction, we found that participants who conversed with the agent that behaved according to the human-like patterns appreciated the agent more than participants who conversed with the other agents. Conversations with the optimal version also proceeded more efficiently: participants needed less time to complete their task.

    Do You See What Eyes See? Implementing Inattentional Blindness

    This paper presents a computational model of visual attention incorporating a cognitive imperfection known as inattentional blindness. We begin by presenting four factors that determine successful attention allocation: conspicuity, mental workload, expectation and capacity. We then propose a framework to study the effects of those factors on an unexpected object and conduct an experiment to measure the corresponding subjective awareness level. Finally, we discuss the application of a visual attention model for conversational agents.
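    One way to picture how the four factors could combine is a scalar awareness score. The multiplicative form below is purely an illustrative assumption, not the paper's actual model; only the four factor names come from the abstract.

    ```python
    # Illustrative sketch: combine conspicuity, mental workload, expectation,
    # and capacity into a single chance that an unexpected object is noticed.
    # High workload lowers awareness; the other three factors raise it.

    def awareness_level(conspicuity: float, workload: float,
                        expectation: float, capacity: float) -> float:
        """All inputs assumed in [0, 1]; returns a value in [0, 1]."""
        for v in (conspicuity, workload, expectation, capacity):
            if not 0.0 <= v <= 1.0:
                raise ValueError("factors must lie in [0, 1]")
        return conspicuity * (1.0 - workload) * expectation * capacity

    # A salient object under low load is likely noticed...
    print(awareness_level(0.9, 0.1, 0.8, 0.9))  # high score
    # ...while the same object under heavy load may go unseen
    # (inattentional blindness).
    print(awareness_level(0.9, 0.9, 0.8, 0.9))  # low score
    ```

    Any monotone combination of the factors would serve the same explanatory purpose; the point is that workload trades off against the other three.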

    Semi-Autonomous Avatars: A New Direction for Expressive User Embodiment

    Computer animated characters are rapidly becoming a regular part of our lives. They are starting to take the place of actors in films and television and are now an integral part of most computer games. Perhaps most interestingly, in on-line games and chat rooms they represent the user visually in the form of avatars, becoming our on-line identities, our embodiments in a virtual world. Currently, online environments such as “Second Life” are being taken up by people who would not traditionally have considered playing games, largely due to a greater emphasis on social interaction. These environments require avatars that are more expressive and that can make on-line social interactions seem more like face-to-face conversations. Computer animated characters come in many different forms. Film characters require a substantial amount of off-line animator effort to achieve high levels of quality; these techniques are not suitable for real-time applications and are not the focus of this chapter. Non-player characters (typically the bad guys) in games use limited artificial intelligence to react autonomously to events in real time. Avatars, however, are completely controlled by their users, reacting to events solely through user commands. This chapter will discuss the distinction between fully autonomous characters and completely controlled avatars, and how the current differentiation may no longer be useful, given that avatar technology may need to include more autonomy to live up to the demands of mass appeal. We will first discuss the two categories and present reasons to combine them. We will then describe previous work in this area and finally present our own framework for semi-autonomous avatars.
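    The core idea of a semi-autonomous avatar can be sketched as a controller that obeys explicit user commands but falls back to autonomous behaviour when the user is idle. This is a minimal sketch of the concept only; the class name, timeout, and action names are hypothetical and not taken from the chapter's framework.

    ```python
    from typing import Optional

    class SemiAutonomousAvatar:
        """Blend direct user control with autonomous behaviour:
        a recent user command wins; otherwise the avatar acts on its own."""

        def __init__(self, idle_timeout: float = 5.0):
            self.idle_timeout = idle_timeout          # seconds of user silence
            self.last_command: Optional[str] = None
            self.last_command_time: float = float("-inf")

        def user_command(self, action: str, now: float) -> None:
            self.last_command = action
            self.last_command_time = now

        def act(self, now: float, autonomous_action: str = "gaze_at_speaker") -> str:
            # Recent user input overrides autonomy; otherwise act autonomously.
            if self.last_command and now - self.last_command_time < self.idle_timeout:
                return self.last_command
            return autonomous_action

    avatar = SemiAutonomousAvatar(idle_timeout=5.0)
    avatar.user_command("wave", now=0.0)
    print(avatar.act(now=1.0))    # user still in control
    print(avatar.act(now=10.0))   # user idle: avatar behaves autonomously
    ```

    Real systems would arbitrate continuously over many behaviour channels (gaze, posture, locomotion) rather than a single action string, but the timeout-based handover captures the basic division of control.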

    Eye Movements and Attention for Behavioural Animation

    This paper describes a simulation of attention behaviour aimed at computer-animated characters. Attention is the focusing of a person’s perception on a particular object. This is useful for computer animation as it determines which objects the character is aware of: information that can be used in the simulation of the character’s behaviour in order to animate the character automatically. The simulation of attention also determines where the character is looking, and so is used to produce gaze behaviour.
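    The link the abstract draws between attention and gaze can be sketched as selecting the most salient visible object as the gaze target. The salience values, object names, and field-of-view test below are illustrative assumptions, not the paper's model.

    ```python
    from typing import Dict, Optional, Set

    def attend(objects: Dict[str, float], field_of_view: Set[str]) -> Optional[str]:
        """Return the most salient object currently in view, or None.
        `objects` maps object name -> salience, assumed in [0, 1]."""
        visible = {name: s for name, s in objects.items() if name in field_of_view}
        if not visible:
            return None
        # The attended object is both what the character is aware of
        # and where its gaze is directed.
        return max(visible, key=visible.get)

    scene = {"ball": 0.8, "chair": 0.3, "door": 0.5}
    # The character looks at the most salient object it can currently see.
    print(attend(scene, field_of_view={"chair", "door"}))
    ```

    Restricting the candidates to the field of view is what makes attention drive behaviour: the character reacts only to objects it is aware of, and its gaze follows the winner of that selection.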

    Semi-Autonomous Avatars and Characters
