144 research outputs found

    Visual Attention and Eye Gaze During Multiparty Conversations with Distractions

    Our objective is to develop a computational model that predicts visual attention behavior for an embodied conversational agent. During interpersonal interaction, gaze provides feedback signals and directs conversation flow. Simultaneously, in a dynamic environment, gaze also directs attention to peripheral movements. An embodied conversational agent should therefore employ social gaze not only for interpersonal interaction but should also exhibit human attention attributes, so that its eyes and facial expression portray and convey appropriate distraction and engagement behaviors.
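    The abstract does not state the model's form. As a purely illustrative sketch, one plausible arbitration scheme between conversational gaze and peripheral distraction follows; the candidate weights, the distraction_gain parameter, and the function name are all hypothetical, not the authors' model.

        # Hypothetical sketch: arbitrate gaze between the speaker, bystanders,
        # and salient peripheral motion; weights are illustrative only.
        import random

        def choose_gaze_target(speaker, bystanders, peripheral_motion,
                               distraction_gain=0.5):
            """Pick a gaze target. peripheral_motion: (target, salience)
            pairs, with salience in [0, 1]."""
            candidates = {speaker: 1.0}        # default: attend to the speaker
            for person in bystanders:
                candidates[person] = 0.2       # occasional glances at listeners
            for target, salience in peripheral_motion:
                # Peripheral movement competes for attention via its salience.
                candidates[target] = (candidates.get(target, 0.0)
                                      + distraction_gain * salience)
            total = sum(candidates.values())
            r = random.uniform(0.0, total)     # sample so behaviour varies
            for target, weight in candidates.items():
                r -= weight
                if r <= 0.0:
                    return target
            return speaker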

    Look together: using gaze for assisting co-located collaborative search

    Gaze information indicates a user's focus, which complements remote collaboration tasks, as distant users can see their partner's focus. In this paper, we apply gaze to co-located collaboration, where users' gaze locations are presented on the same display to help partners collaborate. We integrated several types of gaze indicators into the user interface of a collaborative search system, and we conducted two user studies to understand how gaze enhances coordination and communication between co-located users. Our results show that gaze indeed enhances co-located collaboration, but with a trade-off between the visibility of gaze indicators and user distraction. Users acknowledged that seeing gaze indicators eases communication, because it lets them be aware of their partner's interests and attention. However, users can be reluctant to share their gaze information due to trust and privacy concerns, as gaze potentially divulges their interests.
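    On the visibility/distraction trade-off reported above: a common mitigation is to smooth raw gaze samples before drawing the partner's indicator. A minimal sketch, assuming exponential smoothing of pixel coordinates (the scheme and the constant are assumptions, not the authors' implementation):

        def smooth_gaze(samples, alpha=0.3):
            """samples: iterable of (x, y) gaze points in screen pixels.
            Exponential smoothing: higher alpha tracks the eye more closely
            (a more visible but jittery indicator); lower alpha lags behind
            the eye but distracts less."""
            sx = sy = None
            for x, y in samples:
                if sx is None:
                    sx, sy = float(x), float(y)
                else:
                    sx += alpha * (x - sx)
                    sy += alpha * (y - sy)
                yield (sx, sy)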

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made in tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics.
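    Among the low-level models such a survey covers, one widely cited regularity is the saccadic "main sequence", which relates saccade amplitude approximately linearly to duration. A minimal sketch using textbook constants (roughly 2.2 ms per degree plus a 21 ms intercept; exact values vary across studies):

        def saccade_duration_ms(amplitude_deg, slope=2.2, intercept=21.0):
            """Approximate duration of a saccade of the given amplitude
            (in degrees of visual angle)."""
            return slope * amplitude_deg + intercept

        # e.g. a 10-degree saccade lasts roughly 43 ms under these constants,
        # which an animation system can use to time eyeball rotation.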

    Eye tracking and avatar-mediated communication in immersive collaborative virtual environments

    The research presented in this thesis concerns the use of eye tracking to both enhance and understand avatar-mediated communication (AMC) performed by users of immersive collaborative virtual environment (ICVE) systems. AMC, in which users are embodied by graphical humanoids within a shared virtual environment (VE), is rapidly emerging as a prevalent and popular form of remote interaction. However, compared with video-mediated communication (VMC), which transmits interactants’ actual appearance and behaviour, AMC fails to capture, transmit, and display many channels of nonverbal communication (NVC). This is a significant hindrance to the medium’s ability to support rich interpersonal telecommunication. In particular, oculesics (the communicative properties of the eyes), including gaze, blinking, and pupil dilation, are central nonverbal cues during unmediated social interaction. This research explores the interactive and analytical application of eye tracking to drive the oculesic animation of avatars during real-time communication, and as the primary method of experimental data collection and analysis, respectively. Three distinct but interrelated questions are addressed. First, the thesis considers the degree to which quality of communication may be improved through the use of eye tracking, to increase the nonverbal, oculesic, information transmitted during AMC. Second, the research asks whether users engaged in AMC behave and respond in a socially realistic manner in comparison with VMC. Finally, the degree to which behavioural simulations of oculesics can both enhance the realism of virtual humanoids, and complement tracked behaviour in AMC, is considered. These research questions were investigated over a series of telecommunication experiments investigating scenarios common to computer supported cooperative work (CSCW), and a further series of experiments investigating behavioural modelling for virtual humanoids. The first, exploratory, telecommunication experiment compared AMC with VMC in a three-party conversational scenario. Results indicated that users employ gaze similarly when faced with avatar and video representations of fellow interactants, and demonstrated how interaction is influenced by the technical characteristics and limitations of a medium. The second telecommunication experiment investigated the impact of varying methods of avatar gaze control on quality of communication during object-focused multiparty AMC. The main finding of the experiment was that quality of communication is reduced when avatars demonstrate misleading gaze behaviour. The final telecommunication study investigated truthful and deceptive dyadic interaction in AMC and VMC over two closely-related experiments. Results from the first experiment indicated that users demonstrate similar oculesic behaviour and response in both AMC and VMC, but that psychological arousal is greater following video-based interaction. Results from the second experiment found that the use of eye tracking to drive the oculesic behaviour of avatars during AMC increased the richness of NVC to the extent that more accurate estimation of embodied users’ states of veracity was enabled. Rather than directly investigating AMC, the second series of experiments addressed behavioural modelling of oculesics for virtual humanoids. 
Results from these experiments indicated that oculesic characteristics are highly influential to the perceived realism of virtual humanoids, and that behavioural models are able to complement the use of eye tracking in AMC. The research presented in this thesis explores AMC and eye tracking over a range of collaborative and perceptual studies. The overall conclusion is that eye tracking is able to enhance AMC towards a richer medium for interpersonal telecommunication, and that users' behaviour in AMC is no less socially 'real' than that demonstrated in VMC. However, there are distinct differences between the two communication media, and matching the characteristics of a planned communication with those of the medium itself is critical.
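    As a hedged illustration of one step such a pipeline requires (not the thesis's implementation), the sketch below converts a tracked 3D gaze target, expressed in the eye's local frame in metres, into yaw and pitch angles for an avatar's eyeball; the coordinate conventions are assumptions.

        import math

        def gaze_to_eye_angles(x, y, z):
            """Return (yaw, pitch) in degrees for an eye whose rest gaze is
            along +z, looking at a target at (x, y, z) in the eye's frame."""
            yaw = math.degrees(math.atan2(x, z))
            pitch = math.degrees(math.atan2(y, math.hypot(x, z)))
            return yaw, pitch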

    Sensing, interpreting, and anticipating human social behaviour in the real world

    Low-level nonverbal social signals like glances, utterances, facial expressions, and body language are central to human communicative situations and have been shown to be connected to important high-level constructs, such as emotions, turn-taking, rapport, or leadership. A prerequisite for the creation of social machines that are able to support humans in, e.g., education, psychotherapy, or human resources is the ability to automatically sense, interpret, and anticipate human nonverbal behaviour. While promising results have been shown in controlled settings, automatically analysing unconstrained situations, e.g. in daily-life settings, remains challenging. Furthermore, anticipation of nonverbal behaviour in social situations is still largely unexplored. The goal of this thesis is to move closer to the vision of social machines in the real world. It makes fundamental contributions along the three dimensions of sensing, interpreting, and anticipating nonverbal behaviour in social interactions. First, robust recognition of low-level nonverbal behaviour lays the groundwork for all further analysis steps. Advancing human visual behaviour sensing is especially relevant as the current state of the art is still not satisfactory in many daily-life situations. While many social interactions take place in groups, current methods for unsupervised eye contact detection can only handle dyadic interactions. We propose a novel unsupervised method for multi-person eye contact detection by exploiting the connection between gaze and speaking turns. Furthermore, we make use of mobile device engagement to address the problem of calibration drift that occurs in daily-life usage of mobile eye trackers. Second, we improve the interpretation of social signals in terms of higher-level social behaviours. In particular, we propose the first dataset and method for emotion recognition from bodily expressions of freely moving, unaugmented dyads. Furthermore, we are the first to study low rapport detection in group interactions, as well as to investigate a cross-dataset evaluation setting for the emergent leadership detection task. Third, human visual behaviour is special because it functions as a social signal and also determines what a person is seeing at a given moment in time. Being able to anticipate human gaze opens up the possibility for machines to more seamlessly share attention with humans, or to intervene in a timely manner if humans are about to overlook important aspects of the environment. We are the first to propose methods for the anticipation of eye contact in dyadic conversations, as well as in the context of mobile device interactions during daily life, thereby paving the way for interfaces that are able to proactively intervene and support interacting humans.
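    The multi-person eye contact method is not detailed in the abstract; the following is only a loose sketch of the stated idea, that gaze and speaking turns are statistically linked, so speaking turns can supervise gaze data without manual annotation. The data layout, prototype averaging, and distance threshold are all assumptions.

        from collections import defaultdict

        def eye_contact_prototypes(gaze, speaker, me):
            """Average a listener's 2D gaze direction over the frames in which
            each other person speaks, yielding one prototype direction per
            person ('what my gaze looks like when I look at them')."""
            sums = defaultdict(lambda: [0.0, 0.0, 0])
            for t, (gx, gy) in enumerate(gaze):
                s = speaker[t]                 # who talks at frame t, or None
                if s is not None and s != me:
                    acc = sums[s]
                    acc[0] += gx; acc[1] += gy; acc[2] += 1
            return {s: (sx / n, sy / n) for s, (sx, sy, n) in sums.items() if n}

        def looking_at(sample, prototypes, threshold=0.1):
            """Label a gaze sample as eye contact with the nearest prototype,
            or None if no prototype is within the threshold."""
            gx, gy = sample
            best, best_d = None, threshold
            for person, (px, py) in prototypes.items():
                d = ((gx - px) ** 2 + (gy - py) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = person, d
            return best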

    Seeing Eye to I? The Influence of Self-Video Display Size on Visual Attention and Collaborative Performance in Peer-to-Peer Video Chat

    This thesis examines the influence of self-video size in video chat conversations on visual attention, collaborative performance, grounding, comfort, and distraction during a brainstorming task. Twenty pairs of female university students were randomly assigned to either a large or small self-video condition. Two eye tracking systems simultaneously recorded each pair of participants' gaze across four areas of interest over a 15-minute task. Participants with larger self-video gazed at themselves longer but did not spend a significantly different percentage of the conversation gazing at their partner. Participants estimated with reasonable accuracy how long they looked at each other, but significantly overestimated how long they, and their partners, gazed at their own self-video. A majority of participants found their self-video comforting, and participants with larger displays found it more distracting than those with smaller displays did. Over a third of participants would prefer to chat without their self-video visible.
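    The percentages reported above are the standard output of area-of-interest (AOI) analysis. A minimal sketch of that computation from per-sample AOI labels (the label names are assumptions):

        from collections import Counter

        def aoi_percentages(aoi_labels):
            """aoi_labels: one AOI name per gaze sample, e.g. 'partner' or
            'self_video'. Returns {aoi: percent of samples}."""
            counts = Counter(aoi_labels)
            total = sum(counts.values())
            if total == 0:
                return {}
            return {aoi: 100.0 * n / total for aoi, n in counts.items()}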

    Modelling the Multi in Multi-Party Communication

    This thesis investigates the effects of multimedia communications technology on the interaction of mixed- and same-role groups. The first study explores the effect of video and audio conferencing on small, role-differentiated problem-solving groups in the laboratory. The second laboratory study examines the impact of shared video technology on the communication of role-undifferentiated groups. A multi-faceted analytical approach is employed, including indices of task performance, process and content of communication, patterns of interaction, and subjective user evaluations. Lastly, a field study looks at how the communication process of business meetings is affected by status constraints and audio conferencing technology. The findings show that both video and audio communications technology have similar impacts on the patterns of speaker contributions in groups of different types and sizes, and that the extent of their effect is influenced by the presence or absence of role differences between group members, whether experimentally manipulated in the laboratory or organisationally assigned in a naturalistic setting. Technology mediation appears to exaggerate the impact of status and role, such that group members speak in more disparate amounts and interact less freely than in face-to-face groups; in particular, it exaggerates the dominance of one individual. Surprisingly, multimedia conferencing technology can support free and equal participation in groups whose speakers have similar roles, but evidence of its effect on speakers of similar status is equivocal. The implications for communication outcomes and the design of communications technology are discussed.
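    The finding that mediated group members "speak in more disparate amounts" is the kind of claim typically quantified with a dispersion index over per-speaker talk time. As a hedged illustration (not necessarily the thesis's measure), the Gini coefficient, where 0 means perfectly equal participation and values near 1 mean one speaker dominates:

        def gini(talk_times):
            """Gini coefficient of per-speaker talk durations (seconds)."""
            xs = sorted(talk_times)
            n, total = len(xs), sum(xs)
            if n == 0 or total == 0:
                return 0.0
            cum = sum((i + 1) * x for i, x in enumerate(xs))
            return (2.0 * cum) / (n * total) - (n + 1.0) / n

        # e.g. gini([300, 300, 300]) == 0.0 (equal participation), while
        # gini([10, 10, 880]) is about 0.64 (one speaker dominates).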
    • 

    corecore