    A Study on Visual Focus of Attention Recognition from Head Pose in a Meeting Room

    This paper presents a study on the recognition of the visual focus of attention (VFOA) of meeting participants based on their head pose. Contrary to previous studies on the topic, in our set-up the potential VFOA of people is not restricted to the other meeting participants, but includes environmental targets (table, slide screen). This has two consequences. First, it increases the number of possible ambiguities in identifying the VFOA from the head pose. Second, due to our particular set-up, the identification of the VFOA from head pose cannot rely on an incomplete representation of the pose (the pan alone), but requires the full head pointing information (pan and tilt). In this paper, using a corpus of 8 meetings of 8 minutes on average, featuring 4 persons discussing statements projected on a slide screen, we analyze the above issues by evaluating, through numerical performance measures, the recognition of the VFOA from head pose information obtained either from a magnetic sensor device (the ground truth) or from a vision-based tracking system (head pose estimates). The results clearly show that in complex but realistic situations, it is quite optimistic to believe that the recognition of the VFOA can be based solely on the head pose, as some previous studies have suggested.
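    To make the pan-and-tilt point concrete, here is a minimal sketch of nearest-target VFOA classification. The target names, (pan, tilt) directions, and distance measure are illustrative assumptions, not the paper's actual set-up; the sketch only shows why two targets can share a pan angle yet still be separated once tilt is available.

        import numpy as np

        # Hypothetical (pan, tilt) directions, in degrees, of each candidate
        # VFOA target as seen from one participant's seat (illustrative only).
        TARGETS = {
            "person_left":  (-60.0,   0.0),
            "person_front": (  0.0,   0.0),
            "person_right": ( 60.0,   0.0),
            "slide_screen": ( 30.0,  15.0),   # similar pan to a person...
            "table":        ( 30.0, -40.0),   # ...but clearly separated in tilt
        }

        def classify_vfoa(pan, tilt):
            """Assign the VFOA to the nearest target in (pan, tilt) space.
            Using pan alone, 'slide_screen' and 'table' would be ambiguous."""
            return min(TARGETS, key=lambda name: np.hypot(pan - TARGETS[name][0],
                                                          tilt - TARGETS[name][1]))

        print(classify_vfoa(28.0,  12.0))   # -> slide_screen
        print(classify_vfoa(28.0, -35.0))   # -> table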

    Real-time gaze estimation using a Kinect and a HD webcam

    In human-computer interaction, gaze orientation is an important and promising source of information about the attention and focus of users. Gaze detection can also be an extremely useful metric for analysing human mood and affect. Furthermore, gaze can be used as an input method for human-computer interaction. However, accurate real-time gaze estimation remains an open problem. In this paper, we propose a simple and novel model for estimating, in real time, the gaze direction of a user on a computer screen. The method uses cheap capture devices: an HD webcam and a Microsoft Kinect. We consider the gaze of a user facing forwards to be composed of a local gaze motion produced by eye movement and a global gaze motion driven by face movement. We validate the proposed gaze estimation model and provide an experimental evaluation of the reliability and precision of the method.
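    The decomposition above lends itself to a simple geometric sketch: rotate an eye-in-head gaze vector (local motion) by the head pose rotation (global motion) and intersect the resulting ray with the screen plane. The function name, arguments, and camera/screen geometry below are assumptions for illustration, not the paper's actual formulation.

        import numpy as np

        def gaze_point_on_screen(head_R, eye_dir, eye_pos, screen_z=0.0):
            """Compose global (head) and local (eye) motion into one gaze ray,
            then intersect it with the screen plane z = screen_z.
            head_R:  3x3 head rotation matrix (e.g. from the Kinect)
            eye_dir: unit gaze vector in head coordinates (e.g. from the webcam)
            eye_pos: 3D eye position in the same world frame as the screen
            """
            d = head_R @ eye_dir                # gaze ray in world coordinates
            t = (screen_z - eye_pos[2]) / d[2]  # ray-plane intersection parameter
            return eye_pos + t * d              # 3D point on the screen plane

        # Illustrative geometry: head turned 10 degrees about the vertical axis,
        # eyes looking slightly down, user seated 0.6 m in front of the screen.
        theta = np.radians(10.0)
        head_R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                           [ 0.0,           1.0, 0.0          ],
                           [-np.sin(theta), 0.0, np.cos(theta)]])
        eye_dir = np.array([0.0, -0.1, -1.0])
        eye_dir /= np.linalg.norm(eye_dir)
        print(gaze_point_on_screen(head_R, eye_dir, eye_pos=np.array([0.0, 0.0, 0.6])))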

    Interference between gestures and words

    SIGLE record. Available from British Library Document Supply Centre (BLDSC), DSC:DXN005697, United Kingdom.

    Actions Speak No Louder Than Words: Symmetrical Cross-Modal Interference Effects in the Processing of Verbal and Gestural Information

    Five experiments are reported that investigate the distribution of selective attention to the verbal and nonverbal components of an utterance when conflicting information exists in these channels. A Stroop-type interference paradigm is adopted in which attributes from the verbal and nonverbal dimensions are placed into conflict. Static directional (deictic) gestures and corresponding spoken and written words show symmetrical interference (Experiments 1, 2, and 3), as do directional arrows and spoken words (Experiment 4). This symmetry is maintained when the task is switched from a manual keypress to a verbal naming response (Experiment 5), suggesting that the mutual influence of the two dimensions is independent of spatial stimulus-response compatibility. It is concluded that the results are consistent with a model of interference in which information from pointing gestures and speech is integrated prior to the response-selection stage of processing.

    How do eye-gaze and facial expression interact?

    Previous research has demonstrated an interaction between eye gaze and selected facial emotional expressions, whereby the perception of anger and happiness is impaired when the eyes are horizontally averted within a face, but the perception of fear and sadness is enhanced under the same conditions. The current study reexamined these claims over six experiments. In the first three experiments, the categorization of happy and sad expressions (Experiments 1 and 2) and of angry and fearful expressions (Experiment 3) was impaired when eye gaze was averted, compared with direct-gaze conditions. Experiment 4 replicated these findings in a rating task that combined all four expressions within the same design. Experiments 5 and 6 then showed that the previous findings, in which the perception of selected expressions is enhanced under averted gaze, are stimulus- and task-bound. The results are discussed in relation to research on facial expression processing and visual attention.

    Visual Focus of Attention Recognition in the Ambient Kitchen
