3 research outputs found

    Investigating the midline effect for visual focus of attention recognition

    This paper addresses the recognition of people’s visual focus of attention (VFOA), the discrete version of gaze indicating who is looking at whom or what. In the absence of high-definition images, we rely on people’s head pose to recognize the VFOA. In contrast to most previous works, which assumed a fixed mapping between head pose directions and gaze target directions, we investigate novel gaze models documented in psychovision that produce a dynamic (temporal) mapping between them. This mapping accounts for two important factors affecting the head and gaze relationship: the shoulder orientation defining a person's gaze midline varies over time, and gaze shifts from frontal to the side involve different head rotations than the reverse. Evaluated on a public dataset and on data recorded with the humanoid robot Nao, the method exhibits better adaptivity, often producing better performance than state-of-the-art approaches.
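    The dynamic mapping described above can be illustrated with a minimal sketch. The function below is hypothetical (not the paper's actual model): it assumes the head rotates only a fraction `kappa` of the way from the body midline toward the gaze target, so the gaze pan is recovered by extrapolating the head rotation relative to the current shoulder (midline) orientation, which may itself change over time.

    ```python
    def estimate_gaze_pan(head_pan, midline_pan, kappa=0.5):
        """Sketch of a midline-based head/gaze model (illustrative only).

        head_pan    -- observed head pan angle, degrees
        midline_pan -- current shoulder/midline orientation, degrees
        kappa       -- assumed fraction of the gaze shift carried by the
                       head (0 < kappa <= 1); 0.5 is an arbitrary example
        """
        # Extrapolate: the head covers only kappa of the midline-to-target
        # rotation, so divide the observed head offset by kappa.
        return midline_pan + (head_pan - midline_pan) / kappa
    ```

    For example, a head turned 20 degrees from a frontal midline with `kappa = 0.5` implies a gaze target roughly 40 degrees to the side; updating `midline_pan` each frame is what makes the mapping dynamic rather than fixed.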

    Camera-based estimation of student's attention in class

    Two essential elements of classroom lecturing are the teacher and the students. This human core can easily be lost in the overwhelming list of technological supplements aimed at improving the teaching/learning experience. We start from the question of whether we can formulate a technological intervention around the human connection, and find indicators which would tell us when the teacher is not reaching the audience. Our approach is based on principles of unobtrusive measurement and social signal processing. Our assumption is that students with different levels of attention will display different non-verbal behaviour during the lecture. Inspired by information theory, we formulated a theoretical background for our assumptions around the idea of synchronization between the sender and receiver, and between several receivers focused on the same sender. Based on this foundation, we present a novel set of behaviour metrics as the main contribution. Using a camera-based system to observe lectures, we recorded an extensive dataset in order to verify our assumptions. In our first study, on motion, we found that differences in attention are manifested at the level of audience movement synchronization. We formulated the measure of "motion lag" based on the idea that attentive students would share a common behaviour pattern. For our second set of metrics, we explored ways to substitute intrusive eye-tracking equipment in order to record gaze information for the entire audience. To achieve this, we conducted an experiment on the relationship between head orientation and gaze direction. Based on the acquired results, we formulated a model of gaze uncertainty improved over those currently used in similar studies. In combination with improvements in head detection and pose estimation, we extracted measures of audience head and gaze behaviour from our remote recording system. From the collected data we found that synchronization between students' head orientation and the teacher's motion serves as a reliable indicator of students' attentiveness. To illustrate the predictive power of our features, a supervised-learning model was trained, achieving satisfactory results at predicting students' attention.
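    A "motion lag" style measure as described above can be sketched as a lagged cross-correlation between two per-frame motion signals. This is an illustrative reconstruction under stated assumptions, not the thesis's actual metric: both inputs are 1-D arrays of motion magnitudes sampled at the same frame rate, and the returned lag is the shift (in frames) at which the follower signal best correlates with the reference signal.

    ```python
    import numpy as np

    def motion_lag(follower, reference, max_lag=50):
        """Estimate the lag (in frames) at which `follower` best tracks
        `reference`, by maximising the mean product of the standardised
        signals over shifts in [-max_lag, +max_lag].

        A positive result means `follower` trails `reference` by that
        many frames. Illustrative sketch only.
        """
        s = (follower - follower.mean()) / (follower.std() + 1e-9)
        t = (reference - reference.mean()) / (reference.std() + 1e-9)
        best_lag, best_corr = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag > 0:          # follower trails: s[i] ~ t[i - lag]
                a, b = s[lag:], t[:-lag]
            elif lag < 0:        # follower leads
                a, b = s[:lag], t[-lag:]
            else:
                a, b = s, t
            c = np.mean(a * b)
            if c > best_corr:
                best_corr, best_lag = c, lag
        return best_lag
    ```

    Under the synchronization hypothesis, attentive audience members would show small, consistent lags relative to the teacher's motion, while inattentive ones would show weak or unstable correlation.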