
    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
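    To make the idea of a passive, video-only gaze-tracking front end concrete, the sketch below detects eye regions in a single frame and estimates pupil centres by intensity thresholding. It is a minimal illustration, not any of the surveyed methods; the cascade file is a standard OpenCV asset, while the threshold value and camera index are illustrative assumptions.

```python
# Minimal sketch of a passive video-based gaze-tracking front end:
# detect eye regions, then locate the pupil centre in each region.
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_centres(frame):
    """Return approximate pupil centres (x, y) found in one BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    centres = []
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        # Dark-pupil assumption: the pupil is the darkest blob in the eye region.
        _, mask = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
        moments = cv2.moments(mask)
        if moments["m00"] > 0:
            centres.append((x + moments["m10"] / moments["m00"],
                            y + moments["m01"] / moments["m00"]))
    return centres

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)   # any unconstrained video source
    ok, frame = cap.read()
    if ok:
        print(pupil_centres(frame))
    cap.release()
```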

    Combining computer game-based behavioural experiments with high-density EEG and infrared gaze tracking

    Rigorous, quantitative examination of therapeutic techniques anecdotally reported to have been successful in people with autism who lack communicative speech will help guide basic science toward a more complete characterisation of the cognitive profile in this underserved subpopulation, and show the extent to which theories and results developed with the high-functioning subpopulation may apply. This study examines a novel therapy, the "Rapid Prompting Method" (RPM). RPM is a parent-developed communicative and educational therapy for persons with autism who do not speak or who have difficulty using speech communicatively. The technique aims to develop a means of interactive learning by pointing amongst multiple-choice options presented at different locations in space, with the aid of sensory "prompts" which evoke a response without cueing any specific response option. The prompts are meant to draw and to maintain attention to the communicative task – making the communicative and educational content coincident with the most physically salient, attention-capturing stimulus – and to extinguish the sensory–motor preoccupations with which the prompts compete. Video-recorded RPM sessions with nine autistic children aged 8–14 years who lacked functional communicative speech were coded for behaviours of interest.

    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of multiple-camera calibration in the presence of a homogeneous scene, and without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs featuring a long-focal-length analysis camera as well as a short-focal-length registration camera. Thus, we are able to propose an accurate solution which does not require intrinsic variation models, as in the case of zooming cameras. Moreover, the availability of the two views simultaneously in each rig allows for pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
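    As a rough illustration of the registration step described above (not the paper's algorithm), the sketch below estimates the relative pose between two rigs from their wide-field registration cameras by matching salient features and decomposing the essential matrix. The intrinsic matrix K and the grayscale image pair are assumed inputs.

```python
# Illustrative sketch: relative pose between two registration cameras
# from matched salient features (up to an unknown translation scale).
import cv2
import numpy as np

def relative_pose(img_a, img_b, K):
    """Return (R, t) of registration camera B w.r.t. A from grayscale images."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # Robustly estimate the essential matrix, then recover rotation/translation.
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t
```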

    Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction

    The visual focus of attention (VFOA) has been recognized as a prominent conversational cue. We are interested in estimating and tracking the VFOAs associated with multi-party social interactions. We note that in this type of situations the participants either look at each other or at an object of interest; therefore their eyes are not always visible. Consequently both gaze and VFOA estimation cannot be based on eye detection and tracking. We propose a method that exploits the correlation between eye gaze and head movements. Both VFOA and gaze are modeled as latent variables in a Bayesian switching state-space model. The proposed formulation leads to a tractable learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. The method is tested and benchmarked using two publicly available datasets that contain typical multi-party human-robot and human-human interactions.Comment: 15 pages, 8 figures, 6 table
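    The sketch below conveys, in a heavily simplified form, the idea behind a switching-state model for VFOA: a discrete latent target (who or what is being looked at) evolves under a transition matrix, and the observed head direction is modelled as Gaussian noise around each target's direction. The paper's full Bayesian switching state-space model additionally tracks continuous gaze; the targets, angles and noise level below are made-up values for illustration.

```python
# Toy forward filter: P(VFOA target | head yaw observations so far).
import numpy as np

target_dirs = np.array([-0.6, 0.0, 0.7])   # head yaw (rad) toward each VFOA target
A = np.array([[0.90, 0.05, 0.05],          # VFOA transition probabilities
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
sigma = 0.2                                # head-pose observation noise (rad)

def filter_vfoa(head_yaw_sequence):
    """Forward-filter the VFOA belief for each observed head yaw."""
    belief = np.full(len(target_dirs), 1.0 / len(target_dirs))
    out = []
    for yaw in head_yaw_sequence:
        belief = A.T @ belief                                    # predict step
        lik = np.exp(-0.5 * ((yaw - target_dirs) / sigma) ** 2)  # Gaussian emission
        belief = belief * lik
        belief /= belief.sum()                                   # normalise
        out.append(belief.copy())
    return np.array(out)

print(filter_vfoa([0.65, 0.7, 0.1, -0.05]).round(2))
```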