    Calibration-Free Eye Gaze Direction Detection with Gaussian Processes

    In this paper we present a solution for eye gaze detection from a wireless head-mounted camera designed for children aged between 6 and 18 months. Due to the constraints of working with very young children, the system does not seek to be as accurate as other state-of-the-art eye trackers; in exchange, it requires no calibration process from the wearer. Gaussian Process Regression and Support Vector Machines are used to analyse the raw pixel data from the video input and return an estimate of the child's gaze direction. A confidence map is used to determine the accuracy the system can expect at each coordinate in the image. The best accuracy obtained by the system so far is 2.34° on adult subjects; tests with children remain to be done.
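
    As a concrete illustration of the approach described above, the following minimal sketch (our own, not the authors' code) uses Gaussian Process regression to map raw eye-image pixels to a 2D gaze direction and treats the predictive standard deviation as a per-estimate confidence value, from which a confidence map could be accumulated. The image size, kernel choice, and all variable names are assumptions.

```python
# Minimal sketch (not the authors' code): Gaussian Process regression from
# raw eye-image pixels to a 2D gaze direction, with the predictive standard
# deviation serving as a per-estimate confidence value.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Stand-in training data: flattened grayscale eye patches and gaze angles
# (yaw, pitch) in degrees. A real system would use recorded video frames.
X_train = rng.random((200, 32 * 24))       # 200 eye patches, 32x24 pixels
y_train = rng.uniform(-30, 30, (200, 2))   # gaze angles in degrees

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

# Predict gaze for a new frame; the predictive standard deviation acts as a
# confidence estimate, which could be accumulated into a confidence map.
X_new = rng.random((1, 32 * 24))
gaze, std = gpr.predict(X_new, return_std=True)
print("estimated gaze (deg):", gaze[0], "uncertainty:", std[0])
```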

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in the recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    3D gaze cursor: continuous calibration and end-point grasp control of robotic actuators

    © 2016 IEEE. Eye movements are closely related to motor actions and hence can be used to infer motor intentions. Additionally, eye movements are in some cases the only means of communication and interaction with the environment for paralysed and impaired patients with severe motor deficiencies. Despite this, eye-tracking technology still has very limited use as a human-robot control interface, and its applicability is largely restricted to simple 2D tasks that operate on screen-based interfaces and do not suffice for natural physical interaction with the environment. We propose that decoding the gaze position in 3D space rather than in 2D yields a much richer spatial cursor signal that allows users to perform everyday tasks, such as grasping and moving objects, via gaze-based robotic teleoperation. Gaze calibration in 3D is usually slow; we demonstrate here that by using a full 3D trajectory generated by a robotic arm for system calibration, rather than a simple grid of discrete points, gaze calibration in three dimensions can be achieved quickly and with high accuracy. We perform the non-linear regression from eye image to 3D end-point using Gaussian Process regressors, which allows us to handle uncertainty in the end-point estimates gracefully. Our telerobotic system uses a multi-joint robot arm with a gripper and is integrated with our in-house GT3D binocular eye tracker. This prototype system has been evaluated with 7 users in a test environment, yielding gaze-estimation errors of less than 1 cm in the horizontal, vertical, and depth dimensions, and less than 2 cm in overall 3D Euclidean distance. Users reported intuitive, low-cognitive-load control of the system from their first trial and were straightaway able to simply look at an object and, with a wink, command the robot gripper to grasp it.
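
    The calibration idea above lends itself to a compact sketch: regress from binocular pupil coordinates to 3D end-points sampled along a continuous trajectory, then measure the Euclidean error on held-out samples. The synthetic eye model and trajectory below are illustrative assumptions, not the GT3D system.

```python
# Illustrative sketch of continuous 3D gaze calibration: fit a Gaussian
# Process from binocular pupil features to 3D end-points traced along a
# robot trajectory, then evaluate held-out error. All data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# A smooth 3D calibration trajectory, standing in for one traced by the
# robot gripper (units: metres).
t = np.linspace(0, 4 * np.pi, 300)
targets = np.stack(
    [0.3 * np.cos(t), 0.3 * np.sin(t), 0.5 + 0.1 * t / t[-1]], axis=1
)

# Synthetic binocular pupil features: a smooth nonlinear function of the
# 3D target plus tracker noise (stands in for real eye-image features).
pupils = np.tanh(targets @ rng.standard_normal((3, 4)))
pupils += 0.01 * rng.standard_normal(pupils.shape)

gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
).fit(pupils[::2], targets[::2])      # calibrate on half the trajectory

pred = gpr.predict(pupils[1::2])      # evaluate on held-out samples
err = np.linalg.norm(pred - targets[1::2], axis=1)
print(f"mean 3D Euclidean error: {err.mean():.4f} m")
```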

    Characterization of slow and fast phase nystagmus

    A review of the current literature on the analog and digital processing of vestibular and optokinetic nystagmus reveals little agreement in the methods used by various labs. The strategies for detecting saccades (the fast-phase velocity component of nystagmus) vary between labs, and most of the processing methods have not been evaluated and validated against a standard database. A survey was made of major vestibular labs in the U.S. that perform computer analyses of vestibular and optokinetic responses to stimuli, and a baseline was established from which to standardize data acquisition and analysis programs. The concept of an Error Index was employed as the criterion for evaluating the performance of the vestibular analysis software. This performance criterion is based on the detection of saccades and is the average of the percentages of missed detections and false detections. Evaluation of the programs produced results for lateral gaze with saccadic amplitudes of one, two, three, five, and ten degrees at various signal-to-noise ratios. In addition, results were obtained for sinusoidal pursuit at 0.05, 0.10, and 0.50 Hz with saccades from one to ten degrees at various signal-to-noise ratios. Selection of the best program was based on performance in lateral gaze with three degrees of saccadic amplitude and in the 0.10 Hz sinusoid with three degrees of saccadic amplitude.
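
    Since the Error Index is defined above simply as the average of the missed-detection and false-detection percentages, it reduces to a few lines of code. The sketch below is ours; in particular, the choice of denominators (true saccades for misses, reported detections for false alarms) is an assumption about the survey's definition.

```python
# Sketch of the Error Index: the average of the percentage of missed saccade
# detections and the percentage of false detections. Names are illustrative.
def error_index(n_true, n_missed, n_false, n_detected):
    """Average of missed-detection and false-detection percentages."""
    pct_missed = 100.0 * n_missed / n_true      # true saccades not detected
    pct_false = 100.0 * n_false / n_detected    # detections with no saccade
    return 0.5 * (pct_missed + pct_false)

# Example: 40 true saccades with 4 missed; 38 detections, 2 of them spurious.
print(error_index(n_true=40, n_missed=4, n_false=2, n_detected=38))  # ~7.63
```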

    Uncertainty visualization of gaze estimation to support operator-controlled calibration

    In this paper, we investigate how visualization assets can support the qualitative evaluation of gaze-estimation uncertainty. Although eye-tracking data are commonly available, little has been done to visually investigate the uncertainty of recorded gaze information. This paper aims to fill this gap through novel uncertainty computation and visualization. Given a gaze processing pipeline, we estimate the location of the gaze position in the world-camera image. To do so, we developed our own gaze data processing pipeline, which gives us access to every stage of the data transformation and thus to the uncertainty computation. To validate this pipeline, we designed an experiment with 12 participants and showed that the correction methods we propose reduced the mean angular error by about 1.32 cm when aggregating all 12 participants' results; after correction of the estimated gaze, the mean angular error is 0.25° (SD = 0.15°). Finally, to support the qualitative assessment of these data, we provide a map that encodes the actual uncertainty from the user's point of view.
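
    For reference, the angular-error metric used above can be computed as the angle between estimated and ground-truth gaze vectors, averaged over samples. This is a generic sketch with synthetic placeholder vectors, not the authors' pipeline.

```python
# Minimal sketch of a mean angular error computation between paired 3D gaze
# vectors; the data below are synthetic placeholders.
import numpy as np

def mean_angular_error(est, gt):
    """Mean angle in degrees between paired gaze vectors."""
    est = est / np.linalg.norm(est, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    cos = np.clip(np.sum(est * gt, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

rng = np.random.default_rng(2)
gt = rng.standard_normal((100, 3))
est = gt + 0.005 * rng.standard_normal((100, 3))  # small estimation noise
print(f"mean angular error: {mean_angular_error(est, gt):.3f} deg")
```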

    GazeDPM: Early Integration of Gaze Information in Deformable Part Models

    An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection, these efforts have so far been restricted to late-integration approaches, which have inherent limitations, such as increased precision without an increase in recall. We propose an early-integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and over a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze-attracting and gaze-repelling areas, the importance of view-specific models, and viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, and robustness to gaze-estimation error.
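
    To make the early-versus-late distinction concrete, the hedged sketch below rasterizes recorded fixations into a smoothed gaze density map and stacks it with visual feature channels before detection, rather than re-scoring detections afterwards. This is our illustration of the general idea, not the GazeDPM implementation; the map size, smoothing sigma, and feature stand-ins are assumptions.

```python
# Sketch of early integration: turn fixations into a gaze density channel
# and stack it with appearance features so a detector can be trained jointly
# on both. Not the GazeDPM code; all parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_channel(fixations, height, width, sigma=15.0):
    """Rasterize (x, y) fixations into a normalized density map."""
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in fixations:
        density[int(y), int(x)] += 1.0
    density = gaussian_filter(density, sigma=sigma)
    return density / (density.max() + 1e-8)

fixations = [(120, 80), (130, 90), (300, 200)]    # toy fixation coordinates
gaze_map = gaze_channel(fixations, height=240, width=480)

# Stack gaze evidence with image features (RGB here as a stand-in for the
# HOG features a DPM would actually use) before running detection.
features = np.random.rand(240, 480, 3).astype(np.float32)
joint_input = np.dstack([features, gaze_map])
print(joint_input.shape)    # (240, 480, 4)
```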

    Gaze-tracking-based interface for robotic chair guidance

    This research focuses on solutions to enhance the quality of life of wheelchair users, specifically by applying a gaze-tracking-based interface to the guidance of a robotized wheelchair. The interface was applied in two different approaches to the wheelchair control system. The first was an assisted control in which the user was continuously involved in steering the wheelchair through the environment and adjusting the inclination of the different parts of the seat via gaze and eye blinks captured by the interface. The second approach took the first steps towards autonomous wheelchair control, in which the wheelchair moves autonomously, avoiding collisions, towards a position defined by the user. To this end, this project developed the basis for obtaining the gaze position relative to the wheelchair and for object detection, so that the optimal route along which the wheelchair should move can be computed in the future. The integration of a robotic arm into the wheelchair to manipulate objects was also considered: this work identifies, among the detected objects, the object of interest indicated by the user's gaze, so that in the future the robotic arm can select and pick up the object the user wants to manipulate. Beyond these two approaches, an attempt was also made to estimate the user's gaze without the software interface; here the gaze is obtained from pupil-detection libraries, a calibration procedure, and a mathematical model that relates pupil positions to gaze. The results of these implementations are analysed in this work, including some of the limitations encountered, and future improvements are proposed with the aim of increasing the independence of wheelchair users.
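
    The pupil-to-gaze mathematical model mentioned at the end is commonly a low-order polynomial fitted during calibration. The sketch below (our own; the calibration points are invented) fits a second-order polynomial by least squares to map detected pupil coordinates to gaze coordinates.

```python
# Sketch of a polynomial calibration model relating pupil positions to gaze,
# fitted by least squares on known fixation targets. Numbers are invented.
import numpy as np

def poly_features(p):
    """Second-order polynomial features of 2D pupil positions."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)

# Calibration: the user fixates known targets while pupil positions are
# recorded (normalized image coordinates).
pupil = np.array([[0.30, 0.40], [0.55, 0.42], [0.80, 0.45],
                  [0.32, 0.60], [0.57, 0.62], [0.82, 0.65],
                  [0.35, 0.80], [0.60, 0.82], [0.85, 0.85]])
targets = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0],
                    [0.0, 0.5], [0.5, 0.5], [1.0, 0.5],
                    [0.0, 1.0], [0.5, 1.0], [1.0, 1.0]])

coeffs, *_ = np.linalg.lstsq(poly_features(pupil), targets, rcond=None)

# Map a new pupil measurement to an estimated gaze point.
new_pupil = np.array([[0.56, 0.52]])
print("estimated gaze:", poly_features(new_pupil) @ coeffs)
```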