    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly for those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and combines it with the detected touch location to predict the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax correction performance as well as multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by producing significantly larger corrections at larger gap distances, and (3) maintains a significantly large distance between two users' fingers when they interact with the same object. These findings are promising for the development of future parallax-free interactive displays.
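
    To make the geometry concrete, here is a minimal sketch of the ray-casting correction such a method implies: intersect the eye-to-touch line of sight with the display plane behind the protective glass. The coordinate convention, function name, and the numbers in the example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def correct_parallax(eye, touch, gap):
    """Project the detected touch point onto the display plane.

    eye   : 3D eye position in display coordinates (x, y, z),
            with z = distance in front of the touch surface (z > 0).
    touch : (x, y) detected touch on the touch surface (at z = 0).
    gap   : distance between touch surface and display panel (z = -gap).

    Returns the (x, y) point on the display plane that lies on the
    eye -> touch line of sight, i.e. the location the user perceived
    themselves to be touching.
    """
    eye = np.asarray(eye, dtype=float)
    touch3d = np.array([touch[0], touch[1], 0.0])
    direction = touch3d - eye                 # line of sight through the touch
    t = (-gap - eye[2]) / direction[2]        # ray parameter where z = -gap
    hit = eye + t * direction
    return hit[0], hit[1]

# Example (all units cm): eye 60 cm in front of and 20 cm above the
# touch point, with 2 cm of glass between touch surface and display.
print(correct_parallax(eye=(0.0, 20.0, 60.0), touch=(0.0, 0.0), gap=2.0))
```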

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    This paper reviews research on eye-gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye-gaze use cases are identified: desktop, TV, head-mounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome of this review is the realization that standardized methodologies are needed for the performance evaluation of gaze tracking systems, and for consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed.
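
    One ingredient any such standardized evaluation needs is a common unit: gaze accuracy is conventionally reported in degrees of visual angle rather than pixels, since a pixel error means different things at different screen geometries and viewing distances. A minimal sketch of the conversion follows; the function name and example numbers are illustrative, not from the paper.

```python
import math

def pixel_error_to_degrees(err_px, pixel_pitch_mm, viewing_distance_mm):
    """Convert an on-screen gaze error from pixels to degrees of
    visual angle, the unit gaze-accuracy figures are usually quoted in.

    err_px              : Euclidean gaze error on screen, in pixels.
    pixel_pitch_mm      : physical size of one pixel, in millimetres.
    viewing_distance_mm : eye-to-screen distance, in millimetres.
    """
    err_mm = err_px * pixel_pitch_mm
    return math.degrees(math.atan2(err_mm, viewing_distance_mm))

# Example: a 40 px error on a 0.25 mm-pitch monitor viewed at 600 mm
print(pixel_error_to_degrees(40, 0.25, 600))   # ~0.95 degrees
```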

    3D gaze cursor: continuous calibration and end-point grasp control of robotic actuators

    Eye movements are closely related to motor actions and can hence be used to infer motor intentions. Additionally, eye movements are in some cases the only means of communication and interaction with the environment for paralysed and impaired patients with severe motor deficiencies. Despite this, eye-tracking technology still has very limited use as a human-robot control interface, and its applicability is largely restricted to simple 2D tasks that operate on screen-based interfaces and do not suffice for natural physical interaction with the environment. We propose that decoding the gaze position in 3D space rather than in 2D yields a much richer spatial cursor signal that allows users to perform everyday tasks such as grasping and moving objects via gaze-based robotic teleoperation. Calibrating eye tracking in 3D is usually slow; we demonstrate here that by using a full 3D trajectory for system calibration, generated by a robotic arm rather than a simple grid of discrete points, gaze calibration in three dimensions can be achieved quickly and with high accuracy. We perform the non-linear regression from eye image to 3D end-point using Gaussian Process regressors, which allows us to handle uncertainty in end-point estimates gracefully. Our telerobotic system uses a multi-joint robot arm with a gripper and is integrated with our in-house GT3D binocular eye tracker. This prototype system has been evaluated with 7 users in a test environment, yielding gaze-estimation errors of less than 1 cm in the horizontal, vertical and depth dimensions, and less than 2 cm in overall 3D Euclidean distance. Users reported intuitive, low-cognitive-load control of the system from their first trial and were straightaway able to simply look at an object and, with a wink, command the robot gripper to grasp it.
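
    As a rough illustration of the regression stage, the sketch below fits a Gaussian Process from synthetic binocular eye features to 3D gaze points, standing in for calibration samples gathered along a robot-generated trajectory. It uses scikit-learn rather than the authors' implementation, and the feature layout, kernel and noise levels are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for calibration data: binocular eye features
# (e.g. left/right pupil centres, 4 values) recorded while the user
# tracks a robot-mounted target along a known 3D trajectory.
rng = np.random.default_rng(0)
eye_features = rng.uniform(-1, 1, size=(200, 4))
true_map = np.array([[0.5, 0.1, 0.0, 0.2],
                     [0.0, 0.4, 0.3, 0.0],
                     [0.1, 0.0, 0.0, 0.6]])
gaze_3d = eye_features @ true_map.T + rng.normal(0, 0.01, (200, 3))

# scikit-learn fits the three output axes as independent targets that
# share one kernel; the GP also exposes predictive uncertainty, which
# is what makes uncertain end-point estimates easy to handle.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(eye_features, gaze_3d)

query = rng.uniform(-1, 1, size=(1, 4))
mean, std = gp.predict(query, return_std=True)
print("3D gaze estimate:", mean[0], "predictive std:", std[0])
```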

    Evaluation of the Tobii EyeX Eye tracking controller and Matlab toolkit for research

    The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye-gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open-source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈ 55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters and saccadic, smooth-pursuit and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research-grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.
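
    For reference, figures like those quoted above are typically computed from fixation recordings along the following lines. The definitions used here (mean offset from the target for accuracy, RMS sample-to-sample deviation for precision) are common conventions in the eye tracking literature, not necessarily the Toolkit's exact implementation.

```python
import numpy as np

def fixation_metrics(gaze_deg, target_deg, timestamps_s):
    """Summarize one fixation recording in the units quoted above.

    gaze_deg     : (N, 2) gaze samples in degrees of visual angle.
    target_deg   : (2,) true target position in degrees.
    timestamps_s : (N,) sample timestamps in seconds.
    """
    gaze = np.asarray(gaze_deg, dtype=float)
    offsets = gaze - np.asarray(target_deg, dtype=float)

    accuracy = np.linalg.norm(offsets.mean(axis=0))           # mean offset
    inter = np.diff(gaze, axis=0)                             # sample-to-sample
    precision_rms = np.sqrt((inter ** 2).sum(axis=1).mean())  # RMS-S2S
    fs = 1.0 / np.mean(np.diff(timestamps_s))                 # sampling rate

    return accuracy, precision_rms, fs
```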

    An automated calibration method for non-see-through head mounted displays

    Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming and depend on human judgements, making them error-prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
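
    Since the method reduces HMD calibration to an established camera calibration problem, the final step can be sketched with OpenCV's standard machinery, treating the HMD display as the "camera image plane". The data layout below (one planar set of marker coordinates per calibration-object pose, paired with the same centroids re-expressed in HMD grid pixels) is an assumption about how such correspondences could be fed in, not the authors' code.

```python
import numpy as np
import cv2

def calibrate_hmd(object_points, image_points, display_size):
    """Recover display intrinsics/extrinsics via OpenCV's standard
    (Zhang-style) camera calibration.

    object_points : list of (N, 3) float arrays, marker centroids in the
                    calibration object's own frame, one array per pose.
    image_points  : list of (N, 2) float arrays, the same centroids
                    re-expressed in HMD grid (display pixel) coordinates.
    display_size  : (width, height) of the HMD display in pixels.
    """
    obj = [np.asarray(p, dtype=np.float32) for p in object_points]
    img = [np.asarray(p, dtype=np.float32) for p in image_points]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj, img, display_size, None, None)
    # K holds the focal length and principal point (optic centre);
    # rvecs/tvecs give the orientation and position of the display for
    # each pose; rms is the reprojection error in pixels.
    return rms, K, dist, rvecs, tvecs
```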

    Probabilistic Approach to Robust Wearable Gaze Tracking

    This paper presents a method for computing the gaze point using camera data captured with a wearable gaze tracking device. The method utilizes a physical model of the human eye, advanced Bayesian computer vision algorithms, and Kalman filtering, resulting in high accuracy and low noise. Our C++ implementation can process camera streams at 30 frames per second in real time. The performance of the system is validated in an exhaustive experimental setup with 19 participants, using a self-made device. Owing to the eye model and binocular cameras used, the system is accurate at all distances and invariant to device movement. We also test our system against a best-in-class commercial device, which it outperforms in spatial accuracy and precision. The software and hardware instructions, as well as the experimental data, are published as open source.
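
    The Kalman filtering stage mentioned above can be illustrated with a minimal constant-velocity filter over the gaze-point stream. This is a generic sketch (the state layout and noise values are placeholders), not the paper's full Bayesian eye-model pipeline.

```python
import numpy as np

class GazeKalman:
    """Constant-velocity Kalman filter smoothing a 2D gaze-point stream."""

    def __init__(self, dim=2, dt=1 / 30, q=50.0, r=1.0):
        self.dim = dim
        # State vector: [position, velocity] for each axis.
        self.F = np.eye(2 * dim)                 # state transition
        self.F[:dim, dim:] = dt * np.eye(dim)    # position += velocity * dt
        self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])
        self.Q = q * np.eye(2 * dim)             # process noise (placeholder)
        self.R = r * np.eye(dim)                 # measurement noise (placeholder)
        self.x = np.zeros(2 * dim)
        self.P = np.eye(2 * dim) * 1e3           # large initial uncertainty

    def update(self, z):
        # Predict forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the new gaze measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2 * self.dim) - K @ self.H) @ self.P
        return self.x[:self.dim]                 # smoothed gaze point

# Example: feed one noisy gaze sample (pixels) through the filter.
kf = GazeKalman()
print(kf.update(np.array([100.0, 50.0])))
```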