57 research outputs found

    3D gaze cursor: continuous calibration and end-point grasp control of robotic actuators

    No full text
    © 2016 IEEE. Eye movements are closely related to motor actions and can therefore be used to infer motor intentions. Additionally, eye movements are in some cases the only means of communication and interaction with the environment for paralysed patients with severe motor deficiencies. Despite this, eye-tracking technology still has very limited use as a human-robot control interface, and its applicability is largely restricted to simple 2D tasks on screen-based interfaces that do not suffice for natural physical interaction with the environment. We propose that decoding the gaze position in 3D space rather than in 2D results in a much richer spatial cursor signal that allows users to perform everyday tasks, such as grasping and moving objects, via gaze-based robotic teleoperation. Calibration for 3D eye tracking is usually slow; we demonstrate here that by using a full 3D trajectory generated by a robotic arm for system calibration, rather than a simple grid of discrete points, gaze calibration in three dimensions can be achieved quickly and with high accuracy. We perform the non-linear regression from eye image to 3D end point using Gaussian Process regressors, which allows us to handle uncertainty in end-point estimates gracefully. Our telerobotic system uses a multi-joint robot arm with a gripper and is integrated with our in-house GT3D binocular eye tracker. This prototype system was evaluated in a test environment with 7 users, yielding gaze-estimation errors of less than 1 cm in the horizontal, vertical and depth dimensions, and less than 2 cm in overall 3D Euclidean distance. Users reported intuitive, low-cognitive-load control of the system from their first trial and were immediately able to look at an object and command the robot gripper to grasp it with a wink.
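    The regression step described above can be sketched with a minimal numpy Gaussian Process regressor. The 4-D eye-feature layout, the linear feature model, and the kernel settings below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 200)
# 3D calibration trajectory (metres), as if traced by the robot arm.
targets = np.stack([0.3 * np.cos(t), 0.3 * np.sin(t),
                    0.5 + 0.2 * t / (2 * np.pi)], axis=1)
# Hypothetical 4-D eye features (e.g. left/right pupil x, y) plus noise.
W = rng.normal(size=(3, 4))
features = targets @ W + 0.005 * rng.normal(size=(200, 4))

# GP regression with a shared kernel for all three output dimensions;
# the noise term regularises the solve and models sensor uncertainty.
noise = 1e-4
K = rbf(features, features) + noise * np.eye(200)
alpha = np.linalg.solve(K, targets)            # (200, 3) dual weights

def predict(x_new):
    """Posterior-mean 3D end point for new eye features."""
    return rbf(x_new, features) @ alpha

pred = predict(features[:5])
err = np.linalg.norm(pred - targets[:5], axis=1)
```

    On the calibration points themselves the posterior mean reproduces the trajectory almost exactly; the practical interest is that the same dual weights generalise to unseen eye features along the workspace.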

    Gaze Estimation Technique for Directing Assistive Robotics

    Get PDF
    Assistive robotics may extend capabilities for individuals with reduced mobility or dexterity. However, effective use of robotic agents typically requires the user to issue control commands in the form of speech, gesture, or text; for unskilled or impaired users, the need for an intuitive paradigm of Human-Robot Interaction (HRI) is therefore pressing. The most productive interactions are those in which the assistive agent can ascertain the intention of the user, and to perform a task the agent must know the user's area of attention in three-dimensional space. Eye gaze tracking can be used to determine a specific Volume of Interest (VOI); however, gaze tracking has heretofore been under-utilized as a means of interaction and control in 3D space. This research aims to determine a practical volume of interest in which an individual's eyes are focused by combining past methods to achieve greater effectiveness. The proposed method uses eye vergence as a depth discriminant to generate a tool for improved robot path planning. This research investigates the accuracy of the Vector Intersection (VI) model when applied to a usably large workspace volume. A neural network is also used in tandem with the VI model to create a combined model. The output of the combined model is a VOI that can be used as an aid in a number of applications, including robot path planning, entertainment, and ubiquitous computing.

    A technique for estimating three-dimensional volume-of-interest using eye gaze

    Get PDF
    Assistive robotics promises to be of use to those who have limited mobility or dexterity, and those who have limited movement of their limbs can benefit greatly from such assistive devices. However, to use such devices, one needs to give commands to an assistive agent, often in the form of speech, gesture, or text. The need for a more convenient method of Human-Robot Interaction (HRI) is prevalent, especially for impaired users with severe mobility constraints. For a socially responsive assistive device to be an effective aid, it generally must understand the intention of the user; to perform a task based on gesture, it also requires the user's area of attention in three-dimensional (3D) space. Gaze tracking can be used to determine a specific volume of interest (VOI), yet it has heretofore been under-utilized as a means of interaction and control in 3D space. The main objective of this research is to determine a practical VOI in which an individual's eyes are focused by combining existing methods. Achieving this objective sets a foundation for further use of vergence data as a useful discriminant for a proper directive technique for assistive robotics. This research investigates the accuracy of the Vector Intersection (VI) model when applied to a usable workspace. A neural network is also applied to gaze data for use in tandem with the VI model to create a Combined Model, whose output is a VOI that can aid in a number of applications including robot path planning, entertainment, and ubiquitous computing. An alternative Search Region method is investigated as well.
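    In its simplest form, the Vector Intersection model above reduces to finding the point closest to both gaze rays; since two rays in 3D rarely intersect exactly, the midpoint of their common perpendicular is the usual stand-in. A minimal numpy sketch, with invented eye positions and target:

```python
import numpy as np

def vector_intersection(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two gaze rays.

    p1, p2: eye positions; d1, d2: gaze directions (need not be unit).
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimising |(p1 + t1 d1) - (p2 + t2 d2)|.
    b = d1 @ d2
    w = p1 - p2
    denom = 1.0 - b * b                      # zero only for parallel rays
    t1 = (b * (w @ d2) - (w @ d1)) / denom
    t2 = ((w @ d2) - b * (w @ d1)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Eyes 6 cm apart, both verged exactly on a target 40 cm straight ahead.
left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.4])
poi = vector_intersection(left, target - left, right, target - right)
```

    With noisy measured gaze directions the returned midpoint scatters around the true fixation point, which is what motivates combining the VI output with a learned model.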

    Low-Cost Based Eye Tracking and Eye Gaze Estimation

    Get PDF
    The costs of current gaze tracking systems remain too high for general public use. The main reasons for this are the cost of parts, especially high-quality cameras and lenses, and the cost of development. This research builds a low-cost gaze tracking system. The device is built using a web camera modified to operate in the infrared spectrum. A new technique is also proposed to detect the pupil-centre coordinate based on connected-component labeling. The pupil-coordinate detection method is combined with third-order polynomial regression in the calibration process to determine the gaze point. The experimental results show the system has an acceptable accuracy, with an error of 0.39° of visual angle.
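    The third-order polynomial calibration step amounts to a least-squares fit from pupil coordinates to screen coordinates. The exact monomial basis and the fabricated pupil-to-screen mapping below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def cubic_features(p):
    """Third-order polynomial terms of pupil coordinates (x, y)."""
    x, y = p[:, 0], p[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                     x**2 * y, x * y**2, x**3, y**3], axis=1)

rng = np.random.default_rng(1)
pupil = rng.uniform(-1, 1, size=(30, 2))           # calibration pupil centres
# Hypothetical ground-truth mapping used to fabricate screen targets (pixels).
screen = np.stack([200 + 300 * pupil[:, 0] + 40 * pupil[:, 0]**3,
                   150 + 250 * pupil[:, 1] + 30 * pupil[:, 1]**2], axis=1)

A = cubic_features(pupil)
coeff, *_ = np.linalg.lstsq(A, screen, rcond=None)  # fit both axes at once

gaze = cubic_features(pupil[:3]) @ coeff            # predicted gaze points
```

    In a real calibration the user fixates a grid of on-screen targets while pupil centres are recorded, and the same least-squares solve recovers the coefficients.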

    The mean point of vergence is biased under projection

    Get PDF
    In eye tracking, the point of interest in three-dimensional space is often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We first present a theoretical analysis with synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal versus vertical error distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The error distributions differ among individuals, but they generally lead to the same bias towards the observer, and the bias tends to grow with increasing viewing distance. We also provide a recipe to minimize the bias, which applies to general computations of eye-ray intersection. These findings have implications for choosing the calibration method in eye-tracking experiments and for interpreting observed eye-movement data; they also suggest that the mathematical models used in calibration should be considered part of the experiment.

    Monocular gaze depth estimation using the vestibulo-ocular reflex

    Get PDF
    Gaze depth estimation presents a challenge for eye tracking in 3D. This work investigates a novel approach to the problem based on eye movements mediated by the vestibulo-ocular reflex (VOR). The VOR stabilises gaze on a target during head movement by moving the eyes in the opposite direction, and VOR gain increases the closer the fixated target is to the viewer. We present a theoretical analysis of the relationship between VOR gain and depth, which we investigate with empirical data collected in a user study (N=10). We show that VOR gain can be captured using pupil centres, and we propose and evaluate a practical method for gaze depth estimation based on a generic function of VOR gain and a two-point depth calibration. The results show that VOR gain is comparable with vergence in capturing depth while requiring only one eye, and they provide insight into the open challenges of harnessing VOR gain as a robust measure.
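    Under the common simplification that VOR gain falls off with the reciprocal of target depth, a two-point calibration can fit g(D) = a + c/D and invert it for new gain readings. The model form and the numbers below are illustrative assumptions, not the paper's fitted function:

```python
def fit_vor_depth(g1, d1, g2, d2):
    """Fit the gain model g(D) = a + c / D from two calibration fixations."""
    # Two equations, two unknowns: g1 = a + c/d1, g2 = a + c/d2.
    c = (g1 - g2) / (1.0 / d1 - 1.0 / d2)
    a = g1 - c / d1
    return a, c

def depth_from_gain(g, a, c):
    """Invert the gain model: D = c / (g - a)."""
    return c / (g - a)

# Hypothetical calibration: gain 1.12 at 0.5 m, gain 1.03 at 2.0 m.
a, c = fit_vor_depth(1.12, 0.5, 1.03, 2.0)
d = depth_from_gain(1.06, a, c)   # estimated depth for a new gain reading
```

    Monocularity is the practical appeal: the gain is computed from one eye's rotation against the head's, so no binocular vergence signal is needed.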

    Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation

    Get PDF
    Target disambiguation is a common problem in gaze interfaces, as eye tracking has limited accuracy and precision. In 3D environments this is compounded by objects overlapping in the field of view as a result of their positioning at different depths with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex of the eyes that compensates for head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head while the user looks at a target and deliberately rotates their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, one that is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
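    Once a gaze depth estimate is available, disambiguating overlapping targets reduces to picking the candidate whose known depth best matches it. A minimal sketch with invented target names and depths:

```python
import numpy as np

def disambiguate(depth_estimate, candidates):
    """Pick the candidate target whose depth best matches the gaze depth.

    candidates: dict mapping target name to depth in metres (hypothetical).
    """
    names = list(candidates)
    depths = np.array([candidates[n] for n in names])
    return names[int(np.argmin(np.abs(depths - depth_estimate)))]

# Two targets overlapping along the line of sight at different depths.
choice = disambiguate(1.8, {"near_cube": 0.6, "far_sphere": 2.0})
```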

    A Novel Authentication Method Using Multi-Factor Eye Gaze

    Get PDF
    A novel, rapid, and robust method for one-step multi-factor authentication of a user is presented, employing multi-factor eye gaze. The mobile environment presents challenges that render the conventional password model obsolete. The primary goal is to offer an authentication method that competitively replaces the password while offering improved security and usability. The method combines the smooth operation of biometric authentication with the protection of knowledge-based authentication to robustly authenticate a user and secure information on a mobile device, in a manner that is easy to use and requires no external hardware. This work demonstrates a solution comprising a pupil-segmentation algorithm, gaze estimation, and an innovative application that allows users to authenticate themselves using gaze as the interaction medium.

    Using Priors to Improve Head-Mounted Eye Trackers in Sports

    Get PDF