3 research outputs found

    Gaze Estimation Technique for Directing Assistive Robotics

    Assistive robotics may extend the capabilities of individuals with reduced mobility or dexterity. However, effective use of robotic agents typically requires the user to issue control commands in the form of speech, gesture, or text, so for unskilled or impaired users an intuitive paradigm of Human-Robot Interaction (HRI) is needed. The most productive interactions are those in which the assistive agent can ascertain the intention of the user; to perform a task, the agent must also know the user's area of attention in three-dimensional space. Eye gaze tracking can be used to determine a specific Volume of Interest (VOI), yet gaze tracking has so far been under-utilized as a means of interaction and control in 3D space. This research aims to determine a practical volume of interest in which an individual's eyes are focused, combining past methods to achieve greater effectiveness. The proposed method uses eye vergence as a depth discriminant, yielding a tool for improved robot path planning. The research investigates the accuracy of the Vector Intersection (VI) model when applied to a usably large workspace volume, and a neural network is used in tandem with the VI model to create a combined model. The output of the combined model is a VOI that can serve as an aid in a number of applications, including robot path planning, entertainment, and ubiquitous computing.
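    The abstract does not give the equations of the VI model, but a common vector-intersection formulation estimates the 3D point of regard as the midpoint of the shortest segment between the two eyes' gaze rays, with vergence supplying the depth cue. The sketch below is a minimal illustration of that geometry, not the paper's implementation; the function name, eye positions, and use of NumPy are assumptions.

```python
import numpy as np

def closest_point_between_rays(o_left, d_left, o_right, d_right):
    """Midpoint of the shortest segment connecting two gaze rays.

    Each ray is an eye-center origin `o` plus a gaze direction `d`.
    The two visual axes rarely intersect exactly, so the midpoint of
    their common perpendicular is a standard proxy for the 3D point
    of regard (the center of a Volume of Interest).
    """
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w = o_left - o_right
    a = d_left @ d_left           # = 1 for unit vectors
    b = d_left @ d_right
    c = d_right @ d_right         # = 1 for unit vectors
    d = d_left @ w
    e = d_right @ w
    denom = a * c - b * b         # ~0 when the rays are near-parallel
    if abs(denom) < 1e-9:
        return None               # no usable vergence signal
    t_left = (b * e - c * d) / denom
    t_right = (a * e - b * d) / denom
    p_left = o_left + t_left * d_left
    p_right = o_right + t_right * d_right
    return (p_left + p_right) / 2.0

# Example: eyes ~6.3 cm apart, both converging on a point 40 cm ahead.
o_l, o_r = np.array([-0.0315, 0.0, 0.0]), np.array([0.0315, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.40])
print(closest_point_between_rays(o_l, target - o_l, o_r, target - o_r))
# -> approximately [0, 0, 0.40]
```

    A VOI can then be formed by growing a region around the returned point, with its extent scaled to the expected vergence error at that depth.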

    Uncertainty visualization of gaze estimation to support operator-controlled calibration

    In this paper, we investigate how visualization assets can support the qualitative evaluation of gaze estimation uncertainty. Although eye tracking data are commonly available, little has been done to visually investigate the uncertainty of recorded gaze information. This paper aims to fill this gap with novel uncertainty computation and visualization. Given a gaze processing pipeline, we estimate the location of the gaze position in the world-camera image. To do so, we developed our own gaze data processing, which gives us access to every stage of the data transformation and thus to the uncertainty computation. To validate our gaze estimation pipeline, we designed an experiment with 12 participants and showed that the proposed correction methods reduced the Mean Angular Error by about 1.32 cm when aggregating all 12 participants' results; the Mean Angular Error is 0.25° (SD = 0.15°) after correction of the estimated gaze. Finally, to support the qualitative assessment of these data, we provide a map that encodes the actual uncertainty from the user's point of view.
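    The abstract reports a Mean Angular Error but does not define it; a common definition is the mean angle between estimated and reference gaze direction vectors. The sketch below shows that computation under this assumption; the function name is hypothetical and NumPy is assumed.

```python
import numpy as np

def mean_angular_error(est_dirs, ref_dirs):
    """Mean angle (degrees) between estimated and reference gaze vectors.

    est_dirs, ref_dirs: (N, 3) arrays of gaze directions expressed in
    the same (e.g. world-camera) frame; vectors need not be unit length.
    """
    est = est_dirs / np.linalg.norm(est_dirs, axis=1, keepdims=True)
    ref = ref_dirs / np.linalg.norm(ref_dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(est * ref, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```

    Per-sample angles from the same formula, before averaging, could feed the kind of uncertainty map the paper describes, coloring each gaze point by its expected error.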

    An investigation of the distribution of gaze estimation errors in head mounted gaze trackers using polynomial functions

    Second-order polynomials are commonly used for estimating the point of gaze in head-mounted eye trackers. Studies with remote (desktop) eye trackers show that although some non-standard third-order polynomial models can provide better accuracy, higher-order polynomials do not necessarily give better results. Unlike remote setups, however, where gaze is estimated over a relatively narrow field-of-view surface (e.g., less than 30×20 degrees on a typical computer display), head-mounted gaze trackers (HMGT) are often required to cover a wider field of view to ensure that gaze is detected in the scene image even at extreme eye angles. In this paper, we investigate the behavior of the gaze estimation error distribution throughout the scene-camera image when polynomial functions are used. Using simulated scenarios, we describe the effects of four different sources of error: interpolation, extrapolation, parallax, and radial distortion. We show that the use of third-order polynomials results in more accurate gaze estimates in HMGT, and that the use of wide-angle lenses might be beneficial in terms of error reduction.
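    As a rough illustration of the polynomial mapping the paper analyzes, the sketch below fits a full second- or third-order bivariate polynomial from pupil coordinates to scene-image gaze points by least squares. The function names and the exact term set are assumptions; the non-standard third-order models the paper mentions are not reproduced here.

```python
import numpy as np

def poly_features(xy, order):
    """All bivariate monomials x^i * y^j with i + j <= order."""
    x, y = xy[:, 0], xy[:, 1]
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.stack(cols, axis=1)   # 6 terms for order 2, 10 for order 3

def fit_gaze_mapping(pupil_xy, scene_uv, order=2):
    """Least-squares fit of a pupil -> scene-image gaze mapping.

    pupil_xy: (N, 2) pupil centers from the eye camera (calibration set).
    scene_uv: (N, 2) corresponding target points in the scene image.
    """
    A = poly_features(pupil_xy, order)
    coeffs, *_ = np.linalg.lstsq(A, scene_uv, rcond=None)
    return coeffs

def predict_gaze(pupil_xy, coeffs, order=2):
    """Map pupil coordinates to estimated scene-image gaze points."""
    return poly_features(pupil_xy, order) @ coeffs
```

    Comparing residuals of `order=2` against `order=3` fits over a grid of simulated calibration and test points would reproduce, in miniature, the interpolation/extrapolation error analysis the abstract describes.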