
    Distributed Real-Time Computation of the Point of Gaze

    This paper presents a minimally intrusive real-time gaze-tracking prototype to be used in several scenarios, including a laboratory stall and an in-vehicle system. The system requires specific infrared illumination to allow it to work under variable light conditions. However, it remains minimally intrusive due to the use of a carefully configured switched infrared LED array: although the perceived level of illumination generated by this array is high, it is achieved using low-emission infrared light beams. Accuracy is achieved through a precise estimate of the center of the user's pupil. To overcome inherent time restrictions while using low-cost processors, the main image-processing algorithm has been distributed over four main computing tasks. This structure not only enables good performance, but also simplifies the task of experimenting with alternative computationally complex algorithms and with alternative tracking models based on locating both of the user's eyes and on using several cameras to improve user mobility.
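    The distribution of the algorithm over four cooperating tasks can be sketched as a pipeline of stages linked by queues. This is a minimal illustration of the idea, not the paper's implementation: the stage names (acquisition, eye detection, pupil-center estimation, gaze mapping) and the frame payload are hypothetical.

    ```python
    # Sketch: a gaze-tracking algorithm split into four computing tasks,
    # modeled here as four threads connected by FIFO queues. All stage
    # logic is placeholder; only the pipeline structure is the point.
    import threading
    import queue

    SENTINEL = None  # end-of-stream marker

    def stage(fn, in_q, out_q):
        """Apply fn to each item from in_q and forward the result."""
        while True:
            item = in_q.get()
            if item is SENTINEL:
                out_q.put(SENTINEL)
                return
            out_q.put(fn(item))

    def run_pipeline(frames):
        # Four hypothetical tasks standing in for the paper's four.
        funcs = [
            lambda f: {"frame": f},                       # image acquisition
            lambda d: {**d, "eye": "located"},            # eye-region detection
            lambda d: {**d, "pupil": (d["frame"], 0.0)},  # pupil-center estimate
            lambda d: {**d, "gaze": d["pupil"]},          # map center to gaze point
        ]
        qs = [queue.Queue() for _ in range(len(funcs) + 1)]
        threads = [
            threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
            for i, fn in enumerate(funcs)
        ]
        for t in threads:
            t.start()
        for f in frames:
            qs[0].put(f)
        qs[0].put(SENTINEL)
        results = []
        while True:
            item = qs[-1].get()
            if item is SENTINEL:
                break
            results.append(item)
        for t in threads:
            t.join()
        return results

    out = run_pipeline(range(3))  # three dummy frames through four stages
    ```

    Because each stage runs independently, a slow stage (or an experimental replacement algorithm) can be swapped in without touching the others, which is the flexibility the abstract describes.
    
    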

    A theoretical eye model for uncalibrated real-time eye gaze estimation

    Computer vision systems that monitor human activity can be utilized for many diverse applications. Some general applications stemming from such activity monitoring are surveillance, human-computer interfaces, aids for the handicapped, and virtual reality environments. For most of these applications, a non-intrusive system is desirable, either for reasons of covertness or comfort. Also desirable is generality across users, especially for human-computer interfaces and surveillance. This thesis presents a method of gaze estimation that, without calibration, determines a relatively unconstrained user's overall horizontal eye gaze. Utilizing anthropometric data and physiological models, a simple yet general eye model is presented. The equations that describe the gaze angle of the eye in this model are presented. The procedure for choosing the proper features for gaze estimation is detailed, and the algorithms utilized to find these points are described. Results from manual and automatic feature extraction are presented and analyzed. The error observed from this model is around 3° and the error observed from the implementation is around 6°. This amount of error is comparable to previous eye gaze estimation algorithms, and it validates this model. The results presented across a set of subjects display consistency, which demonstrates the generality of this model. A real-time implementation that operates around 17 frames per second displays the efficiency of the algorithms implemented. While there are many interesting directions for future work, the goals of this thesis were achieved.
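    The kind of anthropometric, calibration-free computation the thesis describes can be sketched as follows. This is an illustrative assumption, not the thesis's exact equations: it takes an assumed average eyeball radius and maps the horizontal displacement of the pupil center from the projected eyeball center to a gaze angle via a simple arcsine relation.

    ```python
    # Sketch of a model-based horizontal gaze estimate. The radius value
    # and the arcsin geometry are illustrative assumptions standing in for
    # the thesis's anthropometric eye model.
    import math

    EYEBALL_RADIUS_MM = 12.0  # rough anthropometric average (assumption)

    def horizontal_gaze_deg(pupil_offset_mm, radius_mm=EYEBALL_RADIUS_MM):
        """Horizontal gaze angle in degrees from pupil displacement.

        pupil_offset_mm: signed distance of the pupil center from the
        projected eyeball center, in millimeters. Positive = one side.
        """
        # Clamp to [-1, 1] so measurement noise cannot leave asin's domain.
        ratio = max(-1.0, min(1.0, pupil_offset_mm / radius_mm))
        return math.degrees(math.asin(ratio))

    angle = horizontal_gaze_deg(6.0)  # pupil shifted 6 mm from center
    ```

    Because the radius comes from population-average anthropometry rather than a per-user measurement, no calibration step is needed, at the cost of a few degrees of per-user error, consistent with the 3° model error the abstract reports.
    
    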