6 research outputs found

    High-Accuracy Gaze Estimation for Interpolation-Based Eye-Tracking Methods

    This study investigates the influence of the eye-camera location on the accuracy and precision of interpolation-based eye-tracking methods. Several factors can degrade gaze estimation when building a commercial or off-the-shelf eye tracker, including the eye-camera location in uncalibrated setups. Our experiments show that the eye-camera location, combined with the non-coplanarity of the eye plane, deforms the eye-feature distribution when the camera is far from the eye's optical axis. This paper proposes geometric transformation methods that reshape the eye-feature distribution by virtually aligning the eye-camera with the center of the eye's optical axis. The data analysis uses eye-tracking data from a simulated environment and from an experiment with 83 volunteer participants (55 males and 28 females). We evaluate the improvements achieved with the proposed methods using Gaussian analysis, which defines a range for high-accuracy gaze estimation between −0.5° and 0.5°. Compared to traditional polynomial-based and homography-based gaze estimation methods, the proposed methods increase the number of gaze estimates that fall within the high-accuracy range.
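    The polynomial baseline this paper compares against is typically a low-order bivariate polynomial fit from calibration points. The sketch below is a generic second-order version in Python (not the paper's exact formulation); the homography baseline would analogously map eye features to the screen through a plane-to-plane transform (e.g., cv2.findHomography).

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial expansion of pupil-center coordinates."""
    return np.stack([np.ones_like(px), px, py, px * py, px**2, py**2], axis=-1)

def fit_poly_gaze(pupil_xy, screen_xy):
    """Least-squares fit of one polynomial per screen axis.

    pupil_xy:  (N, 2) pupil centers in eye-image coordinates
    screen_xy: (N, 2) known calibration targets on the screen
    """
    A = poly_features(pupil_xy[:, 0], pupil_xy[:, 1])       # (N, 6) design matrix
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)  # (6, 2) coefficients
    return coeffs

def estimate_gaze(pupil_xy, coeffs):
    """Map pupil centers to estimated on-screen gaze points."""
    return poly_features(pupil_xy[:, 0], pupil_xy[:, 1]) @ coeffs
```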

    The influence of eye model parameter variations on simulated eye-tracking outcomes

    The simulated data used in eye-tracking research has largely been generated using normative eye models, with little consideration of how the variations in eye biometry found in the population may influence eye-tracking outcomes. This study investigated the influence that variations in eye model parameters have on the ability of simulated data to predict real-world eye-tracking outcomes. The real-world experiments performed by two pertinent comparative studies were replicated in a simulated environment using a high-complexity stochastic eye model that includes anatomically accurate distributions of eye biometry parameters. The outcomes showed that variations in anterior corneal asphericity significantly influence the simulated eye-tracking outcomes of both interpolation-based and model-based gaze estimation algorithms. Other, more commonly varied parameters, such as the corneal radius of curvature and the foveal offset angle, had little influence on simulated outcomes.
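    As a minimal sketch of the Monte Carlo idea behind such a stochastic eye model (all parameter names, distributions, and the toy error model below are illustrative assumptions, not the study's biometry data or simulator):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_eye_model():
    """Draw one synthetic eye; the means/SDs here are illustrative
    placeholders, not the anatomical distributions used in the study."""
    return {
        "corneal_radius_mm": rng.normal(7.8, 0.25),      # anterior corneal radius
        "corneal_asphericity": rng.normal(-0.25, 0.15),  # Q-value; key driver per the study
        "foveal_offset_deg": rng.normal(5.0, 1.0),       # foveal offset (angle kappa)
    }

def simulate_tracking_error(eye):
    """Toy stand-in for a full ray-traced simulation, in which the
    asphericity term dominates the simulated gaze error (degrees)."""
    return 0.3 + 0.8 * abs(eye["corneal_asphericity"] + 0.25) + 0.05 * rng.normal()

errors = np.array([simulate_tracking_error(sample_eye_model()) for _ in range(1000)])
print(f"mean error {errors.mean():.3f} deg, SD {errors.std():.3f} deg")
```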

    Testing different function fitting methods for mobile eye-tracker calibration

    During calibration, an eye tracker fits a mapping function from eye features to a target gaze point. While there is research on which mapping function to use, little is known about how best to estimate the function's parameters. We investigate how different fitting methods affect accuracy under different noise factors, such as mobile eye-tracker imprecision or detection errors in feature extraction during calibration. For this purpose, a simulation of binocular gaze was developed for (a) different calibration patterns and (b) different noise characteristics. We found that the commonly used polynomial regression via a least-squares-error fit often fails to find good mapping functions compared to ridge regression. Especially as data becomes noisier, outlier-tolerant fitting methods grow in importance. We demonstrate a 20% reduction in mean MSE in a mobile eye-tracking experiment simply by using ridge regression instead of the ordinary least-squares polynomial fit.
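    A minimal sketch of this comparison in Python with scikit-learn (the synthetic data and pipeline below are illustrative, not the study's simulation): both models share the same polynomial feature expansion, so only the fitting criterion differs, and ridge's L2 penalty damps coefficients that would otherwise chase calibration noise.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Synthetic calibration: true linear feature-to-gaze map plus detection noise.
true_map = np.array([[1.2, 0.1], [0.0, 0.9]])
features = rng.uniform(-1, 1, size=(25, 2))               # eye features per calibration target
targets = features @ true_map                             # known gaze targets
noisy = features + 0.1 * rng.normal(size=features.shape)  # feature-detection noise

ols = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(noisy, targets)
ridge = make_pipeline(PolynomialFeatures(2), Ridge(alpha=1.0)).fit(noisy, targets)

test = rng.uniform(-1, 1, size=(200, 2))
for name, model in [("OLS polynomial", ols), ("ridge", ridge)]:
    mse = np.mean((model.predict(test) - test @ true_map) ** 2)
    print(f"{name:>14s} MSE: {mse:.4f}")
```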

    Using Priors to Improve Head-Mounted Eye Trackers in Sports


    Gaze Point Estimation Based on a Convolutional Neural Network Using an Inside-Out Camera

    A vision-based gaze estimation system (GES) uses multiple cameras to estimate gaze direction and determine what a user is looking at. An inside-out camera captures both the user's eye and the user's field of view. Such systems are widely used because eye images containing the pupil and cornea carry rich information, and their applications can improve quality of life, especially for people with disabilities. However, commercial GES devices are hard for end users to adopt because of their high price and difficulty of use, whereas a budget GES can be built from a general-purpose camera. The common approach to gaze estimation in a vision-based GES is to detect the pupil center, but variable human eye characteristics and blinking make reliable pupil detection a challenging problem. Moreover, state-of-the-art pupil detection methods are designed for desktop/TV panels rather than wearable cameras, and a small pupil-detection error can cause a large gaze-estimation error. This thesis presents a novel, robust, and accurate GES framework based on learning methods; its main contributions fall into two groups.

    The first contribution improves the accuracy of pupil detection. Both handcrafted and learning-based methods are used to estimate the pupil-center position. We design a handcrafted method that uses gradient values and RANSAC ellipse fitting; compared with a separability filter, it performs well in terms of both accuracy and computation time. However, accuracy drops significantly when the user closes the eye, when no eye is present in the image, or when a large unexpected object appears in the image, and it is difficult for a handcrafted method to remain accurate in such cases. Learning-based methods have the potential to handle these general problems, which is the focus of this thesis. The thesis therefore presents a convolutional neural network (CNN) model that estimates the pupil position in varied situations and can also recognize eye states such as open, half-open, or closed.

    The second contribution is a calibration-free GES. Calibration builds the coordinate-transfer (CT) function, which maps the pupil position to a gaze point in the scene image. When the wearable camera moves during use, a static CT function cannot estimate the gaze point accurately. Learning-based methods can provide a robust, adaptive CT function; an accurate calibration-free system raises the accuracy of the GES and also makes it easier to use. We design a CNN framework that estimates the gaze position in varied situations, and the thesis also presents a process for creating a reliable dataset for GES.
    The results show that the proposed calibration-free GES can estimate the gaze point even when the glasses are moved.

    Doctoral dissertation, Kyushu Institute of Technology (thesis no. 情工博甲第338号; degree conferred March 25, 2019). Contents: 1 Introduction | 2 Pupil detection using a handcrafted method | 3 Convolutional neural networks | 4 Pupil detection using a CNN method | 5 Calibration-free approach for GES | 6 Character input system | 7 Conclusion.
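    The thesis's handcrafted stage combines gradient information with RANSAC ellipse fitting. A rough Python/OpenCV sketch of that idea (not the thesis code; the inlier test is a deliberately coarse approximation): fit ellipses to random 5-point subsets of pupil-edge candidates and keep the fit supported by the most edge points.

```python
import numpy as np
import cv2

def ransac_pupil_ellipse(edge_points, n_iters=200, inlier_tol=2.0, seed=0):
    """Robust pupil-ellipse fit via RANSAC around cv2.fitEllipse.

    edge_points: (N, 2) float32 pupil-edge candidates, e.g. from a
    gradient/threshold step. Returns ((cx, cy), (ax_w, ax_h), angle).
    """
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(n_iters):
        subset = edge_points[rng.choice(len(edge_points), 5, replace=False)]
        (cx, cy), (ax_w, ax_h), ang = cv2.fitEllipse(subset.astype(np.float32))
        if ax_w < 1 or ax_h < 1:                 # degenerate fit, skip
            continue
        # Coarse inlier test: normalized radial distance in the ellipse frame.
        th = np.deg2rad(ang)
        R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
        local = (edge_points - [cx, cy]) @ R.T
        r = np.sqrt((2 * local[:, 0] / ax_w) ** 2 + (2 * local[:, 1] / ax_h) ** 2)
        # Scale |r - 1| by the semi-minor axis to approximate pixel distance.
        count = int(np.sum(np.abs(r - 1.0) * min(ax_w, ax_h) / 2 < inlier_tol))
        if count > best_count:
            best, best_count = ((cx, cy), (ax_w, ax_h), ang), count
    return best
```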

    Robust Eye Tracking Based on Adaptive Fusion of Multiple Cameras

    Eye and gaze movements play an essential role in identifying individuals' emotional states, cognitive activities, interests, and attention, among other behavioral traits. They are also natural, fast, and implicitly reflect the targets of interest, which makes them a highly valuable input modality for human-computer interfaces. Tracking gaze movements, in other words eye tracking, is therefore of great interest to many disciplines, including human behaviour research, neuroscience, medicine, and human-computer interaction. Tracking gaze accurately is a challenging task, however, especially under unconstrained conditions. Over the last two decades, significant advances have been made in gaze estimation accuracy, but mostly under controlled settings. Meanwhile, several concerns have arisen, such as the complexity, inflexibility, and cost of the setups, increased user effort, and high sensitivity to varying real-world conditions. Despite various attempts and promising enhancements, existing eye-tracking systems remain inadequate to overcome most of these concerns, which prevents their wide adoption.

    In this thesis, we revisit these concerns and introduce a novel multi-camera eye-tracking framework that achieves high estimation accuracy while requiring minimal user effort and a non-intrusive, flexible setup. It also provides improved robustness to large head movements, illumination changes, eye wear, and eye-type variations across users. We develop a real-time gaze estimation framework based on the adaptive fusion of multiple single-camera systems, in which gaze estimation relies on projective geometry. To ease the user calibration procedure, we investigate several methods to model the subject-specific estimation bias and propose a novel approach based on weighted regularized least-squares regression. The proposed method models calibration better than state-of-the-art methods, particularly when using low-resolution and limited calibration data. Operating on low-resolution data also enables the use of a large field-of-view setup, so that large head movements are allowed.

    To address the robustness concerns above, we leverage multiple eye appearances acquired simultaneously from various views. Compared with the conventional single-view approach, the main benefit is more reliable detection of gaze features under challenging conditions, especially when they are obstructed by large head poses or movements, or by eyeglasses. We further propose an adaptive fusion mechanism that effectively combines the gaze outputs obtained from the multi-view appearances: it first determines the estimation reliability of each gaze output and then performs a reliability-based weighted fusion to compute the overall point of regard. To address illumination and eye-type robustness, the setup is built on active illumination, and robust feature-detection methods are developed. The proposed framework and methods are validated through extensive simulations and user experiments featuring 20 subjects. The results demonstrate that our framework provides not only a significant improvement in gaze estimation accuracy but also notable robustness to real-world conditions, making it suitable for a broad spectrum of applications.
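    As a minimal sketch of the reliability-based weighted fusion step (the actual framework derives its reliability measure from the estimation process itself; the scores here are illustrative stand-ins):

```python
import numpy as np

def fuse_gaze(estimates, reliabilities):
    """Reliability-weighted fusion of per-camera points of regard.

    estimates:     (K, 2) gaze points, one per single-camera system
    reliabilities: (K,) nonnegative scores, e.g. feature-detection
                   confidence or inverse estimation variance
    """
    w = np.asarray(reliabilities, dtype=float)
    w /= w.sum()                          # normalize to a convex combination
    return w @ np.asarray(estimates)      # weighted overall point of regard

# Example: the third camera's view is obstructed by eyeglass glare,
# so its (wildly off) estimate receives a low reliability score.
points = np.array([[512.0, 390.0], [508.0, 395.0], [620.0, 300.0]])
scores = np.array([0.9, 0.8, 0.1])
print(fuse_gaze(points, scores))          # ~[516, 387], near the reliable pair
```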