6 research outputs found

    Gaze Point Estimation Based on a Convolutional Neural Network Using an Inside-Out Camera

    A vision-based gaze estimation system (GES) uses multiple cameras to estimate gaze direction and determine what a user is looking at. An inside-out camera is a device that captures both the user's eye and the user's field of view. Such systems are widely used because eye images containing the pupil and cornea carry rich information, and the resulting applications can improve quality of life, especially for people with disabilities. However, commercial GES devices are hard for end users to access because they are expensive and difficult to use; a budget GES device can instead be built from a general-purpose camera. The common approach to gaze point estimation in a vision-based GES is to detect the pupil center position. However, the human eye varies widely between individuals, and blinking makes reliable pupil detection a challenging problem. Moreover, state-of-the-art pupil detection methods were designed for desktop/TV panels rather than wearable cameras, and a small pupil detection error can produce a large gaze point estimation error. This thesis presents a novel, robust, and accurate GES framework based on learning-based methods. Its main contributions fall into two groups: enhancing pupil center detection with an effective detection framework, and creating a calibration-free GES. For the first contribution, both handcrafted and learning-based methods are used to estimate the pupil center position. We design a handcrafted method that uses image gradient values and RANSAC ellipse fitting. The pupil center position estimated by the proposed method was compared with a separability-filter baseline, and the results show that the proposed method performs well in terms of both accuracy and computation time.
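The gradient-plus-RANSAC idea described above can be sketched as follows. This is a simplified illustration, not the thesis's implementation: it fits a circle (the Kåsa algebraic fit) to candidate pupil-edge points instead of a full ellipse, and all function names are hypothetical.

```python
import numpy as np

def fit_circle(pts):
    # Kåsa algebraic circle fit: solve 2*a*x + 2*b*y + c = x^2 + y^2
    # in the least-squares sense; center is (a, b), radius sqrt(c + a^2 + b^2).
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy, c = sol
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def ransac_pupil_center(edge_pts, n_iter=200, tol=1.5, rng=None):
    # RANSAC: repeatedly fit a circle to a random 3-point sample and keep
    # the model with the most inliers, so eyelid/glint edges are rejected.
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        sample = edge_pts[rng.choice(len(edge_pts), 3, replace=False)]
        cx, cy, r = fit_circle(sample)
        dist = np.abs(np.hypot(edge_pts[:, 0] - cx, edge_pts[:, 1] - cy) - r)
        inliers = edge_pts[dist < tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine on all inliers of the best model.
    return fit_circle(best_inliers)
```

In a real pipeline the edge points would come from gradient-magnitude thresholding of the infrared eye image; the RANSAC stage is what gives robustness to spurious edges.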
    However, when the user closes the eye, no eye is present in the image, or a large unexpected object appears in the image, accuracy decreases significantly, and a handcrafted method struggles to stay accurate. Learning-based methods have the potential to solve this general problem, which is the focus of this thesis. We present a convolutional neural network (CNN) model that estimates the pupil position under varied conditions; moreover, the model can recognize eye states such as open, half-open, and closed. The second contribution is a calibration-free GES. Calibration is the process of creating the coordinate transfer (CT) function, which maps the pupil position to the gaze point on the scene image. When the wearable camera moves during use, a static CT function cannot estimate the gaze point accurately; a learning-based method can instead provide a robust, adaptive CT function. An accurate calibration-free system raises the accuracy of the GES and also makes it easier to use. We designed a CNN framework that can estimate the gaze position under varied conditions, and this thesis also presents a process for creating a reliable GES dataset. The results show that the proposed calibration-free GES can still estimate the gaze point when the glasses are moved.
    Kyushu Institute of Technology doctoral dissertation. Degree number: 情工博甲第338号. Degree conferred: March 25, 2019.
    Contents: 1 Introduction | 2 Pupil detection using handcraft method | 3 Convolutional neural network | 4 Pupil detection using CNN method | 5 Calibration-free approach for GES | 6 Character input system | 7 Conclusion
    Kyushu Institute of Technology, 2018 (Heisei 30)

    Gaze Point Estimation Based on a Convolutional Neural Network Using an Inside-Out Camera

    Kyushu Institute of Technology doctoral dissertation (abstract). Degree number: 情工博甲第338号. Degree conferred: March 25, 2019.

    CNN-Based Pupil Center Detection for Wearable Gaze Estimation System

    No full text
    This paper presents a convolutional neural network (CNN)-based pupil center detection method for a wearable gaze estimation system using infrared eye images. Potentially, the pupil center position of a user’s eye can be used in various applications, such as human-computer interaction, medical diagnosis, and psychological studies. However, users tend to blink frequently; thus, estimating gaze direction is difficult. The proposed method uses two CNN models: the first classifies the eye state, and the second estimates the pupil center position. The classification model filters out images with closed eyes and terminates the gaze estimation process when the input image shows a closed eye. In addition, this paper presents a process for creating an eye image dataset using a wearable camera. This dataset, which was used to evaluate the proposed method, has approximately 20,000 images with a wide variation of eye states. We evaluated the proposed method from various perspectives. The results show that the proposed method achieves good accuracy and has potential for application in wearable-device-based gaze estimation.
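The control flow of the two-model design can be sketched as below. The gating logic mirrors the paper's description (classifier first, regressor only for open eyes), but the stand-in models and all names here are hypothetical; the paper uses two trained CNNs.

```python
import numpy as np

def estimate_pupil(image, classify_state, regress_center, open_threshold=0.5):
    # Stage 1: eye-state classifier returns the probability the eye is open.
    p_open = classify_state(image)
    if p_open < open_threshold:
        return None  # closed eye: terminate gaze estimation for this frame
    # Stage 2: pupil-center regressor runs only on open-eye frames.
    return regress_center(image)

# Toy stand-ins so the pipeline is runnable without trained models:
def toy_classifier(im):
    # "open" if any sufficiently dark (pupil-like) pixel exists
    return 1.0 if im.min() < 50 else 0.0

def toy_regressor(im):
    # centroid of dark pixels as a crude pupil-center estimate (x, y)
    ys, xs = np.nonzero(im < 50)
    return float(xs.mean()), float(ys.mean())
```

Rejecting closed-eye frames before regression is the design choice that keeps blink frames from producing large, spurious gaze errors.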