GAZE ESTIMATION USING SCLERA AND IRIS EXTRACTION
Tracking the gaze of an individual provides important information for understanding that person's behavior. Gaze tracking has been widely used in a variety of applications, from tracking consumers' gaze fixation on advertisements and controlling human-computer devices to understanding the behavior of patients with various visual and/or neurological disorders such as autism. Gaze patterns can be identified using different methods, but most require specialized equipment that can be prohibitively expensive for some applications. In this dissertation, we investigate the possibility of using sclera and iris regions captured in a webcam sequence to estimate gaze patterns. The sclera and iris regions in each video frame are first extracted using an adaptive thresholding technique. The gaze pattern is then determined from the areas of the different sclera and iris regions and the distances between tracked points along the irises. The technique is novel in that sclera regions are often ignored in the eye-tracking literature, yet we demonstrate that they can be easily extracted from images captured by a low-cost camera and are useful in determining the gaze pattern. The accuracy and computational efficiency of the proposed technique are demonstrated by experiments with human subjects.
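The abstract does not give the exact rule used, but the idea of thresholding an eye patch into sclera/iris regions and reading gaze from the sclera-area split can be sketched as follows; the threshold values and the 0.65/0.35 decision ratios are illustrative assumptions, not the dissertation's parameters.

```python
import numpy as np

def estimate_gaze(eye_patch, sclera_thresh=170, iris_thresh=60):
    """Classify gaze as 'left', 'right', or 'centre' from a grayscale eye patch.

    Bright pixels are treated as sclera, dark pixels as iris; gaze is
    inferred from how the visible sclera area splits on either side of
    the iris centroid (an uneven split means the iris has shifted).
    """
    sclera = eye_patch >= sclera_thresh          # bright white region
    iris = eye_patch <= iris_thresh              # dark iris/pupil region
    ys, xs = np.nonzero(iris)
    if xs.size == 0:
        return "unknown"                         # no iris found in the patch
    cx = int(xs.mean())                          # iris centroid column
    left_area = int(sclera[:, :cx].sum())        # sclera left of the iris
    right_area = int(sclera[:, cx:].sum())       # sclera right of the iris
    total = left_area + right_area
    if total == 0:
        return "unknown"
    ratio = left_area / total
    if ratio > 0.65:                             # iris shifted toward image right
        return "right"
    if ratio < 0.35:                             # iris shifted toward image left
        return "left"
    return "centre"
```

In practice the patch would come from an eye detector applied to each webcam frame, and the thresholds would be set adaptively per frame rather than fixed.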
A robust sclera segmentation algorithm
Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, it has not been extensively researched as a separate topic; it is usually treated only as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. By exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. Its robustness is further enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked 1st in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.
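The two-stage structure can be sketched minimally: stage 1 produces a few simple per-pixel scores, and stage 2 combines them. The three colour features and the fixed logistic weights below are illustrative assumptions, and a single logistic unit stands in for the paper's neural network.

```python
import numpy as np

def stage1_probs(rgb):
    """Stage 1: three simple per-pixel scores that each roughly indicate
    'sclera'. rgb is a float array of shape (N, 3) with values in [0, 1].
    Returns the (N, 3) probability-like feature matrix that stage 2 uses.
    """
    brightness = rgb.mean(axis=1)                               # sclera is bright
    low_saturation = 1.0 - (rgb.max(axis=1) - rgb.min(axis=1))  # and nearly grey
    blueness = rgb[:, 2] / (rgb.sum(axis=1) + 1e-6)             # not reddish like skin
    return np.stack([brightness, low_saturation, blueness], axis=1)

def stage2_classify(probs, weights=np.array([4.0, 3.0, 2.0]), bias=-6.0):
    """Stage 2: a single logistic unit (a minimal stand-in for the paper's
    neural network) combining the stage-1 scores into a sclera decision."""
    z = probs @ weights + bias
    return 1.0 / (1.0 + np.exp(-z)) > 0.5
```

A real implementation would train both stages on labelled pixels and evaluate the features in several colour spaces, as the paper describes.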
Self-Organizing-Map-Based Background Subtraction for Moving Object Detection
The first step in automatic video analysis is moving object detection. Accurate moving object detection is essential for the subsequent steps of automatic video analysis, such as tracking the detected objects and then analysing them. Background subtraction is a common approach to moving object detection. Its common problems are illumination changes, object shadows, and dynamic backgrounds such as waving trees. A Self-Organizing Map (SOM) algorithm is applied in background subtraction to handle these problems. Median filtering and a morphological operation are added after the background subtraction process to further improve the accuracy of the detection. Applying SOM, median filtering, and morphological operations to background subtraction improves object detection accuracy, yielding an MSE of 1463.73 and a PSNR of 17.035, compared with an MSE of 4268.50 and a PSNR of 12.018 for alpha-based background subtraction.
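The pipeline shape (subtract background, then clean the mask with a median-style filter) can be sketched as below. This uses a single static background frame and a 3x3 binary median pass as illustrative stand-ins; the paper's actual background model is a SOM, and its post-processing also includes morphological operations.

```python
import numpy as np

def median3(mask):
    """3x3 median (majority) filter on a binary mask, implemented with
    padded shifts; removes isolated salt-and-pepper detections."""
    h, w = mask.shape
    p = np.pad(mask.astype(int), 1)
    acc = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return acc >= 5                              # median of 9 binary values

def detect_moving(frame, background, thresh=25):
    """Classic background subtraction: pixels far from the background
    model are foreground; a median pass then cleans the raw mask."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    raw = diff > thresh
    return median3(raw)
```

The median pass is what suppresses single-pixel noise that a plain threshold on the difference image would let through, which is the role the abstract assigns to its median filtering stage.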
Development Of Eye Gaze Estimation System Using Two Cameras
Eye gaze is the direction in which a person is looking. It is well suited as a natural Human-Computer Interface (HCI). Much current research uses infrared illumination or LEDs to locate the user's iris, achieving better gaze estimation accuracy than approaches that do not. However, infrared light and LEDs are intrusive to the human eye and might damage the cornea and the retina. This research proposes a non-intrusive approach to locating the user's iris: by using two remote cameras to capture images of the user, a more accurate gaze estimation system can be achieved. The system uses Haar cascade algorithms to detect the face and eye regions. Iris detection uses the Hough Circle Transform to locate the position of the iris, which is critical for the gaze estimation calculation. To track the user's eyes and irises in real time, the system uses CAMShift (Continuously Adaptive Meanshift). The eye and iris parameters are then collected and used to calculate the user's gaze direction. The left and right cameras achieve 70.00% and 74.67% accuracy respectively; when both cameras are used to estimate the gaze direction, 88.67% accuracy is achieved. This shows that using two cameras improves the accuracy of gaze estimation.
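The abstract does not state how the two cameras' estimates are combined. One plausible sketch, assuming each camera reports a horizontal gaze angle plus a detection confidence (e.g. from the Hough circle vote), is a confidence-weighted average:

```python
def fuse_gaze(angle_left, conf_left, angle_right, conf_right):
    """Confidence-weighted fusion of two per-camera gaze angles (degrees).

    A camera whose iris detection was weak (low confidence) contributes
    proportionally less to the fused estimate; this is an illustrative
    fusion rule, not necessarily the one used in the paper.
    """
    total = conf_left + conf_right
    if total == 0:
        raise ValueError("no usable estimate from either camera")
    return (angle_left * conf_left + angle_right * conf_right) / total
```

Fusion of this kind explains qualitatively why two cameras can beat either one alone: when one camera's iris fit is poor (oblique view, partial occlusion), the other camera's estimate dominates.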
Human-Centric Machine Vision
Recently, algorithms for processing visual information have evolved greatly, providing efficient and effective solutions that cope with the variability and complexity of real-world environments. These achievements have led to the development of machine vision systems that go beyond typical industrial applications, where environments are controlled and tasks are very specific, towards innovative solutions for people's everyday needs. Human-Centric Machine Vision can help solve problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. Such applications must handle changing, unpredictable, and complex situations while taking the presence of humans into account.