High-throughput data analysis in behavior genetics
In recent years, a growing need has arisen in different fields for the
development of computational systems for automated analysis of large amounts of
data (high-throughput). Dealing with nonstandard noise structure and outliers,
that could have been detected and corrected in manual analysis, must now be
built into the system with the aid of robust methods. We discuss such problems
and present insights and solutions in the context of behavior genetics, where
data consists of a time series of locations of a mouse in a circular arena. In
order to estimate the location, velocity and acceleration of the mouse, and
identify stops, we use a nonstandard mix of robust and resistant methods:
LOWESS and repeated running median. In addition, we argue that protection
against small deviations from experimental protocols can be handled
automatically using statistical methods. In our case, it is of biological
interest to measure a rodent's distance from the arena's wall, but this measure
is corrupted if the arena is not a perfect circle, as required in the protocol.
The problem is addressed by estimating robustly the actual boundary of the
arena and its center using a nonparametric regression quantile of the
behavioral data, with the aid of a fast algorithm developed for that purpose.
Comment: Published at http://dx.doi.org/10.1214/09-AOAS304 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
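As a rough illustration of the resistant-smoothing idea named in the abstract (not the paper's actual implementation; the window size, convergence test and toy trajectory below are assumptions), a repeated running median flattens isolated tracking spikes while preserving genuine level shifts such as stops:

```python
import numpy as np

def running_median(x, window=5):
    """One pass of a centered running median (edge values kept as-is)."""
    half = window // 2
    out = x.copy()
    for i in range(half, len(x) - half):
        out[i] = np.median(x[i - half:i + half + 1])
    return out

def repeated_running_median(x, window=5, max_iter=50):
    """Apply the running median until the series stops changing: a
    resistant smoother that removes outlier spikes while preserving
    level shifts (e.g. a mouse stopping, then moving) in the track."""
    smoothed = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        new = running_median(smoothed, window)
        if np.allclose(new, smoothed):
            break
        smoothed = new
    return smoothed

# Toy mouse x-coordinate track with tracking-glitch outliers.
t = np.linspace(0, 1, 101)
track = np.where(t < 0.5, 0.0, 10.0)   # a "stop", then a move
track[[20, 60]] += 50.0                # spurious detector spikes
clean = repeated_running_median(track, window=5)
```

In a full pipeline such a resistant pass would typically precede a smoother such as LOWESS, which then supplies differentiable estimates of velocity and acceleration.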
Eye centre localisation: An unsupervised modular approach
© Emerald Group Publishing Limited.
Purpose - This paper aims to introduce an unsupervised modular approach for eye centre localisation in images and videos, following a coarse-to-fine, global-to-regional scheme. The algorithm is designed for excellent accuracy, robustness and real-time performance in real-world applications.
Design/methodology/approach - A modular approach has been designed that makes use of isophote and gradient features to estimate eye centre locations. This approach comprises two main modalities that progressively reduce global facial features to local levels for more precise inspection. A novel selective oriented gradient (SOG) filter has been specifically designed to remove strong gradients from eyebrows, eye corners and self-shadows, which sabotage most eye centre localisation methods. The proposed algorithm, tested on the BioID database, has shown superior accuracy.
Findings - The eye centre localisation algorithm has been compared with 11 other methods on the BioID database and six other methods on the GI4E database. The proposed algorithm outperformed all the others in terms of localisation accuracy while exhibiting excellent real-time performance. The method is also inherently robust against head poses, partial eye occlusions and shadows.
Originality/value - The eye centre localisation method combines two mutually complementary modalities into a novel, fast, accurate and robust approach. Beyond eye centre localisation, the SOG filter can resolve general tasks regarding the detection of curved shapes. From an applied point of view, the proposed method has great potential to benefit a wide range of real-world human-computer interaction (HCI) applications.
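The paper's algorithm itself is not reproduced here, but the general gradient-voting idea behind such eye centre localisers can be sketched without the isophote features or the SOG filter; the synthetic image, threshold, and scoring scheme below are illustrative assumptions only. Each candidate centre is scored by how well the image gradients at strong edges align with the direction to that candidate (for a dark iris on a bright sclera, gradients point radially outward from the centre):

```python
import numpy as np

def gradient_eye_center(img):
    """Score every pixel as a candidate eye centre by the mean squared
    dot product between the unit displacement to each strong-gradient
    pixel and that pixel's unit gradient (a simplified gradient-voting
    scheme, not the SOG-filtered method of the paper)."""
    gy, gx = np.gradient(img.astype(float))     # axis 0 = rows (y)
    mag = np.hypot(gx, gy)
    ys, xs = np.where(mag > 0.3 * mag.max())    # keep strong gradients only
    gxn = gx[ys, xs] / mag[ys, xs]
    gyn = gy[ys, xs] / mag[ys, xs]
    h, w = img.shape
    score = np.zeros((h, w))
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0               # a pixel casts no vote for itself
            score[cy, cx] = np.mean((dx / norm * gxn + dy / norm * gyn) ** 2)
    return np.unravel_index(np.argmax(score), score.shape)  # (row, col)

# Synthetic eye patch: bright background with a dark circular "iris".
h = w = 31
yy, xx = np.mgrid[0:h, 0:w]
img = np.where((yy - 15) ** 2 + (xx - 15) ** 2 <= 64, 0.0, 1.0)
center = gradient_eye_center(img)
```

The SOG filter in the paper exists precisely because, on real faces, eyebrows, eye corners and self-shadows also produce strong gradients and would corrupt this kind of vote.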
Direction Estimation Model for Gaze Controlled Systems
Gaze detection requires estimating the position of, and the relation between, the user's pupil and the glint. This position is mapped into the region of interest using different edge detectors, by detecting the glint coordinates and from them the gaze direction. In this paper, a Gaze Direction Estimation (GDE) model is proposed for a comparative analysis of two standard edge detectors, Canny and Sobel, for automatic detection of the glint, its coordinates and subsequently the gaze direction. The results indicate a fairly good percentage of cases in which the correct glint coordinates, and subsequently the correct gaze-direction quadrants, have been estimated. These results can further be used to improve the accuracy and performance of eye-gaze-based systems.
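The GDE model's internals and data are not given here, but the Sobel step it compares can be sketched: the glint is a small, bright corneal reflection whose rim produces the sharpest gradients. The image, minimal convolution, and quantile threshold below are illustrative assumptions (Canny would add Gaussian smoothing, non-maximum suppression and hysteresis on top of these same gradients):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, k):
    """Tiny 'valid' 2-D correlation for 3x3 kernels (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def glint_centroid(img, edge_quantile=0.99):
    """Locate the glint as the centroid of the strongest Sobel edge
    responses; the bright reflection's rim dominates the gradient map."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag >= np.quantile(mag, edge_quantile))
    # +1 shifts 'valid'-convolution coordinates back to image coordinates.
    return ys.mean() + 1, xs.mean() + 1

# Synthetic eye image: dim background with a small bright glint patch.
img = np.full((40, 40), 0.2)
img[18:22, 24:28] = 1.0            # 4x4 glint centred at (19.5, 25.5)
cy, cx = glint_centroid(img)
```

Mapping the recovered glint coordinates to a gaze-direction quadrant is then a matter of comparing them against a calibrated pupil position, which is the part the GDE model evaluates.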
Eye center localization and gaze gesture recognition for human-computer interaction
© 2016 Optical Society of America. This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, following a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database; it outperforms all of them in terms of localization accuracy. Further tests on the extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has demonstrated outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
An easy iris center detection method for eye gaze tracking system
Iris center detection accuracy has a great impact on eye gaze tracking system performance. This paper proposes an easy and efficient iris center detection method based on modeling the geometric relationship between the detected rough iris center and the two corners of the eye. The method considers four states of the iris within the eye region: center, left, right, and upper. The proposed active edge detection algorithm is used to extract iris edge points for ellipse fitting. This paper also presents a predicted edge point algorithm to compensate for the decrease in ellipse fitting accuracy when part of the iris is hidden by rolling into a nasal or temporal eye corner. Evaluation of the method on our eye database shows a global average accuracy of 94.3%. Compared with existing methods, ours achieves the highest iris center detection accuracy. Additionally, to test the performance of the proposed method in gaze tracking, this paper presents gaze estimation results achieved by our eye gaze tracking system.
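The paper's active edge detection and predicted edge point algorithms are not detailed here, but the ellipse-fitting step they feed can be illustrated with a plain algebraic conic fit (a simplified stand-in; production iris trackers typically use the constrained Fitzgibbon fit, and the sample points below are synthetic):

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Fit a general conic A x^2 + B xy + C y^2 + D x + E y + F = 0 to
    edge points by least squares (smallest singular vector of the design
    matrix) and return the conic's center."""
    M = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, vt = np.linalg.svd(M)
    A, B, C, D, E, F = vt[-1]            # coefficients up to scale
    # The center is where both partial derivatives of the conic vanish.
    cx, cy = np.linalg.solve([[2*A, B], [B, 2*C]], [-D, -E])
    return cx, cy

# Noisy sample points on an ellipse centered at (12, 8).
rng = np.random.default_rng(0)
t = np.linspace(0, 2*np.pi, 60, endpoint=False)
xs = 12 + 5*np.cos(t) + rng.normal(0, 0.02, t.size)
ys = 8 + 3*np.sin(t) + rng.normal(0, 0.02, t.size)
cx, cy = fit_ellipse_center(xs, ys)
```

The predicted-edge-point idea addresses exactly the failure mode of such fits: when the visible iris arc shrinks (eye rolled into a corner), the fit becomes ill-conditioned unless the hidden portion of the boundary is extrapolated.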
Unobtrusive and pervasive video-based eye-gaze tracking
Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim to identify different research avenues that are being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.
Can eye-tracking technology improve situational awareness in paramedic clinical education?
Human factors play a significant part in clinical error. Situational awareness (SA) means being aware of one's surroundings, comprehending the present situation, and being able to predict outcomes. It is a key human skill that, when properly applied, is associated with reduced medical error. Eye-tracking technology can be used to provide an objective and qualitative measure of the initial perception component of SA, and its feedback can be used to improve the understanding and teaching of SA in clinical contexts; consequently, it has potential for reducing clinician error and the concomitant adverse events.
Highly accurate and fully automatic 3D head pose estimation and eye gaze estimation using RGB-D sensors and 3D morphable models
The research presented in the paper was funded by grant F506-FSA of the Auto21 Networks of Centers of Excellence Program of Canada. This work addresses the problem of automatic head pose estimation and its application in 3D gaze estimation using low-quality RGB-D sensors, without any subject cooperation or manual intervention. Previous works on 3D head pose estimation using RGB-D sensors require either an offline step for supervised learning or 3D head model construction, which may require manual intervention or subject cooperation for complete head model reconstruction. In this paper, we propose a 3D pose estimator based on low-quality depth data that is not limited by any of the aforementioned steps. Instead, the proposed technique relies on modeling the subject's face in 3D rather than the complete head, which, in turn, relaxes all of the constraints of the previous works. The proposed method is robust, highly accurate, fully automatic, and needs no offline step. Unlike some of the previous works, the method only uses depth data for pose estimation. The experimental results on the Biwi head pose database confirm the efficiency of our algorithm in handling large pose variations and partial occlusion. We also evaluated the performance of our algorithm on the IDIAP database for 3D head pose and eye gaze estimation.
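The paper's full pipeline is not reproduced here, but the core of depth-based pose estimation, rigidly aligning observed 3-D face points to a reference model, can be sketched with the standard Kabsch/SVD alignment (an illustrative stand-in, not the authors' method; the landmark coordinates below are invented):

```python
import numpy as np

def kabsch_rotation(model, observed):
    """Least-squares rotation (Kabsch/SVD) mapping centered model
    points onto centered observed points, i.e. the head-pose rotation."""
    P = model - model.mean(axis=0)
    Q = observed - observed.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Toy 3-D "face" landmarks rotated by a known 30-degree yaw.
yaw = np.deg2rad(30)
R_true = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0,           1, 0          ],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
model = np.array([[0., 0., 0.], [3., 0., 1.], [-3., 0., 1.],
                  [0., 2., 2.], [0., -4., 1.]])
observed = model @ R_true.T
R_est = kabsch_rotation(model, observed)
```

With noisy depth data the alignment is usually run inside an ICP loop that re-estimates point correspondences between iterations; restricting the model to the face rather than the whole head, as the paper proposes, reduces the surface that must be matched.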