
    End-to-End Eye Movement Detection Using Convolutional Neural Networks

    Common computational methods for automated eye movement detection - i.e. the task of detecting different types of eye movement in a continuous stream of gaze data - are limited in that they either involve thresholding on hand-crafted signal features, require individual detectors that each detect only a single movement type, or require pre-segmented data. We propose a novel approach for eye movement detection that involves learning only a single detector end-to-end, i.e. directly from the continuous gaze data stream and simultaneously for different eye movements, without any manual feature crafting or segmentation. Our method is based on convolutional neural networks (CNNs), which have recently demonstrated superior performance in a variety of tasks in computer vision, signal processing, and machine learning. We further introduce a novel multi-participant dataset that contains scripted and free-viewing sequences of ground-truth annotated saccades, fixations, and smooth pursuits. We show that our CNN-based method outperforms state-of-the-art baselines by a large margin on this challenging dataset, thereby underlining the significant potential of this approach for holistic, robust, and accurate eye movement protocol analysis.
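    The abstract does not give the network details, so the following is only a minimal sketch of the general idea: a fully convolutional 1D network (PyTorch assumed) that maps a continuous two-channel gaze stream to per-sample logits over fixation, saccade, and smooth pursuit. The GazeCNN name and all layer sizes are hypothetical, not the paper's architecture.

    import torch
    import torch.nn as nn

    class GazeCNN(nn.Module):
        """Per-sample classifier over a continuous gaze stream
        (0 = fixation, 1 = saccade, 2 = smooth pursuit). Hypothetical layout."""
        def __init__(self, in_channels=2, n_classes=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=11, padding=5),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=11, padding=5),
                nn.ReLU(),
                nn.Conv1d(64, n_classes, kernel_size=1),  # per-sample logits
            )

        def forward(self, x):      # x: (batch, 2, T) gaze x/y over time
            return self.net(x)     # (batch, n_classes, T)

    # One label per gaze sample, with no pre-segmentation required.
    model = GazeCNN()
    stream = torch.randn(1, 2, 1000)          # 1000-sample gaze window
    labels = model(stream).argmax(dim=1)      # shape (1, 1000)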

    Gaze Embeddings for Zero-Shot Image Classification


    GazeDirector: Fully articulated eye gaze redirection in video

    We present GazeDirector, a new approach for eye gaze redirection based on model fitting. Our method first tracks the eyes by fitting a multi-part eye region model to video frames using analysis-by-synthesis, thereby recovering eye region shape, texture, pose, and gaze simultaneously. It then redirects gaze by 1) warping the eyelids from the original image using a model-derived flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the output image in a photorealistic manner. GazeDirector allows us to change where people are looking without person-specific training data and with full articulation, i.e. we can precisely specify new gaze directions in 3D. Quantitatively, we evaluate both model fitting and gaze synthesis, with experiments on gaze estimation and redirection on the Columbia gaze dataset. Qualitatively, we compare GazeDirector against recent work on gaze redirection, showing better results especially for large redirection angles. Finally, we demonstrate gaze redirection on YouTube videos by introducing new 3D gaze targets and by manipulating visual behavior.
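    The two output steps named in the abstract, flow-field warping and photorealistic compositing, can be pictured with the assumed OpenCV/NumPy sketch below. The fitted model that produces the flow field and the rendered eyeball are taken as given, and none of these function names come from GazeDirector itself.

    import numpy as np
    import cv2

    def warp_with_flow(image, flow):
        """Warp an eye-region image by a dense flow field of shape (H, W, 2)."""
        h, w = flow.shape[:2]
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    def composite(background, eyeball_rgb, eyeball_alpha):
        """Alpha-composite a rendered 3D eyeball onto the warped frame."""
        a = eyeball_alpha[..., None].astype(np.float32)  # (H, W, 1) in [0, 1]
        return (a * eyeball_rgb + (1.0 - a) * background).astype(background.dtype)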

    Inertial Sensor Based Modelling of Human Activity Classes: Feature Extraction and Multi-sensor Data Fusion Using Machine Learning Algorithms

    Wearable inertial sensors are currently receiving pronounced interest due to applications in unconstrained daily life settings, ambulatory monitoring, and pervasive computing systems. This research focuses on the human activity recognition problem, in which the inputs are multichannel time series signals acquired from a set of body-worn inertial sensors and the outputs are automatically classified human activities. A general-purpose framework is presented for designing and evaluating an activity recognition system covering six different activities, using machine learning algorithms such as support vector machines (SVM) and artificial neural networks (ANN). Several feature selection methods were explored to make the recognition process faster, experimenting with features extracted from the accelerometer and gyroscope time series data collected from a number of volunteers. In addition, a detailed discussion explores how different design parameters, for example the number of features and data fusion from multiple sensor locations, impact overall recognition performance.
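    As an illustration of the described pipeline, the sketch below windows multichannel inertial signals, extracts simple statistical features per window, and trains an SVM (scikit-learn). The window length, the feature set, and the synthetic data are assumptions, not the study's actual configuration.

    import numpy as np
    from sklearn.svm import SVC

    def window_features(signal, win=128, step=64):
        """signal: (T, channels) inertial data -> (n_windows, n_features)."""
        feats = []
        for start in range(0, len(signal) - win + 1, step):
            w = signal[start:start + win]
            # Per-channel mean, std, min, max over the window.
            feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
        return np.array(feats)

    rng = np.random.default_rng(0)
    acc_gyro = rng.normal(size=(1024, 6))     # placeholder 3-axis accel + 3-axis gyro
    X = window_features(acc_gyro)
    y = rng.integers(0, 6, size=len(X))       # placeholder labels, six activities
    clf = SVC(kernel="rbf").fit(X, y)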

    Seeking Optimum System Settings for Physical Activity Recognition on Smartwatches

    Physical activity recognition (PAR) using wearable devices can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this regard, smartphone-based physical activity recognition is a well-studied area. Research on smartwatch-based PAR, on the other hand, is still in its infancy. Through a large-scale exploratory study, this work aims to investigate the smartwatch-based PAR domain. A detailed analysis of various feature banks and classification methods is carried out to find the optimum system settings for the best performance of any smartwatch-based PAR system, for both personal models (where the classifier is built using data only from one specific user) and impersonal models (where the classifier is built using data from every user except the one under study). To further validate our hypothesis for both model types, we tested a single-subject validation process for smartwatch-based activity recognition.
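    The personal/impersonal distinction maps naturally onto standard cross-validation schemes. A sketch under assumed data (scikit-learn, with a placeholder classifier) might look as follows, where the impersonal model is evaluated leave-one-subject-out.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(1)
    X = rng.random((600, 20))                    # placeholder feature vectors
    y = rng.integers(0, 6, 600)                  # placeholder activity labels
    subjects = np.repeat(np.arange(10), 60)      # subject ID per sample

    # Impersonal model: train on every user except the one under study.
    impersonal = cross_val_score(RandomForestClassifier(), X, y,
                                 groups=subjects, cv=LeaveOneGroupOut())

    # Personal model: build and test the classifier on one user's data only.
    mask = subjects == 0
    personal = cross_val_score(RandomForestClassifier(), X[mask], y[mask], cv=5)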

    Confidentiality and Mental Health/Chaplaincy Collaboration

    Confidentiality can both facilitate and inhibit the working relationships of chaplains and mental health professionals addressing the needs of service members and veterans in the United States. Researchers conducted this study to examine opportunities for improving integration of care within the Department of Defense (DoD) and Department of Veterans Affairs (VA). Interviews were conducted with 198 chaplains and 201 mental health professionals in 33 DoD and VA facilities. Using a blended qualitative research approach, researchers identified several themes from the interviews, including recognition that integration can improve services, that chaplaincy confidentiality can facilitate help-seeking behavior, and that mental health and chaplaincy confidentiality can inhibit information sharing and active participation on interdisciplinary teams. Cross-disciplinary training on confidentiality requirements and the development of policies for sharing information across disciplines are recommended to address barriers to integrated service delivery.

    Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography

    This manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, the algorithm calculates absolute eye gaze angle via statistical analysis of detected saccades. The eye positions estimated by the new algorithm were still noisy; however, its performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for lightweight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance allowing hearing aids to steer the directivity of their microphones in the direction of the user's eye gaze.
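    A minimal sketch of the saccade-integration idea, assuming a velocity threshold for saccade detection, an EOG-voltage-to-degrees gain, and median re-centring as the statistical re-referencing step. All constants are illustrative, not the manuscript's values.

    import numpy as np

    def gaze_from_eog(eog, fs=250.0, vel_thresh=50.0, gain_deg_per_uv=0.1):
        """eog: single-channel in-ear EOG (microvolts) -> gaze angle (degrees)."""
        velocity = np.gradient(eog) * fs * gain_deg_per_uv   # deg/s
        angle = np.zeros_like(eog)
        current = 0.0
        for i in range(1, len(eog)):
            if abs(velocity[i]) > vel_thresh:                # inside a saccade
                current += velocity[i] / fs                  # integrate its amplitude
            angle[i] = current                               # hold between saccades
        # Statistical re-referencing: assume gaze is straight ahead on average.
        return angle - np.median(angle)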

    Rendering of eyes for eye-shape registration and gaze estimation

    Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by outperforming state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.
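    The randomized posing step can be pictured as a simple parameter sampler feeding the renderer. Everything below, the parameter ranges and the render_eye_region placeholder, is hypothetical and only illustrates that the label of each synthetic image is known exactly by construction.

    import numpy as np

    rng = np.random.default_rng(42)

    def sample_scene():
        """Draw one random scene configuration for the graphics pipeline."""
        return {
            "head_pose":    rng.uniform(-40, 40, size=3),   # yaw/pitch/roll (deg)
            "gaze":         rng.uniform(-25, 25, size=2),   # yaw/pitch (deg)
            "illumination": int(rng.integers(0, 100)),      # environment-map index
        }

    dataset = []
    for _ in range(10):
        params = sample_scene()
        # image = render_eye_region(**params)   # graphics step, not shown here
        dataset.append(params)                  # ground-truth labels are exact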