    An Extensive Study of User Identification via Eye Movements across Multiple Datasets

    Several studies have reported that biometric identification based on eye movement characteristics can be used for authentication. This paper provides an extensive study of user identification via eye movements across multiple datasets based on an improved version of the method originally proposed by George and Routray. We analyzed our method with respect to several factors that affect the identification accuracy, such as the type of stimulus, the IVT parameters (used for segmenting the trajectories into fixations and saccades), adding new features such as higher-order derivatives of eye movements, the inclusion of blink information, template aging, age and gender. We find that three measures, namely selecting optimal IVT parameters, adding higher-order derivative features and including an additional blink classifier, have a positive impact on the identification accuracy. The improvements range from a few percentage points up to an impressive 9% increase on one of the datasets. Comment: 11 pages, 5 figures, submitted to Signal Processing: Image Communication.
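    The paper's exact IVT settings and feature set are not given here, but the segmentation step it tunes is the standard velocity-threshold (I-VT) rule. Below is a minimal sketch of that rule, assuming uniformly sampled 2D gaze positions in degrees of visual angle; the function name, the 30 deg/s default threshold and the synthetic example are illustrative choices, not values from the study.

```python
import numpy as np

def ivt_segment(x, y, sampling_rate_hz, velocity_threshold_deg_s=30.0):
    """Label each gaze sample as fixation (0) or saccade (1) with a simple
    velocity-threshold (I-VT) rule.

    x, y are gaze positions in degrees of visual angle, sampled uniformly.
    The 30 deg/s default is a commonly cited starting point, not the
    threshold used in the paper.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Point-to-point angular velocity in deg/s, padded so the label array
    # matches the number of samples.
    velocity = np.hypot(np.diff(x), np.diff(y)) * sampling_rate_hz
    velocity = np.concatenate(([0.0], velocity))
    labels = (velocity > velocity_threshold_deg_s).astype(int)
    return labels, velocity

# Example: synthetic signal with a fixation, a saccade, and another fixation.
if __name__ == "__main__":
    fs = 250  # Hz
    x = np.concatenate([np.full(100, 1.0), np.linspace(1.0, 8.0, 10), np.full(100, 8.0)])
    y = np.zeros_like(x)
    labels, vel = ivt_segment(x, y, fs)
    print("saccade samples:", labels.sum())
```

    The higher-order derivative features mentioned in the abstract could then be obtained by further differencing of the velocity signal (acceleration, jerk), computed per fixation or saccade segment.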

    Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input

    Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost as a novel classifier, achieving high accuracy in classifying gaze-driven versus gestural head movement (F1-Score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gesture for refinement. The classification of Head-Gaze and Head Gesture allows for seamless head-based interaction while avoiding false activation.
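    The abstract does not describe HeadBoost's features or model, so the following is only a hedged sketch of the general idea: summarise short windows of head and eye velocity into features (gaze-coupled head movement tends to be accompanied by compensatory eye movement, gestures do not) and train an off-the-shelf classifier. The feature set, the scikit-learn GradientBoostingClassifier and the random placeholder data are assumptions for illustration, not the published method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def window_features(head_vel, gaze_vel):
    """Summarise one time window of head and gaze angular velocities.

    The correlation term captures the compensatory eye movement expected
    during gaze-coupled head movement; feature choice is illustrative only.
    """
    corr = np.corrcoef(head_vel, gaze_vel)[0, 1] if len(head_vel) > 1 else 0.0
    return np.array([
        np.mean(np.abs(head_vel)),
        np.std(head_vel),
        np.max(np.abs(head_vel)),
        corr,
    ])

# X: one feature row per window; y: 0 = head-gaze, 1 = head gesture.
# Random placeholder data stands in for real labelled recordings.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(size=60), rng.normal(size=60)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict(X[:5]))
```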

    Vision-Based Eye Image Classification for Ophthalmic Measurement Systems

    The accuracy and the overall performance of ophthalmic instrumentation that involves specific analysis of eye images can be negatively influenced by invalid or incorrect frames acquired during everyday measurements of unaware or uncooperative patients by non-technical operators. Therefore, in this paper, we investigate and compare the adoption of several vision-based classification algorithms belonging to different fields, i.e., Machine Learning, Deep Learning, and Expert Systems, in order to improve the performance of an ophthalmic instrument designed for Pupillary Light Reflex measurement. To test the implemented solutions, we collected and publicly released PopEYE, one of the first datasets of its kind, consisting of 15k eye images belonging to 22 different subjects acquired through the aforementioned specialized ophthalmic device. Finally, we discuss the experimental results in terms of classification accuracy of the eye status, as well as computational load, since the proposed solution is designed to run on embedded boards, which have limited computational power and memory.
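    The paper compares Machine Learning, Deep Learning and Expert System classifiers; only the deep-learning branch is sketched below, as a small Keras CNN sized with embedded deployment in mind. The input resolution, the three-way open/closed/invalid label set and all layer sizes are assumptions rather than details taken from the paper, and loading of the PopEYE images is not shown.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # e.g. open / closed / invalid frame; the label set is assumed

def build_eye_status_cnn(input_shape=(64, 64, 1)):
    """Small CNN sized for embedded inference; layer sizes are illustrative."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_eye_status_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

    Keeping the network this small is one way to meet the computational-load constraints the abstract mentions; the paper itself weighs such models against classical and expert-system alternatives.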

    Radi-Eye: Hands-Free Radial Interfaces for 3D Interaction using Gaze-Activated Head-Crossing

    Eye gaze and head movement are attractive for hands-free 3D interaction in head-mounted displays, but existing interfaces afford only limited control. Radi-Eye is a novel pop-up radial interface designed to maximise expressiveness with input from only the eyes and head. Radi-Eye provides widgets for discrete and continuous input and scales to support larger feature sets. Widgets can be selected with Look & Cross, using gaze for pre-selection followed by head-crossing as trigger and for manipulation. The technique leverages natural eye-head coordination where eye and head move at an offset unless explicitly brought into alignment, enabling interaction without risk of unintended input. We explore Radi-Eye in three augmented and virtual reality applications, and evaluate the effect of radial interface scale and orientation on performance with Look & Cross. The results show that Radi-Eye provides users with fast and accurate input while opening up a new design space for hands-free fluid interaction.
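    As a rough illustration of the Look & Cross idea (gaze pre-selects a widget, a subsequent head-pointer crossing into it triggers it), here is a minimal 2D sketch with circular widgets. The class and method names, the circle geometry and the coordinate examples are all assumptions; the actual Radi-Eye widgets are radial interface elements in a 3D head-mounted display.

```python
from dataclasses import dataclass
import math

@dataclass
class Widget:
    x: float
    y: float
    radius: float

def inside(widget, px, py):
    return math.hypot(px - widget.x, py - widget.y) <= widget.radius

class LookAndCross:
    """Gaze pre-selects a widget; a head-pointer crossing into it triggers it.

    A 2D simplification of the gaze-activated head-crossing described in the paper.
    """
    def __init__(self, widgets):
        self.widgets = widgets
        self.preselected = None
        self.head_was_inside = False

    def update(self, gaze, head):
        gx, gy = gaze
        hx, hy = head
        # Pre-selection follows gaze.
        current = next((w for w in self.widgets if inside(w, gx, gy)), None)
        if current is not self.preselected:
            self.preselected = current
            self.head_was_inside = current is not None and inside(current, hx, hy)
            return None
        if self.preselected is None:
            return None
        head_inside = inside(self.preselected, hx, hy)
        triggered = head_inside and not self.head_was_inside  # outside -> inside crossing
        self.head_was_inside = head_inside
        return self.preselected if triggered else None

ui = LookAndCross([Widget(0.0, 0.0, 1.0)])
print(ui.update((0.2, 0.1), (3.0, 0.0)))   # gaze pre-selects, head still outside -> None
print(ui.update((0.2, 0.1), (0.5, 0.0)))   # head crosses in -> widget triggered
```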

    An investigation into gaze-based interaction techniques for people with motor impairments

    The use of eye movements to interact with computers offers opportunities for people with impaired motor ability to overcome the difficulties they often face using hand-held input devices. Computer games have become a major form of entertainment, and also provide opportunities for social interaction in multi-player environments. Games are also being used increasingly in education to motivate and engage young people. It is important that young people with motor impairments are able to benefit from, and enjoy, them. This thesis describes a program of research conducted over a 20-year period starting in the early 1990s that has investigated interaction techniques based on gaze position intended for use by people with motor impairments. The work investigates how to make standard software applications accessible by gaze, so that no particular modification to the application is needed. The work divides into three phases. In the first phase, ways of using gaze to interact with the graphical user interfaces of office applications were investigated, designed around the limitations of gaze interaction. Of these limitations, overcoming the inherent inaccuracy of pointing by gaze at on-screen targets was particularly important. In the second phase, the focus shifted from office applications towards immersive games and on-line virtual worlds. Different means of using gaze position and patterns of eye movements, or gaze gestures, to issue commands were studied. Most of the testing and evaluation studies in this phase, like the first, used participants without motor impairments. The third phase then studied the applicability of the research findings thus far to groups of people with motor impairments, and, in particular, the means of adapting the interaction techniques to individual abilities. In summary, the research has shown that collections of specialised gaze-based interaction techniques can be built as an effective means of completing tasks in specific types of games, and how these techniques can be adapted to the differing abilities of individuals with motor impairments.

    Automatic Gaze Classification for Aviators: Using Multi-task Convolutional Networks as a Proxy for Flight Instructor Observation

    In this work, we investigate how flight instructors observe aviator scan patterns and assign quality to an aviator's gaze. We first establish that instructors reliably assign similar quality ratings to an aviator's scan patterns, and then investigate methods to automate this quality assessment using machine learning. In particular, we focus on the classification of gaze for aviators in a mixed-reality flight simulation. We create and evaluate two machine learning models for classifying the gaze quality of aviators: a task-agnostic model and a multi-task model. Both models use deep convolutional neural networks to classify the quality of pilot gaze patterns for 40 pilots, operators, and novices, as compared to visual inspection by three experienced flight instructors. Our multi-task model can automate the process of gaze inspection with an average accuracy of over 93.0% across three separate flight tasks. Our approach could assist existing flight instructors in providing feedback to learners, or it could open the door to more automated feedback for pilots learning to carry out different maneuvers.
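    The abstract only states that both models are deep convolutional networks and that the multi-task variant covers three flight tasks. The sketch below shows one plausible multi-task layout in Keras: a shared convolutional trunk over an image-like encoding of the scan pattern, with one quality-classification head per flight task. The input encoding, the layer sizes, the two-class quality labels and the loss choice are assumptions, not details from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_TASKS = 3            # three flight tasks mentioned in the abstract
NUM_QUALITY_CLASSES = 2  # e.g. acceptable vs. poor scan; label granularity is assumed

def build_multitask_gaze_model(input_shape=(128, 128, 1)):
    """Shared CNN trunk with one quality head per flight task (illustrative sizes)."""
    inputs = layers.Input(shape=input_shape)   # e.g. a rasterised scanpath image
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = [
        layers.Dense(NUM_QUALITY_CLASSES, activation="softmax", name=f"task_{i}")(x)
        for i in range(NUM_TASKS)
    ]
    return Model(inputs, outputs)

model = build_multitask_gaze_model()
model.compile(optimizer="adam",
              loss=["sparse_categorical_crossentropy"] * NUM_TASKS)
model.summary()
```

    A task-agnostic baseline, as also evaluated in the paper, would simply use a single head and ignore which flight task produced the scan pattern.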

    Interaktionstechniken für mobile Augmented-Reality-Anwendungen basierend auf Blick- und Handbewegungen

    Intuitive interaction techniques are essential for mobile augmented reality systems. For implicit interaction, this work presents techniques for automatic eye movement analysis and visualization. In the context of explicit interaction, a fusion of optical flow, skin color segmentation, and a hand pose estimator is presented along with a tracking method for localization and pose estimation of a hand in monocular color images.

    Interaktionstechniken für mobile Augmented-Reality-Anwendungen basierend auf Blick- und Handbewegungen

    Visual augmented reality has the potential to fundamentally change the way humans communicate with machines. A basic prerequisite is comfortable-to-wear binocular AR glasses with a large field of view for high-contrast visual overlays, so that virtual elements can be rendered and perceived as part of the real environment. At the same time, such AR systems require intuitive interaction with their users in order to be accepted. Alongside speech, gaze and hand gestures are the interaction techniques of choice for interacting with virtual elements. This thesis addresses the analysis of gaze for implicit, unconscious interaction and the detection of hand gestures for explicit interaction in mobile applications. It presents one of the first methods for fully automatic, real-time gaze analysis in three-dimensional environments, demonstrated on an example from a museum context. To this end, a 3D gaze-point computation and a real-time analysis of 3D scanpaths built on top of it were implemented, since this was not possible with other eye trackers and their accompanying software. In addition, the Projected Gaussians method for visualizing three-dimensional gaze behavior is introduced, which generates realistic heatmap visualizations in three-dimensional environments in real time. This method is the only one that projects the visual acuity of the human gaze into the scene and thereby stays close to the physical process of perception; no previously presented method accounted for occlusions or allowed surfaces to be colored independently of their polygon structure. Both the fully automatic gaze analysis and Projected Gaussians are applied to real gaze data in an example, and the results of this analysis are presented. For explicit interaction with the hands, this work addresses the first step of hand gesture recognition in monocular color images: hand region detection, i.e. determining the region of the hand in a camera image. The developed methods fuse optical flow and skin color segmentation in different ways, and additionally use object classifiers and hand pose estimators for an optimized hand region detection, which is then fused with a publicly available 2D hand pose estimator. On the public EgoDexter dataset, this fusion outperforms the state of the art in hand pose estimation for 2D pose estimation at small permitted deviations, even though the competing methods produce their estimates in three-dimensional space despite monocular input data. The results reveal a deficit of current 3D hand pose estimators for monocular input images in reusing previous hand pose estimates. The hand region detection method presented here can be combined with any hand pose estimator.
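    For the hand-region step, the thesis fuses optical flow and skin color segmentation in several ways; the exact fusion schemes are not reproduced here. The OpenCV sketch below is one simple variant under assumed choices: a YCrCb skin color range, Farneback dense optical flow thresholded on magnitude, an AND combination of the two masks, and the largest resulting contour as the hand region. All thresholds and the fusion rule are illustrative, not the thesis's parameters.

```python
import cv2
import numpy as np

def hand_region(prev_bgr, curr_bgr,
                skin_lo=(0, 133, 77), skin_hi=(255, 173, 127),
                flow_thresh=1.0):
    """Return a bounding box (x, y, w, h) for the most likely hand region,
    or None. Combines skin color (YCrCb range) with motion (Farneback flow);
    the thresholds and the simple AND-fusion are illustrative choices."""
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, np.array(skin_lo, np.uint8), np.array(skin_hi, np.uint8))

    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = (np.linalg.norm(flow, axis=2) > flow_thresh).astype(np.uint8) * 255

    fused = cv2.bitwise_and(skin, motion)
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```

    In the thesis, the resulting hand region feeds a 2D hand pose estimator; any estimator that accepts a cropped hand image could be plugged in after this step.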
