11 research outputs found

    Pupil responses during discrete goal-directed movements


    Ability-Based Methods for Personalized Keyboard Generation

    This study introduces an ability-based method for personalized keyboard generation, wherein an individual's own movement and human-computer interaction data are used to automatically compute a personalized virtual keyboard layout. Our approach integrates a multidirectional point-select task to characterize cursor control over time, distance, and direction. This characterization is automatically employed to develop a computationally efficient keyboard layout that prioritizes each user's movement abilities by capturing directional constraints and preferences. We evaluated our approach in a study involving 16 participants using inertial sensing and facial electromyography as an access method, resulting in significantly higher communication rates with the personalized keyboard (52.0 bits/min) than with a generically optimized keyboard (47.9 bits/min). Our results demonstrate the ability to effectively characterize an individual's movement abilities to design a personalized keyboard for improved communication. This work underscores the importance of integrating a user's motor abilities when designing virtual interfaces. Comment: 20 pages, 7 figures
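
    The abstract does not spell out the layout-generation algorithm, so the following is only a rough sketch of the general idea of prioritizing a user's movement abilities: given a hypothetical per-position movement-cost model (a stand-in for the point-select characterization) and letter frequencies, frequent letters are greedily assigned to the cheapest positions. All names, costs, and frequencies below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: greedy frequency-to-cost assignment of letters to
# key positions. The cost model and frequencies are placeholder assumptions.

# Hypothetical per-position movement costs (e.g., seconds to reach each slot,
# as might be derived from a multidirectional point-select characterization).
position_costs = {
    (row, col): 0.4 + 0.1 * (abs(row - 1) + abs(col - 4))  # placeholder cost model
    for row in range(3) for col in range(9)
}

# Approximate English letter frequencies (per 1000 characters).
letter_freq = {
    'e': 127, 't': 91, 'a': 82, 'o': 75, 'i': 70, 'n': 67, 's': 63, 'h': 61,
    'r': 60, 'd': 43, 'l': 40, 'c': 28, 'u': 28, 'm': 24, 'w': 24, 'f': 22,
    'g': 20, 'y': 20, 'p': 19, 'b': 15, 'v': 10, 'k': 8, 'j': 2, 'x': 2,
    'q': 1, 'z': 1,
}

def personalize_layout(costs, freqs):
    """Pair the most frequent letters with the cheapest (easiest) positions."""
    cheap_positions = sorted(costs, key=costs.get)            # easiest slots first
    frequent_letters = sorted(freqs, key=freqs.get, reverse=True)
    return dict(zip(frequent_letters, cheap_positions))

layout = personalize_layout(position_costs, letter_freq)
print(layout['e'], layout['z'])  # 'e' lands on a low-cost slot, 'z' on a costly one
```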

    Development and Evaluation of Facial Gesture Recognition and Head Tracking for Assistive Technologies

    Globally, the World Health Organisation estimates that there are about 1 billion people living with disabilities, and the UK has about 10 million people with neurological disabilities in particular. In extreme cases, individuals with disabilities such as Motor Neuron Disease (MND), Cerebral Palsy (CP) and Multiple Sclerosis (MS) may only be able to perform limited head movement, move their eyes or make facial gestures. The aim of this research is to investigate low-cost and reliable assistive devices using automatic gesture recognition systems that will enable the most severely disabled users to access electronic assistive technologies and communication devices, thus enabling them to communicate with friends and relatives. The research presented in this thesis is concerned with the detection of head movements, eye movements and facial gestures through the analysis of video and depth images. The proposed system, using web cameras or an RGB-D sensor coupled with computer vision and pattern recognition techniques, must be able to detect the user's movement and calibrate it to facilitate communication. The system also provides the user with the choice of sensor (web camera or RGB-D sensor) and of interaction or switching mechanism (eye blink or eyebrow movement). This ability to select according to the user's needs makes the system easier to use, because users can keep the same system as their condition changes rather than having to learn a new one. This research explores in particular the use of depth data for head-movement-based assistive devices and the usability of different gesture modalities as switching mechanisms. The proposed framework consists of a facial feature detection module, a head tracking module and a gesture recognition module. Techniques such as Haar cascades and skin detection were used to detect facial features such as the face, eyes and nose. The depth data from the RGB-D sensor was used to segment the area nearest to the sensor. Both the head tracking module and the gesture recognition module rely on the facial feature module, as it provides data such as the locations of the facial features. The head tracking module uses the facial feature data to calculate the centroid of the face, the distance to the sensor, and the locations of the eyes and nose, in order to detect head motion and translate it into pointer movement. The gesture detection module uses features such as the locations of the eyes, the location and size of the pupil, and the interocular distance to detect a blink or eyebrow movement and perform a click action. The research resulted in the creation of four assistive devices based on the combinations of sensors (web camera and RGB-D sensor) and facial gestures (blink and eyebrow movement): Webcam-Blink, Webcam-Eyebrows, Kinect-Blink and Kinect-Eyebrows. Another outcome of this research has been the creation of an evaluation framework based on Fitts' law, with a modified multi-directional task including a central location, and a dataset consisting of both colour images and depth data of people moving their heads in different directions and performing gestures such as eye blinks, eyebrow movements and mouth movements. The devices have been tested with healthy participants.
From the observed data, it was found that both Kinect-based devices have lower Movement Time and higher Index of Performance and Effective Throughput than the web camera-based devices, showing that the introduction of depth data has had a positive impact on the head tracking algorithm. The usability assessment survey suggests a significant difference in the eye fatigue experienced by participants: the blink gesture was less tiring to the eyes than the eyebrow movement gesture. The analysis of the gestures also showed that the Index of Difficulty has a large effect on the error rates of gesture detection, and that the smaller the Index of Difficulty, the higher the error rate.
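
    As a rough illustration of the kind of webcam pipeline described above (Haar-cascade face and eye detection, with the face centroid driving a pointer), the sketch below uses OpenCV's stock cascades; the cascade choices, detection parameters, and centroid-to-pointer mapping are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch: Haar-cascade face/eye detection, face centroid as a pointer cue.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                      # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cx, cy = x + w // 2, y + h // 2        # face centroid -> pointer position
        cv2.circle(frame, (cx, cy), 4, (0, 255, 0), -1)
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)   # eye regions, e.g. for blink detection
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh),
                          (255, 0, 0), 1)
    cv2.imshow("head tracking sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```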

    Noise Challenges in Monomodal Gaze Interaction

    Modern graphical user interfaces (GUIs) are designed with able-bodied users in mind. Operating these interfaces can be impossible for some users who are unable to control the conventional mouse and keyboard. An eye tracking system offers possibilities for independent use and improved quality of life via dedicated interface tools especially tailored to the users' needs (e.g., interaction, communication, e-mailing, web browsing and entertainment). Much effort has been put towards the robustness, accuracy and precision of modern eye-tracking systems, and many are available on the market. Even though gaze tracking technologies have undergone dramatic improvements over the past years, the systems are still very imprecise. This thesis deals with current challenges of mono-modal gaze interaction and aims at improving access to technology and interface control for users who are limited to the eyes only. Low-cost equipment in eye tracking contributes toward improved affordability, but potentially at the cost of introducing more noise in the system due to the lower quality of the hardware. This implies that methods of dealing with noise, and creative approaches towards getting the best out of the data stream, are much needed. The work in this thesis presents three contributions that may advance the use of low-cost mono-modal gaze tracking and research in the field:
    - An assessment of a low-cost open-source gaze tracker and two eye tracking systems through an accuracy and precision test and a performance evaluation.
    - Development and evaluation of a novel 3D typing system with high tolerance to noise, based on continuous panning and zooming.
    - Development and evaluation of novel selection tools that compensate for noisy input during small-target selections in modern GUIs.
    This thesis may be of particular interest for those working on the use of eye trackers for gaze interaction and on how to deal with reduced data quality. The work in this thesis is accompanied by several software applications developed for the research projects that can be freely downloaded from the eyeInteract appstore.
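
    For context on the accuracy and precision test mentioned above, the following is a minimal sketch of the measures commonly reported in such assessments: accuracy as the mean offset of gaze samples from a known target, and precision as the RMS of sample-to-sample dispersion. The data layout, units (degrees of visual angle), and synthetic example are assumptions, not the thesis's protocol.

```python
# Sketch of standard gaze accuracy/precision measures on synthetic data.
import numpy as np

def accuracy_and_precision(gaze_xy, target_xy):
    """gaze_xy: (n, 2) gaze samples; target_xy: (2,) fixation target, both in degrees."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    offsets = np.linalg.norm(gaze_xy - np.asarray(target_xy, dtype=float), axis=1)
    accuracy = offsets.mean()                         # mean angular offset
    step = np.diff(gaze_xy, axis=0)                   # sample-to-sample movement
    precision_rms = np.sqrt((np.linalg.norm(step, axis=1) ** 2).mean())
    return accuracy, precision_rms

# Example with synthetic noisy samples around a target at (5, 5) degrees.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[5.3, 5.1], scale=0.2, size=(300, 2))
acc, prec = accuracy_and_precision(samples, (5.0, 5.0))
print(f"accuracy ~ {acc:.2f} deg, precision (RMS) ~ {prec:.2f} deg")
```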

    Study of pupil diameter and eye movements to enhance flight safety. Etude de diamètre pupillaire et de mouvements oculaires pour la sécurité aérienne

    Most aviation accidents involve failures in monitoring or decision-making hampered by reduced arousal, stress or high workload. One promising avenue for further enhancing flight safety is to look into the pilots' eyes: the pupil is a good indicator of cognitive/attentional state, while eye movements reveal monitoring strategies. This thesis evaluates the contribution of eye tracking to flight safety through the following contributions: 1-2) Two pupil experiments demonstrated interaction effects between luminance and cognitive load on the pupillary reaction; which pupillary component is affected depends on the nature of the load, sustained or transient. The same amount of cognitive load under a dimmer luminance condition elicits a larger tonic pupil diameter in a sustained-load paradigm and a larger phasic pupil response in a transient-load paradigm. 3) A novel mathematical framework and method provide comprehensive illustrations of scanpaths for qualitative analysis; this framework also paves the way for new methods of quantitative scanpath comparison. 4) An original technique for analysing fixations and constructing an "explore-exploit" ratio is presented and verified on data from two flight-simulator experiments. 5) Finally, a framework for integrating eye tracking into cockpits is proposed, comprising four stages presented in both the chronological order of integration and increasing technical complexity of realisation.
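
    To make the tonic/phasic distinction concrete, here is a deliberately simplified sketch that splits a pupil-diameter trace into a slow tonic baseline (moving average) and a phasic residual; the window length, sampling rate, and synthetic signal are illustrative assumptions, not the analysis used in the thesis.

```python
# Simplified tonic/phasic split of a pupil-diameter trace (illustrative only).
import numpy as np

def tonic_phasic(pupil, fs=60.0, baseline_window_s=10.0):
    """Split a pupil trace into a slow tonic baseline and a phasic residual."""
    pupil = np.asarray(pupil, dtype=float)
    win = max(1, int(baseline_window_s * fs))
    kernel = np.ones(win) / win
    tonic = np.convolve(pupil, kernel, mode="same")   # slow moving-average baseline
    phasic = pupil - tonic                            # fast, event-related fluctuations
    return tonic, phasic

# Synthetic example: slow drift (tonic) plus brief load-evoked dilations (phasic).
fs = 60.0
t = np.arange(0, 120, 1 / fs)
trace = 4.0 + 0.3 * np.sin(2 * np.pi * t / 90)        # mm, slow tonic drift
trace += 0.15 * np.sin(2 * np.pi * t / 4).clip(0)     # small transient dilations
tonic, phasic = tonic_phasic(trace, fs=fs)
print(f"mean tonic diameter: {tonic.mean():.2f} mm, phasic peak: {phasic.max():.2f} mm")
```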

    Pupil dilations during target-pointing respect Fitts' law


    Proceedings of the Seventeenth Annual Conference on Manual Control

    Manual control is considered, with concentration on perceptive/cognitive man-machine interaction and interfaces.

    Twelfth Annual Conference on Manual Control

    Main topics discussed cover multi-task decision making, attention allocation and workload measurement, displays and controls, nonvisual displays, tracking and other psychomotor tasks, automobile driving, handling qualities and pilot ratings, remote manipulation, system identification, control models, and motion and visual cues. Sixty-five papers are included, with presentations on the results of analytical studies to develop and evaluate human operator models for a range of control tasks, vehicle dynamics and display situations; results of tests of physiological control systems and applications to medical problems; and results of simulator and flight tests to determine display, control and dynamics effects on operator performance and workload for aircraft, automobile, and remote control systems.

    How to improve learning from video, using an eye tracker

    The initial trigger for this research on learning from video was the availability of log files from users of video material. The video modality is seen as attractive, as it is associated with the relaxed mood of watching TV. The experiments in this research aim to gain more insight into the viewing patterns of students when watching video. Students received an awareness instruction about possible alternative viewing behaviors to see whether this would enhance their learning effects. We found that:
    - the learning effects of students with a narrow viewing repertoire were smaller than those of students with a broad viewing repertoire or strategic viewers;
    - students with some basic knowledge of the topics covered in the videos benefited most from the use of possible alternative viewing behaviors, and students with low prior knowledge benefited the least;
    - the knowledge gain of students with low prior knowledge disappeared after a few weeks; knowledge construction seems worse when doing two things at the same time;
    - media players could offer more options to help students search for the content they want to view again;
    - there was no correlation between pervasive personality traits and the viewing behavior of students.
    The right use of video in higher education will lead to students and teachers who are more aware of their learning and teaching behavior, to better videos, to enhanced media players, and, finally, to higher learning effects that let users improve their learning from video.

    Coordinated Eye and Head Movements for Gaze Interaction in 3D Environments

    Gaze is attractive for interaction, as we naturally look at objects we are interested in. As a result, gaze has received significant attention within human-computer interaction as an input modality. However, gaze input has been limited either to eye movements alone, in situations where head movements are not expected to be used, or to head movements as an approximation of gaze when an eye tracker is unavailable. From these observations arise an opportunity and a challenge: we propose to treat gaze as multi-modal, in line with psychology and neuroscience research, to more accurately represent user movements. The natural coordination of eye and head movements could then enable the development of novel interaction techniques that further the possibilities of gaze as an input modality. However, knowledge of eye and head coordination in 3D environments and its use in interaction design is limited. This thesis explores eye and head coordination and their potential for interaction in 3D environments by developing interaction techniques that aim to tackle established gaze-interaction issues. We study fundamental eye, head and body movements in virtual reality during gaze shifts. From the study results, we design interaction techniques and applications that avoid the Midas touch issue, allow expressive gaze-based interaction, and handle eye tracking accuracy issues. We ground the evaluation of our interaction techniques in empirical studies. From the techniques and study results, we define three design principles for coordinated eye and head interaction: distinguishing between eye-only and head-supported gaze shifts, using eye-head alignment as input, and separating head movements made as gestures from head movements that naturally occur to support gaze. We showcase new directions for gaze-based interaction and present a new way to think about gaze by taking a more comprehensive approach to gaze interaction and showing that there is more to gaze than just the eyes.
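
    As an illustration of the first design principle (separating eye-only from head-supported gaze shifts), the sketch below classifies a gaze shift by the amplitude of the accompanying head rotation; the threshold and data layout are assumptions chosen for illustration, not values from the thesis.

```python
# Illustrative classifier: eye-only vs. head-supported gaze shifts, based on
# the head rotation range observed during the shift. Threshold is assumed.
import numpy as np

HEAD_CONTRIBUTION_THRESHOLD_DEG = 2.0   # assumed threshold, not from the thesis

def classify_gaze_shift(head_yaw_deg, head_pitch_deg):
    """head_yaw_deg / head_pitch_deg: head orientation samples over one gaze shift."""
    yaw_range = np.ptp(np.asarray(head_yaw_deg, dtype=float))
    pitch_range = np.ptp(np.asarray(head_pitch_deg, dtype=float))
    head_amplitude = np.hypot(yaw_range, pitch_range)
    return ("head-supported"
            if head_amplitude > HEAD_CONTRIBUTION_THRESHOLD_DEG
            else "eye-only")

# Example: a small saccade with a nearly static head vs. a large shift with head rotation.
print(classify_gaze_shift([0.1, 0.2, 0.3], [0.0, 0.1, 0.0]))   # eye-only
print(classify_gaze_shift([0.0, 4.0, 9.0], [0.0, 1.0, 2.5]))   # head-supported
```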