16 research outputs found

    Virtual Keyboard Interaction Using Eye Gaze and Eye Blink

    Get PDF
    A Human-Computer Interaction (HCI) system designed for people with severe disabilities to simulate control of a conventional computer mouse is presented. The camera-based system monitors a user's eyes and allows the user to simulate mouse clicks with deliberate blinks and winks. For users who can control head movements and can wink with one eye while keeping the other eye visibly open, the system allows complete use of a conventional mouse, including moving the pointer, left- and right-clicking, double-clicking, and click-and-dragging. For users who cannot wink but can blink voluntarily, the system allows left clicks, the most common and useful mouse action. The system requires no training data to distinguish open eyes from closed eyes; eye classification is performed online during real-time interaction. The system successfully allows users to emulate a conventional computer mouse: users can open a document and type letters by blinking their eyes, and can also open files and folders on the desktop. DOI: 10.17762/ijritcc2321-8169.150710
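
    The abstract does not disclose the authors' classifier, so the following is only a minimal sketch of the blink-to-click idea using OpenCV's stock Haar cascades and pyautogui; the cascade-based eye detection and the frame-count blink threshold are illustrative assumptions, not the paper's method.

```python
import cv2
import pyautogui

# Stock OpenCV detectors (assumption: the paper's own classifier needs no training data).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
closed_frames = 0   # consecutive frames with no detected eyes
BLINK_FRAMES = 3    # a deliberate blink spans several frames (hypothetical threshold)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h // 2, x:x + w]   # eyes sit in the upper half of the face
        if len(eye_cascade.detectMultiScale(roi)) == 0:
            closed_frames += 1              # eyes closed this frame
        else:
            if closed_frames >= BLINK_FRAMES:
                pyautogui.click()           # long deliberate blink -> simulate left click
            closed_frames = 0
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```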

    Gaze Estimation From Multimodal Kinect Data

    Get PDF
    This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in 3D space. In this context the paper makes the following contributions: (i) leveraging the Kinect device, we propose a multimodal method that relies on depth sensing to obtain robust and accurate head pose tracking even under large head poses, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the eye image that exploits the 3D mesh tracking, allowing a head-pose-free estimation of the eye-in-head gaze direction; (iii) a simple way of collecting ground-truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.
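
    The paper's core decomposition (head pose from depth, eye-in-head gaze from the eye image) combines by rotating the eye-in-head direction into world coordinates. A minimal numpy sketch of that composition, with made-up example inputs:

```python
import numpy as np

def world_gaze(head_rotation: np.ndarray, eye_in_head: np.ndarray) -> np.ndarray:
    """Rotate an eye-in-head gaze direction into world coordinates.

    head_rotation : 3x3 rotation matrix from the depth-based head tracker
    eye_in_head   : unit gaze vector expressed in the head reference frame
    """
    g = head_rotation @ eye_in_head
    return g / np.linalg.norm(g)

# Example: head turned 30 degrees about the vertical y-axis,
# eyes looking straight ahead relative to the head.
theta = np.deg2rad(30.0)
R_head = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                   [ 0.0,           1.0, 0.0          ],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
print(world_gaze(R_head, np.array([0.0, 0.0, 1.0])))
```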

    Investigating Visual Perception Impairments through Serious Games and Eye Tracking to Anticipate Handwriting Difficulties

    Get PDF
    Dysgraphia is a learning disability that causes handwriting production below expectations. Its diagnosis is delayed until the completion of handwriting development. To allow a preventive training program, abilities not directly related to handwriting should be evaluated, and one of them is visual perception. To investigate the role of visual perception in handwriting skills, we gamified standard clinical visual perception tests to be played at three difficulty levels while wearing an eye tracker. Then, we identified children at risk of dysgraphia by means of a handwriting speed test. Five machine learning models were constructed to predict whether a child was at risk, using the CatBoost algorithm with nested cross-validation, with combinations of game performance, eye-tracking, and drawing data as predictors. A total of 53 children participated in the study. The machine learning models obtained good results, particularly with game performances as predictors (F1 score: 0.77 train, 0.71 test). The SHAP explainer was used to identify the most impactful features. The game reached an excellent usability score (89.4 +/- 9.6). These results are promising and suggest a new tool for early dysgraphia screening based on visual perception skills.
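
    A minimal sketch of the nested cross-validation setup the abstract describes, using CatBoost and scikit-learn; the toy features, labels, hyperparameter grid, and fold counts are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X = np.random.rand(53, 10)        # stand-in for game, eye-tracking, drawing features
y = np.random.randint(0, 2, 53)   # 1 = at risk of dysgraphia (toy labels)

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop tunes hyperparameters; outer loop estimates generalization (nested CV).
search = GridSearchCV(
    CatBoostClassifier(verbose=0),
    param_grid={"depth": [4, 6], "iterations": [100, 200]},
    scoring="f1",
    cv=inner,
)
scores = cross_val_score(search, X, y, scoring="f1", cv=outer)
print(f"nested-CV F1: {scores.mean():.2f} +/- {scores.std():.2f}")
```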

    Using a head-mounted camera to infer attention direction

    Get PDF
    A head-mounted camera was used to measure head direction. The camera was mounted on the forehead of 20 6-month-old and 20 12-month-old infants while they watched an object held at 11 horizontal (-80 to +80 degrees) and 9 vertical (-48 to +50 degrees) positions. The results showed that the head always moved less than required to be on target. Below 30 degrees in the horizontal dimension, the head undershoot of object direction was less than 5 degrees. At 80 degrees, however, the undershoot was substantial, between 10 and 15 degrees. In the vertical dimension, the undershoot was larger than in the horizontal dimension. At 30 degrees, the undershoot was around 25% in the downward direction and around 40% in the upward direction. The size of the undershoot was quite consistent between conditions. It was concluded that the head-mounted camera is a useful indicator of horizontal looking direction in a free-looking situation where the head is turned only moderately from the straight-ahead position.
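
    The abstract reports undershoot both in degrees and as a fraction of the target angle; the two are related by simple arithmetic, worked through in this small sketch (a 25% undershoot of a 30-degree target means the head stopped about 7.5 degrees short):

```python
def undershoot(target_deg: float, head_deg: float) -> tuple[float, float]:
    """Return head undershoot in degrees and as a fraction of the target angle."""
    miss = target_deg - head_deg
    return miss, miss / target_deg

# e.g. a 30-degree downward target with the head stopping at 22.5 degrees:
deg, frac = undershoot(30.0, 22.5)
print(f"undershoot: {deg:.1f} deg ({frac:.0%})")   # -> undershoot: 7.5 deg (25%)
```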

    Learning Coupled Dynamical Systems from human demonstration for robotic eye-arm-hand coordination

    Full text link

    Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction

    Get PDF
    Background
    The ability to follow one another's gaze plays an important role in our social cognition, especially when we perform tasks together synchronously. We investigate how gaze cues can improve performance in a simple coordination task (the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar either provides or withholds explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior makes avatars more realistic and human-like from the user's point of view.

    Methodology/Principal Findings
    43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and subjective assessment of the avatar's realism was obtained by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's actions. An analysis of the frequency patterns of the two players' hand movements reveals that the gaze cues improve the overall temporal coordination between the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like/realistic, but also easier to interact with.

    Conclusion/Significance
    This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enables a humanoid robotic avatar to appear more human-like, and thus increases the user's sense of affiliation.
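
    The abstract does not detail its synchrony analysis, so the following is only a hedged sketch of one common way to quantify the leader-follower lag it measures: locating the cross-correlation peak between the two hand trajectories.

```python
import numpy as np

def follower_lag(leader: np.ndarray, follower: np.ndarray, dt: float) -> float:
    """Estimate how far (in seconds) the follower trails the leader."""
    leader = leader - leader.mean()
    follower = follower - follower.mean()
    xcorr = np.correlate(follower, leader, mode="full")
    lag_samples = np.argmax(xcorr) - (len(leader) - 1)  # peak offset from zero lag
    return lag_samples * dt

# Toy signals: the follower reproduces the leader's motion 150 ms late, sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
leader = np.sin(2 * np.pi * 0.5 * t)
follower = np.sin(2 * np.pi * 0.5 * (t - 0.15))
print(f"estimated lag: {follower_lag(leader, follower, dt=0.01):.2f} s")  # ~0.15 s
```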

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    Full text link
    In this paper a review is presented of the research on eye gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive, and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome of this review is the realization of a need to develop standardized methodologies for performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed.
    Comment: 25 pages, 13 figures, accepted for publication in IEEE Access in July 201
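
    A recurring evaluation issue in this literature is that gaze accuracy reported in screen pixels is not comparable across platforms; expressing it in degrees of visual angle normalizes for display geometry. A minimal conversion sketch, where the pixel density and viewing distance are my own example values:

```python
import math

def pixel_error_to_degrees(err_px: float, px_per_mm: float, viewing_mm: float) -> float:
    """Convert an on-screen gaze error from pixels to degrees of visual angle."""
    err_mm = err_px / px_per_mm                    # error as physical distance on screen
    return math.degrees(math.atan2(err_mm, viewing_mm))

# e.g. a 40 px error on a ~3.5 px/mm desktop display viewed from 600 mm:
print(f"{pixel_error_to_degrees(40, 3.5, 600):.2f} deg")   # about 1.09 deg
```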

    "Gaze-Based Biometrics: some Case Studies"

    Get PDF