
    Virtual Keyboard Interaction Using Eye Gaze and Eye Blink

    A Human-Computer Interaction (HCI) system designed for people with severe disabilities to replicate control of a conventional computer mouse is presented. The camera-based system monitors a user's eyes and allows the user to simulate mouse clicks using deliberate blinks and winks. For users who can control head movements and can wink with one eye while keeping the other eye clearly open, the system permits complete use of a regular mouse, including moving the pointer, left- and right-clicking, double-clicking, and click-and-dragging. For users who cannot wink but can blink voluntarily, the system permits left clicks, the most common and useful mouse action. The system requires no training data to distinguish open eyes from closed eyes; eye classification is accomplished online during real-time interaction. The system effectively lets users replicate a conventional computer mouse: it allows them to open a document and type letters by blinking, and to open files and folders on the desktop. DOI: 10.17762/ijritcc2321-8169.150710
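
    The abstract gives no implementation details, but the general blink-to-click idea can be sketched as follows. This is a minimal illustration, not the authors' system: it assumes OpenCV's bundled Haar cascades for face and eye detection and PyAutoGUI for synthesizing mouse events, and it treats a sustained disappearance of the detected eyes (while the face remains visible) as a deliberate blink.

```python
# Hypothetical sketch: N consecutive frames with a visible face but no
# visible eyes count as a deliberate blink and trigger a left click.
import cv2
import pyautogui

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

BLINK_FRAMES = 5      # frames without visible eyes that count as deliberate
closed_count = 0

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    eyes = []
    for (x, y, w, h) in faces[:1]:           # consider the first face only
        roi = gray[y:y + h // 2, x:x + w]    # eyes lie in the upper half
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    if len(faces) and len(eyes) == 0:
        closed_count += 1                    # eyes hidden: candidate blink
    else:
        if closed_count >= BLINK_FRAMES:
            pyautogui.click()                # deliberate blink -> left click
        closed_count = 0
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:                 # Esc quits
        break
cap.release()
```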

    A Flexible, Open, Multimodal System of Computer Control Based on Infrared Light

    In this paper, a system architecture that can be adapted to an individual's motor capacity and preferences to control a computer is presented. The system uses two different transducers based on the emission and reflection of infrared light. These detect voluntary blinks, winks, saccadic or head movements, and sequences of them. Transducer selection and operational mode can be configured. The signal provided by the transducer is adapted, processed, and sent to a computer by external hardware. The computer runs a row-column scanned, switch-controlled Virtual Keyboard (VK), which sends commands to the operating system to control the computer, making it possible to run any application, such as a web browser. The main system characteristics are flexibility and relatively low-cost hardware. Junta de Andalucía p08-TIC-363
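
    Row-column scanning reduces typing to a single binary switch: rows are highlighted in turn until the switch fires, then keys within the chosen row are scanned the same way. The sketch below illustrates only that selection loop; it fakes the paper's infrared switch with an Enter press on stdin (Unix select-based polling), so the layout, dwell time, and input source are all illustrative assumptions.

```python
# Illustrative row-column scanning loop for a switch-controlled virtual
# keyboard; the IR transducer's binary output is simulated via stdin.
import select
import sys

LAYOUT = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ_.,!"]
DWELL = 1.0   # seconds each row/key stays highlighted

def switch_pressed(timeout):
    """Return True if the 'switch' (Enter on stdin) fires within timeout s."""
    ready, _, _ = select.select([sys.stdin], [], [], timeout)
    if ready:
        sys.stdin.readline()
        return True
    return False

def scan_once():
    while True:                              # phase 1: scan rows
        for row in LAYOUT:
            print("row>", row)
            if switch_pressed(DWELL):
                while True:                  # phase 2: scan keys in the row
                    for ch in row:
                        print("key>", ch)
                        if switch_pressed(DWELL):
                            return ch

print("selected:", scan_once())
```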

    Improving the performance of GIS/spatial analysts through novel applications of the Emotiv EPOC EEG headset

    Geospatial information systems are used to analyze spatial data to provide decision makers with relevant, up-to-date information. The processing time required to produce this information is a critical component of response time. Despite advances in algorithms and processing power, many “human-in-the-loop” factors remain. Given the limited number of geospatial professionals, it is important that analysts use their time effectively. Automating common tasks and speeding up human-computer interaction without disrupting the analyst's workflow or attention is highly desirable. The following research describes a novel approach to increasing productivity with a wireless, wearable electroencephalograph (EEG) headset within the geospatial workflow.
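
    The abstract leaves the integration mechanism open; one plausible shape for it is mapping discrete headset events (for example, trained mental commands or detected facial gestures) onto keyboard shortcuts in the GIS application, so frequent actions fire without the analyst reaching for mouse or keyboard. The event names and the event source below are hypothetical placeholders, not the Emotiv SDK.

```python
# Hypothetical event-to-shortcut dispatcher; the `event` strings stand in
# for whatever classified events the headset SDK actually delivers.
import pyautogui

SHORTCUTS = {
    "blink_double": ("ctrl", "z"),   # undo the last edit
    "smile":        ("ctrl", "s"),   # save the project
    "clench":       ("escape",),     # cancel the current tool
}

def dispatch(event):
    keys = SHORTCUTS.get(event)
    if keys:
        pyautogui.hotkey(*keys)      # inject the shortcut into the GIS app

# Example: dispatch("smile") issues Ctrl+S without interrupting mouse work.
```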

    EMG-based eye gestures recognition for hands free interfacing

    This study investigates the use of an electromyography (EMG) based device to recognize five eye gestures and classify them for hands-free interaction with different applications. The proposed eye gestures are Long Blinks, Rapid Blinks, Wink Right, Wink Left, and Squints (frowns). The MUSE headband, originally a Brain-Computer Interface (BCI) device that measures electroencephalography (EEG) signals, is used in our study to record EMG signals from behind the earlobes via two smart rubber sensors and at the forehead via two further electrodes. The signals are treated as EMG because they capture physical muscular activity, which other studies regard as artifacts in EEG brain signals. The experiment was conducted on 15 randomly recruited participants (12 males and 3 females), with no specific groups targeted, and each session was videotaped for re-evaluation. The experiment starts with a calibration phase that records each gesture three times per participant, guided by a voice-narration program developed to unify test conditions and time intervals across all subjects. In this study, a dynamic sliding window with segmented packets is designed to process and analyze the data faster, and to provide more flexibility in classifying gestures regardless of how their duration varies from one user to another. Additionally, a thresholding algorithm is used to extract features from all the gestures. Rapid Blinks and Squints achieved high F1 scores of 80.77% and 85.71% with trained thresholds, and 87.18% and 82.12% with default (manually adjusted) thresholds. The accuracies of Long Blinks, Rapid Blinks, and Wink Left were relatively higher with manually adjusted thresholds, while Squints and Wink Right performed better with trained thresholds. Further improvements were proposed, and some were tested, particularly after reviewing the participants' actions in the video recordings, to enhance the classifier. Most of the common irregularities encountered are discussed in this study, to help similar future studies address them before conducting experiments. Several applications require minimal physical or hands-on interaction; this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between RGB, thermal, and depth cameras integrated into an Augmented Reality device designed to increase firefighters' visual capabilities in the field.
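
    The thresholding-over-a-sliding-window approach can be pictured with a toy spotting loop: buffer recent EMG samples, standardize the window, and count supra-threshold bursts to separate gesture classes. The window size, threshold, and burst heuristics below are illustrative stand-ins for the study's trained values, not its actual classifier.

```python
# Toy sliding-window threshold detector for EMG bursts (illustrative values).
from collections import deque

import numpy as np

WIN = 128            # samples per analysis window
Z_THR = 3.0          # z-score magnitude that counts as a muscular burst

buf = deque(maxlen=WIN)

def feed(sample):
    """Push one EMG sample; return a gesture label once one is spotted."""
    buf.append(sample)
    if len(buf) < WIN:
        return None
    x = np.asarray(buf)
    z = (x - x.mean()) / (x.std() + 1e-9)       # standardize the window
    bursts = int(np.sum(np.abs(z) > Z_THR))     # supra-threshold samples
    if bursts == 0:
        return None
    # Crude stand-in for the real classifier: many short bursts vs. one long.
    return "rapid_blinks" if bursts > 10 else "long_blink"
```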

    Gaze+Hold: Eyes-only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye

    The eyes are coupled in their gaze function and are therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold as an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing, which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability, and users' spontaneous choice of eye for modulating input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability, or workload. This is significant for the utility of Gaze+Hold, as it affords flexibility in mapping either eye in different configurations.
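
    The core interaction logic is a small state machine: closing exactly one eye enters a "held" state anchored at the current gaze point, the open eye's gaze then drives a continuous manipulation, and reopening the eye commits it. The sketch below captures only that logic under stated assumptions; the per-frame gaze and eye-openness samples would come from an eye tracker whose API is not specified here.

```python
# Sketch of a Gaze+Hold-style state machine (tracker API assumed, not given).

class GazeHold:
    def __init__(self):
        self.holding = False
        self.anchor = None                 # gaze point where the hold began

    def update(self, gaze_xy, left_open, right_open):
        one_eye_closed = left_open != right_open      # exactly one eye shut
        if one_eye_closed and not self.holding:
            self.holding = True
            self.anchor = gaze_xy                     # start of the drag
        elif not one_eye_closed and self.holding:
            self.holding = False
            return ("drop", self.anchor, gaze_xy)     # commit the action
        if self.holding:
            return ("drag", self.anchor, gaze_xy)     # continuous input
        return ("point", None, gaze_xy)               # plain gaze pointing
```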

    Hands-Free Gesture and Voice Control for System Interfacing

    The proposed system is a simple prototype for real-time tracking of a human head, combined with speech recognition, for a hands-free mouse. It uses a simple yet effective face-tracking algorithm: the Haar classifier detects the face in captured frames, and the Lucas-Kanade algorithm tracks feature points on the face. The general requirements of a real-time tracking algorithm are that it be computationally economical, perform in diverse environments, and run with minimal prior knowledge about the presence of faces. The system also uses the Microsoft Speech SDK 5.1 for speech recognition, which comprises two fundamental components: a voice recognizer, which captures voice input, and a speech synthesizer, which is responsible for lexicon management.
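
    Both building blocks named in the abstract are available in OpenCV, so the detect-then-track pipeline can be sketched directly: a Haar cascade locates the face once, then pyramidal Lucas-Kanade optical flow tracks feature points inside it from frame to frame. This is a minimal sketch, not the authors' code; it assumes a face is visible in the first frame and omits the speech half of the system.

```python
# Haar detection seeding Lucas-Kanade feature tracking (minimal sketch).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect the face once (assumes one is visible) and seed corner features.
(x, y, w, h) = face_cascade.detectMultiScale(prev, 1.3, 5)[0]
mask = np.zeros_like(prev)
mask[y:y + h, x:x + w] = 255
pts = cv2.goodFeaturesToTrack(prev, 50, 0.01, 7, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    good = new[status.flatten() == 1]
    if len(good) == 0:
        break                                # lost track; real code re-detects
    cx, cy = good.mean(axis=0).ravel()       # head position = feature centroid
    print(f"head at ({cx:.0f}, {cy:.0f})")   # would drive the mouse pointer
    prev, pts = gray, good.reshape(-1, 1, 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                 # Esc quits
        break
cap.release()
```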

    Controlling a Mouse Pointer with a Single-Channel EEG Sensor

    (1) Goals: The purpose of this study was to analyze the feasibility of using the information obtained from a one-channel electroencephalography (EEG) signal to control a mouse pointer. We used a low-cost headset, with one dry sensor placed at the FP1 position, to steer a mouse pointer and make selections through a combination of the user's attention level with the detection of voluntary blinks. There are two types of cursor movement: spinning and linear displacement. A sequence of blinks switches between these movement types, while the attention level modulates the cursor's speed. The influence of the attention level on performance was studied. Additionally, Fitts' model and the evolution of the emotional states of participants, among other trajectory indicators, were analyzed. (2) Methods: Twenty participants, distributed into two groups (Attention and No-Attention), performed three runs, on different days, in which 40 targets had to be reached and selected. Target positions and distances from the cursor's initial position were chosen to provide eight different indices of difficulty (IDs). A self-assessment manikin (SAM) test and a final survey provided information about the system's usability and the emotions of participants during the experiment. (3) Results: The performance was similar to some brain–computer interface (BCI) solutions found in the literature, with an average information transfer rate (ITR) of 7 bits/min. Concerning cursor navigation, some trajectory indicators showed the proposed approach to be as good as common pointing devices, such as joysticks, trackballs, and so on. Only one of the 20 participants reported difficulty in managing the cursor and, according to the tests, most of them assessed the experience positively. Movement times and hit rates were significantly better for participants belonging to the Attention group. (4) Conclusions: The proposed approach is a feasible low-cost solution to manage a mouse pointer.
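
    The quoted 7 bits/min is conventionally computed with the standard Wolpaw information transfer rate: bits conveyed per selection, given the number of possible targets and the hit probability, scaled by the selection rate. The sketch below just evaluates that formula; the example numbers are illustrative, not taken from the study.

```python
# Wolpaw ITR: bits per selection x selections per minute.
from math import log2

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """ITR in bits/min for n_targets hit with probability `accuracy`."""
    n, p = n_targets, accuracy
    assert n >= 2 and 0 < p <= 1
    bits = log2(n)
    if p < 1:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative: 40 targets at 90% accuracy, 1.5 selections/min -> ~6.5 bits/min.
print(f"{wolpaw_itr(40, 0.90, 1.5):.1f} bits/min")
```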

    Multimodal Human Eye Blink Recognition Using Z-score Based Thresholding and Weighted Features

    A novel real-time multimodal eye-blink detection method is proposed, using an amalgam of five unique weighted features extracted from the circle boundary formed by the eye landmarks. The five features, namely Vertical Head Positioning, Orientation Factor, Proportional Ratio, Area of Intersection, and Upper Eyelid Radius, provide the essential information (via a z-score threshold) for accurately predicting the eye status and thus the blinking status. An accurate and precise algorithm employing the five weighted features is proposed to predict eye status (open/closed). One state-of-the-art dataset, ZJU (eye-blink), is used to measure the performance of the method. Precision, recall, F1-score, and the ROC curve measure the proposed method's performance qualitatively and quantitatively. Increased accuracy (around 97.2%) and precision (97.4%) are obtained compared to existing unimodal approaches. The efficiency of the proposed method is shown to outperform state-of-the-art methods.
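
    The z-score thresholding step can be pictured as follows: fuse the five per-frame features with fixed weights into one score, standardize the score sequence, and mark frames whose standardized score deviates beyond a threshold as eye-closed; a blink is then a short run of closed frames. The weights, threshold, and direction of deviation below are illustrative placeholders, not the paper's tuned values.

```python
# Illustrative weighted-feature fusion with z-score thresholding.
import numpy as np

WEIGHTS = np.array([0.3, 0.2, 0.2, 0.2, 0.1])   # five weighted features
Z_THR = 2.0                                      # illustrative threshold

def eye_closed_frames(features):
    """features: (n_frames, 5) array -> boolean array, True = eye closed."""
    score = features @ WEIGHTS                       # per-frame fused score
    z = (score - score.mean()) / (score.std() + 1e-9)
    return z < -Z_THR        # assume closure pulls the fused score downward

# A blink is a short run of consecutive True values in the returned mask.
```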