
    Imperceptible electrooculography graphene sensor system for human-robot interface

    Electrooculography (EOG) is a method to record the electrical potential between the cornea and the retina of the human eye. Despite many applications of EOG in research and medical diagnosis over many decades, state-of-the-art EOG sensors are still bulky, stiff, and uncomfortable to wear. Since EOG has to be measured around the eye, an area prominent in personal appearance and covered by delicate skin, mechanically and optically imperceptible EOG sensors are highly desirable. Here, we report an imperceptible EOG sensor system based on noninvasive graphene electronic tattoos (GET), which are ultrathin, ultrasoft, transparent, and breathable. The GET EOG sensors can be easily laminated around the eyes without any adhesives, and they impose no constraint on blinking or facial expressions. High-precision EOG with an angular resolution of 4 degrees of eye movement can be recorded by the GET EOG, and eye movements can be accurately interpreted. Imperceptible GET EOG sensors have been successfully applied to a human-robot interface (HRI). To demonstrate the functionality of GET EOG sensors for HRI, we connected them to a wireless transmitter attached to the collar so that eyeball movements could wirelessly control a quadcopter in real time.
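    The abstract does not detail the command-mapping scheme, so the following is a minimal, hypothetical sketch of how thresholded horizontal and vertical EOG deflections could be translated into coarse quadcopter commands. The channel layout, threshold values, and command names are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical mapping from baseline-corrected EOG deflections to coarse
# quadcopter commands; thresholds and command names are illustrative only.
H_THRESH_UV = 150.0   # assumed horizontal deflection threshold (microvolts)
V_THRESH_UV = 150.0   # assumed vertical deflection threshold (microvolts)

def eog_to_command(h_uv: float, v_uv: float) -> str:
    """Map horizontal/vertical EOG amplitudes to a flight command."""
    if h_uv > H_THRESH_UV:
        return "yaw_right"
    if h_uv < -H_THRESH_UV:
        return "yaw_left"
    if v_uv > V_THRESH_UV:
        return "ascend"
    if v_uv < -V_THRESH_UV:
        return "descend"
    return "hover"

# A strong leftward gaze produces a large negative horizontal deflection.
print(eog_to_command(h_uv=-220.0, v_uv=30.0))  # -> "yaw_left"
```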

    Review of real brain-controlled wheelchairs

    This paper presents a review of the state of the art regarding wheelchairs driven by a brain-computer interface (BCI). Using a brain-controlled wheelchair (BCW), disabled users can drive a wheelchair through their brain activity, granting them the autonomy to move through an experimental environment. A classification is established based on the characteristics of the BCW, such as the type of electroencephalographic (EEG) signal used, the navigation system employed by the wheelchair, the task given to the participants, or the metrics used to evaluate performance. Furthermore, these factors are compared according to the type of signal used, in order to clarify the differences among them. Finally, the trend of current research in this field is discussed, as well as the challenges that should be solved in the future.

    Design of a Wearable Eye-Movement Detection System Based on Electrooculography Signals and Its Experimental Validation.

    In assistive research, human-computer interface (HCI) technology is used to help people with disabilities by conveying their intentions and thoughts to the outside world. Many HCI systems based on eye movement have been proposed to assist people with disabilities. However, due to the complexity of the necessary algorithms and the difficulty of hardware implementation, there are few general-purpose designs that consider practicality and stability in real life. Therefore, to address these limitations, an HCI system based on electrooculography (EOG) is proposed in this study. The proposed classification algorithm provides eye-state detection, covering the fixation, saccade, and blinking states. Moreover, the algorithm can distinguish among ten kinds of saccade movements (i.e., up, down, left, right, farther left, farther right, up-left, down-left, up-right, and down-right). In addition, we developed an HCI system based on the eye-movement classification algorithm. This system provides an eye-dialing interface that can be used to improve the lives of people with disabilities. The results illustrate the good performance of the proposed classification algorithm, and the EOG-based system, which can detect ten different eye-movement features, can be utilized in real-life applications.
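    As an illustration of how a rule-based classifier over the ten saccade directions could look, the sketch below maps peak horizontal and vertical EOG deflections to direction labels. It is not the paper's algorithm; the two-level horizontal thresholds and all numeric values are assumptions.

```python
# Illustrative rule-based saccade classifier over the ten directions named
# in the abstract. Thresholds (in microvolts) are placeholder assumptions.
H_SMALL, H_LARGE = 100.0, 250.0   # hypothetical horizontal thresholds
V_SMALL = 100.0                   # hypothetical vertical threshold

def classify_saccade(h_uv: float, v_uv: float) -> str:
    """Return one of the ten saccade labels from peak EOG deflections."""
    horiz = ""
    if h_uv <= -H_LARGE:
        horiz = "farther left"
    elif h_uv <= -H_SMALL:
        horiz = "left"
    elif h_uv >= H_LARGE:
        horiz = "farther right"
    elif h_uv >= H_SMALL:
        horiz = "right"

    vert = ""
    if v_uv >= V_SMALL:
        vert = "up"
    elif v_uv <= -V_SMALL:
        vert = "down"

    if horiz and vert:                        # diagonal movements
        return f"{vert}-{horiz.split()[-1]}"  # e.g. "up-left", "down-right"
    return horiz or vert or "fixation"

print(classify_saccade(h_uv=-300.0, v_uv=20.0))   # -> "farther left"
print(classify_saccade(h_uv=150.0, v_uv=180.0))   # -> "up-right"
```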

    On the Use of Electrooculogram for Efficient Human Computer Interfaces

    The aim of this study is to show that electrooculogram (EOG) signals can be used efficiently for human-computer interfaces. Establishing an efficient alternative communication channel that requires neither overt speech nor hand movements is important for increasing the quality of life of patients suffering from amyotrophic lateral sclerosis or other illnesses that prevent correct limb and facial muscular responses. We conducted several experiments to compare the P300-based BCI speller with the new EOG-based system. A five-letter word can be written in 25 seconds on average with the new system, versus 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost. Using EOG signals, it is possible to improve the communication abilities of those patients who can still move their eyes.

    Prediction of Digital Eye Strain Due to Online Learning Based on the Number of Blinks

    Eye strain is a major concern, especially during continuous and prolonged online learning. If it is allowed to continue, it can result in Computer Vision Syndrome, also known as Digital Eye Strain (DES), whose symptoms include headaches, blurred vision, dry eyes, and even neck and shoulder pain. This condition can be observed either directly, from excessive eye blinking, or indirectly, from the electrical activity of eye movements recorded by electrooculography (EOG). The blink signal observed in the EOG, as a representation of eye strain, is the focus of this study. Data were acquired with an EOG sensor while participants were engaged in online learning activities. Four modes of observation were taken in succession: the eye in a viewing state without blinking, intentional blinking, the eye closed, and finally natural viewing. Observation times were 10 s, 20 s, and 30 s, and each interval was performed three times for every mode. The acquired signal is processed by the proposed method and the result is labeled as the blinking signal. The number of blinks (CNT_PEAK) is determined by training on this signal while tuning its threshold and width parameters. If the number of blinks falls below or above 17, the system predicts the eye status in one of two categories: normal eye or eye strain (fatigue).
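    A hedged sketch of the blink-count rule described above: peaks in the processed blink signal are counted and compared against the 17-blink threshold. The peak-height and spacing parameters, and the choice that more than 17 blinks maps to eye strain, are illustrative assumptions, since the abstract leaves the direction of the rule ambiguous.

```python
# Sketch of blink counting and threshold-based eye-status prediction.
# Peak height, minimum peak spacing, and the >17 rule are assumed values.
import numpy as np
from scipy.signal import find_peaks

def predict_eye_status(blink_signal: np.ndarray, fs: float,
                       height: float = 0.5, min_gap_s: float = 0.2) -> tuple:
    """Return (CNT_PEAK, status) from a processed blink signal."""
    peaks, _ = find_peaks(blink_signal, height=height,
                          distance=int(min_gap_s * fs))
    cnt_peak = len(peaks)
    status = "eye strain / fatigue" if cnt_peak > 17 else "normal"
    return cnt_peak, status

# Synthetic example: 30 s at 100 Hz with 20 artificial blink pulses.
fs = 100.0
t = np.arange(0, 30, 1 / fs)
signal = np.zeros_like(t)
signal[np.arange(20) * 150 + 50] = 1.0        # 20 unit-height "blinks"
print(predict_eye_status(signal, fs))          # -> (20, 'eye strain / fatigue')
```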

    EMG-based eye gestures recognition for hands free interfacing

    This study investigates the use of an electromyography (EMG) based device to recognize five eye gestures and classify them for hands-free interaction with different applications. The eye gestures proposed in this work include Long Blinks, Rapid Blinks, Wink Right, Wink Left, and finally Squints or frowns. The MUSE headband, originally a brain-computer interface (BCI) device that measures electroencephalography (EEG) signals, is used in our study to record EMG signals from behind the earlobes via two smart rubber sensors and at the forehead via two other electrodes. The signals are considered EMG because they originate from physical muscular activity, which other studies treat as artifacts in the EEG brain signals. The experiment was conducted on 15 randomly selected participants (12 males and 3 females), as no specific groups were targeted, and the sessions were videotaped for re-evaluation. The experiment starts with a calibration phase that records each gesture three times per participant through a developed voice-narration program, unifying the test conditions and time intervals among all subjects. In this study, a dynamic sliding window with segmented packets is designed to process and analyze the data faster, as well as to provide more flexibility in classifying the gestures regardless of their duration from one user to another. Additionally, a thresholding algorithm is used to extract the features of all the gestures. The Rapid Blinks and the Squints achieved high F1 scores of 80.77% and 85.71% with the trained thresholds, and 87.18% and 82.12% with the default, manually adjusted thresholds. The accuracies of the Long Blinks, Rapid Blinks, and Wink Left were higher with the manually adjusted thresholds, while the Squints and Wink Right were better with the trained thresholds. Further improvements were proposed and some were tested, especially after monitoring the participants' actions in the video recordings, in order to enhance the classifier. Most of the common irregularities encountered are discussed in this study so as to pave the road for similar future studies to tackle them before conducting their experiments. Several applications need minimal physical or hand interaction; this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between RGB, thermal, and depth cameras integrated into an augmented reality device designed to increase firefighters' visual capabilities in the field.
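    The sliding-window and thresholding steps are only named in the abstract; the sketch below shows one plausible arrangement, with window length, step size, and amplitude threshold chosen arbitrarily for illustration rather than taken from the study.

```python
# Illustrative sliding window with segmented packets plus a simple
# thresholding rule for flagging "active" EMG segments. Window length,
# step size, and the amplitude threshold are hypothetical values.
import numpy as np

def segment_packets(emg: np.ndarray, fs: float,
                    win_s: float = 0.25, step_s: float = 0.10):
    """Yield (start_time, window) packets from a 1-D EMG stream."""
    win, step = int(win_s * fs), int(step_s * fs)
    for start in range(0, len(emg) - win + 1, step):
        yield start / fs, emg[start:start + win]

def threshold_features(emg: np.ndarray, fs: float, thresh: float = 40.0):
    """Return (time, peak_amplitude) for packets whose peak exceeds thresh."""
    events = []
    for t0, packet in segment_packets(emg, fs):
        peak = np.max(np.abs(packet))
        if peak > thresh:                 # thresholding step (assumed value, uV)
            events.append((t0, peak))
    return events

# Synthetic demo: 2 s of noise with one burst resembling a blink/wink artifact.
fs = 256.0
emg = np.random.default_rng(0).normal(0, 5, int(2 * fs))
emg[300:330] += 120.0                     # injected muscular burst
print(threshold_features(emg, fs)[:3])
```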

    Brain-computer interface for robot control with eye artifacts for assistive applications

    Human-robot interaction is a rapidly developing field, and robots have been taking more active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders may not be able to consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, brain-computer interface (BCI) systems present an alternative way to communicate or interact with the external world. In order to improve the lives of people with disabilities, this paper presents a novel BCI to control an assistive robot with the user's eye artifacts. In this study, eye artifacts that contaminate the electroencephalogram (EEG) signals are considered a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts from EEG signals through the characteristic shapes that occur during these events. Lateral movements are distinguished by their ordered peak-and-valley formation and by the opposite phase of the signals measured at the F7 and F8 channels. To the best of the authors' knowledge, this is the first method to use this behavior to detect lateral eye movements. For blink detection, a double-thresholding method is proposed to catch weak blinks as well as regular ones, in contrast to other algorithms in the literature that normally use only one threshold. Events detected in real time, together with their virtual time stamps, are fed into a second algorithm that distinguishes double and quadruple blinks from single blinks based on their occurrence frequency. After being tested offline and in real time, the algorithm is implemented on the device. The resulting BCI was used to control an assistive robot through a graphical user interface, and validation experiments with 5 participants show that the developed BCI is able to control the robot.
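    The following sketch illustrates the two detection ideas named above, opposite-phase F7/F8 deflections for lateral movements and a double threshold for weak versus regular blinks. The amplitude values and the sign convention are placeholder assumptions, not the authors' parameters.

```python
# Hedged sketch: (1) lateral eye movements show opposite-phase deflections
# at F7 and F8; (2) a double threshold catches weak blinks as well as
# regular ones. All numeric thresholds are placeholder assumptions.
import numpy as np

def detect_lateral(f7: np.ndarray, f8: np.ndarray, amp_uv: float = 70.0) -> str:
    """Classify a window as a left/right eye movement via opposite phase."""
    p7 = f7[np.argmax(np.abs(f7))]
    p8 = f8[np.argmax(np.abs(f8))]
    if abs(p7) > amp_uv and abs(p8) > amp_uv and np.sign(p7) != np.sign(p8):
        return "right" if p7 > 0 else "left"   # sign convention is an assumption
    return "none"

def detect_blinks(fp1: np.ndarray, strong_uv: float = 120.0,
                  weak_uv: float = 60.0) -> list:
    """Count blink excursions above the weak threshold; tag regular vs weak."""
    above = fp1 > weak_uv
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.insert(edges, 0, 0)
    if above[-1]:
        edges = np.append(edges, len(fp1) - 1)
    blinks = []
    for start, stop in zip(edges[0::2], edges[1::2]):
        kind = "regular" if fp1[start:stop + 1].max() > strong_uv else "weak"
        blinks.append((int(start), kind))
    return blinks

# Synthetic demo: one regular blink and one weak blink on a flat channel,
# then a lateral movement with mirrored F7/F8 deflections.
sig = np.zeros(1000)
sig[100:120] = 150.0
sig[400:420] = 80.0
print(detect_blinks(sig))                      # -> [(99, 'regular'), (399, 'weak')]
f7 = np.array([0.0, 30.0, 90.0, 110.0, 60.0, 10.0])
print(detect_lateral(f7, -f7))                 # -> "right"
```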

    Efficient Implementation and Design of A New Single-Channel Electrooculography-based Human-Machine Interface System


    Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing

    Eye tracking is a powerful means for assistive technologies for people with movement disorders, paralysis, and amputation. We present a highly intuitive eye-tracking-controlled robot arm operating in three-dimensional space, driven by the user's gaze target point, that enables tele-writing and drawing. Usability and intuitiveness were assessed in a “tele” writing experiment with 8 subjects who learned to operate the system within minutes of first-time use. These subjects were naive to the system and the task and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing text with the pen and to look where the pen should go, writing the letters as fast and as accurately as possible given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with their ability to control their visual attention so as to enable smooth writing. Over five consecutive trials, there was a significant decrease in the total time used and the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered the ability to control the system. Our work demonstrates that eye tracking is a powerful means to control robot arms in closed loop and in real time, outperforming other invasive and non-invasive approaches to Brain-Machine-Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes), and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators, i.e. Brain-Robot-Interfaces, and become useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.