
    Graphene textiles towards soft wearable interfaces for electroocular remote control of objects

    Study of eye movements (EMs) and measurement of the resulting biopotentials, referred to as electrooculography (EOG), may find increasing use in activity recognition, context awareness, mobile human-computer interaction (HCI), and personalized medicine, provided that the limitations of conventional “wet” electrodes are addressed. To overcome these limitations, this work reports, for the first time, the use and characterization of graphene-based electroconductive textile electrodes for EOG acquisition using a custom-designed embedded eye tracker. This self-contained wearable device consists of a headband with integrated textile electrodes and a small, pocket-worn, battery-powered hardware unit with real-time signal processing that can stream data to a remote device over Bluetooth. The feasibility of the developed gel-free, flexible, dry textile electrodes was experimentally validated through side-by-side comparison with pre-gelled, wet, silver/silver chloride (Ag/AgCl) electrodes, where simultaneously and asynchronously recorded signals displayed correlations of up to ~87% and ~91%, respectively, over durations reaching a hundred seconds, repeated on several participants. Additionally, an automatic EM detection algorithm is developed, and the performance of the graphene-embedded “all-textile” EM sensor and its application as a control element for HCI is experimentally demonstrated. The success rate, ranging from 85% up to 100% across eleven different EM patterns, demonstrates the applicability of the proposed algorithm in wearable EOG-based sensing and HCI applications with graphene textiles.
The system-level integration and holistic design approach presented herein, which spans from the fundamental materials level up to the architecture and algorithm stage, will be instrumental in advancing the state of the art in wearable electronic devices based on sensing and processing of electrooculograms.
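The textile-versus-Ag/AgCl comparison above boils down to computing Pearson correlations between simultaneously recorded channels. A minimal sketch of that computation on synthetic stand-in signals (the saccade waveform, amplitudes, and noise levels are illustrative, not taken from the paper):

```python
import numpy as np

def pearson_correlation(x, y):
    """Pearson product-moment correlation between two equal-length signals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic stand-in for a simultaneous recording: a slow saccade-like
# square wave (the shared EOG component) plus independent electrode noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 100, 5000)                     # ~100 s recording
eog = 100 * np.sign(np.sin(0.2 * np.pi * t))      # saccade-like steps, in microvolts
textile = eog + 20 * rng.standard_normal(t.size)  # hypothetical dry textile channel
agcl = eog + 10 * rng.standard_normal(t.size)     # hypothetical wet Ag/AgCl channel

r = pearson_correlation(textile, agcl)
```

With noise levels in this (invented) range the correlation lands above 0.9, i.e. in the same ballpark as the values reported above.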

    Brain-computer interface for robot control with eye artifacts for assistive applications

    Human-robot interaction is a rapidly developing field, and robots are taking increasingly active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders might not be able to consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, Brain-Computer Interface (BCI) systems present an alternative way to communicate or interact with the external world. To improve the lives of people with disabilities, this paper presents a novel BCI to control an assistive robot with the user's eye artifacts. In this study, eye artifacts that contaminate the electroencephalogram (EEG) signals are considered a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts from EEG signals through the characteristic shapes that occur during these events. Lateral movements are distinguished by their ordered peak-and-valley formation and the opposite phase of the signals measured at the F7 and F8 channels; to the best of the authors' knowledge, this is the first method to use this behavior to detect lateral eye movements. For blink detection, the authors propose a double-thresholding method that catches weak blinks as well as regular ones, differentiating it from other algorithms in the literature, which normally use only one threshold. Real-time detected events with their virtual time stamps are fed into a second algorithm that further distinguishes double and quadruple blinks from single blinks based on occurrence frequency. After offline and real-time testing, the algorithm was implemented on the device, and the resulting BCI was used to control an assistive robot through a graphical user interface. Validation experiments involving 5 participants show that the developed BCI is able to control the robot.
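A minimal sketch of the two detection ideas described above: double-thresholding for weak versus regular blinks, and anti-phase F7/F8 segments for lateral movements. The thresholds, window handling, and anti-correlation cutoff are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def detect_blinks(signal, strong_thr, weak_thr):
    """Double-thresholding blink detector (sketch).

    A contiguous excursion above weak_thr whose peak also exceeds
    strong_thr is a regular blink; one that stays below strong_thr
    is a weak blink. Returns (start_index, kind) tuples.
    """
    signal = np.asarray(signal, dtype=float)
    above = signal > weak_thr
    events, i = [], 0
    while i < len(signal):
        if above[i]:
            j = i
            while j < len(signal) and above[j]:
                j += 1
            peak = signal[i:j].max()
            events.append((i, "regular" if peak > strong_thr else "weak"))
            i = j
        else:
            i += 1
    return events

def is_lateral_movement(f7_segment, f8_segment):
    """Lateral eye movements show opposite phase at F7 and F8, so the two
    segments should be strongly anti-correlated (cutoff is illustrative)."""
    r = np.corrcoef(f7_segment, f8_segment)[0, 1]
    return r < -0.8
```

A single threshold placed low enough to catch weak blinks would also fire on noise; keeping two thresholds lets the detector label both event kinds from one pass.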

    Validating and improving the correction of ocular artifacts in electro-encephalography

    For modern applications of electro-encephalography, including brain-computer interfaces and single-trial Event Related Potential detection, it is becoming increasingly important that artifacts are accurately removed from a recorded electro-encephalogram (EEG) without affecting the part of the EEG that reflects cerebral activity. Ocular artifacts are caused by movement of the eyes and the eyelids. They occur frequently in the raw EEG and are often the most prominent artifacts in EEG recordings, so their accurate removal is an important procedure in nearly all electro-encephalographic research. As a result, a considerable number of ocular artifact correction methods have been introduced over the past decades; a selection containing some of the most frequently used methods is given in Section 1.5. When two different correction methods are applied to the same raw EEG, this usually results in two different corrected EEGs. A measure for the accuracy of correction should indicate how well each of these corrected EEGs recovers the part of the raw EEG that truly reflects cerebral activity. The fact that this accuracy cannot be determined directly from a raw EEG is intrinsic to the need for artifact removal: if it were possible, based on a raw EEG alone, to derive an exact reference for what the corrected EEG should be, there would be no need for artifact correction methods at all. Estimating the accuracy of correction methods is mostly done either by using models to simulate EEGs and artifacts, or by manipulating the experimental data in such a way that the effects of artifacts on the raw EEG can be isolated. In this thesis, modeling of EEG and artifact is used to validate correction methods on simulated data, and a new correction method is introduced which, unlike all existing methods, uses a camera to monitor eye(lid) movements as a basis for ocular artifact correction.
    The simulated data is used to estimate the accuracy of this new correction method and to compare it against the estimated accuracy of existing correction methods. The results of this comparison suggest that the new method significantly increases correction accuracy compared to the other methods. Next, an experiment is performed, based on which the accuracy of correction can be estimated on raw EEGs; results on this experimental data agree very well with the results on the simulated data. It is therefore concluded that using a camera during EEG recordings provides valuable extra information that can be used in the process of ocular artifact correction. In Chapter 2, a model is introduced that assists in estimating the accuracy of eye movement artifact correction for simulated EEG recordings. This model simulates EEG and eye movement artifacts simultaneously. For this, the model uses a realistic representation of the head, multiple dipoles to model cerebral and ocular electrical activity, and the boundary element method to calculate changes in electrical potential at different positions on the scalp. With the model, it is possible to simulate data sets as if they were recorded using different electrode configurations. Signal-to-noise ratios, computed before and after correction, are used to assess the accuracy of six different correction methods for various electrode configurations. Results show that, of the six methods, second-order blind identification (SOBI) and multiple linear regression (MLR) correct most accurately overall, as they achieve the highest rise in signal-to-noise ratio. The occurrence of ocular artifacts is linked to changes in eyeball orientation. In Chapter 2, an eye tracker is used to record pupil position, which is closely linked to eyeball orientation, and this pupil position information is used in the model to simulate eye movements.
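The SNR-based assessment described above can be sketched as follows for simulated data, where the clean cerebral signal is known exactly; the waveforms and the 90%-removal "correction" are invented for illustration:

```python
import numpy as np

def snr_db(clean, observed):
    """Signal-to-noise ratio in dB of an observed (raw or corrected) EEG
    against the known clean cerebral signal, usable when the ground truth
    is available, as with simulated data (sketch)."""
    clean = np.asarray(clean, dtype=float)
    noise = np.asarray(observed, dtype=float) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Illustration: a correction that removes most of an added ocular artifact
# raises the SNR of the corrected EEG relative to the raw EEG.
t = np.linspace(0, 1, 1000)
cerebral = np.sin(2 * np.pi * 10 * t)                # clean 10 Hz activity
artifact = 5.0 * np.exp(-((t - 0.5) ** 2) / 0.001)   # blink-like bump
raw = cerebral + artifact
corrected = cerebral + 0.1 * artifact                # 90% of the artifact removed
```

Because the residual artifact power shrinks by a factor of 100 here, the corrected SNR sits exactly 20 dB above the raw SNR.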
    Recognizing the potential benefit of using an eye tracker not only for simulations, but also for correction, Chapter 3 introduces an eye movement artifact correction method that exploits the pupil position information provided by an eye tracker. Other correction methods use the electrooculogram (EOG) and/or the EEG to estimate ocular artifacts. Because both EEG and EOG recordings are susceptible to cerebral as well as ocular activity, these other methods are at risk of overcorrecting the raw EEG. Pupil position information provides a reference that is linked to the ocular artifact in the EEG but cannot be affected by cerebral activity; as a result, the new correction method avoids traditionally problematic issues such as forward/backward propagation and evaluating the accuracy of component extraction. Using both simulated and experimental data, it is determined how pupil position influences the raw EEG, and this relation is found to be linear or quadratic. A Kalman filter is used to tune the parameters that specify the relation. On simulated data, the new method performs very well, resulting in an SNR after correction of over 10 dB for various patterns of eye movements. When compared to the three methods that performed best in the evaluation of Chapter 2, only SOBI, the best performer in that evaluation, shows similar results for some of the eye movement patterns. However, a serious limitation of the new correction method is its inability to correct blink artifacts. To broaden the range of applications for which the new method can be used, it should be improved so that it can also correct the raw EEG for blinking artifacts. Chapter 4 deals with implementing such improvements, based on the idea that a more advanced eye tracker should be able to detect both the pupil position and the eyelid position.
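The linear-or-quadratic relation between pupil position and the ocular artifact can be illustrated with a simple quadratic fit. Here ordinary least squares (np.polyfit) stands in for the Kalman-filter tuning used in the thesis, and the coefficients, amplitudes, and noise level are all invented for illustration:

```python
import numpy as np

# Sketch: fit a quadratic relation between horizontal pupil position and
# the ocular artifact at one EEG electrode, then subtract the fitted
# artifact. All numbers are illustrative, not from the thesis.
rng = np.random.default_rng(1)
pupil = rng.uniform(-1.0, 1.0, 500)        # normalized pupil position
true_coeffs = (5.0, 40.0, 0.0)             # quadratic, linear, offset (microvolts)
artifact = np.polyval(true_coeffs, pupil) + rng.standard_normal(500)  # + "cerebral" noise

fitted = np.polyfit(pupil, artifact, deg=2)          # estimated relation
corrected_residual = artifact - np.polyval(fitted, pupil)
```

The residual after subtracting the fitted artifact is essentially the non-ocular part of the signal, which is the quantity a correction method should preserve.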
    The improved eye tracker-based ocular artifact correction method is named EYE. Driven by some practical limitations of the eye tracking device currently available to us, an alternative way to estimate eyelid position is suggested, based on an EOG recorded above one eye; the EYE method can be used with either the eye tracker information or this EOG substitute. On simulated data, the accuracy of the EYE method is estimated using the EOG-based eyelid reference and is again compared against the six other correction methods. Two different SNR-based measures of accuracy are proposed: one quantifies the correction of the entire simulated data set, and the other focuses on the segments containing simulated blinking artifacts. After applying EYE, an average SNR of at least 9 dB is achieved for both measures, implying that the power of the corrected signal is at least eight times the power of the remaining noise. The simulated data sets contain a wide range of eye movements and blink frequencies, and for almost all of them, 16 out of 20, the correction results for EYE are better than for any of the other evaluated correction methods. On experimental data, the EYE method appears to adequately correct for ocular artifacts as well. As the detection of eyelid position from the EOG is in principle inferior to detection with an eye tracker, these results should also be considered an indicator of the even higher accuracies that could be obtained with a more advanced eye tracker. Considering the simplicity of the MLR method, it also performs remarkably well, which may explain why EOG-based regression is still often used for correction. In Chapter 5, the simulation model of Chapter 2 is put aside and, alternatively, experimentally recorded data is manipulated in such a way that correction inaccuracies can be highlighted.
    Correction accuracies of eight correction methods, including EYE, are estimated based on data recorded during stop-signal tasks. In the analysis of these tasks it is essential that ocular artifacts are adequately removed, because the task-related ERPs are located mostly at frontal electrode positions and are low-amplitude. These data are corrected and subsequently evaluated. For the eight methods, the overall ranking of estimated accuracy in Figure 5.3 corresponds very well with the correction accuracy of these methods on simulated data as found in Chapter 4. In a single-trial correction comparison, results suggest that the EYE-corrected EEG is not susceptible to overcorrection, whereas the other corrected EEGs are.
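For reference, the EOG-based regression (MLR) baseline that recurs throughout the comparisons above can be sketched for a single EEG/EOG channel pair; the synthetic propagation factor and signal amplitudes are illustrative:

```python
import numpy as np

def regression_correct(eeg, eog):
    """Classic EOG-based regression correction, sketched for one EEG and
    one EOG channel: estimate the propagation factor b by least squares
    and subtract b * EOG from the raw EEG."""
    eeg = np.asarray(eeg, dtype=float)
    eog = np.asarray(eog, dtype=float)
    b = np.dot(eog, eeg) / np.dot(eog, eog)   # least-squares propagation factor
    return eeg - b * eog

# Illustration with synthetic data: the EOG propagates into the EEG with
# factor 0.4; regression recovers the cerebral part almost exactly.
rng = np.random.default_rng(2)
eog = 100 * rng.standard_normal(2000)        # ocular channel (microvolts)
cerebral = 10 * rng.standard_normal(2000)    # true cerebral activity
raw_eeg = cerebral + 0.4 * eog
corrected = regression_correct(raw_eeg, eog)
```

Because a real EOG also picks up cerebral activity, the estimated propagation factor absorbs some of it; that is precisely the overcorrection risk that the pupil-position reference avoids.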

    EOG-Based Eye Movement Classification and Application on HCI Baseball Game

    © 2013 IEEE. Electrooculography (EOG) is considered the most stable physiological signal for developing human-computer interfaces (HCI) that detect eye-movement variations. EOG signal classification has gained traction in recent years as a way to overcome physical inconvenience in paralyzed patients. In this paper, a robust classification technique for eight directional eye movements is investigated by introducing the concept of a buffer, along with the variation of the slope, to avoid misclassification effects in EOG signals. Blink detection becomes complicated when the magnitude of the signals is considered; hence, a correction technique is introduced to avoid misclassification for oblique eye movements. Finally, a case study applies these correction techniques to an HCI baseball game for learning eye movements.
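A minimal sketch of sign-of-slope direction classification for an eight-direction EOG interface; the two-channel layout, the threshold, and the use of net change over a window in place of the paper's buffered slope analysis are simplifying assumptions:

```python
import numpy as np

def classify_direction(horizontal, vertical, thr=20.0):
    """Sketch: pick one of eight directions (plus "center") from the sign
    of the net change on the horizontal and vertical EOG channels over a
    movement window. Threshold and channel conventions are illustrative."""
    dh = horizontal[-1] - horizontal[0]   # net horizontal deflection
    dv = vertical[-1] - vertical[0]       # net vertical deflection
    h = "right" if dh > thr else "left" if dh < -thr else ""
    v = "up" if dv > thr else "down" if dv < -thr else ""
    return (v + "-" + h).strip("-") or "center"
```

Oblique movements ("up-left", "down-right", ...) fall out naturally when both channels cross their thresholds, which is exactly the case the paper's correction technique targets.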

    Proposals and Comparisons from One-Sensor EEG and EOG Human-Machine Interfaces

    [Abstract] Human-Machine Interfaces (HMI) allow users to interact with different devices such as computers or home elements. A key part of HMI is the design of simple, non-invasive interfaces to capture the signals associated with the user’s intentions. In this work, we have designed two different approaches based on Electroencephalography (EEG) and Electrooculography (EOG). In both cases, signal acquisition is performed using only one electrode, which makes placement more comfortable compared to multi-channel systems. We have also developed a Graphical User Interface (GUI) that presents objects to the user using two paradigms: one-by-one objects or rows-columns of objects. Both interfaces and paradigms have been compared for several users considering interactions with home elements.

    This work has been funded by the Xunta de Galicia (grant ED431C 2020/15, and grant ED431G2019/01 to support the Centro de Investigación de Galicia “CITIC”), the Agencia Estatal de Investigación of Spain (grants RED2018-102668-T and PID2019-104958RB-C42), ERDF funds of the EU (FEDER Galicia & AEI/FEDER, UE), and predoctoral Grant No. ED481A-2018/156 (Francisco Laport).

    Multimodal Human Eye Blink Recognition Using Z-score Based Thresholding and Weighted Features

    A novel real-time multimodal eye blink detection method is proposed, using an amalgam of five unique weighted features extracted from the circle boundary formed by the eye landmarks. The five features, namely Vertical Head Positioning, Orientation Factor, Proportional Ratio, Area of Intersection, and Upper Eyelid Radius, provide the key information, via a z-score threshold, for accurately predicting the eye status and thus the blinking status. An accurate and precise algorithm employing the five weighted features is proposed to predict the eye status (open/closed). One state-of-the-art dataset, ZJU (eye-blink), is used to measure the performance of the method. Precision, recall, F1-score, and the ROC curve measure the proposed method's performance qualitatively and quantitatively. Increased accuracy (around 97.2%) and precision (97.4%) are obtained compared to other existing unimodal approaches, and the proposed method is shown to outperform state-of-the-art methods.
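The z-score thresholding step can be sketched as follows; the baseline window, the threshold value, and the assumption that a closed eye drives the weighted feature value downward are all illustrative, not taken from the paper:

```python
import numpy as np

def eye_status(open_eye_history, current_value, z_thr=2.0):
    """Z-score-based open/closed decision (sketch): compare the current
    weighted feature value against the distribution of recent open-eye
    values; a large negative deviation indicates a closed eye. Feature
    construction and threshold are illustrative assumptions."""
    mu = np.mean(open_eye_history)
    sigma = np.std(open_eye_history)
    z = (current_value - mu) / sigma
    return "closed" if z < -z_thr else "open"
```

Normalizing by the per-user baseline distribution is what makes the threshold robust across subjects with different eye geometries.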

    Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography

    The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, the algorithm calculates absolute eye gaze angle via statistical analysis of detected saccades. The eye positions estimated by the new algorithm were still noisy; however, performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for lightweight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance allowing hearing aids to steer microphone directivity in the direction of the user’s eye gaze.
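The saccade-integration idea can be sketched on an idealized, drift-free EOG trace: detect step-like jumps, scale them to degrees with a per-user calibration factor, and accumulate them into an absolute horizontal angle. The scale factor and step threshold are invented for illustration, and the paper's statistical anchoring of the absolute angle is omitted:

```python
import numpy as np

def gaze_from_saccades(eog, scale=0.05, step_thr=10.0):
    """Sketch of gaze-angle estimation by saccade integration: keep only
    sample-to-sample jumps larger than step_thr (treated as saccades),
    convert each to degrees with a per-user scale factor, and accumulate.
    Returns one angle per EOG sample; parameters are illustrative."""
    diffs = np.diff(np.asarray(eog, dtype=float))
    saccades = np.where(np.abs(diffs) > step_thr, diffs, 0.0)
    return np.concatenate(([0.0], np.cumsum(scale * saccades)))
```

On real single-channel in-ear EOG, drift and noise make raw integration diverge, which is why the paper anchors the absolute angle statistically rather than relying on accumulation alone.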