
    Brain-computer interface for robot control with eye artifacts for assistive applications

    Human-robot interaction is a rapidly developing field, and robots are taking increasingly active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders may be unable to consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, Brain-Computer Interface (BCI) systems offer an alternative way to communicate or interact with the external world. To improve the lives of people with disabilities, this paper presents a novel BCI that controls an assistive robot using the user's eye artifacts. In this study, the eye artifacts that contaminate electroencephalogram (EEG) signals are treated as a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts in EEG signals through the characteristic shapes that occur during these events. Lateral movements are distinguished by their ordered peak-and-valley formation and by the opposite phase of the signals measured at the F7 and F8 channels. To the best of the authors' knowledge, this is the first method to use this behavior to detect lateral eye movements. For blink detection, the authors propose a double-thresholding method that catches weak blinks as well as regular ones, differentiating it from other algorithms in the literature, which normally use only one threshold. Events detected in real time, together with their virtual time stamps, are fed into a second algorithm that distinguishes double and quadruple blinks from single blinks based on their occurrence frequency. After testing the algorithm offline and in real time, it was implemented on the device. The resulting BCI was used to control an assistive robot through a graphical user interface. Validation experiments with 5 participants show that the developed BCI is able to control the robot.
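    The double-thresholding and blink-counting ideas described above can be sketched in Python. The threshold values, minimum width, grouping gap, and function names below are illustrative assumptions, not the paper's actual parameters:

```python
def detect_blinks(signal, strong=150.0, weak=60.0, min_width=3):
    """Two-threshold blink detector: an excursion above `strong` counts
    immediately, while one that only exceeds the weaker threshold must
    persist for `min_width` samples. Returns (start, end) index pairs."""
    events, i, n = [], 0, len(signal)
    while i < n:
        if signal[i] >= weak:
            start = i
            while i < n and signal[i] >= weak:
                i += 1
            if max(signal[start:i]) >= strong or i - start >= min_width:
                events.append((start, i))
        else:
            i += 1
    return events

def group_blinks(onsets, fs=250, max_gap=0.5):
    """Count blinks whose onsets fall within `max_gap` seconds of each
    other, so single/double/quadruple patterns become group sizes."""
    groups, current = [], []
    for t in onsets:
        if current and (t - current[-1]) / fs > max_gap:
            groups.append(len(current))
            current = []
        current.append(t)
    if current:
        groups.append(len(current))
    return groups
```

    A strong blink is accepted from a single sample, while a weak one must be sustained; grouped onset counts can then be mapped to single-, double-, or quadruple-blink commands.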

    A systematic review on artifact removal and classification techniques for enhanced MEG-based BCI systems

    People affected by neurological disease may be completely paralyzed and unable to move, yet still able to think; their brain activity is then the only means by which they can interact with their environment. Brain-Computer Interface (BCI) research attempts to create tools that support subjects with disabilities, and it has expanded rapidly over the past few decades as a result of interest in creating a new kind of human-to-machine communication. As magnetoencephalography (MEG) has better spatial and temporal resolution than other approaches, it is being used to measure brain activity non-invasively. The recorded signal includes signals related to brain activity as well as noise and artifacts from numerous sources. MEG can have a low signal-to-noise ratio because the magnetic fields generated by cortical activity are small compared to other artifacts and noise. By using the right techniques for noise and artifact detection and removal, the signal-to-noise ratio can be increased. This article analyses various methods for removing artifacts as well as classification strategies, and offers a study of the influence of Deep Learning models on BCI systems. Furthermore, the various challenges in collecting and analyzing MEG signals, as well as possible fields of study in MEG-based BCI, are examined.

    Cheetah Experimental Platform Web 1.0: Cleaning Pupillary Data

    Recently, researchers have started using cognitive load in various settings, e.g., educational psychology, cognitive load theory, or human-computer interaction. Cognitive load characterizes a task's demand on the limited information-processing capacity of the brain. The widespread adoption of eye-tracking devices has led to increased attention to objectively measuring cognitive load via pupil dilation. However, this approach requires a standardized data processing routine to reliably measure cognitive load. This technical report presents CEP-Web, an open-source platform providing state-of-the-art data processing routines for cleaning pupillary data, combined with a graphical user interface that enables the management of studies and subjects. Future developments will include support for analyzing the cleaned data as well as support for Task-Evoked Pupillary Response (TEPR) studies.
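    A typical pupillary cleaning step of the kind such a platform provides is blink-dropout interpolation. This sketch assumes dropouts are recorded as zeros, which is a common but not universal convention; the function name is illustrative:

```python
def clean_pupil(trace, invalid=0.0):
    """Replace invalid samples (blink dropouts, recorded as `invalid`)
    by linear interpolation between the nearest valid neighbours.
    Gaps at the edges are filled with the nearest valid value."""
    out = list(trace)
    valid = [i for i, v in enumerate(out) if v != invalid]
    if not valid:
        return out
    for i in range(len(out)):
        if out[i] != invalid:
            continue
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is None:
            out[i] = out[right]          # leading gap: copy forward
        elif right is None:
            out[i] = out[left]           # trailing gap: copy backward
        else:
            frac = (i - left) / (right - left)
            out[i] = out[left] + frac * (out[right] - out[left])
    return out
```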

    VME-DWT: an efficient algorithm for detection and elimination of eye blink from short segments of single EEG channel

    Objective: Recent advances in the development of low-cost single-channel electroencephalography (EEG) headbands have opened new possibilities for applications in health monitoring and brain-computer interface (BCI) systems. These recorded EEG signals, however, are often contaminated by eye blink artifacts that can lead to fallacious interpretation of brain activity. This paper proposes an efficient algorithm, VME-DWT, to remove eye blinks in a short segment of a single EEG channel. Method: The proposed algorithm (a) locates eye blink intervals using Variational Mode Extraction (VME) and (b) filters only the contaminated EEG intervals using an automatic Discrete Wavelet Transform (DWT) algorithm. The performance of VME-DWT is compared with an automatic Variational Mode Decomposition (AVMD) algorithm and a DWT-based algorithm, both proposed for suppressing eye blinks in a short segment of a single EEG channel. Results: VME-DWT detects and filters 95% of the eye blinks from contaminated EEG signals with SNR ranging from −8 to +3 dB. VME-DWT shows superiority over AVMD and DWT, with a higher mean correlation coefficient (0.92 vs. 0.83 and 0.58) and a lower mean RRMSE (0.42 vs. 0.59 and 0.87). Significance: VME-DWT can be a suitable algorithm for removal of eye blinks in low-cost single-channel EEG systems as it is: (a) computationally efficient, filtering the contaminated EEG signal at millisecond time resolution; (b) automatic, requiring no human intervention; (c) low-invasive, leaving uncontaminated EEG intervals unaltered; and (d) low-complexity, needing no artifact reference channel.
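    VME itself is beyond a short sketch, but the DWT filtering stage can be illustrated with a one-level Haar transform and soft thresholding of the detail coefficients, a generic stand-in for the paper's automatic DWT algorithm:

```python
def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients
    (x must have even length)."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction from (a, d)."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (soft thresholding)."""
    return [max(abs(c) - t, 0.0) * (1 if c > 0 else -1) for c in coeffs]

def denoise(x, t):
    """Suppress blink-scale detail energy in a contaminated interval."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

    In practice the transform is applied over several levels with a data-driven threshold, and only the intervals flagged as contaminated are filtered, leaving the rest of the EEG unaltered.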

    Analysis of Small Muscle Movement Effects on EEG Signals

    In this thesis, the artefactual effects of small muscle movements were investigated. The upper frequency bands (above 30 Hz) of the EEG signal were extracted in order to investigate these effects. When the contamination level is high, small muscle artifacts can be detected with 92.2% accuracy. If the artifacts are very small, such as a single finger movement, the detection accuracy decreases to 64%, but it rises to 72% after removing the eye blink artifacts. The results of the classification support our hypothesis about the artefactual effects of small muscle movements.
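    The high-band screening described above can be sketched with a naive DFT band-power estimate. The 30 Hz cut-off follows the abstract, while the 50% power ratio and function names are illustrative assumptions:

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Power of x in the band [f_lo, f_hi] Hz via a naive DFT
    (O(n^2), which is fine for short EEG windows)."""
    n = len(x)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def looks_like_muscle(x, fs, ratio=0.5):
    """Flag a window as EMG-contaminated when more than `ratio` of its
    power lies above 30 Hz (the ratio is an illustrative threshold)."""
    high = band_power(x, fs, 30.0, fs / 2)
    total = band_power(x, fs, 0.0, fs / 2)
    return total > 0 and high / total > ratio
```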

    Online Alpha Wave detector: an Embedded hardware-software implementation

    The recent trend in embedded system development opens new prospects for applications that were not possible in the past. Eye tracking for sleep and fatigue detection has become an important and useful application in industrial and automotive scenarios, since fatigue is one of the most prevalent causes of earth-moving equipment accidents. Typical solutions such as cameras, accelerometers and dermal analyzers are on the market but have some drawbacks. This thesis project used the EEG signal, particularly alpha waves, to overcome them, employing an embedded hardware-software implementation to detect these signals in real time.
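    The abstract does not give the detection algorithm itself; a common low-cost choice on embedded hardware is the Goertzel algorithm, which measures power at a single frequency bin (here a 10 Hz alpha component) without computing a full FFT. A minimal sketch with illustrative parameters:

```python
import math

def goertzel_power(x, fs, f_target):
    """Goertzel algorithm: squared DFT magnitude of x at the bin
    nearest f_target. Much cheaper than a full FFT, which suits an
    embedded alpha-wave detector that only needs the 8-12 Hz band."""
    n = len(x)
    k = round(n * f_target / fs)          # nearest DFT bin index
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 * s_prev2 + s_prev * s_prev - coeff * s_prev * s_prev2
```

    A detector would compare this power against a calibration threshold to decide whether alpha activity (eyes closed, drowsiness) is present.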

    A hybrid brain-computer interface based on motor intention and visual working memory

    Non-invasive electroencephalography (EEG) based brain-computer interfaces (BCIs) are able to provide alternative means for people with disabilities to communicate with and control external assistive devices. A hybrid BCI is designed and developed for the following two types of systems: control and monitoring. Our first goal is to create a signal decoding strategy that allows people with limited motor control to have more command over potential prosthetic devices. Eight healthy subjects were recruited to perform visually cued reaching tasks. Eye and motion artifacts were identified and removed to ensure that the subjects' visual fixation on the target locations would have little or no impact on the final result. We applied Fisher Linear Discriminant (FLD) analysis for single-trial classification of the EEG to decode the intended arm movement in the left, right, and forward directions (before the onsets of actual movements). The mean EEG signal amplitude near the PPC region 271-310 ms after visual stimulation was found to be the dominant feature for the best classification results. A signal scaling factor was developed and found to improve the classification accuracy from 60.11% to 93.91% in the two-class (left versus right) scenario. This result shows great promise for BCI neuroprosthetics applications, as motor intention decoding can serve as a prelude to the classification of imagined motor movement to assist in motor disability rehabilitation, such as prosthetic limb or wheelchair control. The second goal is to develop adaptive training for patients with low visual working memory (VWM) capacity, to improve their cognitive abilities, and for healthy individuals who seek to enhance their intellectual performance. VWM plays a critical role in preserving and processing information. It is associated with attention, perception and reasoning, and its capacity can be used as a predictor of cognitive abilities.
Recent evidence has suggested that, with training, one can enhance VWM capacity and attention over time. Not only can these studies reveal the characteristics of VWM load and the influence of training, they may also provide effective rehabilitative means for patients with low VWM capacity. However, few studies have investigated VWM over a long period of time, beyond 5 weeks. In this study, a combined behavioral and EEG approach was used to investigate VWM load, gain, and transfer. The results reveal that VWM capacity is directly correlated with reaction time and contralateral delay amplitude (CDA). The approximate magic number 4 was observed through the event-related potential (ERP) waveforms, with an average capacity of 2.8 items across 15 participants. In addition, the findings indicate that VWM capacity can be improved through adaptive training. Furthermore, after training exercises, participants from the training group were able to improve their performance accuracies dramatically compared to the control group. Adaptive training gains on non-trained tasks could also be observed 12 weeks after training. Therefore, we conclude that all participants can benefit from training gains, and that augmented VWM capacity can be sustained over a long period of time. Our results suggest that this form of training can significantly improve cognitive function and may be useful for enhancing user performance on neuroprosthetic devices.
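    A two-class Fisher Linear Discriminant of the kind applied to the single-trial EEG can be sketched for 2-D feature vectors. The data and dimensionality here are illustrative, not the study's actual features:

```python
def fld_direction(class_a, class_b):
    """Two-class Fisher Linear Discriminant for 2-D feature vectors:
    w = Sw^-1 (mean_a - mean_b), Sw being the pooled within-class scatter."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    # w = Sw^-1 * diff, with the 2x2 inverse written out explicitly
    return [( sw[1][1] * diff[0] - sw[0][1] * diff[1]) / det,
            (-sw[1][0] * diff[0] + sw[0][0] * diff[1]) / det]

def fld_classify(w, threshold, x):
    """Project x onto w and compare to threshold (class A if above)."""
    return w[0] * x[0] + w[1] * x[1] > threshold
```

    The scaling factor reported in the abstract would act on the features before this projection; it is not reproduced here.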

    A real-time noise cancelling EEG electrode employing Deep Learning

    Two major problems of head-worn electroencephalography (EEG) are muscle and eye-blink artefacts, in particular in non-clinical environments while performing everyday tasks. Current artefact removal techniques such as principal component analysis (PCA) or independent component analysis (ICA) take signals from a high number of electrodes and separate the noise from the signal by processing them offline in a computationally expensive and slow way. In contrast, we present a smart compound electrode which is able to learn in real time to remove artefacts. The smart 3D-printed electrode consists of a central electrode and a ring electrode, where polylactic acid (PLA) was used for the base and Ag/AgCl for the conductive parts, allowing standard manufacturing processes. A new deep learning algorithm then learns continuously to remove both eye-blink and muscle artefacts, combining the real-time capabilities of adaptive filters with the power of deep neural networks. The electrode setup together with the deep learning algorithm increases the signal-to-noise ratio of the EEG on average by 20 dB. Our approach offers a simple 3D-printed design in combination with a real-time algorithm which can be integrated into the electrode itself. This electrode has the potential to provide high-quality EEG in non-clinical and consumer applications, such as sleep monitoring and brain-computer interfaces (BCI). Comment: 12 pages, 4 figures, code available under http://doi.org/10.5281/zenodo.413110
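    The deep-network stage is beyond a short sketch, but the adaptive-filter side of the combination can be illustrated with a classic LMS noise canceller that treats the ring electrode as a noise reference. Function names and parameters are illustrative assumptions, not the paper's design:

```python
def lms_cancel(primary, reference, mu=0.01, taps=4):
    """LMS adaptive noise canceller: estimate the artefact component of
    `primary` (central electrode) from `reference` (ring electrode) and
    subtract it. Returns the cleaned signal, sample by sample."""
    w = [0.0] * taps                       # adaptive filter weights
    buf = [0.0] * taps                     # reference delay line
    cleaned = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]               # shift new reference sample in
        y = sum(wi * bi for wi, bi in zip(w, buf))   # artefact estimate
        e = d - y                          # error = cleaned EEG sample
        w = [wi + 2 * mu * e * bi for wi, bi in zip(w, buf)]  # LMS update
        cleaned.append(e)
    return cleaned
```

    The filter adapts online in O(taps) per sample, which is what makes it attractive for integration into the electrode itself; the deep network in the paper replaces this linear stage with a learned non-linear one.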

    Consciousness Levels Detection Using Discrete Wavelet Transforms on Single Channel EEG Under Simulated Workload Conditions

    The EEG signal is one of the most complex signals, and its very low amplitude makes it challenging to analyze in real time. The different waveforms, such as alpha, beta, theta and delta, were studied, and selected features were related to consciousness levels. Consciousness level detection is useful for estimating subjects' performance in selected tasks that require high alertness. This estimation was performed by analyzing signal properties of the EEG using features extracted through the discrete wavelet transform with a moving window of 10 seconds and 90% overlap. The EEG signal is decomposed into wavelets, and the average energy and power of the coefficients related to the EEG bands are taken as features. The data were collected from a standard EEG machine from the volunteers as per the protocol, using the C3 and C4 locations (unipolar) of the standard 10-20 electrode system. The central region of the brain is the most optimal location for consciousness level detection: estimation using Discrete Wavelet Transform (DWT) energy and power features provided better accuracy when the central regions were chosen. An accuracy of 99% was achieved when the algorithm was implemented using a classifier based on linear-kernel support vector machines (SVM).
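    The 10-second moving window with 90% overlap can be sketched directly; `window_energy` here is computed on raw samples as a simple stand-in for the DWT-coefficient energies used in the study:

```python
def sliding_windows(x, fs, win_s=10.0, overlap=0.9):
    """Yield successive windows of `win_s` seconds with the given
    fractional overlap (0.9 means the window advances by 10% per step)."""
    win = int(win_s * fs)
    step = max(1, round(win * (1 - overlap)))
    for start in range(0, len(x) - win + 1, step):
        yield x[start:start + win]

def window_energy(w):
    """Average energy of a window, one feature per window."""
    return sum(s * s for s in w) / len(w)
```

    Each window would normally be decomposed with a DWT first, with one energy/power feature per wavelet band feeding the SVM classifier.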

    Human-machine interfaces based on EMG and EEG applied to robotic systems

    <p>Abstract</p> <p>Background</p> <p>Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals: one based on the EMG signal and the other on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need only a few hardware and software resources, making them, computationally and financially speaking, low-cost solutions. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested on a mobile robot, while the EEG-based HMI was tested on both a mobile robot and a robotic manipulator.</p> <p>Results</p> <p>Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to perform ten eye blinks with each eye in order to test the eye blink detection algorithm. An average success rate of about 95%, reached by individuals able to blink both eyes, supports the conclusion that the system can be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered from meningitis or epilepsy) to test the system. All of them managed to operate the HMI after only one training session, and most learnt how to use it in less than 15 minutes; the minimum and maximum training times observed were 3 and 50 minutes, respectively.</p> <p>Conclusion</p> <p>These works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the autonomous wheelchair thus obtained, to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, aiming to use it to help individuals with severe motor dysfunctions.</p>