5 research outputs found

    Brain-computer interface of focus and motor imagery using wavelet and recurrent neural networks

    A brain-computer interface (BCI) is a technology that allows a device to be operated without muscles or speech, directly from the brain through processed electrical signals. The technology works by capturing electrical or magnetic signals from the brain, which are then processed to extract the information they contain. BCIs usually draw their information from electroencephalogram (EEG) signals, based on various variables of interest. This study proposed a BCI to drive an external device, a drone simulator, from EEG signal information. Motor imagery (MI) and focus features were extracted from the EEG signal using wavelets and then classified by a recurrent neural network (RNN). To overcome the vanishing-memory problem of RNNs, long short-term memory (LSTM) was used. The results showed that the BCI, using wavelets and an RNN, could drive the external device on non-training data with an accuracy of 79.6%. In the experiments, the AdaDelta optimizer outperformed Adam in accuracy and loss, whereas Adam trained faster than AdaDelta
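The wavelet feature-extraction step described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: it uses a hand-rolled Haar wavelet (the paper does not name its wavelet family), and the function names are illustrative. Each decomposition level yields a sub-band whose energy could serve as one input feature for the LSTM classifier.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                          # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_band_features(eeg, levels=4):
    """Decompose an EEG epoch and summarise each sub-band by its
    energy -- one plausible input representation for an LSTM."""
    features = []
    approx = np.asarray(eeg, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(np.sum(detail ** 2))  # detail-band energy
    features.append(np.sum(approx ** 2))      # final approximation energy
    return np.array(features)

rng = np.random.default_rng(0)
epoch = rng.standard_normal(256)              # one synthetic EEG epoch
feats = wavelet_band_features(epoch)          # 4 detail bands + 1 approximation
```

Because the Haar transform is orthogonal, the band energies sum to the epoch's total energy, which makes them a stable, scale-aware feature vector.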

    Generative Autoencoder Kernels on Deep Learning for Brain Activity Analysis

    Deep Learning (DL) is a two-stage classification model consisting of feature learning, which generates feature representations in an unsupervised way, and a final supervised learning stage that uses at least two hidden layers of fully connected artificial neural networks. Optimizing the predefined classification parameters of the supervised models eases reaching global optimality with exactly zero training error. Autoencoder (AE) models are highly generalized forms of the unsupervised stage of DL, defining the output weights of the hidden neurons with various representations. As an alternative to the conventional Extreme Learning Machine (ELM) AE, the Hessenberg decomposition-based ELM autoencoder (HessELM-AE) is a novel kernel that generates different representations of the input data at the intended model sizes. The aim of the study is to analyze the performance of the novel deep AE kernel for clinical availability on electroencephalogram (EEG) recordings from stroke patients. Slow cortical potential (SCP) training in stroke patients over eight neurofeedback sessions was analyzed using the Hilbert-Huang Transform. The statistical features of different frequency modulations were fed into the deep ELM model with generative AE kernels. The novel deep ELM-AE kernels discriminated brain activity with high classification performance on positivity and negativity tasks in stroke patients
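The conventional ELM autoencoder that HessELM-AE builds on can be sketched in a few lines of numpy. This is a generic ELM-AE, not the paper's kernel: the input weights are random and untrained, the hidden layer is sigmoid, and the output weights are solved in closed form by a pseudoinverse so the network reconstructs its own input. The paper's HessELM-AE variant replaces this solution step with a Hessenberg decomposition, which is omitted here.

```python
import numpy as np

def elm_autoencoder(X, hidden=32, seed=0):
    """Minimal ELM autoencoder: random (never trained) input weights,
    sigmoid hidden layer, output weights solved by least squares so
    that H @ beta reconstructs the input X."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # random input weights
    b = rng.standard_normal(hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # hidden representations
    beta = np.linalg.pinv(H) @ X                   # closed-form output weights
    return H, beta

X = np.random.default_rng(1).standard_normal((100, 8))  # 100 feature vectors
H, beta = elm_autoencoder(X)
recon_err = np.mean((H @ beta - X) ** 2)           # reconstruction error
```

The hidden matrix `H` is the new representation of the data; in a deep ELM, several such AE stages would be stacked before the supervised output layer.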

    Recognition of regions of stroke injury using multi-modal frequency features of electroencephalogram

    Objective: An increasing number of studies are attempting to analyze strokes in advance, and identifying the damaged brain areas is essential for stroke rehabilitation. Approach: We proposed electroencephalogram (EEG) multi-modal frequency features to classify the regions of stroke injury. EEG signals were obtained from stroke patients and healthy subjects, who were divided into a right-sided brain injury group, a left-sided brain injury group, a bilateral brain injury group, and healthy controls. First, the wavelet packet transform was used to perform a time-frequency analysis of the EEG signal and extract a set of features (denoted WPT features). Then, to explore the nonlinear phase-coupling information of the EEG signal, phase-locking values (PLV) and partial directed coherence (PDC) were extracted from the brain network, producing a second set of features denoted functional connectivity (FC) features. Finally, the extracted features were fused and the ResNet50 convolutional neural network was used to classify the fused multi-modal (WPT + FC) features. Results: The classification accuracy of the proposed method reached 99.75%. Significance: The proposed multi-modal frequency features can serve as a potential indicator to distinguish regions of brain injury in stroke patients, and are potentially useful for the optimization of decoding algorithms for brain-computer interfaces
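The PLV connectivity feature used above has a compact standard definition: the magnitude of the time-averaged unit phasor of the instantaneous phase difference between two channels. A minimal numpy sketch (not the authors' code; the analytic signal is computed with an FFT-based Hilbert transform rather than a library call):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (same idea as scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0                  # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def plv(x, y):
    """Phase-locking value: |mean(exp(i * phase difference))|, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 128
t = np.arange(512) / fs
a = np.sin(2 * np.pi * 10 * t)            # 10 Hz channel
b = np.sin(2 * np.pi * 10 * t + 0.5)      # same frequency, constant phase lag
c = np.sin(2 * np.pi * 17 * t)            # different frequency: drifting phase
locked = plv(a, b)                        # near 1: phases locked
unlocked = plv(a, c)                      # near 0: phases drift
```

Computing `plv` for every channel pair yields the connectivity matrix from which the FC features are derived.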

    Study of non-invasive cognitive tasks and feature extraction techniques for brain-computer interface (BCI) applications

    A brain-computer interface (BCI) provides an important alternative for disabled people, enabling a non-muscular communication pathway between individual thoughts and various assistive appliances. A BCI essentially consists of data acquisition, pre-processing, feature extraction, classification, and device command. Despite the valuable and promising achievements already obtained in every component of BCI, the field is still relatively young, and much remains to be done for BCI to become a mature technology. To mitigate these impediments, cognitive tasks together with EEG feature-extraction and classification frameworks have been investigated. Four distinct experiments were conducted to determine the optimal solution to these specific issues. In the first experiment, three cognitive tasks, namely quick math solving, relaxing, and playing games, were investigated. Features were extracted using power spectral density (PSD), log-energy entropy, and spectral centroid, and the extracted features were classified with a support vector machine (SVM), K-nearest neighbor (K-NN), and linear discriminant analysis (LDA). In this experiment, the best classification accuracies for the single-channel and five-channel datasets were 86% and 91.66% respectively, both obtained by the PSD-SVM approach. Wink-based facial expressions, namely left wink, right wink, and no wink, were studied using fast Fourier transform (FFT) and sample-range features, and the extracted features were classified using SVM, K-NN, and LDA; the best accuracy (98.6%) was achieved by the sample range-SVM approach. Blink-based facial expressions were investigated following the same methodology as the wink study. Moreover, a peak-detection approach was employed to count the number of blinks.
The optimum accuracy of 99% was achieved using the peak-detection approach. Additionally, two-class motor imagery hand movement was classified using SVM, K-NN, and LDA, with features extracted through PSD, spectral centroid, and continuous wavelet transform (CWT); the optimum accuracy of 74.7% was achieved by the PSD-SVM approach. Finally, two device-command prototypes were designed to translate the classifier output: one translates four types of cognitive tasks into four differently colored 5-watt bulbs, while the other can control a DC motor using cognitive tasks. This study has delineated the implementation of every BCI component to facilitate the application of brainwave-assisted assistive appliances. Finally, the thesis concludes by outlining future directions regarding the current issues of BCI technology; these directions may significantly enhance usability for commercial applications, not only for the disabled but also for a significant number of healthy users
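The blink-counting step lends itself to a simple illustration. The sketch below is not the thesis's implementation: it is a generic thresholded local-maximum detector with a refractory gap, run on a synthetic EOG-like trace (all names and parameter values are illustrative assumptions).

```python
import numpy as np

def count_blinks(signal, threshold, min_gap=50):
    """Count blinks as local maxima that exceed `threshold`,
    keeping detections at least `min_gap` samples apart so one
    blink waveform is never counted twice."""
    peaks = []
    last = -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and i - last >= min_gap):
            peaks.append(i)
            last = i
    return len(peaks), peaks

# Synthetic trace: baseline noise with three blink-like spikes.
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.05, 1000)
for centre in (200, 500, 800):
    trace[centre - 10:centre + 10] += np.hanning(20) * 2.0  # blink artefact
n, locs = count_blinks(trace, threshold=1.0)
```

The refractory gap is the key design choice: blink waveforms are tens of samples wide, so without it a single noisy blink could register several threshold-crossing maxima.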