Brain signal recognition using deep learning

Abstract

This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.

The Brain-Computer Interface (BCI) has the potential to offer a new generation of applications that are independent of muscular activity and controlled directly by the human brain. Brain imaging technologies are used to translate cognitive tasks into control commands for a BCI system, and electroencephalography (EEG) is the best available non-invasive means of extracting signals from the brain. Speech is the primary means of human communication, but patients suffering from locked-in syndrome have no easy way to communicate; an ideal communication system for such patients is therefore a thought-to-speech BCI. This research investigates methods for the recognition of imagined speech from EEG signals using deep learning techniques.

To design an optimal imagined speech recognition BCI, a variety of issues had to be addressed: 1) proposing a new feature extraction and classification framework for the recognition of imagined speech from EEG signals; 2) recognizing the grammatical class of imagined words from EEG signals; and 3) discriminating between the different cognitive tasks associated with speech in the brain, such as overt speech, covert speech, and visual imagery.

In this work, machine learning and deep learning methods were used to analyze EEG signals. For the recognition of imagined speech, a new EEG database was collected while participants mentally spoke (imagined speech) the presented words. Alongside imagined speech, EEG data were recorded for visual imagery (imagining a scene or an image) and overt speech (verbal speech). Spectro-temporal and spatio-temporal features were investigated for the classification of imagined words from EEG signals. Further, a deep learning framework combining a convolutional network with an attention mechanism was implemented to learn features in the spatial, temporal, and spectral domains; it achieved a recognition rate of 76.6% across three binary word pairs. These experiments show that deep learning algorithms are well suited to imagined speech recognition from EEG signals because of their ability to extract features from non-linear and non-stationary signals.

Grammatical classes of imagined words were also recognized from EEG signals using a multi-channel convolutional network framework. This method was extended to a multi-level recognition system for multi-class classification of imagined words, which achieved an accuracy of 52.9% for 10 words, a considerable improvement over previous work.

To investigate how imagined speech differs from verbal speech and visual imagery in EEG signals, multivariate pattern analysis (MVPA) was used. MVPA identified the time segments in which the neural oscillations for the different cognitive tasks were linearly separable, and the frequencies that most strongly discriminated between the tasks were also explored. A framework was then proposed to discriminate two cognitive tasks based on the spatio-temporal patterns in EEG signals; it used the K-means clustering algorithm to find the best electrode combination and a convolutional-attention network for feature extraction and classification, achieving high recognition rates of 82.9% and 77.7%.

The results of this research suggest that a communication-based BCI system can be designed using deep learning methods. Further, this work adds to the existing body of knowledge on communication-based BCI systems.
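The abstract describes a convolutional network with an attention mechanism for classifying imagined words from multi-channel EEG. The following is a minimal sketch of that idea, not the thesis's exact architecture: the layer sizes, kernel lengths, and input shape are illustrative assumptions.

import torch
import torch.nn as nn

class ConvAttentionEEG(nn.Module):
    """Temporal + spatial convolutions followed by attention over time.
    All hyperparameters here are assumed for illustration."""
    def __init__(self, n_channels=64, n_classes=2, n_filters=16):
        super().__init__()
        # Temporal convolution: learns frequency-selective filters.
        self.temporal = nn.Conv2d(1, n_filters, kernel_size=(1, 25), padding=(0, 12))
        # Spatial convolution: mixes information across electrodes.
        self.spatial = nn.Conv2d(n_filters, n_filters, kernel_size=(n_channels, 1))
        self.bn = nn.BatchNorm2d(n_filters)
        self.pool = nn.AvgPool2d((1, 4))
        # One attention score per remaining time step.
        self.attn = nn.Linear(n_filters, 1)
        self.classify = nn.Linear(n_filters, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        x = x.unsqueeze(1)           # (batch, 1, channels, time)
        x = torch.relu(self.bn(self.spatial(torch.relu(self.temporal(x)))))
        x = self.pool(x).squeeze(2)  # (batch, filters, time')
        x = x.transpose(1, 2)        # (batch, time', filters)
        w = torch.softmax(self.attn(x), dim=1)  # attention weights over time
        x = (w * x).sum(dim=1)       # attention-weighted temporal pooling
        return self.classify(x)

model = ConvAttentionEEG()
logits = model(torch.randn(8, 64, 512))  # 8 trials, 64 channels, 512 samples
print(logits.shape)                      # torch.Size([8, 2])

The attention weights give the network a learned temporal pooling, so time segments most informative for the imagined word can dominate the final feature vector.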
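The multi-level recognition system is described only at a high level. The sketch below illustrates the hierarchical idea under stated assumptions, with simple logistic-regression classifiers standing in for the thesis's multi-channel convolutional networks: level 1 predicts the grammatical class, level 2 predicts the word within that class. The label layout (two grammatical classes of five words each) is assumed.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_features = 300, 128           # flattened EEG features (assumed)
X = rng.standard_normal((n_trials, n_features))
word = rng.integers(0, 10, n_trials)      # 10 imagined words
gram = word // 5                          # assumed: words 0-4 class 0, 5-9 class 1

# Level 1: grammatical-class classifier trained on all trials.
level1 = LogisticRegression(max_iter=1000).fit(X, gram)
# Level 2: one word classifier per grammatical class.
level2 = {g: LogisticRegression(max_iter=1000).fit(X[gram == g], word[gram == g])
          for g in (0, 1)}

def predict(x):
    g = level1.predict(x)                 # first decide the grammatical class
    return np.array([level2[gi].predict(xi[None])[0] for gi, xi in zip(g, x)])

print(predict(X[:5]), word[:5])

Splitting a 10-way problem into a coarse decision followed by 5-way decisions is one plausible reading of "multi-level"; each stage sees an easier problem than direct 10-class classification.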
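MVPA as used here finds the time segments at which two cognitive tasks become linearly separable. A minimal time-resolved decoding sketch, on simulated data with assumed shapes, cross-validates a linear classifier on the instantaneous spatial pattern at every time point:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 100, 64, 200
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)          # two cognitive tasks (simulated)
# Inject a separable pattern in a late time window for illustration.
X[y == 1, :, 120:160] += 0.5

scores = np.empty(n_times)
for t in range(n_times):
    # Decode the task from the spatial pattern across electrodes at time t.
    scores[t] = cross_val_score(LinearDiscriminantAnalysis(),
                                X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy %.2f at sample %d" % (scores.max(), scores.argmax()))

Time points where the cross-validated accuracy rises above chance mark the segments in which the neural responses to the two tasks are linearly separable; the same loop run on band-pass-filtered data would expose the discriminative frequencies.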
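Finally, the proposed discrimination framework uses K-means clustering to find the best electrode combination. One way this can work, sketched below under assumptions (the thesis's exact clustering criterion is not given in the abstract), is to cluster electrodes by a per-electrode discriminability profile and keep the electrode nearest each centroid:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 100, 64, 200
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

# One feature vector per electrode: class-mean difference over time (assumed criterion).
feats = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)   # (channels, times)

k = 8  # desired number of electrodes (assumed)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
selected = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    # Keep the member electrode closest to its cluster centroid.
    d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
    selected.append(int(members[d.argmin()]))
print("selected electrodes:", sorted(selected))

The selected subset would then feed the convolutional-attention network described earlier, reducing input dimensionality while retaining electrodes with distinct, informative response profiles.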
