
    Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    Motor imagery (MI) electroencephalogram (EEG) signals are widely used in brain-computer interfaces (BCIs). However, the number of MI states that can be classified is limited, and classification accuracy rates are low because the signals are nonlinear and nonstationary. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. For preprocessing, band-pass filtering is performed to isolate the frequency band of MI-related signals, and canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is then used to remove electrooculogram (EOG) artifacts. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the k-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imagined movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex-algorithm identification method significantly improves the identification rate of the minority samples and the overall classification performance.
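    As a rough illustration of the spatial-filtering and classification stages described above, the sketch below builds shrinkage-regularized CSP filters from two-class trial covariances and prepares SVM and KNN classifiers on log-variance features. It is a minimal sketch, not the authors' R-CSP with generic learning; the shrinkage form, variable names, and fusion rule are assumptions.

    # Minimal sketch of a CSP-style spatial filter with shrinkage regularization,
    # followed by SVM/KNN classification of log-variance features.
    # trials_a, trials_b, n_filters and the fusion rule are illustrative, not from the paper.
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier

    def regularized_csp(trials_a, trials_b, reg=0.1, n_filters=3):
        """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
        def mean_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)

        c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)
        n_ch = c_a.shape[0]
        # Shrink each covariance toward the identity (one simple form of regularization).
        c_a = (1 - reg) * c_a + reg * np.trace(c_a) / n_ch * np.eye(n_ch)
        c_b = (1 - reg) * c_b + reg * np.trace(c_b) / n_ch * np.eye(n_ch)
        # Generalized eigendecomposition; the extreme eigenvectors give the spatial filters.
        eigvals, eigvecs = eigh(c_a, c_a + c_b)
        order = np.argsort(eigvals)
        picks = np.concatenate([order[:n_filters], order[-n_filters:]])
        return eigvecs[:, picks].T  # shape (2 * n_filters, n_channels)

    def log_var_features(trials, filters):
        """Project trials through the spatial filters and take normalized log-variance."""
        projected = np.einsum('fc,ncs->nfs', filters, trials)
        var = projected.var(axis=2)
        return np.log(var / var.sum(axis=1, keepdims=True))

    # One possible (assumed) fusion rule: fall back to KNN when the SVM is uncertain.
    svm = SVC(kernel='rbf', probability=True)
    knn = KNeighborsClassifier(n_neighbors=5)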

    Detection of EEG K-complexes using fractal dimension of time-frequency images technique coupled with undirected graph features

    K-complex identification is a challenging task in sleep research. Detecting K-complexes in electroencephalogram (EEG) signals by visual inspection is time consuming, prone to errors, and requires trained experts. Many existing methods for K-complex detection rely mainly on analyzing EEG signals in the time and frequency domains. In this study, an efficient method is proposed to detect K-complexes in EEG signals based on the fractal dimension (FD) of time-frequency (T-F) images coupled with undirected graph features. Firstly, an EEG signal is partitioned into smaller segments using a sliding window technique, and each segment is converted into a T-F image via the short-time Fourier transform (STFT) spectrogram. Secondly, the box-counting method is applied to each T-F image to estimate its FD. A vector of FD features is extracted from each T-F image and then mapped into an undirected graph. Key structural properties of the graphs are extracted as the representative features of the original EEG signals and forwarded to a least squares support vector machine (LS-SVM) classifier. To investigate the classification ability of the proposed feature extraction combined with the LS-SVM classifier, the extracted features are also forwarded to a k-means classifier for comparison. The proposed method is compared with several existing K-complex detection methods that used the same datasets. The findings of this study show that the proposed method yields better classification results than the other existing methods in the literature, with an average accuracy of 97% for K-complex detection. The proposed method could lead to an efficient tool for automatic sleep stage scoring, useful for doctors and neurologists in the diagnosis and treatment of sleep disorders and for sleep research.
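    A minimal sketch of the T-F image and box-counting steps is given below, assuming scipy's STFT-based spectrogram and a simple mean-value binarization; the window length, box sizes, and threshold are illustrative assumptions rather than the paper's settings.

    # Sketch: spectrogram of an EEG segment, then box-counting fractal dimension of the T-F image.
    import numpy as np
    from scipy.signal import spectrogram

    def tf_image(segment, fs=100, nperseg=64):
        """Return the STFT spectrogram (T-F image) of a 1-D EEG segment."""
        _, _, sxx = spectrogram(segment, fs=fs, nperseg=nperseg)
        return sxx

    def box_counting_fd(image, threshold=None):
        """Estimate the fractal dimension of a binarized T-F image by box counting."""
        if threshold is None:
            threshold = image.mean()          # assumed binarization rule
        binary = image > threshold
        h, w = binary.shape
        sizes = [s for s in (2, 4, 8, 16, 32) if s <= min(h, w)]
        counts = []
        for s in sizes:
            # Count boxes of side s that contain at least one "on" pixel.
            boxes = 0
            for i in range(0, h, s):
                for j in range(0, w, s):
                    if binary[i:i + s, j:j + s].any():
                        boxes += 1
            counts.append(boxes)
        # Slope of log(count) vs log(1/size) approximates the fractal dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope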

    Emotion Recognition from Electroencephalogram Signals based on Deep Neural Networks

    Emotion recognition from electroencephalogram (EEG) signals using deep learning methods has made significant progress. Nevertheless, the complexity and time-intensive nature of EEG analysis present challenges. This study proposes an efficient EEG analysis method that forgoes hand-crafted feature extraction and sliding windows, instead employing one-dimensional neural networks for emotion classification. The analysis uses EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP) and focuses on thirteen EEG electrode positions closely associated with emotion changes. Three distinct neural models are explored for emotion classification: two convolutional neural networks (CNN) and a combined convolutional and long short-term memory network (CNN-LSTM). Two labeling schemes are considered: four emotional ranges, namely low arousal and low valence (LALV), low arousal and high valence (LAHV), high arousal and high valence (HAHV), and high arousal and low valence (HALV); and a binary scheme of high valence (HV) versus low valence (LV). Results show that CNN_1 achieves an average accuracy of 97.7% for classifying the four emotional ranges, CNN_2 achieves 97.1%, and CNN-LSTM reaches 99.5%. In classifying the HV and LV labels, our methods attain accuracies of 100%, 98.8%, and 99.7% for CNN_1, CNN_2, and CNN-LSTM, respectively. The performance of our models surpasses that of previously reported studies, showcasing their potential as highly effective classifiers for emotion recognition from EEG signals.
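    To make the CNN-LSTM idea concrete, the sketch below shows one possible 1-D CNN-LSTM classifier in tf.keras. The 13-channel input and 4-class output follow the abstract; the trial length, filter counts, kernel sizes, and training settings are illustrative assumptions and not the authors' exact architecture.

    # Sketch: a 1-D CNN-LSTM acting directly on raw multi-channel EEG trials.
    import tensorflow as tf

    n_samples_per_trial = 8064   # illustrative trial length (e.g. 63 s at 128 Hz)
    n_channels = 13              # the 13 emotion-related electrodes mentioned above
    n_classes = 4                # LALV / LAHV / HAHV / HALV

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_samples_per_trial, n_channels)),
        tf.keras.layers.Conv1D(64, kernel_size=7, activation='relu'),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(128, kernel_size=5, activation='relu'),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])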

    Application of biosignal-driven intelligent systems for multifunction prosthesis control

    University of Technology, Sydney. Faculty of Engineering and Information Technology.
    Prosthetic devices aim to provide an artificial alternative to missing limbs. The controller for such devices is usually driven by the biosignals generated by the human body, particularly electromyogram (EMG) or electroencephalogram (EEG) signals. Such a controller uses a pattern recognition approach to classify the EMG signal recorded from the human muscles or the EEG signal from the brain. The aim of this thesis is to improve EMG and EEG pattern classification accuracy. Because the success of pattern-recognition-based biosignal-driven systems depends highly on the quality of the extracted features, a number of novel, robust, hybrid and innovative methods are proposed to achieve better performance. These methods are developed to tackle many of the limitations of existing systems, in particular feature representation and dimensionality reduction. A set of knowledge extraction methods that can accurately and rapidly identify the most important attributes for classifying arm movements is formulated. This is accomplished through the following:
    1. Developing a new feature extraction technique that can identify the most important features from the high-dimensional time-frequency representation of multichannel EMG and EEG signals. For this task, an information content estimation method using fuzzy entropies and fuzzy mutual information is proposed to identify the optimal wavelet packet transform decomposition for classification.
    2. Developing a powerful variable (feature or channel) selection paradigm to improve the performance of multichannel EMG and EEG driven systems, eventually leading to a combined channel and feature selection technique as one possible scheme for dimensionality reduction. Two novel feature selection methods are developed under this scheme utilizing the ant colony and differential evolution optimization techniques. The differential evolution technique is further modified in a novel attempt to employ a float optimizer for the combinatorial task of feature selection, with both methods proving powerful (a sketch of this idea follows the abstract).
    3. Developing two feature projection techniques that extract a small subset of highly informative discriminant features, acting as an alternative scheme for dimensionality reduction. The two methods are novel variations of fuzzy-discriminant-analysis-based projection techniques. In addition, an extension to non-linear discriminant analysis is proposed based on a mixture of differential evolution and fuzzy discriminant analysis.
    The testing and verification of the proposed methods on different EMG and EEG datasets provides very encouraging results.
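    The sketch below illustrates point 2's float-optimizer idea in a generic form: scipy's differential evolution searches continuous genes in [0, 1], each gene is thresholded to an include/exclude decision, and cross-validated SVM error is the objective. The classifier, threshold, and DE settings are placeholder assumptions, not the thesis's actual formulation.

    # Sketch: feature selection via a float-valued differential evolution optimizer.
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def make_objective(X, y, threshold=0.5):
        def objective(genes):
            mask = genes > threshold            # continuous gene -> include/exclude
            if not mask.any():
                return 1.0                      # penalize empty feature sets
            acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
            return 1.0 - acc                    # DE minimizes, so use the error rate
        return objective

    def select_features(X, y):
        n_features = X.shape[1]
        bounds = [(0.0, 1.0)] * n_features
        result = differential_evolution(make_objective(X, y), bounds,
                                        maxiter=20, popsize=10, seed=0)
        return result.x > 0.5                   # boolean mask of selected features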

    An Approach toward Artificial Intelligence Alzheimer's Disease Diagnosis Using Brain Signals

    Background: Electroencephalography (EEG) signal analysis is a rapid, low-cost, and practical method for diagnosing the early stages of dementia, including mild cognitive impairment (MCI) and Alzheimer’s disease (AD). The extraction of appropriate biomarkers to assess a subject’s cognitive impairment has attracted a lot of attention in recent years. The aberrant progression of AD leads to cortical disconnection. Because several brain areas interact, these disconnections may show up as abnormalities in functional connectivity and complex behaviors. Methods: This work proposes a novel method for differentiating between AD, MCI, and healthy controls (HC) in two-class and three-class classifications based on EEG signals. To address class imbalance, we employ EEG data augmentation techniques, oversampling the minority classes with variational autoencoders (VAEs) as well as traditional noise-addition methods and hybrid approaches. Power spectral density (PSD) and temporal features extracted from the EEG signals were combined, and a support vector machine (SVM) classifier was used to distinguish between the classes. Results: Insufficient data and unbalanced datasets are two common problems in AD datasets. This study shows that it is possible to generate comparable data using noise addition and VAEs, train the model on these data, and, to some extent, overcome the aforementioned issues, with an increase in classification accuracy of 2 to 7%. Conclusion: Using EEG data, we were able to successfully detect the three classes AD, MCI, and HC. Compared with the pre-augmentation stage, three-class classification accuracy increased by 3% when the VAE-generated data were added. As a result, it is clear how useful EEG data augmentation methods are for classes with smaller sample numbers.
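    As a rough illustration of the feature and augmentation stages, the sketch below computes Welch band-power features per channel and adds simple noise-based copies of a minority class before SVM training. The band edges, sampling rate, noise scale, and the augmentation rule itself are assumptions for illustration, not the authors' exact pipeline (which also uses VAE-generated samples).

    # Sketch: Welch band-power features per channel plus simple noise-addition augmentation.
    import numpy as np
    from scipy.signal import welch
    from sklearn.svm import SVC

    BANDS = {'delta': (0.5, 4), 'theta': (4, 8), 'alpha': (8, 13),
             'beta': (13, 30), 'gamma': (30, 45)}

    def band_power_features(epoch, fs=256):
        """epoch: (n_channels, n_samples) -> flat vector of band powers per channel."""
        freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
        feats = []
        for lo, hi in BANDS.values():
            idx = (freqs >= lo) & (freqs < hi)
            feats.append(psd[:, idx].mean(axis=-1))
        return np.concatenate(feats)

    def augment_with_noise(X, y, minority_label, n_copies=3, scale=0.05):
        """Append noisy copies of the minority-class feature vectors (assumed rule)."""
        minority = np.repeat(X[y == minority_label], n_copies, axis=0)
        noisy = minority + np.random.normal(0, scale * minority.std(), minority.shape)
        return (np.vstack([X, noisy]),
                np.concatenate([y, np.full(len(noisy), minority_label)]))

    clf = SVC(kernel='rbf')   # trained on the augmented feature matrix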

    EEG sleep stages identification based on weighted undirected complex networks

    Sleep scoring is important in sleep research because errors in scoring a patient's sleep electroencephalography (EEG) recordings can cause serious problems such as incorrect diagnoses, medication errors, and misinterpretation of the recordings. The aim of this research is to develop a new automatic method for EEG sleep stage classification based on a statistical model and weighted brain networks. Methods: Each EEG segment is partitioned into a number of blocks using a sliding window technique, and a set of statistical features is extracted from each block. As a result, a vector of features is obtained to represent each EEG segment. The vector of features is then mapped into a weighted undirected network. Different structural and spectral attributes of the networks are extracted and forwarded to a least squares support vector machine (LS-SVM) classifier. The networks' attributes are also thoroughly investigated; it is found that their characteristics vary with sleep stage, so each sleep stage is best represented by the key features of its network. Results: The proposed method is evaluated using two datasets acquired from different EEG channels (Pz-Oz and C3-A2) according to the R&K and AASM standards, without pre-processing the original EEG data. The results obtained by the LS-SVM are compared with those of naïve Bayes, k-nearest neighbor, and multi-class SVM classifiers, and the proposed method is also compared with other benchmark sleep stage classification methods. The comparison demonstrates that the proposed method has an advantage in scoring sleep stages from single-channel EEG signals. Conclusions: An average accuracy of 96.74% is obtained with the C3-A2 channel according to the AASM standard, and 96% with the Pz-Oz channel based on the R&K standard.
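    The sketch below illustrates one way a per-segment feature vector could be mapped to a weighted undirected network and summarized by a few structural and spectral attributes, using networkx. The edge-weight definition (absolute difference between feature values) and the chosen attributes are assumptions made for illustration, not the paper's exact construction.

    # Sketch: feature vector -> weighted undirected graph -> structural/spectral attributes.
    import numpy as np
    import networkx as nx

    def features_to_graph(features):
        """Each feature becomes a node; every pair is joined by a weighted edge."""
        g = nx.Graph()
        n = len(features)
        g.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                g.add_edge(i, j, weight=abs(features[i] - features[j]))
        return g

    def graph_attributes(g):
        """A few structural and spectral attributes usable as classifier inputs."""
        weights = [d['weight'] for _, _, d in g.edges(data=True)]
        laplacian = nx.laplacian_matrix(g, weight='weight').toarray()
        eigs = np.linalg.eigvalsh(laplacian)        # ascending Laplacian spectrum
        return np.array([
            np.mean(weights),                       # mean edge weight
            nx.average_clustering(g, weight='weight'),
            eigs[1],                                # algebraic connectivity
            eigs[-1],                               # largest Laplacian eigenvalue
        ])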

    Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals

    An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for the effective operation of BCI systems, is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature representations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.
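    A minimal sketch of the general idea (a shared embedding trained jointly for classification and reconstruction so that artifacts such as background activity are suppressed) is given below in tf.keras. The layer sizes, 64-channel input, 5-class output, and loss weights are assumptions for illustration, not the authors' architecture.

    # Sketch: joint convolutional-recurrent embedding with an autoencoder-style reconstruction branch.
    import tensorflow as tf

    n_timesteps, n_channels, embed_dim = 400, 64, 32

    inputs = tf.keras.Input(shape=(n_timesteps, n_channels))
    x = tf.keras.layers.Conv1D(32, 5, activation='relu', padding='same')(inputs)
    x = tf.keras.layers.LSTM(64)(x)
    embedding = tf.keras.layers.Dense(embed_dim, activation='relu', name='embedding')(x)

    # Reconstruction branch: rebuilding the input from the embedding encourages it
    # to keep task-relevant structure and discard artifacts.
    decoded = tf.keras.layers.Dense(n_timesteps * n_channels)(embedding)
    decoded = tf.keras.layers.Reshape((n_timesteps, n_channels), name='reconstruction')(decoded)

    # Classification branch on top of the shared embedding.
    logits = tf.keras.layers.Dense(5, activation='softmax', name='label')(embedding)

    model = tf.keras.Model(inputs, [logits, decoded])
    model.compile(optimizer='adam',
                  loss={'label': 'sparse_categorical_crossentropy',
                        'reconstruction': 'mse'},
                  loss_weights={'label': 1.0, 'reconstruction': 0.1})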