
    Automated Classification for Electrophysiological Data: Machine Learning Approaches for Disease Detection and Emotion Recognition

    Smart healthcare is a health service system that utilizes technologies such as artificial intelligence and big data to alleviate the pressure on healthcare systems. Much recent research has focused on automatic disease diagnosis and recognition; this thesis, in particular, addresses the automatic classification of electrophysiological signals, which are measurements of the body's electrical activity. Specifically, for electrocardiogram (ECG) and electroencephalogram (EEG) data, we develop a series of algorithms for automatic cardiovascular disease (CVD) classification, emotion recognition and seizure detection. With ECG signals obtained from wearable devices, we developed novel signal processing and machine learning methods for continuous monitoring of heart conditions. Compared with traditional methods based on devices in clinical settings, the methods developed in this thesis are far more convenient to use. To identify arrhythmia patterns in the noisy ECG signals obtained from wearable devices, convolutional neural network (CNN) and long short-term memory (LSTM) models are used, and a wavelet-based CNN is proposed to enhance performance. An emotion recognition method using a single-channel ECG is developed, in which a novel exploitative and explorative GWO-SVM (grey wolf optimizer based support vector machine) algorithm is proposed to achieve high-performance emotion classification. Its attraction is that the proposed algorithm learns the SVM hyperparameters automatically and avoids converging to local optima, thereby achieving better performance than existing algorithms. Finally, a novel EEG-based seizure detector is developed, in which the EEG signals are transformed to the spectral-temporal domain, so that the dimension of the input features to the CNN is significantly reduced while the detector still achieves superior detection performance.
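To make the wavelet-based CNN idea concrete, here is a minimal sketch: an ECG window is turned into a time-scale scalogram via a continuous wavelet transform, which a small 2-D CNN then classifies. The window length, wavelet choice, scale range, network layout and class count are illustrative assumptions, not the thesis's actual design.

```python
# Hedged sketch: wavelet scalogram of an ECG window fed to a small 2-D CNN.
# All sizes below are placeholder assumptions, not values from the thesis.
import numpy as np
import pywt
import torch
import torch.nn as nn

def ecg_to_scalogram(window, scales=np.arange(1, 65), wavelet="morl"):
    """Continuous wavelet transform of a 1-D ECG window -> (scales, time) image."""
    coeffs, _ = pywt.cwt(window, scales, wavelet)
    return np.abs(coeffs).astype(np.float32)

class ScalogramCNN(nn.Module):
    def __init__(self, n_classes=4):  # e.g. normal + 3 arrhythmia types (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, scales, time)
        return self.classifier(self.features(x).flatten(1))

window = np.random.randn(360)                 # one second at 360 Hz (placeholder signal)
img = torch.from_numpy(ecg_to_scalogram(window))[None, None]  # (1, 1, 64, 360)
logits = ScalogramCNN()(img)                  # per-class scores for this window
```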

    CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis

    Recognizing the emotional state of humans from brain signals is an active research domain with several open challenges. In this research, we propose a spectrogram-image-based CNN-XGBoost fusion method for recognizing three dimensions of emotion, namely arousal (calm or excited), valence (positive or negative feeling) and dominance (without control or empowered). We use a benchmark dataset called DREAMER, in which EEG signals were collected under multiple stimuli along with self-evaluation ratings. In the proposed method, we first compute the Short-Time Fourier Transform (STFT) of the EEG signals and convert the results into RGB images to obtain the spectrograms. We then train a two-dimensional Convolutional Neural Network (CNN) on the spectrogram images and extract features from a dense layer of the trained network. An Extreme Gradient Boosting (XGBoost) classifier is applied to the extracted CNN features to classify the arousal, valence and dominance dimensions of human emotion. We compare our results with feature-fusion-based state-of-the-art approaches to emotion recognition. To do so, we apply various feature extraction techniques to the signals, including the Fast Fourier Transform, the Discrete Cosine Transform, Poincaré features, Power Spectral Density, Hjorth parameters and several statistical features, and we use Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We form feature vectors by feature-level fusion and apply Support Vector Machine (SVM) and XGBoost classifiers to the fused features to classify different emotion levels. The performance study shows that the proposed spectrogram-image-based CNN-XGBoost fusion method outperforms the feature-fusion-based SVM and XGBoost methods, obtaining accuracies of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.
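Below is a minimal sketch of the pipeline described above, under assumed shapes and hyperparameters: STFT spectrograms of EEG epochs pass through a small CNN, features are read from its dense layer, and XGBoost classifies those features. The epoch length, sampling rate, network layout and labels are placeholders, and CNN training is omitted for brevity.

```python
# Hedged sketch of a CNN-XGBoost fusion: CNN dense-layer features from EEG
# spectrograms are handed to an XGBoost classifier. Shapes are assumptions.
import numpy as np
from scipy.signal import stft
import torch
import torch.nn as nn
from xgboost import XGBClassifier

def eeg_to_spectrogram(epoch, fs=128, nperseg=64):
    """STFT of a 1-D EEG epoch -> log-magnitude (freq, time) image."""
    _, _, Z = stft(epoch, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Z)).astype(np.float32)

class FeatureCNN(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.dense = nn.Linear(8 * 8 * 8, n_features)  # features are read from here

    def forward(self, x):  # x: (batch, 1, freq, time)
        return self.dense(self.conv(x).flatten(1))

epochs = np.random.randn(100, 128 * 4)        # 100 placeholder 4-second epochs
labels = np.random.randint(0, 2, 100)         # e.g. low/high arousal (placeholder)
specs = torch.stack([torch.from_numpy(eeg_to_spectrogram(e))[None] for e in epochs])
with torch.no_grad():                         # training skipped in this sketch
    feats = FeatureCNN()(specs).numpy()       # dense-layer CNN features
XGBClassifier(n_estimators=50).fit(feats, labels)  # stage-two classifier
```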

    GCNs-Net: A Graph Convolutional Neural Network Approach for Decoding Time-resolved EEG Motor Imagery Signals

    Precise decoding of brain activity measured by electroencephalogram (EEG) is in high demand for developing effective and efficient brain-computer interface (BCI) systems. Traditional approaches classify EEG signals without considering the topological relationships among electrodes. However, neuroscience research has increasingly emphasized the network patterns of brain dynamics, so the Euclidean arrangement of electrodes may not adequately reflect the interactions between signals. To fill this gap, a novel deep learning framework based on graph convolutional neural networks (GCNs) is presented to enhance the decoding of raw EEG signals during different types of motor imagery (MI) tasks by exploiting the functional topological relationships among electrodes. The graph Laplacian of the EEG electrodes is built from the absolute Pearson correlation matrix of the overall signals. The GCNs-Net, constructed from graph convolutional layers, learns generalized features; subsequent pooling layers reduce dimensionality, and a fully-connected softmax layer produces the final prediction. The approach is shown to converge for both personalized and group-wise predictions. Compared with existing studies, it achieves the highest average accuracies, 93.056% and 88.57% (PhysioNet Dataset) and 96.24% and 80.89% (High Gamma Dataset) at the subject and group level respectively, which suggests adaptability and robustness to individual variability. Moreover, the performance is stably reproducible across repeated cross-validation experiments. In conclusion, the GCNs-Net filters EEG signals according to the functional topological relationships among electrodes and thereby decodes features relevant to brain motor imagery.
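A minimal sketch of the graph construction step as described: electrode adjacency is derived from the absolute Pearson correlation of the signals, symmetrically normalized, and used in a single spectral-style graph convolution. The channel count, feature dimensions and one-layer model are illustrative assumptions, not the GCNs-Net architecture itself.

```python
# Hedged sketch: adjacency from absolute Pearson correlation of electrode
# signals, then one graph convolution A_hat @ X @ W. Sizes are placeholders.
import numpy as np
import torch
import torch.nn as nn

def normalized_adjacency(signals):
    """signals: (channels, time) -> symmetrically normalized adjacency."""
    A = np.abs(np.corrcoef(signals))   # absolute Pearson correlation;
                                       # the unit diagonal acts as self-loops
    d = A.sum(1)
    D_inv_sqrt = np.diag(d ** -0.5)
    return D_inv_sqrt @ A @ D_inv_sqrt  # D^{-1/2} A D^{-1/2}

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A_hat, X):       # X: (nodes, in_dim)
        return torch.relu(self.W(A_hat @ X))

eeg = np.random.randn(64, 640)         # 64 electrodes, placeholder time series
A_hat = torch.tensor(normalized_adjacency(eeg), dtype=torch.float32)
X = torch.tensor(eeg, dtype=torch.float32)   # raw signals as node features
out = GraphConv(640, 32)(A_hat, X)     # (64 nodes, 32 learned features)
```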