1,239 research outputs found

    Motor imagery task classification using a signal-dependent orthogonal transform based feature extraction

    © Springer International Publishing Switzerland 2015. In this paper, we present the results of classifying electroencephalographic (EEG) signals into four motor imagery tasks using a new feature extraction method. The method is based on a signal-dependent orthogonal transform, referred to as LP-SVD, defined by the left singular vectors of the LPC filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are mapped into one of four motor imagery movements, namely left hand, right hand, foot, and tongue. The classification performance of the proposed technique was benchmarked against two widely used linear-transform-based feature extraction methods, namely the discrete cosine transform (DCT) and adaptive autoregressive (AAR) modelling. By achieving an accuracy of 67.35%, the LP-SVD-based method outperformed the other two by large margins (+25% compared to the DCT-based and +6% compared to the AAR-based method).
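    The feature-extraction idea above can be sketched in a few lines of Python. This is a hedged illustration, not the paper's implementation: the autocorrelation-method LPC fit, the model order of 8, the 64-sample frame, and all helper names are assumptions.

```python
import numpy as np

def lpc_coefficients(x, order):
    """Estimate LPC coefficients by the autocorrelation method."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def lp_svd_basis(x, order=8, n=64):
    """Left singular vectors of the LPC synthesis-filter impulse response matrix."""
    a = lpc_coefficients(x, order)
    h = np.zeros(n)                      # impulse response of 1/(1 - sum a_k z^-k)
    h[0] = 1.0
    for t in range(1, n):
        h[t] = sum(a[k] * h[t - 1 - k] for k in range(min(order, t)))
    # Lower-triangular Toeplitz matrix built from the impulse response.
    H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
    U, _, _ = np.linalg.svd(H)
    return U                             # signal-dependent orthogonal basis

rng = np.random.default_rng(0)
frame = rng.standard_normal(64)          # stand-in for one EEG frame
U = lp_svd_basis(frame)
features = U.T @ frame                   # transform coefficients used as features
```

    Because U comes from an SVD it is exactly orthogonal, so the transform is energy-preserving, which is the property the abstract's "orthogonal transform" refers to.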

    Wavelet Lifting over Information-Based EEG Graphs for Motor Imagery Data Classification

    The imagination of limb movements offers an intuitive paradigm for the control of electronic devices via brain-computer interfacing (BCI). The analysis of electroencephalographic (EEG) data related to motor imagery potentials has proved to be a difficult task. EEG readings are noisy, and the elicited patterns occur in different parts of the scalp, at different instants and at different frequencies. The wavelet transform has been widely used in the BCI field as it offers temporal and spectral capabilities, although it lacks spatial information. In this study we propose a tailored second-generation wavelet to extract features from these three domains. This transform is applied over a graph representation of motor imagery trials, which encodes temporal and spatial information. This graph is enhanced using per-subject knowledge in order to optimise the spatial relationships among the electrodes and to improve the filter design. The method improves the performance of classifying different imaginary limb movements while maintaining the low computational resources required by the lifting transform over graphs. By using an online dataset we were able to positively assess the feasibility of using the novel method in an online BCI context.
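    A single lifting step over a graph signal can be sketched as follows. This is a generic predict/update pair on a toy path graph with mean-of-neighbours prediction; the graph, weights, and helper names are illustrative assumptions, not the per-subject, information-based graph design described above.

```python
import numpy as np

def graph_lifting_step(signal, neighbors, odd):
    """One predict/update lifting step on a graph signal.

    signal: node values; neighbors: dict node -> adjacent nodes;
    odd: boolean mask selecting the nodes removed at this level.
    """
    s = signal.astype(float).copy()
    even = ~odd
    # Predict: each odd node from the mean of its even neighbours -> detail coeffs.
    for v in np.flatnonzero(odd):
        evs = [u for u in neighbors[v] if even[u]]
        if evs:
            s[v] -= np.mean([signal[u] for u in evs])
    # Update: fold half of each detail back into the even neighbours.
    for v in np.flatnonzero(odd):
        evs = [u for u in neighbors[v] if even[u]]
        for u in evs:
            s[u] += s[v] / (2 * len(evs))
    return s[even], s[odd]               # approximation and detail coefficients

# Path graph 0-1-2-3-4-5 standing in for an electrode layout.
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
odd = np.array([False, True, False, True, False, True])
approx, detail = graph_lifting_step(np.array([1., 2., 3., 4., 5., 6.]), neighbors, odd)
```

    On this linear ramp the interior odd nodes are perfectly predicted (zero detail), which is why lifting concentrates signal energy in the approximation band at low computational cost.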

    A Comparative Analysis of EEG-based Stress Detection Utilizing Machine Learning and Deep Learning Classifiers with a Critical Literature Review

    Background: Mental stress is considered a major contributor to a range of psychological and physical diseases. Socio-economic issues, competition in the workplace and amongst students, and high levels of expectation are among its major causes. If not treated properly and in time, stress can escalate into dangerous conditions such as depression, heart attack, and suicide, and is therefore regarded as a serious health abnormality that must be recognized and managed before it ruins a person's health. This has motivated researchers to explore techniques for stress detection, including advanced machine learning and deep learning methods. Methodology: A survey of different techniques used for stress detection is presented here. The stages of detection, including pre-processing, feature extraction, and classification, are explored and critically reviewed. The electroencephalogram (EEG) is the main signal considered in this study. After reviewing the state-of-the-art methods, a typical methodology is implemented in which feature extraction is performed using principal component analysis (PCA), independent component analysis (ICA), and the discrete cosine transform (DCT). After feature extraction, several state-of-the-art machine learning classifiers are employed, including the support vector machine (SVM), K-nearest neighbour (KNN), naïve Bayes (NB), and classification tree (CT). In addition, a typical deep learning classifier is utilized for detection. The dataset used is the Database for Emotion Analysis using Physiological Signals (DEAP). Results: Performance is reported in terms of precision, recall, F1-score, and accuracy. PCA with KNN, CT, SVM, and NB gave accuracies of 65.7534%, 58.9041%, 61.6438%, and 57.5342% respectively. With ICA as the feature extractor, the accuracies obtained were 58.9041%, 61.6438%, 57.5342%, and 54.7945% for KNN, CT, SVM, and NB respectively. With DCT as the feature extractor, the classical machine learning algorithms gave accuracies of 56.1644%, 50.6849%, 54.7945%, and 45.2055% for KNN, CT, SVM, and NB respectively. A conventional deep convolutional neural network (DCNN) achieved an accuracy of 76%, with precision, recall, and F1-score of 0.66, 0.77, and 0.64 respectively. Conclusion: For EEG-based stress detection, different state-of-the-art machine learning and deep learning methods are used along with different feature extractors such as PCA, ICA, and DCT. The results show that the deep learning classifier gives an overall accuracy of 76%, a significant improvement over classical machine learning techniques such as PCA+KNN (65.75%), DCT+KNN (56.16%), and ICA+CT (61.64%).
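    The PCA-plus-classifier stage of such a pipeline can be sketched as follows. The toy data, component count, and k value are illustrative assumptions, not the survey's settings, and random vectors stand in for the DEAP EEG features.

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal components via SVD of the centred data matrix."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                    # mean and component directions

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k nearest training samples (Euclidean)."""
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Toy stand-in for per-trial EEG feature vectors: two separable classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (20, 10)), rng.normal(5.0, 1.0, (20, 10))])
y = np.array([0] * 20 + [1] * 20)
mu, W = pca_fit(X, 3)
Z = (X - mu) @ W.T                       # reduced features fed to the classifier
pred = knn_predict(Z, y, (X[0] - mu) @ W.T)
```

    On these well-separated toy clusters the projected features remain separable, which is all the sketch is meant to show; the reported accuracies above come from real DEAP trials.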

    CNN-XGBoost fusion-based affective state recognition using EEG spectrogram image analysis

    Recognizing the emotional state of humans from brain signals is an active research domain with several open challenges. In this research, we propose a signal spectrogram image based CNN-XGBoost fusion method for recognising three dimensions of emotion, namely arousal (calm or excited), valence (positive or negative feeling) and dominance (without control or empowered). We used a benchmark dataset called DREAMER, where the EEG signals were collected from multiple stimuli along with self-evaluation ratings. In our proposed method, we first calculate the Short-Time Fourier Transform (STFT) of the EEG signals and convert the spectrograms into RGB images. Then we train a two-dimensional Convolutional Neural Network (CNN) on the spectrogram images and retrieve the features from a dense layer of the trained network. We apply an Extreme Gradient Boosting (XGBoost) classifier on the extracted CNN features to classify the arousal, valence and dominance levels of human emotion. We compare our results with feature fusion-based state-of-the-art approaches to emotion recognition. To do this, we applied various feature extraction techniques to the signals, including the Fast Fourier Transform, Discrete Cosine Transform, Poincaré analysis, Power Spectral Density, Hjorth parameters and some statistical features. Additionally, we use Chi-square and Recursive Feature Elimination techniques to select the discriminative features. We form the feature vectors by applying feature-level fusion, and apply Support Vector Machine (SVM) and XGBoost classifiers on the fused features to classify the different emotion levels. The performance study shows that the proposed spectrogram image based CNN-XGBoost fusion method outperforms the feature fusion-based SVM and XGBoost methods. The proposed method obtained an accuracy of 99.712% for arousal, 99.770% for valence and 99.770% for dominance in human emotion detection.
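    The first step, turning an EEG trace into a spectrogram image, can be sketched with a plain NumPy STFT. The window length, hop size, and sampling rate are illustrative assumptions, and the CNN/XGBoost stages are omitted.

```python
import numpy as np

def eeg_to_spectrogram(sig, nperseg=64, step=32):
    """Magnitude spectrogram of one EEG channel, scaled to [0, 1] as image input."""
    win = np.hanning(nperseg)
    frames = [sig[i:i + nperseg] * win
              for i in range(0, len(sig) - nperseg + 1, step)]
    S = np.abs(np.fft.rfft(frames, axis=1)).T        # (frequency, time)
    return (S - S.min()) / (S.max() - S.min() + 1e-12)

fs = 128                                             # assumed sampling rate
t = np.arange(0, 4, 1 / fs)
img = eeg_to_spectrogram(np.sin(2 * np.pi * 10 * t)) # 10 Hz rhythm stand-in
```

    The resulting 2D array is what gets mapped to an RGB image and fed to the CNN; a pure 10 Hz input concentrates its energy in the corresponding frequency row.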

    Ensemble approach on enhanced compressed noise EEG data signal in wireless body area sensor network

    The Wireless Body Area Sensor Network (WBASN) is used for communication among sensor nodes operating on or inside the human body in order to monitor vital body parameters and movements. One important application of WBASNs is healthcare monitoring of patients with chronic diseases such as epileptic seizure. Normally, the epileptic seizure data of the electroencephalograph (EEG) is captured and compressed in order to reduce its transmission time; however, noise at the receiver side contaminates the data and lowers classification accuracy. Previous work also did not take into consideration the large size of the collected EEG data, which makes its delivery bandwidth-intensive. Hence, the main goal of this work is to design a unified compression and classification framework for the delivery of EEG data in order to address its large size, and to reconstruct the compressed data and then recognize it. A Noise Signal Combination (NSC) technique is therefore proposed for the compression of the transmitted EEG data and the enhancement of its classification accuracy at the receiving side in the presence of noise and incomplete data. The proposed framework combines compressive sensing and the discrete cosine transform (DCT) in order to reduce the size of the transmitted data, and a Gaussian noise model of the transmission channel is practically implemented in the framework. At the receiving side, the proposed NSC is designed based on weighted voting over four classification techniques: the accuracies of the Artificial Neural Network, Naïve Bayes, k-Nearest Neighbour, and Support Vector Machine classifiers are fed to the proposed NSC. The experimental results showed that the proposed technique exceeds the conventional techniques by achieving the highest accuracy for both noiseless and noisy data. Furthermore, the framework plays a significant role in reducing the size of the data and classifying both noisy and noiseless data. The key contributions are the unified framework and the proposed NSC, which improved the accuracy on large noiseless and noisy EEG data. The results have demonstrated the effectiveness of the proposed framework and provided several credible benefits, including simplicity and accuracy enhancement. Finally, the research improves clinical information about patients who suffer not only from epilepsy but also from other neurological, mental or physiological disorders.
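    The DCT side of such compression can be sketched as keeping only the largest-magnitude transform coefficients. This stands in for the compressive-sensing measurement step, whose details the abstract does not give; the test signal and the `keep` budget are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

def compress(sig, keep):
    """Zero all but the `keep` largest-magnitude DCT coefficients."""
    c = dct_matrix(len(sig)) @ sig
    c[np.argsort(np.abs(c))[:-keep]] = 0.0
    return c

def reconstruct(c):
    """Inverse transform; the transpose inverts an orthonormal matrix."""
    return dct_matrix(len(c)).T @ c

t = np.linspace(0, 1, 128, endpoint=False)
sig = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
rec = reconstruct(compress(sig, keep=16))     # 8:1 coefficient reduction
```

    Smooth oscillatory signals concentrate their energy in few DCT coefficients, which is why the reconstruction error stays small even at an 8:1 reduction.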

    EEG-based multi-modal emotion recognition using bag of deep features: An optimal feature selection approach

    Much attention has been paid to the recognition of human emotions with the help of electroencephalogram (EEG) signals based on machine learning technology. Recognizing emotions is a challenging task due to the non-linear property of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before the extraction of features. The pre-trained AlexNet model is used to extract the raw features from the 2D spectrogram of each channel. To reduce the feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centres per class is calculated using the k-means clustering algorithm. Lastly, the emotion of each subject is represented using a histogram over the vocabulary set collected from the raw features of a single channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy compared to recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, we use a support vector machine (SVM) and k-nearest neighbour (k-NN) to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition. © 2019 by the authors. Licensee MDPI, Basel, Switzerland. Funding: This research was funded by the Higher Education Commission (HEC): Tdf/67/2017.
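    The BoDF quantisation step can be sketched as follows. Random vectors stand in for the AlexNet features, and the plain Lloyd's k-means and helper names are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means returning the cluster centres (the 'vocabulary')."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def bodf_histogram(features, centres):
    """Assign each feature to its nearest vocabulary word and return the
    normalised word-count histogram used as the trial descriptor."""
    labels = np.argmin(((features[:, None] - centres) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centres)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
deep_feats = rng.standard_normal((200, 8))    # stand-in for AlexNet features
vocab = kmeans(deep_feats, k=10)              # 10 cluster centres, as above
hist = bodf_histogram(deep_feats, vocab)      # fixed-length descriptor
```

    Whatever the number of deep features per trial, the histogram has one bin per vocabulary word, which is how BoDF shrinks the feature dimensionality.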

    Transparent authentication: Utilising heart rate for user authentication

    There has been exponential growth in the use of wearable technologies in the last decade, with smart watches having a large share of the market. Smart watches were primarily used for health and fitness purposes, but recent years have seen a rise in their deployment in other areas. Recent smart watches are fitted with sensors with enhanced functionality and capabilities. For example, some function as standalone devices with the ability to create activity logs and transmit data to a secondary device. This capability has contributed to their increased usage in recent years, with researchers focusing on their potential. This paper explores the ability to extract physiological data from smart watch technology to achieve user authentication. The approach is suitable not only because of the capacity for data capture but also because of the easy connectivity with other devices, principally the smartphone. For the purpose of this study, heart rate data was captured and extracted from 30 subjects continually over an hour. While security is the ultimate goal, usability should also be a key consideration. Most bioelectrical signals, like heart rate, are non-stationary time-dependent signals, therefore the Discrete Wavelet Transform (DWT) is employed. The DWT decomposes the bioelectrical signal into n levels of detail-coefficient and approximation-coefficient sub-bands. The biorthogonal wavelet (bior 4.4) is applied to extract features from four levels of detail coefficients, and ten statistical features are extracted from each coefficient sub-band. Classification of each sub-band level is done using a Feedforward Neural Network (FF-NN). The 1st, 2nd, 3rd and 4th levels had Equal Error Rates (EER) of 17.20%, 18.17%, 20.93% and 21.83% respectively. To improve the EER, fusion of the four sub-band levels is applied at the feature level. The proposed fusion showed an improved result with an EER of 11.25%. While an 11% EER is not ideal as a one-off authentication decision, its use on a continuous basis makes the approach more than feasible in practice.
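    The decompose-then-summarise stage can be sketched as follows. A Haar wavelet stands in for bior 4.4 to keep the sketch dependency-free, only five of the paper's ten statistics are shown, and the signal is synthetic; all names are illustrative.

```python
import numpy as np

def haar_dwt(sig, levels=4):
    """Multi-level Haar DWT (a simple stand-in for the bior 4.4 wavelet).
    Returns the detail coefficients of each of the `levels` levels."""
    details, approx = [], sig.astype(float)
    for _ in range(levels):
        even, odd = approx[::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))
        approx = (even + odd) / np.sqrt(2)
    return details

def stat_features(c):
    """A few per-sub-band statistics (the paper extracts ten per sub-band)."""
    return np.array([c.mean(), c.std(), np.abs(c).max(),
                     np.median(c), (c ** 2).sum()])

# Feature-level fusion: concatenate the statistics of all four detail levels.
rng = np.random.default_rng(3)
hr = rng.standard_normal(256)                 # stand-in for a heart-rate trace
fused = np.concatenate([stat_features(d) for d in haar_dwt(hr)])
```

    Fusion here is simple concatenation of the per-level statistic vectors into one input for the classifier, mirroring the feature-level fusion that lowered the EER to 11.25%.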