
    An improved EEG pattern classification system based on dimensionality reduction and classifier fusion

    University of Technology, Sydney. Faculty of Engineering and Information Technology.

    Analysis of brain electrical activity (electroencephalography, EEG) presents a rich source of information that supports affordable and effective biomedical applications such as psychotropic drug research, sleep studies, seizure detection and brain-computer interfaces (BCI). Interpreting and understanding the EEG signal provides clinicians and physicians with useful information for disease diagnosis and for monitoring biological activity, and it also opens a new way of communicating through brain waves. This thesis investigates new algorithms for improving pattern recognition systems in two main EEG-based applications. The first is a simple brain-computer interface (BCI) based on imagined motor tasks; the second is an automatic sleep scoring system for the intensive care unit.

    A BCI system aims to create a non-muscular link between the brain and external devices, providing a new control scheme that can most benefit severely immobilised persons. This link is created by using a pattern recognition approach to interpret EEG into device commands, which can then be used to control wheelchairs, computers or other equipment. The second application creates an automatic scoring system by interpreting certain properties of several biomedical signals. Traditionally, sleep specialists record and analyse brain signals (EEG), muscle tone (EMG), eye movement (EOG) and other biomedical signals to detect five sleep stages: Rapid Eye Movement (REM) and stages 1 to 4. The acquired signals are scored in 30-second intervals, each of which must be inspected manually for the properties that indicate a particular sleep stage. The process is time consuming and demands expertise, so an automatic scoring system that mimics sleep experts' rules is expected to speed up the process and reduce the cost.

    The practicality of any EEG-based system depends on accuracy and speed: the more accurate and faster a classification system is, the better its chance of being integrated into a wider range of applications. The performance of the above systems is therefore further enhanced using improved feature selection, projection and classification algorithms. Because processing EEG signals involves multi-dimensional data, the dimensionality must be reduced to achieve acceptable performance at a lower computational cost. The first candidate for dimensionality reduction is a channel/feature selection approach. Four novel feature selection methods are developed using genetic algorithm, ant colony, particle swarm and differential evolution optimization. The methods provide fast and accurate selection of the most informative features and channels that best represent the mental tasks, keeping the computational burden of the classifier as light as possible by removing irrelevant and highly redundant features. As an alternative to dimensionality reduction, a novel feature projection method is also introduced. The method maps the original feature set into a small informative subset of features that best discriminates between the different classes. Unlike most existing methods based on discriminant analysis, the proposed method considers the fuzzy nature of the input measurements when discovering the local manifold structure, and it finds a projection that maximizes the margin between data points from different classes in each local area.

    In the classification phase, a number of improvements to the traditional nearest neighbour classifier (kNN) are introduced. These address the limitations of the kNN weighting scheme: traditional kNN does not take into account the class distribution, the importance of each feature, the contribution of each neighbour, or the number of instances of each class. The proposed kNN variants are based on an improved distance measure and weight optimization using differential evolution, which is used to optimize the metric weights of features, neighbours and classes. Additionally, a fuzzy kNN variant has been developed to favour the classification of certain classes, which may be useful in medical examinations. Finally, an alternative classifier fusion method is introduced that aims to create a diverse neural network ensemble. Diversity is enhanced by altering the target output of each network to create a certain amount of bias towards each class, enabling the construction of a set of neural network classifiers that complement each other.
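    The thesis couples evolutionary optimisation with instance-based classification in several places. As one concrete illustration of the weight-optimisation idea (a minimal sketch, not the thesis's exact formulation), the snippet below uses SciPy's differential evolution to tune per-feature weights for a kNN classifier; the synthetic data, bounds and optimiser settings are assumptions made purely for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                   # placeholder EEG feature matrix
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # synthetic labels for illustration

def neg_cv_accuracy(weights):
    # Weighted Euclidean distance == plain Euclidean on features scaled by sqrt(weight).
    Xw = X * np.sqrt(weights)
    knn = KNeighborsClassifier(n_neighbors=5)
    return -cross_val_score(knn, Xw, y, cv=5).mean()

bounds = [(0.0, 1.0)] * X.shape[1]              # one weight per feature
result = differential_evolution(neg_cv_accuracy, bounds, maxiter=30, seed=0, polish=False)
print("best feature weights:", np.round(result.x, 2))
print("cross-validated accuracy:", -result.fun)
```

    Scaling each feature by the square root of its weight keeps the sketch compatible with a stock Euclidean kNN while still expressing a weighted metric, which is why no custom distance function is needed here.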

    Object Segmentation in Images using EEG Signals

    This paper explores the potential of brain-computer interfaces in segmenting objects from images. Our approach is centered on designing an effective method for displaying the image parts to the users such that they generate measurable brain reactions. When an image region, specifically a block of pixels, is displayed, we estimate the probability of the block containing the object of interest using a score based on EEG activity. After several such blocks are displayed, the resulting probability map is binarized and combined with the GrabCut algorithm to segment the image into object and background regions. This study shows that BCI and simple EEG analysis are useful in locating object boundaries in images.
    Comment: This is a preprint version, prior to submission for peer review, of the paper accepted to the 22nd ACM International Conference on Multimedia (November 3-7, 2014, Orlando, Florida, USA) for the High Risk High Reward session. 10 pages.
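    A hedged sketch of the final step described above: a block-level probability map (random here, standing in for the EEG-derived scores) is binarized into a seed mask and handed to OpenCV's GrabCut. The toy image, block size and threshold are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

# Toy image: a coloured disc on a lightly textured background.
img = np.full((128, 128, 3), 255, np.uint8)
cv2.circle(img, (64, 64), 30, (40, 80, 200), -1)
img = cv2.subtract(img, np.random.randint(0, 20, img.shape, dtype=np.uint8))

block = 16
prob = np.random.rand(128 // block, 128 // block)   # stand-in for EEG-derived block scores
prob[2:6, 2:6] += 1.0                               # pretend the EEG responded over the object

# Binarize the probability map into a GrabCut seed mask.
mask = np.full((128, 128), cv2.GC_PR_BGD, np.uint8)
for i in range(prob.shape[0]):
    for j in range(prob.shape[1]):
        if prob[i, j] > 1.0:
            mask[i * block:(i + 1) * block, j * block:(j + 1) * block] = cv2.GC_PR_FGD

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
segmentation = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
print("foreground pixels:", int(segmentation.sum()))
```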

    PULP-HD: Accelerating Brain-Inspired High-Dimensional Computing on a Parallel Ultra-Low Power Platform

    Computing with high-dimensional (HD) vectors, also referred to as hypervectors, is a brain-inspired alternative to computing with scalars. Key properties of HD computing include a well-defined set of arithmetic operations on hypervectors, generality, scalability, robustness, fast learning, and ubiquitous parallel operations. HD computing manipulates and compares large patterns (binary hypervectors with 10,000 dimensions), which makes its efficient realization on minimalistic ultra-low-power platforms challenging. This paper describes the acceleration of HD computing and the optimization of its memory accesses and operations on a silicon prototype of the PULPv3 4-core platform (1.5 mm^2, 2 mW), surpassing the state-of-the-art classification accuracy (on average 92.4%) with a simultaneous 3.7× end-to-end speed-up and 2× energy saving compared to single-core execution. We further explore the scalability of our accelerator by increasing the number of inputs and the classification window on a new generation of the PULP architecture featuring bit-manipulation instruction extensions and a larger number of cores (8). Together these enable a near-ideal speed-up of 18.4× compared to the single-core PULPv3.
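    For readers unfamiliar with the paradigm, the sketch below shows the basic hypervector operations such an accelerator has to implement: random 10,000-dimensional binary hypervectors, XOR binding, majority bundling, and Hamming-distance comparison. The tiny item memory and samples are illustrative assumptions, not the paper's benchmark setup.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def rand_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):                 # XOR binding associates two hypervectors
    return a ^ b

def bundle(hvs):                # bitwise majority superimposes a set of hypervectors
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def hamming(a, b):              # normalized Hamming distance for comparison
    return np.count_nonzero(a != b) / D

# Item memory: one random hypervector per channel and per quantized signal level.
channels = {c: rand_hv() for c in range(5)}
levels   = {v: rand_hv() for v in range(8)}

def encode(sample):             # sample: one quantized level per channel
    return bundle([bind(channels[c], levels[v]) for c, v in enumerate(sample)])

# "Training": bundle the encodings of a few samples into one class prototype.
class_a = bundle([encode([1, 2, 3, 4, 5]), encode([1, 2, 3, 4, 6]), encode([1, 2, 3, 4, 5])])
query = encode([1, 2, 3, 4, 5])
print("distance to class A prototype:", hamming(query, class_a))
print("distance to a random vector:  ", hamming(query, rand_hv()))
```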

    Efficient emotion recognition using hyperdimensional computing with combinatorial channel encoding and cellular automata

    In this paper, a hardware-optimized approach to emotion recognition based on the efficient brain-inspired hyperdimensional computing (HDC) paradigm is proposed. Emotion recognition provides valuable information for human-computer interaction; however, the large number of input channels (>200) and modalities (>3) involved is significantly expensive from a memory perspective. To address this, methods for memory reduction and optimization are proposed, including a novel approach that takes advantage of the combinatorial nature of the encoding process, and an elementary cellular automaton. HDC with early sensor fusion is implemented alongside the proposed techniques, achieving two-class multi-modal classification accuracies of >76% for valence and >73% for arousal on the multi-modal AMIGOS and DEAP datasets, almost always better than the state of the art. The required vector storage is seamlessly reduced by 98% and the frequency of vector requests by at least 1/5. The results demonstrate the potential of efficient hyperdimensional computing for low-power, multi-channel emotion recognition tasks.
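    The cellular-automaton idea can be illustrated with a short sketch: instead of storing a whole item memory, a single seed hypervector is kept and later vectors are regenerated by iterating an elementary cellular automaton. Rule 90, the dimensionality and the seeding below are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
seed_hv = rng.integers(0, 2, D, dtype=np.uint8)   # the only hypervector actually stored

def ca_step(row, rule=90):
    # One update of an elementary cellular automaton with cyclic boundaries.
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = (left << 2) | (row << 1) | right        # 3-bit neighbourhood pattern, 0..7
    table = np.array([(rule >> p) & 1 for p in range(8)], dtype=np.uint8)
    return table[idx]

def hypervector(i):
    # The i-th "stored" hypervector is regenerated as the CA state after i steps.
    row = seed_hv
    for _ in range(i):
        row = ca_step(row)
    return row

hv3, hv4 = hypervector(3), hypervector(4)
print("bit agreement of consecutive vectors:", np.mean(hv3 == hv4))  # ~0.5, quasi-orthogonal
```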

    Improving Emotion Recognition Systems by Exploiting the Spatial Information of EEG Sensors

    Electroencephalography (EEG)-based emotion recognition is gaining increasing importance due to its potential applications in various scientific fields, ranging from psychophysiology to neuromarketing. A number of approaches have been proposed that use machine learning (ML) technology to achieve high recognition performance, relying on features engineered from brain activity dynamics. Since ML performance can be improved by utilizing a 2D feature representation that exploits the spatial relationships among the features, here we propose a novel input representation that re-arranges EEG features as an image reflecting the top view of the subject’s scalp. This approach enables emotion recognition through image-based ML methods such as pre-trained deep neural networks or "trained-from-scratch" convolutional neural networks. We have employed both of these techniques in our study to demonstrate the effectiveness of the proposed input representation. We also compare the recognition performance of these methods against state-of-the-art tabular data analysis approaches, which do not utilize the spatial relationships between the sensors. We test our proposed approach using two publicly available benchmark datasets for EEG-based emotion recognition, namely DEAP and MAHNOB-HCI. Our results show that the "trained-from-scratch" convolutional neural network outperforms the best approaches in the literature, achieving 97.8% and 98.3% accuracy in valence and arousal classification on MAHNOB-HCI, and 91% and 90.4% on DEAP, respectively.
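    A minimal sketch of the proposed input representation, under an assumed electrode layout: one scalar feature per channel is placed on a 2D grid that mirrors a top view of the scalp, producing an image an off-the-shelf CNN can consume. The coordinates and features below are illustrative, not the authors' exact mapping.

```python
import numpy as np

# Approximate (row, col) grid positions for a few 10-20 electrodes (assumed layout).
layout = {"Fp1": (0, 2), "Fp2": (0, 4), "F3": (2, 2), "F4": (2, 4),
          "C3": (3, 2), "Cz": (3, 3), "C4": (3, 4),
          "P3": (5, 2), "P4": (5, 4), "O1": (6, 2), "O2": (6, 4)}

def to_scalp_image(channel_features, grid_size=(7, 7)):
    """Place one scalar feature per channel (e.g. alpha-band power) on the scalp grid."""
    img = np.zeros(grid_size, dtype=np.float32)
    for ch, value in channel_features.items():
        img[layout[ch]] = value
    return img

features = {ch: np.random.rand() for ch in layout}   # stand-in band-power features
image = to_scalp_image(features)
print(image.shape)   # (7, 7): one such plane per feature type can then be stacked as image channels
```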

    Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

    One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to the inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time series, and demonstrate its advantages in the context of a mental load classification task. First, we transform EEG activities into a sequence of topology-preserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification architectures to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG, which leads to features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.
    Comment: To be published as a conference paper at ICLR 2016.
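    The recurrent-convolutional idea can be sketched as a small PyTorch model, as an illustration of the general architecture rather than the authors' exact network: a CNN encodes each multi-spectral EEG frame and an LSTM models the sequence of frames. The layer sizes, frame shape (3 bands, 32×32) and sequence length are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentConvNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # Small CNN applied to every multi-spectral frame independently.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                               # x: (batch, time, 3, 32, 32)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)    # (batch*time, 32)
        out, _ = self.lstm(feats.view(b, t, -1))        # sequence of frame embeddings
        return self.head(out[:, -1])                    # classify from the last time step

frames = torch.randn(8, 7, 3, 32, 32)   # 8 trials, 7 frames of 3-band 32x32 "EEG images"
print(RecurrentConvNet()(frames).shape) # torch.Size([8, 4])
```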

    Integrated Machine Learning Approaches to Improve Classification performance and Feature Extraction Process for EEG Dataset

    Epileptic seizure, or epilepsy, is a chronic neurological disorder that arises from abnormal activity of brain neurons and affects approximately 50 million people worldwide. Epilepsy can affect patients' health and lead to life-threatening emergencies. Early detection of epilepsy is highly effective in avoiding seizures by enabling timely treatment. The electroencephalogram (EEG) signal, which contains valuable information about electrical activity in the brain, is the standard neuroimaging tool used by clinicians to monitor and diagnose epilepsy. Visually inspecting the EEG signal is an expensive, tedious, and error-prone practice, and the result varies between neurophysiologists for an identical recording. Automatically classifying epilepsy into different epileptic states with a high accuracy rate is therefore an urgent requirement and has long been investigated. This PhD thesis contributes to the epileptic seizure detection problem using machine learning (ML) techniques, implementing ML algorithms to automatically classify epilepsy from EEG data. Imbalanced class distributions and effective feature extraction from the EEG signals are the two major concerns in applying machine learning algorithms effectively and efficiently to epilepsy classification: the algorithms produce biased results towards the majority class when classes are imbalanced, while effective feature extraction can improve classification performance.

    In this thesis, we presented three novel frameworks to classify epileptic states effectively while addressing the above issues. First, a deep neural network-based framework exploring different sampling techniques was proposed, in which both traditional and state-of-the-art sampling techniques were evaluated for their capability to improve the imbalance ratio and classification performance. Second, a novel integrated machine learning-based framework was proposed to learn effectively from imbalanced EEG data, leveraging Principal Component Analysis to extract high- and low-variance principal components that are empirically customized for imbalanced data classification. This study showed that principal components associated with low variances can capture implicit patterns of the minority class of a dataset. Third, we proposed a novel framework to classify epilepsy by leveraging summary-statistics analysis of window-based features of EEG signals. The framework first denoised the signals using power spectral density analysis and replaced outliers using a k-NN imputer. Window-level features were then extracted from the statistical, temporal, and spectral domains, and basic summary statistics were computed from the extracted features to feed different machine learning classifiers. An optimal set of features was selected by variance thresholding and by dropping correlated features before classification. Finally, we applied traditional machine learning classifiers such as Support Vector Machine, Decision Tree, Random Forest, and k-Nearest Neighbors, along with deep neural networks, to classify epilepsy. We evaluated the frameworks on a benchmark dataset under rigorous experimental settings and demonstrated their effectiveness in terms of accuracy, precision, recall, and F-beta score.
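    A hedged sketch of the third framework's flow, on synthetic data with illustrative choices of window length and features: window-level statistics are extracted from each EEG segment, summarised, pruned by variance thresholding and fed to a traditional classifier. Nothing here reproduces the thesis's exact preprocessing or dataset.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 1024))      # 100 synthetic single-channel EEG segments
labels = rng.integers(0, 2, 100)            # stand-in seizure / non-seizure labels
signals[labels == 1] *= 1.5                 # crude higher-amplitude "seizure" activity

def window_features(sig, win=256):
    rows = []
    for start in range(0, len(sig), win):   # statistical features for each window
        w = sig[start:start + win]
        rows.append([w.mean(), w.std(), skew(w), kurtosis(w), np.ptp(w)])
    rows = np.array(rows)
    # Summary statistics over the windows become the segment-level feature vector.
    return np.concatenate([rows.mean(axis=0), rows.std(axis=0)])

X = np.array([window_features(s) for s in signals])
X = VarianceThreshold(threshold=1e-3).fit_transform(X)   # drop near-constant features
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```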