
    Epilepsy seizure prediction on EEG using common spatial pattern and convolutional neural network

    Epilepsy seizure prediction paves the way for timely warnings, allowing patients to take more active and effective intervention measures. Compared to seizure detection, which only identifies the inter-ictal state and the ictal state, far less research has been conducted on seizure prediction, because the high similarity between the pre-ictal state and the inter-ictal state makes them challenging to distinguish. In this paper, a novel seizure prediction solution is proposed using common spatial pattern (CSP) and a convolutional neural network (CNN). Firstly, artificial pre-ictal EEG signals are generated from the original ones by recombining segmented pre-ictal signals, solving the trial-imbalance problem between the two states. Secondly, a feature extractor employing wavelet packet decomposition and CSP is designed to extract the distinguishing features in both the time domain and the frequency domain; it improves overall accuracy while reducing training time. Finally, a shallow CNN is applied to discriminate between the pre-ictal state and the inter-ictal state. The proposed solution is evaluated on data from 23 patients in the Boston Children's Hospital-MIT scalp EEG dataset using leave-one-out cross-validation, achieving a sensitivity of 92.2% and a false prediction rate of 0.12/h. Experimental results demonstrate that the proposed approach outperforms most state-of-the-art methods.
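The CSP step of the feature extractor can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the classic two-class CSP formulation (whiten the composite covariance, then eigendecompose one class's whitened covariance), with illustrative shapes and trace-normalised trial covariances:

```python
import numpy as np

def csp_filters(X1, X2, n_filters=4):
    """Compute CSP spatial filters for two classes of EEG trials.

    X1, X2 : arrays of shape (trials, channels, samples), one per class.
    Returns W of shape (n_filters, channels); rows are spatial filters
    taken from both ends of the eigenvalue spectrum (maximally
    discriminative variance for class 1 vs. class 2).
    """
    def avg_cov(X):
        # Average trace-normalised spatial covariance over trials
        covs = [t @ t.T / np.trace(t @ t.T) for t in X]
        return np.mean(covs, axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    Cc = C1 + C2
    # Whitening transform from the composite covariance: P Cc P.T = I
    d, U = np.linalg.eigh(Cc)
    P = np.diag(d ** -0.5) @ U.T
    # Eigendecomposition of the whitened class-1 covariance
    lam, B = np.linalg.eigh(P @ C1 @ P.T)
    B = B[:, np.argsort(lam)[::-1]]
    W = B.T @ P
    # Keep filters from both ends of the spectrum
    idx = np.r_[np.arange(n_filters // 2), np.arange(-n_filters // 2, 0)]
    return W[idx]

rng = np.random.default_rng(0)
X1 = rng.standard_normal((10, 8, 256))
X2 = rng.standard_normal((10, 8, 256))
W = csp_filters(X1, X2, n_filters=4)
```

In the paper's pipeline, such filters would be applied per wavelet-packet sub-band before the CNN; here only the spatial-filter computation itself is shown.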

    Human-in-the-Loop Design with Machine Learning

    Deep learning methods have been applied to randomly generate images, for example in fashion and furniture design. To date, human aspects, which play a vital role in the design process, have not been given significant attention in deep learning approaches. In this paper, results are reported from a human-in-the-loop design method where EEG brain signals are used to capture preferable design features. In the framework developed, an encoder that extracts EEG features from raw signals, recorded from subjects viewing images from ImageNet, is first learned. Secondly, a GAN model conditioned on the encoded EEG features is trained to generate design images. Thirdly, the trained model is used to generate design images from a person's EEG-measured brain activity during the cognitive process of thinking about a design. To verify the proposed method, a case study following the proposed approach is presented. The results indicate that the method can generate preferred design styles guided by preference-related brain signals. In addition, this method could help improve communication between designers and clients when clients are not able to express design requests clearly.
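The conditioning idea in the second stage, feeding the encoded EEG feature into the generator together with the noise vector, can be sketched as follows. This is a hypothetical toy generator: the two-layer structure, shapes, and parameter names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def conditional_generator(z, eeg_feat, params):
    """Map a noise vector z plus an encoded EEG feature to an image.

    Conditioning is done by concatenating the EEG feature with the
    noise at the generator input, as in a conditional GAN.
    """
    W1, b1, W2, b2 = params
    x = np.concatenate([z, eeg_feat])   # condition on the EEG feature
    h = np.maximum(0.0, x @ W1 + b1)    # ReLU hidden layer
    img = np.tanh(h @ W2 + b2)          # pixel values in (-1, 1)
    return img.reshape(16, 16)          # toy 16x16 "design image"

rng = np.random.default_rng(0)
z, feat = rng.standard_normal(32), rng.standard_normal(8)
params = (rng.standard_normal((40, 64)) * 0.1, np.zeros(64),
          rng.standard_normal((64, 256)) * 0.1, np.zeros(256))
image = conditional_generator(z, feat, params)
```

In training, the discriminator would see the same EEG feature alongside real or generated images, so the generator learns to associate feature vectors with design styles.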

    Neurocognition-inspired Design with Machine Learning

    Generating designs via machine learning has been an ongoing challenge in computer-aided design. Recently, deep learning methods have been applied to randomly generate images in fashion, furniture and product design. However, such deep generative methods usually require a large number of training images, and human aspects are not taken into account in the design process. In this work, we seek a way to involve human cognitive factors, through brain activity indicated by electroencephalographic (EEG) measurements, in the generative process. We propose a neuroscience-inspired design with machine learning method in which EEG is used to capture preferred design features; these signals are used as a condition in a generative adversarial network (GAN). Firstly, we employ a recurrent neural network (Long Short-Term Memory, LSTM) as an encoder to extract EEG features from raw EEG signals recorded from subjects viewing several categories of images from ImageNet. Secondly, we train a GAN model conditioned on the encoded EEG features to generate design images. Thirdly, we use the model to generate design images from the subject's EEG-measured brain activity.
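The first stage, an LSTM encoder that compresses a raw EEG sequence into a fixed-length feature vector, can be sketched with a single hand-rolled LSTM cell. This is a minimal illustration only; the gate layout, sizes and weights are assumptions, not the authors' trained model:

```python
import numpy as np

def lstm_encode(eeg, Wx, Wh, b):
    """Encode a raw EEG sequence into a fixed-length feature vector.

    eeg : array of shape (time, channels), one multichannel sample per row.
    Wx  : (channels, 4*hidden) input weights, gates stacked as [i, f, g, o].
    Wh  : (hidden, 4*hidden) recurrent weights; b : (4*hidden,) bias.
    Returns the final hidden state h, used as the EEG feature vector.
    """
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    hidden = Wh.shape[0]
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in eeg:
        gates = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(gates, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell state
        h = sigmoid(o) * np.tanh(c)                   # emit hidden state
    return h

rng = np.random.default_rng(1)
channels, hidden = 14, 8
Wx = rng.standard_normal((channels, 4 * hidden)) * 0.1
Wh = rng.standard_normal((hidden, 4 * hidden)) * 0.1
feat = lstm_encode(rng.standard_normal((128, channels)), Wx, Wh,
                   np.zeros(4 * hidden))
```

The resulting vector `feat` would then serve as the conditioning input to the GAN in the second stage.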

    Compact and interpretable convolutional neural network architecture for electroencephalogram based motor imagery decoding

    Recently, due to the popularity of deep learning, the applicability of deep neural network (DNN) algorithms such as convolutional neural networks (CNNs) has been explored for decoding the electroencephalogram (EEG) in Brain-Computer Interface (BCI) applications. This allows end-to-end decoding of EEG signals, eliminating the tedious process of manually tuning each stage of the decoding pipeline. However, current DNN architectures, consisting of multiple hidden layers and numerous parameters, were not developed for EEG decoding and classification tasks, and consequently underperform when decoding EEG signals. In addition, a DNN is typically treated as a black box, and interpreting what the network learns in solving the classification task is difficult, hindering neurophysiological validation of the network. This thesis proposes an improved and compact CNN architecture for motor imagery decoding based on an adaptation of SincNet, which was originally developed for speaker recognition from raw audio input. This adaptation yields a very compact end-to-end neural network with state-of-the-art (SOTA) performance and enables network interpretability for neurophysiological validation in terms of cortical rhythms and spatial analysis. To validate the proposed algorithms, two datasets were used: the first is the publicly available BCI Competition IV dataset 2a, often used as a benchmark for motor imagery (MI) classification algorithms; the second is primary data originally collected to study the difference between motor imagery and mental rotation task associated motor imagery (MI+MR) BCI. The latter was also used in this study to test the plausibility of the proposed algorithm in highlighting differences in cortical rhythms.
On both datasets, the proposed Sinc-adapted CNN algorithms show competitive decoding performance in comparison with SOTA CNN models, achieving up to 87% decoding accuracy on BCI Competition IV dataset 2a and up to 91% on the primary MI+MR data. This performance was achieved with the lowest number of trainable parameters (a 26.5%-34.1% reduction compared to the non-Sinc counterpart). In addition, it was shown that the proposed architecture performs a cleaner band-pass, highlighting the frequency bands of the cortical rhythms that matter during task execution, which in turn enabled the development of the proposed Spatial Filter Visualization algorithm. This characteristic was crucial for the neurophysiological interpretation of the learned spatial features and was not previously established with the benchmarked SOTA methods.
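The Sinc-style parametrisation described above, a convolution kernel defined only by two learnable band cutoffs, can be sketched as a generic windowed-sinc band-pass construction in the spirit of SincNet. The kernel size, sampling rate and the 8-12 Hz example band are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, kernel_size=129, fs=250.0):
    """Band-pass convolution kernel parametrised only by its two cutoffs.

    In a Sinc-style layer, f_low and f_high are the learnable parameters;
    the rest of the kernel shape is fixed, which is what keeps the
    parameter count so low compared to a free convolution kernel.
    """
    n = np.arange(kernel_size) - (kernel_size - 1) / 2
    f1, f2 = f_low / fs, f_high / fs  # normalised cutoff frequencies
    # Difference of two low-pass sinc filters gives a band-pass response
    h = 2 * f2 * np.sinc(2 * f2 * n) - 2 * f1 * np.sinc(2 * f1 * n)
    h *= np.hamming(kernel_size)      # window to reduce spectral leakage
    return h

# Example: a mu-rhythm (8-12 Hz) band-pass at a 250 Hz sampling rate
kernel = sinc_bandpass_kernel(8.0, 12.0)
```

Because each kernel is a clean band-pass by construction, the learned cutoffs can be read off directly, which is the property that makes the frequency-band interpretation described above possible.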