
    Data-driven multivariate and multiscale methods for brain computer interface

    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means of measuring neurophysiological activity due to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG, together with its multichannel recording nature, call for a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A further long-term goal is to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components which are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial pattern (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity/noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for a more natural and intuitive design of advanced BCI systems.
Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source among a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasantness or unpleasantness of a food for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded in a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
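The core of EMD is the sifting procedure: local extrema envelopes are estimated and their mean is subtracted until an intrinsic mode function (IMF) remains. A minimal, dependency-free sketch of one sifting pass is shown below; it uses linear interpolation for the envelopes, whereas the method described in the thesis conventionally uses cubic splines, so this is an illustrative simplification rather than a faithful implementation.

```python
import numpy as np

def sift_once(x):
    # One sifting pass of empirical mode decomposition (EMD):
    # subtract the mean of the upper and lower extrema envelopes.
    # Linear interpolation stands in for the usual cubic splines,
    # purely to keep this sketch dependency-free.
    t = np.arange(len(x))
    maxi = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    mini = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    if len(maxi) < 2 or len(mini) < 2:
        return x  # too few extrema to build envelopes
    upper = np.interp(t, maxi, x[maxi])
    lower = np.interp(t, mini, x[mini])
    return x - (upper + lower) / 2.0

# Two-tone test signal: repeated sifting isolates the faster
# oscillation as the first intrinsic mode function (IMF).
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
imf = x.copy()
for _ in range(10):
    imf = sift_once(imf)
```

The multivariate extensions discussed above additionally require that the same sifting be driven by envelopes taken along multiple projection directions, so that common oscillatory modes are aligned across channels.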

    Learning deep physiological models of affect

    Feature extraction and feature selection are crucial phases in the process of affective modeling. Both, however, carry substantial limitations that hinder the development of reliable and accurate models of affect. For the purpose of modeling affect manifested through physiology, this paper builds on recent advances in machine learning with deep learning (DL) approaches. The efficiency of DL algorithms that train artificial neural network models is tested and compared against the standard feature extraction and selection approaches followed in the literature. Results on a game data corpus, containing players' physiological signals (skin conductance and blood volume pulse) and subjective self-reports of affect, reveal that DL outperforms manual ad-hoc feature extraction, as it yields significantly more accurate affective models. Moreover, DL meets and even outperforms affective models that are boosted by automatic feature selection, for several of the scenarios examined. As the DL method is generic and applicable to any affective modeling task, the key findings of the paper suggest that ad-hoc feature extraction and, to a lesser degree, feature selection could be bypassed.
    The authors would like to thank Tobias Mahlmann for his work on the development and administration of the cluster used to run the experiments. Special thanks for proofreading go to Yana Knight. Thanks also go to the Theano development team, to all participants in our experiments, and to Ubisoft, NSERC and Canada Research Chairs for funding. This work is funded, in part, by the ILearnRW (project no. 318803) and the C2Learn (project no. 318480) FP7 ICT EU projects.
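The "manual ad-hoc feature extraction" baseline that DL is compared against typically amounts to summary statistics computed over windows of the physiological signal. The sketch below illustrates that kind of baseline; the exact feature set used in the paper is not reproduced here, and these particular statistics are an assumption for illustration.

```python
import numpy as np

def adhoc_features(signal):
    # Illustrative hand-crafted statistics of the kind computed over
    # skin-conductance or blood-volume-pulse windows; the paper's
    # actual feature set is not reproduced here.
    s = np.asarray(signal, dtype=float)
    return {
        "mean": s.mean(),
        "std": s.std(),
        "min": s.min(),
        "max": s.max(),
        "mean_abs_diff": np.abs(np.diff(s)).mean(),
    }

feats = adhoc_features([1.0, 2.0, 3.0, 4.0])
```

A DL model, by contrast, consumes the (possibly resampled) raw signal windows directly and learns its own internal representation, which is exactly what lets the pipeline above be bypassed.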

    An enhanced stress indices in signal processing based on advanced Matthew correlation coefficient (MCCA) and multimodal function using EEG signal

    Stress is a response to various environmental, psychological, and social factors, resulting in strain and pressure on individuals. Categorizing stress levels is common practice, often using low, medium, and high categories. However, the restriction to only three stress levels is a significant drawback of the existing approach. This study aims to address this limitation and proposes an improved method for EEG feature extraction and stress level categorization. The main contribution of this work lies in the enhanced stress level categorization, which expands from three to six levels using a newly established fractional scale based on the scale of quantities influenced by MCCA and multimodal equation performance. The concept of standard deviation (STD) helps in categorizing stress levels by dividing the scale of quantities, leading to an improvement in the process. A lack of performance is observed in the Matthew Correlation Coefficient (MCC) equation with respect to accuracy values, and multimodal functions are rarely discussed in terms of their parameters; the proposed MCCA and multimodal function therefore offer the advantage of significantly enhanced accuracy. This study introduces the concept of an Advanced Matthew Correlation Coefficient (MCCA) and applies the six-sigma framework to enhance accuracy in stress level categorization. Furthermore, the study applies signal pre-processing techniques to filter and segregate the EEG signal into Delta, Theta, Alpha, and Beta frequency bands. Subsequently, feature extraction is conducted, resulting in twenty-one statistical and non-statistical features, which are employed in both the MCCA and multimodal function analysis.
The study employs Support Vector Machine (SVM), Random Forest (RF), and k-Nearest Neighbour (k-NN) classifiers for stress level validation. After conducting experiments and performance evaluations, RF demonstrates the highest average accuracy, 85% ± 10%, under 10-fold and k-fold cross-validation techniques, outperforming SVM and k-NN. In conclusion, this study presents an improved approach to stress level categorization and EEG feature extraction. The proposed Advanced Matthew Correlation Coefficient (MCCA) and six-sigma framework contribute to achieving higher accuracy, surpassing the limitations of the existing three-level categorization. The results indicate the superiority of the Random Forest classifier over SVM and k-NN. This research has implications for various applications and fields, providing a more effective equation to accurately categorize stress levels, with a potential accuracy exceeding 95%.
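The quantity the proposed MCCA builds on is the standard (binary) Matthews correlation coefficient, computed from the confusion-matrix counts. A minimal reference implementation is sketched below; the paper's "Advanced" extension and its fractional six-level scale are not reproduced here.

```python
import math

def mcc(tp, tn, fp, fn):
    # Standard binary Matthews correlation coefficient from
    # confusion-matrix counts (true/false positives/negatives).
    # The paper's "Advanced MCC" (MCCA) extends this quantity,
    # but its exact formulation is not reproduced here.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from -1 (total disagreement) through 0 (chance-level prediction) to +1 (perfect prediction), which makes it a stricter summary than raw accuracy on imbalanced classes.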

    EEG-based emotion recognition using tunable Q wavelet transform and rotation forest ensemble classifier

    Emotion recognition by artificial intelligence (AI) is a challenging task. A wide variety of research has demonstrated the utility of audio, imagery, and electroencephalography (EEG) data for automatic emotion recognition. This paper presents a new automated emotion recognition framework, which utilizes electroencephalography (EEG) signals. The proposed method is lightweight and consists of four major phases: a pre-processing phase, a feature extraction phase, a feature dimension reduction phase, and a classification phase. A discrete wavelet transform (DWT)-based noise reduction method, known as multiscale principal component analysis (MSPCA), is utilized during the pre-processing phase, with a Symlets-4 filter employed for noise reduction. A tunable Q wavelet transform (TQWT) is utilized as the feature extractor, and six different statistical methods are used for dimension reduction. In the classification step, a rotation forest ensemble (RFE) classifier is utilized with different base classification algorithms, such as k-Nearest Neighbour (k-NN), support vector machine (SVM), artificial neural network (ANN), random forest (RF), and four different types of decision tree (DT) algorithms. The proposed framework achieves over 93% classification accuracy with RFE + SVM. The results clearly show that the proposed TQWT- and RFE-based emotion recognition framework is an effective approach for emotion recognition using EEG signals.
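The dimension reduction step described above replaces each wavelet sub-band with a handful of summary statistics. The sketch below shows one plausible set of six such statistics per sub-band; the specific six methods used in the paper are not named in the abstract, so this selection is an assumption for illustration.

```python
import numpy as np

def subband_features(coeffs):
    # Six summary statistics for one wavelet sub-band. The paper
    # reduces TQWT coefficients with six statistical methods; this
    # particular set is an illustrative assumption.
    c = np.asarray(coeffs, dtype=float)
    return np.array([
        np.mean(np.abs(c)),                       # mean absolute value
        np.mean(c ** 2),                          # average power
        np.std(c),                                # standard deviation
        np.max(c) - np.min(c),                    # range
        np.mean(np.abs(np.diff(c))),              # mean abs first difference
        np.count_nonzero(c > c.mean()) / c.size,  # fraction above the mean
    ])

feats = subband_features([1.0, -1.0, 1.0, -1.0])
```

Concatenating these vectors across sub-bands yields a compact feature matrix that the rotation forest ensemble can then classify.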