
    Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Full text link
    Objective. The main goal of this work is to develop a model for multi-sensor signals, such as MEG or EEG, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method combines a linear mixed-effects statistical model, the wavelet transform and spatial filtering, and aims at characterizing localized discriminant features in multi-sensor signals. After a discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial-channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e. a discriminant signal) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data, in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of a linear mixed model, the wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is shown to be effective. This paper improves on earlier results on similar problems, and the three main ingredients all play an important role.
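
    The pipeline can be pictured roughly as follows. The sketch below is ours, not the paper's code: it uses per-channel wavelet coefficients as features, a simple shrinkage covariance as a stand-in for the paper's robust mixed-model covariance estimates, and a Gaussian Bayes plug-in classifier with class priors to cope with imbalance; the spatial-filtering and subspace-projection steps are omitted. Function and parameter names are invented for the illustration.

```python
# Hedged sketch of a wavelet-feature + Bayes plug-in classification pipeline.
import numpy as np
import pywt

def wavelet_features(trials, wavelet="db4", level=3, keep=16):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_features)."""
    feats = []
    for x in trials:
        coeffs = [np.concatenate(pywt.wavedec(ch, wavelet, level=level)) for ch in x]
        feats.append(np.concatenate([c[:keep] for c in coeffs]))  # keep the coarsest coefficients
    return np.asarray(feats)

def shrinkage_cov(X, gamma=0.1):
    """Simple shrinkage covariance estimate, usable with few trials per class."""
    S = np.cov(X, rowvar=False)
    return (1 - gamma) * S + gamma * np.trace(S) / S.shape[0] * np.eye(S.shape[0])

class BayesPlugIn:
    """Quadratic Gaussian Bayes plug-in classifier with class priors (handles imbalance)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            S = shrinkage_cov(Xc)
            self.params_[c] = (Xc.mean(0), np.linalg.inv(S),
                               np.linalg.slogdet(S)[1], np.log(len(Xc) / len(X)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, P, logdet, logprior = self.params_[c]
            d = X - mu
            scores.append(-0.5 * np.einsum("ij,jk,ik->i", d, P, d) - 0.5 * logdet + logprior)
        return self.classes_[np.argmax(scores, axis=0)]
```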

    The Smartphone Brain Scanner: A Portable Real-Time Neuroimaging System

    Get PDF
    Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. We present a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system, the Smartphone Brain Scanner, combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully mobile system for real-time 3D EEG imaging. We discuss the benefits and challenges of a fully portable system, including technical limitations as well as real-time reconstruction of 3D images of brain activity. We present examples of the brain activity captured in a simple experiment involving imagined finger tapping, showing that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than that obtained with high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.
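
    As an illustration only (not the Smartphone Brain Scanner code), the reason real-time 3D reconstruction is feasible on a phone is that the heavy computation, building a regularised inverse operator from a forward model, happens once offline, while each incoming EEG sample only needs a matrix-vector product. The leadfield L and the regularisation weight below are assumed inputs; a minimum-norm-style operator stands in for whatever reconstruction the system actually uses.

```python
import numpy as np

def make_inverse_operator(L, lambda_reg=0.1):
    """Minimum-norm-style inverse operator: W = L^T (L L^T + lambda I)^-1."""
    n_ch = L.shape[0]
    return L.T @ np.linalg.inv(L @ L.T + lambda_reg * np.eye(n_ch))

def stream_sources(eeg_stream, W):
    """Map each incoming (n_channels,) sample to source amplitudes in real time."""
    for sample in eeg_stream:
        yield W @ sample

# Example with random numbers standing in for a forward model and a data stream.
rng = np.random.default_rng(0)
L = rng.standard_normal((14, 1024))      # e.g. 14 channels of a consumer headset, 1024 sources
W = make_inverse_operator(L)
for sources in stream_sources(rng.standard_normal((5, 14)), W):
    print(sources.shape)                 # (1024,) source estimate per sample
```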

    SimBCI-A framework for studying BCI methods by simulated EEG

    Get PDF
    Brain-computer interface (BCI) methods are commonly studied using electroencephalogram (EEG) data recorded from human experiments. For understanding and developing BCI signal processing techniques, real data is costly to obtain and its composition is a priori unknown. The brain mechanisms generating the EEG are not directly observable and their states cannot be uniquely identified from the EEG. Consequently, we do not have generative ground truth for real data. In this paper, we propose a novel convenience framework called simBCI to facilitate testing and studying BCI signal processing methods in simulated, controlled conditions. The framework can be used to generate artificial BCI data and to test classification pipelines with such data. Models and parameters on both the data generation and the signal processing side can be iterated over to examine the interplay of different combinations. The framework provides, for the first time, open-source implementations of several models and methods. We invite researchers to insert more advanced models. The proposed system does not intend to replace human experiments. Instead, it can be used to discover hypotheses, study algorithms, educate about BCI, and debug signal processing pipelines of other BCI systems. The proposed framework is modular, extensible, and freely available as open source. It currently requires MATLAB.
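
    The core idea can be illustrated in a few lines. simBCI itself is a MATLAB framework; the Python sketch below is our own and every generator parameter in it is invented: trials are generated with a known class-dependent structure, so a candidate pipeline can be scored against generative ground truth that real recordings never provide.

```python
import numpy as np

def simulate_trials(n_trials=100, n_channels=8, n_samples=256, snr=0.5, seed=0):
    """Two-class synthetic EEG: a fixed spatial pattern whose amplitude depends on the class."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, n_trials)
    mixing = rng.standard_normal(n_channels)                    # spatial pattern of the "source"
    t = np.arange(n_samples) / n_samples
    trials = rng.standard_normal((n_trials, n_channels, n_samples))   # background noise
    for i, y in enumerate(labels):
        amp = snr * (1.0 if y == 1 else 0.3)                    # class modulates source amplitude
        trials[i] += amp * np.outer(mixing, np.sin(2 * np.pi * 10 * t))
    return trials, labels

def simple_pipeline(trials, labels):
    """Band-power-like feature (per-channel variance) + nearest-class-mean classifier."""
    X = trials.var(axis=2)
    means = [X[labels == c].mean(0) for c in (0, 1)]
    pred = np.argmin([np.linalg.norm(X - m, axis=1) for m in means], axis=0)
    return (pred == labels).mean()

trials, labels = simulate_trials()
print("training-set accuracy on simulated ground truth:", simple_pipeline(trials, labels))
```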

    Towards the Applications of Algorithms for Inverse Solutions in EEG Brain-Computer Interfaces

    Get PDF
    Locating the sources of EEG signals (the signal generators), i.e. indicating the places in the brain that the signals come from, is the objective of the inverse problem in BCI applications using EEG. Two algorithms based on methods used for the inverse problem, the linear least squares method and the LORETA method, were compared. An analysis of the accuracy of locating the sources generating EEG signals with these two methods was carried out in MATLAB. The findings made it possible both to determine the computational complexity of the methods under consideration and to compare the accuracy of the results obtained. Tests were performed in which the inverse problem was solved on the basis of data recorded from the electrodes; the electrode potentials were then recovered by solving the forward problem once again, and the reconstruction error Φ − Φ̂ was examined. Moreover, tests were conducted on simulated data describing the current density at selected places in the brain. In this case the electrode potentials were found by solving the forward problem; the inverse problem was then solved and the current density at the selected places in the brain was estimated, giving the error J − Ĵ. For J − Ĵ only the relative error was examined, while the variance was studied in both cases. The tests showed that the relative errors were the same for the SVD and PINV methods, while for the LORETA method the error was similar. The variance computed for these methods differed more between the cases, which made it possible to compare the algorithms more thoroughly. A spread of the variances below 0.2 shows that the analyzed algorithms work properly. Building on the inverse solutions, an attempt was made to select the EEG signal features that best differentiate the classes. Tests were conducted to examine the differentiation of selected classes; Welch's t-statistic was used to differentiate and rank them. The results give the ranking for three classes of thought tasks: imagining moving one's left hand, imagining moving one's right hand, and imagining generating words beginning with a randomly chosen letter. The present work is an introduction to a wider classification of features based on inverse solutions.
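
    A minimal sketch of the evaluation loop described above, under a purely synthetic forward model of our own: sources are estimated with the Moore-Penrose pseudoinverse (the minimum-norm least squares solution, which is what an SVD-based solver computes), the forward problem is solved again, and the relative errors on Φ and on J are reported. LORETA, which would replace the plain pseudoinverse with a Laplacian-weighted inverse, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_sources = 32, 500
K = rng.standard_normal((n_electrodes, n_sources))   # assumed leadfield matrix

# Forward problem: phi = K j for a sparse ground-truth current density j.
j_true = np.zeros(n_sources)
j_true[rng.choice(n_sources, 5, replace=False)] = rng.standard_normal(5)
phi = K @ j_true

# Inverse problem via the pseudoinverse (minimum-norm least squares).
j_hat = np.linalg.pinv(K) @ phi
phi_hat = K @ j_hat                                   # forward problem solved once again

print("relative error on electrode potentials:", np.linalg.norm(phi - phi_hat) / np.linalg.norm(phi))
print("relative error on current density:     ", np.linalg.norm(j_true - j_hat) / np.linalg.norm(j_true))
```

    The potentials are reproduced almost exactly while the current density is not, which is exactly the ill-posedness the abstract refers to: many source configurations explain the same electrode measurements.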

    A statistical approach to the inverse problem in magnetoencephalography

    Full text link
    Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed, that is, there are an infinite number of mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. Here, we reformulate the MEG inverse problem by considering time-varying locations for the sources and their electrical moments, and we model their time evolution using a state space model. Based on our predictive model, we investigate the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time point, rather than fitting fixed source parameters. Our new model is more realistic than common models and allows us to estimate the variation of the source strength, orientation and position over time. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, our new methods work in this situation without the need to tune a high-dimensional transition kernel, which would be very costly. The dimensionality of the unknown parameters is extremely large and the size of the data is even larger. We use Parallel Virtual Machine (PVM) to speed up the computation. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/14-AOAS716, http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
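
    A toy sequential-importance-sampling sketch in the spirit of that state space formulation, with every dimension and operator invented for the illustration: a hidden source state follows a random walk, observations are a noisy linear image of it, and a particle set approximates the posterior at each time step. The real problem has a nonlinear MEG forward operator and a far higher dimension; the matrix H below is only a stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, state_dim, obs_dim, T = 2000, 6, 100, 50
H = rng.standard_normal((obs_dim, state_dim))        # assumed linearised forward operator
q, r = 0.05, 0.5                                     # process / observation noise std

# Simulate a ground-truth trajectory and its observations.
x = np.zeros((T, state_dim))
for t in range(1, T):
    x[t] = x[t - 1] + q * rng.standard_normal(state_dim)
y = x @ H.T + r * rng.standard_normal((T, obs_dim))

# Sequential importance sampling with resampling.
particles = rng.standard_normal((n_particles, state_dim))
estimates = []
for t in range(T):
    particles += q * rng.standard_normal(particles.shape)       # propagate through the state model
    resid = y[t] - particles @ H.T                              # innovation per particle
    logw = -0.5 * np.sum(resid ** 2, axis=1) / r ** 2           # Gaussian log-likelihood weights
    w = np.exp(logw - logw.max()); w /= w.sum()                 # normalise
    estimates.append(w @ particles)                             # posterior-mean estimate
    idx = rng.choice(n_particles, n_particles, p=w)             # resample
    particles = particles[idx]

print("mean tracking error:", np.mean(np.linalg.norm(np.array(estimates) - x, axis=1)))
```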

    True zero-training brain-computer interfacing: an online study

    Get PDF
    Despite several approaches to realize subject-to-subject transfer of pre-trained classifiers, the full performance of a Brain-Computer Interface (BCI) for a novel user can only be reached by presenting the BCI system with data from that user. In typical state-of-the-art BCI systems with a supervised classifier, the labeled data is collected during a calibration recording, in which the user is asked to perform a specific task. Based on the known labels of this recording, the BCI's classifier can learn to decode the individual's brain signals. Unfortunately, this calibration recording consumes valuable time. Furthermore, it is unproductive with respect to the final BCI application, e.g. text entry. Therefore, the calibration period should be reduced to a minimum, which is especially important for patients with a limited concentration ability. The main contribution of this manuscript is an online study on unsupervised learning in an auditory event-related potential (ERP) paradigm. Our results demonstrate that the calibration recording can be bypassed by using an unsupervised classifier that is initialized randomly and updated during usage. Initially, the unsupervised classifier tends to make decoding mistakes, as it may not have seen enough data to build a reliable model. Using a constant re-analysis of the previously spelled symbols, these initially misspelled symbols can be rectified post hoc once the classifier has learned to decode the signals. We compare the spelling performance of our unsupervised approach and of the unsupervised post-hoc approach to the standard supervised calibration-based dogma for n = 10 healthy users. To assess the learning behavior of our approach, it is trained unsupervised from scratch three times per user. Even with the relatively low SNR of an auditory ERP paradigm, the results show that after a limited number of trials (30 trials), the unsupervised approach performs comparably to a classic supervised model.
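
    A heavily simplified illustration of the two ideas above, not the paper's algorithm: a classifier is fitted without labels by modelling pooled ERP features as a two-component mixture (targets are rare, roughly 1 in 6, which resolves which component is the target class), it is refitted as each block arrives, and earlier trials are re-decoded post hoc with the final model. All data below are synthetic and all parameters are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

def simulate_block(n=60, d=10, shift=1.0):
    """One block of ERP feature vectors; roughly 1 in 6 trials is a target."""
    y = (rng.random(n) < 1 / 6).astype(int)
    X = rng.standard_normal((n, d)) + shift * y[:, None]
    return X, y

seen_X, seen_y, online_acc = [], [], []
for block in range(20):
    X, y = simulate_block()
    seen_X.append(X); seen_y.append(y)
    pooled = np.vstack(seen_X)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pooled)   # unsupervised refit
    target = int(np.argmin(gmm.weights_))          # the rarer component is taken as the target class
    pred = (gmm.predict(X) == target).astype(int)
    online_acc.append((pred == y).mean())

# Post-hoc re-decoding of all earlier blocks with the final model.
ally = np.concatenate(seen_y)
posthoc = ((gmm.predict(np.vstack(seen_X)) == target).astype(int) == ally).mean()
print("online accuracy per block:", np.round(online_acc, 2))
print("post-hoc accuracy with final model:", round(posthoc, 2))
```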

    Data-driven multivariate and multiscale methods for brain computer interface

    Get PDF
    This thesis focuses on the development of data-driven multivariate and multiscale methods for brain computer interface (BCI) systems. The electroencephalogram (EEG), the most convenient means to measure neurophysiological activity due to its noninvasive nature, is mainly considered. The nonlinearity and nonstationarity inherent in EEG and its multichannel recording nature require a new set of data-driven multivariate techniques to estimate features more accurately for enhanced BCI operation. A further long-term goal is to enable an alternative EEG recording strategy for long-term and portable monitoring. Empirical mode decomposition (EMD) and local mean decomposition (LMD), fully data-driven adaptive tools, are considered to decompose the nonlinear and nonstationary EEG signal into a set of components that are highly localised in time and frequency. It is shown that the complex and multivariate extensions of EMD, which can exploit common oscillatory modes within multivariate (multichannel) data, can be used to accurately estimate and compare the amplitude and phase information among multiple sources, a key step in feature extraction for BCI systems. A complex extension of local mean decomposition is also introduced and its operation is illustrated on two-channel neuronal spike streams. Common spatial patterns (CSP), a standard feature extraction technique for BCI applications, is also extended to the complex domain using augmented complex statistics. Depending on the circularity or noncircularity of a complex signal, one of the complex CSP algorithms can be chosen to produce the best classification performance between two different EEG classes. Using these complex and multivariate algorithms, two cognitive brain studies are investigated for a more natural and intuitive design of advanced BCI systems. Firstly, a Yarbus-style auditory selective attention experiment is introduced to measure the user's attention to a sound source among a mixture of sound stimuli, aimed at improving the usefulness of hearing instruments such as hearing aids. Secondly, emotion experiments elicited by taste and taste recall are examined to determine the pleasure or displeasure evoked by a food, for the implementation of affective computing. The separation between the two emotional responses is examined using real- and complex-valued common spatial pattern methods. Finally, we introduce a novel approach to brain monitoring based on EEG recordings from within the ear canal, embedded on a custom-made hearing aid earplug. The new platform promises the possibility of both short- and long-term continuous use for standard brain monitoring and interfacing applications.
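
    For reference, a sketch of standard real-valued CSP, the technique the thesis extends to the complex domain: the spatial filters are generalised eigenvectors of the two class-covariance matrices, and log-variance of the spatially filtered signals is the usual feature. The code and the random two-class data are ours, not the thesis's.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: (n_trials, n_channels, n_samples). Returns (n_filters, n_channels)."""
    cov = lambda trials: np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalised eigenproblem Ca w = lambda (Ca + Cb) w; the extreme eigenvectors
    # maximise the variance ratio between the two classes.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
    return vecs[:, pick].T

def csp_features(trials, W):
    """Log-variance of spatially filtered trials: (n_trials, n_filters)."""
    return np.array([np.log(np.var(W @ x, axis=1)) for x in trials])

# Toy usage with random two-class data.
rng = np.random.default_rng(4)
a = rng.standard_normal((30, 16, 200))
b = 1.5 * rng.standard_normal((30, 16, 200))
W = csp_filters(a, b)
print(csp_features(a, W).shape)   # (30, 4)
```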

    EEG based volitional interaction with a robot to dynamically replan trajectories

    Get PDF
    Robots are increasingly involved in our daily lives and are expected to eventually become part of the domestic environment. Assistive robots address this scenario by providing help to people with certain disabilities, a task that requires intuitive communication between human and robot. One control method that has become popular is the brain-computer interface (BCI), which uses electroencephalography (EEG) signals to read the user's intention. It is commonly applied to let the user choose among several options, with no further interaction once the robot starts acting. This project describes a method to interpret the EEG signal online and use it to manipulate the movement of a robot arm online. The signal comes from motor imagery, allowing the user to communicate their intention continuously. Using Dynamic Movement Primitives (DMP) and virtual force systems, the robot's stored trajectories can be modified online to adapt to the user's will. With these elements the trajectories are executed on a real robot arm, checking online whether all the requested goals are feasible positions for the robot. This method intends to make collaboration with a robot in domestic tasks more natural, where slight modifications of an action can lead to a more satisfactory interaction.
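
    A hedged sketch of the mechanism described above, with gains, names and the decoder all invented for the illustration: a one-dimensional dynamic movement primitive plays back a movement towards a goal while a virtual force proportional to a decoded EEG command perturbs it online. The motor-imagery decoder is replaced by a placeholder signal and the DMP's learned forcing term is omitted.

```python
import numpy as np

def decoded_eeg_command(t):
    """Placeholder for the online motor-imagery decoder: push left, then right."""
    return -1.0 if t < 1.0 else 1.0

def run_dmp(goal=1.0, x0=0.0, T=2.0, dt=0.01, K=100.0, D=20.0, eeg_gain=20.0):
    n = int(T / dt)
    x, v = x0, 0.0
    s, alpha = 1.0, 4.0                     # canonical phase variable, decays over the movement
    traj = []
    for i in range(n):
        u = decoded_eeg_command(i * dt)     # in [-1, 1]
        forcing = 0.0                       # learned forcing term omitted in this sketch
        virtual_force = eeg_gain * u * s    # user influence fades as the movement completes
        a = K * (goal - x) - D * v + forcing + virtual_force
        v += a * dt
        x += v * dt
        s += -alpha * s * dt
        traj.append(x)
    return np.array(traj)

trajectory = run_dmp()
print("final position (converges to the goal despite online perturbation):", round(trajectory[-1], 3))
```

    The design point is the same as in the abstract: the spring-damper term guarantees convergence to the goal, while the EEG-driven virtual force lets the user reshape the trajectory on the fly without destabilising it.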

    A Survey on the Project in Title

    Full text link
    In this paper we present a survey of work that has been done in the project "Unsupervised Adaptive P300 BCI in the framework of chaotic theory and stochastic theory". We summarise the following papers: (Mohammed J Alhaddad &, 2011), (Mohammed J. Alhaddad & Kamel M, 2012), (Mohammed J Alhaddad, Kamel, & Al-Otaibi, 2013), (Mohammed J Alhaddad, Kamel, & Bakheet, 2013), (Mohammed J Alhaddad, Kamel, & Al-Otaibi, 2014), (Mohammed J Alhaddad, Kamel, & Bakheet, 2014), (Mohammed J Alhaddad, Kamel, & Kadah, 2014), (Mohammed J Alhaddad, Kamel, Makary, Hargas, & Kadah, 2014), (Mohammed J Alhaddad, Mohammed, Kamel, & Hagras, 2015). We developed a new pre-processing method for denoising P300-based brain-computer interface data that allows better performance with fewer channels and blocks. The new denoising technique is based on a modified version of spectral subtraction denoising and works on each temporal signal channel independently, thus offering seamless integration with existing pre-processing and allowing low channel counts to be used. We also developed a novel approach for brain-computer interface data that requires no prior training. The proposed approach is based on an interval type-2 fuzzy logic based classifier which is able to handle the users' uncertainties to produce better prediction accuracies than other competing classifiers such as BLDA or RFLDA. In addition, the generated type-2 fuzzy classifier is learnt from data via genetic algorithms to produce a small number of rules with a rule length of only one antecedent, in order to maximize transparency and interpretability for the clinician. We also employ a feature selection system based on ensemble neural network recursive feature selection, which is able to find the effective time instances within the effective sensors in relation to a given P300 event. The basic principle of this new class of techniques is that the trial with the true activation signal within each block has to be different from the rest of the trials within that block. Hence, a measure that is sensitive to this dissimilarity can be used to make a decision based on a single block without any prior training. The new methods were verified using various experiments performed on standard data sets and on real data sets obtained from experiments with real subjects performed in the BCI lab at King Abdulaziz University. The results were compared to the classification results of the same data using previous methods. Enhanced performance in different experiments, as quantitatively assessed using classification block accuracy as well as bit rate estimates, was confirmed. It will be shown that the produced type-2 fuzzy logic based classifier learns simple rules which are easy to understand, explaining the events in question. In addition, the produced type-2 fuzzy logic classifier is able to give better accuracies when compared to BLDA or RFLDA for various human subjects on the standard and real-world data sets.
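
    For orientation, a sketch of classical single-channel spectral subtraction, the family of techniques the project's denoising method modifies (the project's specific modification is not reproduced here): an estimated noise magnitude spectrum is subtracted from each channel's spectrum and the signal is resynthesised with its original phase, one channel at a time. All names and the toy data are ours.

```python
import numpy as np

def spectral_subtract(channel, noise_segment, floor=0.02):
    """Denoise one EEG channel given a segment assumed to contain only noise."""
    spec = np.fft.rfft(channel)
    noise_mag = np.abs(np.fft.rfft(noise_segment, n=len(channel)))
    mag = np.abs(spec) - noise_mag                       # subtract the noise magnitude spectrum
    mag = np.maximum(mag, floor * np.abs(spec))          # spectral floor avoids negative magnitudes
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(channel))

def denoise_trial(trial, noise_segment):
    """Apply the subtraction to every channel independently, as described above."""
    return np.array([spectral_subtract(ch, noise_segment) for ch in trial])

# Toy usage: a sinusoidal "ERP" buried in noise, with a separate noise-only recording.
rng = np.random.default_rng(5)
t = np.arange(512) / 256.0
clean = np.sin(2 * np.pi * 5 * t)
trial = np.stack([clean + rng.standard_normal(512) for _ in range(4)])   # 4 channels
noise_only = rng.standard_normal(512)
print(denoise_trial(trial, noise_only).shape)    # (4, 512)
```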