
    Multi-Person Brain Activity Recognition via Comprehensive EEG Signal Analysis

    Electroencephalography (EEG)-based brain activity recognition is a fundamental field of study for a number of significant applications such as intention prediction, appliance control, and neurological disease diagnosis in smart home and smart healthcare domains. Existing techniques mostly focus on binary brain activity recognition for a single person, which limits their deployment in wider and more complex practical scenarios. Multi-person and multi-class brain activity recognition has therefore gained popularity recently. Another challenge faced by brain activity recognition is low recognition accuracy due to heavy noise and the low signal-to-noise ratio of EEG signals. Moreover, feature engineering in EEG processing is time-consuming and relies heavily on expert experience. In this paper, we attempt to solve these challenges by proposing an approach with better EEG interpretation ability based on raw EEG signal analysis for multi-person and multi-class brain activity recognition. Specifically, we analyze inter-class and inter-person EEG signal characteristics and use them to capture the discrepancies between inter-class EEG data. Then, we adopt an Autoencoder layer to automatically refine the raw EEG signals by eliminating various artifacts. We evaluate our approach on both a public and a local EEG dataset and conduct extensive experiments to explore the effect of several factors (such as normalization methods, training data size, and Autoencoder hidden neuron size) on the recognition results. The experimental results show that our approach achieves high accuracy compared to competitive state-of-the-art methods, indicating its potential for promoting future research on multi-person EEG recognition. Comment: 10 pages
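The Autoencoder refinement idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the window length, hidden size, learning rate, and synthetic "EEG-like" data are all illustrative choices. A narrow hidden layer forces the network to keep only the dominant structure of each window, which is the mechanism the abstract relies on for artifact suppression.

```python
import numpy as np

# Minimal single-hidden-layer autoencoder sketch (illustrative, not the
# paper's architecture): reconstruct noisy EEG-like windows through a
# narrow bottleneck so that broadband artifacts are attenuated.
rng = np.random.default_rng(0)

def make_windows(n=200, width=32):
    t = np.linspace(0, 1, width)
    clean = np.sin(2 * np.pi * rng.uniform(4, 8, (n, 1)) * t)  # rhythm-like
    noisy = clean + 0.3 * rng.standard_normal((n, width))      # artifacts
    return noisy, clean

def train_autoencoder(x, hidden=8, lr=0.05, epochs=300):
    d = x.shape[1]
    W1 = rng.standard_normal((d, hidden)) * 0.1   # encoder weights
    W2 = rng.standard_normal((hidden, d)) * 0.1   # decoder weights
    for _ in range(epochs):
        h = np.tanh(x @ W1)                       # encode
        out = h @ W2                              # linear decode
        err = out - x
        gW2 = h.T @ err / len(x)                  # backprop through decoder
        gW1 = x.T @ ((err @ W2.T) * (1 - h**2)) / len(x)
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, W2

noisy, clean = make_windows()
W1, W2 = train_autoencoder(noisy)
recon = np.tanh(noisy @ W1) @ W2
mse = float(np.mean((recon - noisy) ** 2))
print("reconstruction MSE:", mse)
```

Because the bottleneck (8 units for 32-sample windows) cannot represent the full noise, the reconstruction retains mainly the oscillatory component.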

    Discriminative methods for classification of asynchronous imaginary motor tasks from EEG data

    In this work, two methods based on statistical models that take into account the temporal changes in the electroencephalographic (EEG) signal are proposed for asynchronous brain-computer interfaces (BCI) based on imaginary motor tasks. Unlike current approaches to asynchronous BCI systems, which use windowed versions of the EEG data combined with static classifiers, the methods proposed here are based on discriminative models that allow sequential labeling of the data. In particular, the two methods we propose for asynchronous BCI are based on conditional random fields (CRFs) and latent dynamic CRFs (LDCRFs), respectively. We describe how the asynchronous BCI problem can be posed as a classification problem based on CRFs or LDCRFs by defining appropriate random variables and their relationships. CRFs allow modeling the extrinsic dynamics of the data, making it possible to model the transitions between classes, which in this context correspond to distinct tasks in an asynchronous BCI system. LDCRFs go beyond this approach by incorporating latent variables that model the intrinsic structure of each class while still modeling the extrinsic dynamics. We apply the proposed methods to the publicly available BCI competition III dataset V as well as to a data set recorded in our laboratory. The results are compared to the top algorithm in the BCI competition as well as to methods based on hierarchical hidden Markov models (HHMMs), hierarchical hidden CRFs (HHCRFs), neural networks based on particle swarm optimization (IPSONN), and a recently proposed approach based on neural networks and fuzzy theory, S-dFasArt. Our experimental analysis demonstrates the improvements provided by the proposed methods in terms of classification accuracy.
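The benefit of modeling transitions between classes, as the CRF-based methods above do, can be illustrated with the Viterbi decoding step that linear-chain models share. This is a hand-made toy, not the paper's trained model: the emission scores and transition matrix are invented to show how favouring self-transitions relabels an isolated misclassified window to match its context.

```python
import numpy as np

# Illustrative Viterbi decoding for a linear-chain model (the inference
# step used by CRFs). Emission scores would normally come from learned
# feature weights; here they are hand-made.
def viterbi(emissions, transitions):
    """emissions: (T, K) per-step class scores; transitions: (K, K) scores."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions   # (prev, next) combined scores
        back[t] = cand.argmax(axis=0)         # best predecessor per state
        score = cand.max(axis=0) + emissions[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Noisy per-window scores for 3 mental tasks; the window at t=2 is a
# spurious spike for class 1 that the transition model smooths away.
em = np.array([[2., 0., 0.], [2., 0., 0.], [0., 2.1, 0.],
               [2., 0., 0.], [0., 0., 2.], [0., 0., 2.]])
trans = np.full((3, 3), -1.0) + 3.0 * np.eye(3)   # stay: +2, switch: -1
print(viterbi(em, trans))   # -> [0, 0, 0, 0, 2, 2]
```

A per-window argmax would output class 1 at t=2; the sequential model keeps the segment contiguous, which is exactly the extrinsic-dynamics effect described above.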

    Wavelet Transform Based Classification of Invasive Brain Computer Interface Data

    The input signals of brain computer interfaces may be either the electroencephalogram, recorded from the scalp, or the electrocorticogram, recorded with subdural electrodes. For brain computer interfaces to be practical, it is very important that classifiers are able to discriminate signals recorded in different sessions. This paper proposes a method for classifying motor imagery electrocorticogram signals recorded in different sessions. Feature vectors extracted with the wavelet transform were classified using the k-nearest neighbor, support vector machine and linear discriminant analysis algorithms. The proposed method was successfully applied to Data Set I of BCI competition 2005 and achieved a classification accuracy of 94% on the test data. The performance of the proposed method was confirmed in terms of sensitivity, specificity and Kappa, and compared with that of other studies that used the same data set. This paper is an extended version of our work that won the Best Paper Award at the 33rd International Conference on Telecommunications and Signal Processing.
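The wavelet-feature-plus-classifier pipeline above can be sketched end to end. This is a simplified illustration under stated assumptions: a hand-rolled Haar decomposition stands in for the paper's wavelet family, a 1-nearest-neighbour rule stands in for the three classifiers, and the two-class "trials" are synthetic sinusoids at different rhythms.

```python
import numpy as np

# Sketch: sub-band energies from a multilevel Haar wavelet decomposition
# as features, classified with 1-NN. All data and parameters are synthetic.
def haar_dwt_energies(x, levels=3):
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        detail = (even - odd) / np.sqrt(2)   # high-frequency sub-band
        a = (even + odd) / np.sqrt(2)        # low-frequency approximation
        feats.append(np.sum(detail ** 2))
    feats.append(np.sum(a ** 2))             # final approximation energy
    return np.array(feats)

def knn_predict(train_x, train_y, x):
    d = np.linalg.norm(train_x - x, axis=1)  # 1-nearest neighbour
    return train_y[int(d.argmin())]

rng = np.random.default_rng(1)
t = np.arange(64)
def trial(freq):  # toy trial: one rhythm per class, plus noise
    return np.sin(2 * np.pi * freq * t / 64) + 0.2 * rng.standard_normal(64)

X = np.array([haar_dwt_energies(trial(f)) for f in [4] * 20 + [12] * 20])
y = np.array([0] * 20 + [1] * 20)
test_feats = haar_dwt_energies(trial(12))
print("predicted class:", knn_predict(X, y, test_feats))
```

The energies separate the classes because each rhythm concentrates its power in a different sub-band, which is the same property the paper's wavelet features exploit.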

    EEG Signal Processing in Motor Imagery Brain Computer Interfaces with Improved Covariance Estimators

    The research and development in the field of Brain Computer Interfaces (BCI) has been growing in recent years, motivated by several factors. As knowledge about the human brain and how it works (of which we still know relatively little) grows, new advances in BCI systems are emerging that, in turn, serve as motivation for more research about this organ. In addition, BCI systems open a door for anyone to interact with their environment regardless of any physical disabilities they may have, simply by using their thoughts. Recently, the technology industry has begun to show interest in these systems, motivated both by the advances in what we know of the brain and how it works, and by the constant use we make of technology nowadays, whether through our smartphones, tablets or computers, among many other devices. This motivates companies like Facebook to invest in the development of BCI systems so that people (with or without disabilities) can communicate with their devices using only their brain. The work developed in this thesis focuses on BCI systems based on motor imagery. This means that the user thinks of certain motor movements that are interpreted by a computer as commands. The brain signals to be translated into commands are obtained by an EEG device that is placed on the scalp and measures the electromagnetic activity produced by the brain. Working with these signals is complex since they are non-stationary and usually heavily contaminated by noise or artifacts. We have approached this subject from the point of view of statistical signal processing and machine learning algorithms. For this, the BCI system has been split into three blocks: preprocessing, feature extraction and classification. After reviewing the state of the art of these blocks, we summarize and attach a set of publications from recent years containing the contributions that, from our point of view, improve each of these blocks. As a brief summary, for the preprocessing block we propose a method that normalizes the sources of the EEG signals; by equalizing the effective sources, we improve the estimation of the covariance matrices. For the feature extraction block, we have extended the CSP algorithm to unsupervised cases. Finally, in the classification block we have also managed to separate classes in a blind way, and we have observed an improvement when the LDA algorithm is regularized by a method specific to Gaussian distributions.
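The CSP algorithm mentioned in the feature-extraction block can be sketched in its standard supervised form (the thesis extends it to unsupervised settings; this sketch shows only the baseline technique, with synthetic data and illustrative dimensions). CSP finds spatial filters that maximise variance for one class while minimising it for the other, by jointly diagonalising the two class covariance matrices.

```python
import numpy as np

# Supervised CSP sketch: whiten the composite covariance, then
# diagonalise class A in the whitened space; the extreme eigenvectors
# give the most discriminative spatial filters.
def csp_filters(cov_a, cov_b):
    vals, vecs = np.linalg.eigh(cov_a + cov_b)
    white = vecs / np.sqrt(vals)             # whitening transform
    s = white.T @ cov_a @ white
    _, rot = np.linalg.eigh((s + s.T) / 2)   # symmetrise for stability
    return white @ rot                       # columns = CSP filters

rng = np.random.default_rng(2)
n_ch, n_s = 4, 500
mix_a = np.diag([3.0, 1.0, 1.0, 0.3])        # class A strong on channel 0
mix_b = np.diag([0.3, 1.0, 1.0, 3.0])        # class B strong on channel 3
Xa = mix_a @ rng.standard_normal((n_ch, n_s))
Xb = mix_b @ rng.standard_normal((n_ch, n_s))
Ca, Cb = Xa @ Xa.T / n_s, Xb @ Xb.T / n_s
W = csp_filters(Ca, Cb)

# After filtering, the first/last components have opposite variance ratios,
# which is what makes log-variance features of CSP components separable.
va = np.var(W.T @ Xa, axis=1)
vb = np.var(W.T @ Xb, axis=1)
print(np.round(va / (va + vb), 2))
```

Better covariance estimates, as pursued in the preprocessing block above, directly improve these filters, since `Ca` and `Cb` are the only inputs CSP uses.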

    Latent Alignment with Deep Set EEG Decoders

    The variability of EEG signals between individuals poses a significant challenge when implementing brain-computer interfaces (BCI). Commonly proposed solutions to this problem include deep learning models, due to their increased capacity and generalization, as well as explicit domain adaptation techniques. Here, we introduce the Latent Alignment method that won the Benchmarks for EEG Transfer Learning (BEETL) competition and present its formulation as a deep set applied to the set of trials from a given subject. Its performance is compared to recent statistical domain adaptation techniques under various conditions. The experimental paradigms include motor imagery (MI), oddball event-related potentials (ERP) and sleep stage classification, where different well-established deep learning models are applied to each task. Our experimental results show that performing statistical distribution alignment at later stages in a deep learning model is beneficial to classification accuracy, yielding the highest performance for our proposed method. We further investigate practical considerations that arise when using deep learning and statistical alignment for EEG decoding. In this regard, we study class-discriminative artifacts that can spuriously improve results for deep learning models, as well as the impact of class imbalance on alignment. We delineate a trade-off between increased classification accuracy when alignment is performed at later modeling stages, and susceptibility to class imbalance in the set of trials that the statistics are computed on.
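A concrete example of the statistical alignment family that such methods are compared against is Euclidean alignment of a subject's trials (my choice of illustrative baseline, not necessarily the specific technique the paper benchmarks): each trial is whitened by the inverse square root of that subject's mean spatial covariance, so that aligned trials from every subject share an identity mean covariance.

```python
import numpy as np

# Sketch of Euclidean-space trial alignment for one subject. The data
# here are random; shapes (trials, channels, samples) are illustrative.
def euclidean_align(trials):
    """trials: (n_trials, n_channels, n_samples) array for one subject."""
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    mean_cov = covs.mean(axis=0)
    vals, vecs = np.linalg.eigh(mean_cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T   # mean_cov^(-1/2)
    return np.array([inv_sqrt @ x for x in trials])

rng = np.random.default_rng(3)
subject = rng.standard_normal((10, 8, 256)) * 5.0   # subject-specific scale
aligned = euclidean_align(subject)

# After alignment the subject's mean spatial covariance is the identity,
# so subject-specific scaling and channel correlations are removed.
mean_cov = np.mean([x @ x.T / x.shape[1] for x in aligned], axis=0)
print(np.allclose(mean_cov, np.eye(8), atol=1e-8))
```

The trade-off the abstract describes arises because this mean covariance is estimated from the available set of trials: if that set is class-imbalanced, the alignment statistics absorb class information.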

    2021 BEETL competition: advancing transfer learning for subject independence & heterogenous EEG data sets

    Transfer learning and meta-learning offer some of the most promising avenues to unlock the scalability of healthcare and consumer technologies driven by biosignal data. This is because regular machine learning methods cannot generalise well across human subjects or handle learning from different, heterogeneously collected data sets, thus limiting the scale of training data available. On the other hand, the many developments in the transfer- and meta-learning fields would benefit significantly from a real-world benchmark with immediate practical application. Therefore, we pick electroencephalography (EEG) as an exemplar of all the things that make biosignal data analysis a hard problem. We design two transfer learning challenges around (a) clinical diagnostics and (b) neurotechnology. These two challenges are designed to probe algorithmic performance under all the difficulties of biosignal data, such as low signal-to-noise ratios, major variability among subjects, differences in data recording sessions and techniques, and even differences between the specific BCI tasks recorded in the data set. Task 1 is centred on the field of medical diagnostics, addressing automatic sleep stage annotation across subjects. Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets. The successful 2021 BEETL competition, with its over 30 competing teams and its 3 winning entries, brought attention to the potential of deep transfer learning and combinations of set theory and conventional machine learning techniques to overcome these challenges. The results set a new state of the art for the real-world BEETL benchmarks.

    Sinc-based convolutional neural networks for EEG-BCI-based motor imagery classification

    Brain-Computer Interfaces (BCI) based on motor imagery translate mental motor images recognized from the electroencephalogram (EEG) into control commands. EEG patterns of different imagination tasks, e.g. hand and foot movements, are effectively classified with machine learning techniques using band power features. Recently, Convolutional Neural Networks (CNNs), which learn both effective features and classifiers simultaneously from raw EEG data, have also been applied. However, CNNs have two major drawbacks: (i) they have a very large number of parameters, which requires a very large number of training examples; and (ii) they are not designed to explicitly learn features in the frequency domain. To overcome these limitations, in this work we introduce Sinc-EEGNet, a lightweight CNN architecture that combines learnable band-pass and depthwise convolutional filters. Experimental results obtained on the publicly available BCI Competition IV Dataset 2a show that our approach outperforms reference methods in terms of classification accuracy.
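The key idea behind learnable band-pass filters can be sketched without a deep learning framework. In sinc-based convolutions (as in the SincNet family this architecture builds on), each kernel is fully determined by just two parameters, a low and a high cutoff frequency, built as the difference of two windowed sinc low-pass filters; in the actual network those cutoffs are trained by backpropagation. Kernel size, sampling rate, and band edges below are illustrative.

```python
import numpy as np

# Sketch of a sinc-parameterised band-pass kernel: only f_low and f_high
# define the filter, instead of kernel_size free convolution weights.
def sinc_bandpass(f_low, f_high, kernel_size=65, fs=250.0):
    n = np.arange(kernel_size) - kernel_size // 2
    def lowpass(fc):  # ideal low-pass at normalised cutoff fc/fs
        return 2 * (fc / fs) * np.sinc(2 * (fc / fs) * n)
    # Difference of two low-pass filters = band-pass; Hamming window
    # reduces the ripple of the truncated ideal response.
    return (lowpass(f_high) - lowpass(f_low)) * np.hamming(kernel_size)

# 8-30 Hz band, covering the mu and beta rhythms used in motor imagery.
h = sinc_bandpass(8.0, 30.0)
freqs = np.fft.rfftfreq(1024, d=1 / 250.0)
response = np.abs(np.fft.rfft(h, 1024))
in_band = response[(freqs > 12) & (freqs < 26)].mean()
out_band = response[freqs > 60].mean()
print("passband/stopband gain:", round(in_band, 3), round(out_band, 5))
```

With two parameters per filter instead of 65, this parameterisation addresses drawback (i) above, and because the parameters are cutoff frequencies it learns features directly in the frequency domain, addressing drawback (ii).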