
    Shallow convolutional networks excel at classifying motor imagery EEG in BCI applications

    Many studies applying brain-computer interfaces (BCIs) based on motor imagery (MI) tasks for rehabilitation have demonstrated the important role of detecting event-related desynchronization (ERD) in recognizing the user's motor intention. The development of MI-based BCI approaches that require no, or very few, session-by-session calibration stages across different days or weeks remains an open and emerging research area. In this work, a new scheme is proposed that applies convolutional neural networks (CNNs) to MI classification, using an end-to-end shallow architecture with two convolutional layers for temporal and spatial feature extraction. We hypothesize that a BCI designed to capture event-related desynchronization/synchronization (ERD/ERS) at the CNN input, combined with an adequate network design, may enhance MI classification with fewer calibration stages. The proposed system, using the same architecture throughout, was tested on three public datasets in multiple experiments covering both subject-specific and non-subject-specific training, and obtained results comparable and in some cases superior to the state of the art. On subjects whose EEG data were never used in training, the scheme also achieved promising results relative to existing non-subject-specific BCIs, a step toward facilitating clinical applications.
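
    As a concrete illustration of the two-layer temporal/spatial design the abstract describes, here is a minimal sketch in the spirit of Schirrmeister et al.'s Shallow ConvNet. The kernel sizes, filter counts, pooling parameters, and the ShallowMI name are illustrative assumptions, not the paper's exact configuration; the square, average-pool, log chain is the usual way such networks approximate the band-power (ERD/ERS) features the authors target.

    import torch
    import torch.nn as nn

    class ShallowMI(nn.Module):
        def __init__(self, n_eeg_channels=22, n_samples=1000, n_classes=4):
            super().__init__()
            # Temporal convolution: slides along the time axis of each channel.
            self.temporal = nn.Conv2d(1, 40, kernel_size=(1, 25))
            # Spatial convolution: mixes all EEG channels at each time step.
            self.spatial = nn.Conv2d(40, 40, kernel_size=(n_eeg_channels, 1), bias=False)
            self.bn = nn.BatchNorm2d(40)
            # Square -> average pool -> log approximates log band power (ERD/ERS).
            self.pool = nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15))
            self.drop = nn.Dropout(0.5)
            with torch.no_grad():
                n_feats = self._features(torch.zeros(1, 1, n_eeg_channels, n_samples)).shape[1]
            self.classifier = nn.Linear(n_feats, n_classes)

        def _features(self, x):
            x = self.spatial(self.temporal(x))
            x = torch.log(torch.clamp(self.pool(self.bn(x) ** 2), min=1e-6))
            return self.drop(x).flatten(1)

        def forward(self, x):  # x: (batch, 1, eeg_channels, time_samples)
            return self.classifier(self._features(x))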

    Online multiclass EEG feature extraction and recognition using modified convolutional neural network method

    Many techniques have been introduced to improve both brain-computer interface (BCI) steps: feature extraction and classification. One emerging trend in this field is the implementation of deep learning algorithms, yet only a limited number of studies have investigated deep learning for electroencephalography (EEG) feature extraction and classification. This work applies deep learning to both stages. The paper proposes a modified convolutional neural network (CNN) feature extractor-classifier algorithm to recognize four different EEG motor imagery (MI) classes. In addition, a four-class linear discriminant analysis (LDA) classifier model was built and compared to the proposed CNN model. The approach achieved 92.8% accuracy on one four-class EEG MI set and 85.7% on another. The proposed CNN model outperformed multi-class linear discriminant analysis with accuracy gains of 28.6% and 17.9% on the two MI sets, respectively. Moreover, majority voting over five repetitions added 15% and 17.2% accuracy on the two EEG sets compared with single trials, confirming that increasing the number of trials for the same MI gesture improves recognition accuracy.
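
    The five-repetition majority vote is simple to state in code. A minimal sketch, with hypothetical names (majority_vote, single_trial_preds) that are not from the paper:

    from collections import Counter

    def majority_vote(single_trial_preds):
        """Return the class predicted most often across repetitions of the
        same motor-imagery gesture (ties go to the class that reached the
        top count first)."""
        return Counter(single_trial_preds).most_common(1)[0][0]

    # Five single-trial CNN predictions for one gesture:
    print(majority_vote([2, 2, 3, 2, 0]))  # -> 2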

    Towards Real-World BCI: CCSPNet, A Compact Subject-Independent Motor Imagery Framework

    A conventional subject-dependent (SD) brain-computer interface (BCI) requires a complete data-gathering, training, and calibration phase for each user before it can be used. In recent years, a number of subject-independent (SI) BCIs have been developed, but several problems prevent their use in real-world applications, most importantly weaker performance than the subject-dependent (SD) approach and relatively large models requiring high computational power. A practical real-world BCI would therefore benefit greatly from a compact, low-power, subject-independent framework that is ready to use as soon as the user puts it on. Toward this goal, we propose a novel subject-independent BCI framework named CCSPNet (Convolutional Common Spatial Pattern Network), trained on the motor imagery (MI) paradigm of a large-scale electroencephalography (EEG) database consisting of 21,600 trials from 54 subjects performing two-class hand-movement MI tasks. The proposed framework applies a wavelet kernel convolutional neural network (WKCNN) and a temporal convolutional neural network (TCNN) to represent and extract the diverse spectral features of EEG signals. The outputs of the convolutional layers go through a common spatial pattern (CSP) algorithm for spatial feature extraction. The number of CSP features is reduced by a dense neural network, and the final class label is determined by a linear discriminant analysis (LDA) classifier. The evaluation results show that a low-power, compact BCI can achieve both SD and SI performance comparable to complex and computationally expensive models.
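
    The CSP stage in this pipeline is the classical two-class algorithm. The sketch below is a textbook CSP on trial covariance matrices, not CCSPNet's own implementation; the function names and the (trials, channels, time) data layout are assumptions.

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_pairs=3):
        """trials_a, trials_b: (n_trials, channels, time) arrays, one per
        class. Returns (2 * n_pairs, channels) spatial filters."""
        def mean_cov(trials):
            covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
            return np.mean(covs, axis=0)
        Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
        # Generalized eigenproblem Ca w = lambda (Ca + Cb) w: the extreme
        # eigenvalues yield filters whose output variance is maximal for
        # one class and minimal for the other.
        vals, vecs = eigh(Ca, Ca + Cb)
        order = np.argsort(vals)
        picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
        return vecs[:, picks].T

    def csp_features(trial, W):
        """Log-variance features of the spatially filtered trial, the usual
        input to an LDA classifier."""
        z = W @ trial                 # (2 * n_pairs, time)
        var = z.var(axis=1)
        return np.log(var / var.sum())

    In CCSPNet the inputs to this step are convolutional feature maps rather than raw EEG, but the algebra is the same.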

    Brain informed transfer learning for categorizing construction hazards

    A transfer learning paradigm is proposed for "knowledge" transfer between the human brain and a convolutional neural network (CNN) for a construction hazard categorization task. Participants' brain activity is recorded with electroencephalogram (EEG) measurements while they view the same images (target dataset) as the CNN. The CNN is pretrained on the EEG data and then fine-tuned on the construction scene images. The results reveal that the EEG-pretrained CNN achieves 9% higher accuracy on a three-class classification task than a network with the same architecture but randomly initialized parameters. Brain activity from the left frontal cortex yields the highest performance gains, indicating high-level cognitive processing during hazard recognition. This work is a step toward improving machine learning algorithms by learning from human brain signals recorded via a commercially available brain-computer interface, an approach that keeps the human in the loop and could support more generalized visual recognition systems.
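
    A hedged sketch of the pretrain-then-fine-tune flow described above: a small CNN trunk is pretrained on EEG windows (treated as single-channel 2-D arrays), then the same trunk is reused with a fresh head and fine-tuned on hazard images reduced to one channel. The architecture, the grayscale simplification, and the lower trunk learning rate are all illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    def make_trunk():
        # Shared convolutional trunk; expects (batch, 1, H, W) inputs,
        # i.e. EEG windows or grayscale construction-scene images.
        return nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )

    trunk = make_trunk()
    eeg_head = nn.Linear(32 * 4 * 4, 3)        # EEG pretraining head
    pretrain_model = nn.Sequential(trunk, eeg_head)
    # ... pretrain pretrain_model on EEG windows here ...

    hazard_head = nn.Linear(32 * 4 * 4, 3)     # three hazard categories
    model = nn.Sequential(trunk, hazard_head)  # EEG-pretrained trunk kept
    # Fine-tune the pretrained trunk more gently than the fresh head.
    optimizer = torch.optim.Adam([
        {"params": trunk.parameters(), "lr": 1e-4},
        {"params": hazard_head.parameters(), "lr": 1e-3},
    ])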

    Brain Music: A generative system for creating symbolic music from affective neural responses

    This master's thesis presents a multimodal deep learning methodology that combines an emotion classification model with a music generator to create music from electroencephalography (EEG) signals, delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer among subjects is introduced to enhance the performance of individuals who struggle with motor imagery-based brain-computer interfaces. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method, and employs a Deep&Wide neural network for motor imagery classification; the results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose a technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music, generating musical features from emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information, which makes it a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music; it encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment. Together, these results advance the understanding of the relationship between emotions and music and the application of artificial intelligence to music generation from brain signals, offering new perspectives and tools for musical creation and research in affective neuroscience. Our experiments used the public GigaScience, Affective Music Listening, and DEAP datasets.
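
    Both the "Kernel Matching CKA" and "Deep Centered Kernel Alignment" steps rest on the centered kernel alignment similarity. The sketch below is the standard linear CKA between two paired feature matrices, not the thesis's full matching pipeline; the function name is hypothetical.

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between feature matrices X (n, d1) and Y (n, d2),
        rows paired by sample. Returns a similarity in [0, 1]."""
        X = X - X.mean(axis=0)        # center each feature dimension
        Y = Y - Y.mean(axis=0)
        hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
        return hsic / (np.linalg.norm(X.T @ X, "fro")
                       * np.linalg.norm(Y.T @ Y, "fro"))

    This is the kind of comparison a kernel-matching step needs, for example to align EEG embeddings with questionnaire-derived features before transferring knowledge between subjects.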

    EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces

    In recent years, deep learning (DL) has contributed significantly to the improvement of motor-imagery brain-machine interfaces (MI-BMIs) based on electroencephalography (EEG). While achieving high classification accuracy, DL models have also grown in size, requiring vast amounts of memory and computational resources. This poses a major challenge to an embedded BMI solution that guarantees user privacy, reduced latency, and low power consumption by processing the data locally. In this paper, we propose EEG-TCNet, a novel temporal convolutional network (TCN) that achieves outstanding accuracy while requiring few trainable parameters. Its low memory footprint and low computational complexity for inference make it suitable for embedded classification on resource-limited devices at the edge. Experimental results on the BCI Competition IV-2a dataset show that EEG-TCNet achieves 77.35% classification accuracy in 4-class MI. By finding the optimal network hyperparameters per subject, we further improve the accuracy to 83.84%. Finally, we demonstrate the versatility of EEG-TCNet on the Mother of All BCI Benchmarks (MOABB), a large-scale test benchmark containing 12 different EEG datasets with MI experiments. The results indicate that EEG-TCNet successfully generalizes beyond a single dataset, outperforming the current state-of-the-art (SoA) on MOABB by a meta-effect of 0.25.
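
    A sketch of one TCN residual block of the kind EEG-TCNet stacks after its convolutional frontend: two dilated causal 1-D convolutions plus a skip connection. The filter count, kernel size, activation, and dropout rate here are illustrative assumptions, not the published configuration.

    import torch.nn as nn
    import torch.nn.functional as F

    class TCNBlock(nn.Module):
        def __init__(self, channels=12, kernel_size=4, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation  # left-pad => causal conv
            self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.act = nn.ELU()
            self.drop = nn.Dropout(0.3)

        def _causal(self, conv, x):
            return conv(F.pad(x, (self.pad, 0)))     # pad the past only

        def forward(self, x):                        # x: (batch, channels, time)
            y = self.drop(self.act(self._causal(self.conv1, x)))
            y = self.drop(self.act(self._causal(self.conv2, y)))
            return self.act(x + y)                   # residual connection

    Stacking such blocks with dilations 1, 2, 4, ... grows the receptive field exponentially at a small parameter cost, which is what keeps the memory footprint suitable for edge devices.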