217 research outputs found

    Subject-Independent Detection of Yes/No Decisions Using EEG Recordings During Motor Imagery Tasks: A Novel Machine-Learning Approach with Fine-Graded EEG Spectrum

    The classification of sensorimotor rhythms in electroencephalography (EEG) signals can enable paralyzed individuals, for example, to make yes/no decisions. In practice, these approaches are hard to implement due to the variability of EEG signals between and within subjects. Therefore, we report a novel and fast machine learning model that meets the need for efficiency and reliability as well as low calibration and training time. Our model extracts finely graded frequency bands from motor imagery EEG data using power spectral density and trains a random forest algorithm for classification. The goal was to create a non-invasive, generalizable method by training the algorithm with subject-independent EEG data. We evaluate our approach on one of the largest currently available public EEG datasets. With a balanced accuracy of 73.94%, our novel algorithm outperforms other state-of-the-art subject-independent algorithms.
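The pipeline described above, band-limited power spectral density features feeding a random forest, can be sketched as follows. The band layout, sampling rate, and synthetic data are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def band_powers(epoch, fs=250, bands=None):
    """Extract band-power features from one EEG epoch (channels x samples)."""
    if bands is None:
        # Hypothetical "finely graded" spectrum: 2-Hz-wide bands from 4 to 40 Hz
        bands = [(lo, lo + 2) for lo in range(4, 40, 2)]
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].mean(axis=1))  # mean power per channel per band
    return np.array(feats)

# Synthetic demo: 40 epochs, 8 channels, 2 s at 250 Hz
rng = np.random.default_rng(0)
X = np.stack([band_powers(rng.standard_normal((8, 500))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # yes/no labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(X.shape, clf.score(X, y))
```

In the paper's setting, X would pool epochs from many subjects, so the forest learns subject-independent spectral patterns rather than per-subject calibration.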

    Graph Neural Network-based EEG Classification: A Survey

    Graph neural networks (GNNs) are increasingly used to classify EEG for tasks such as emotion recognition, motor imagery, and neurological diseases and disorders. A wide range of methods have been proposed to design GNN-based classifiers. Therefore, there is a need for a systematic review and categorisation of these approaches. We exhaustively search the published literature on this topic and derive several categories for comparison. These categories highlight the similarities and differences among the methods. The results suggest a prevalence of spectral graph convolutional layers over spatial ones. Additionally, we identify standard forms of node features, the most popular being the raw EEG signal and differential entropy. Our results summarise the emerging trends in GNN-based approaches for EEG classification. Finally, we discuss several promising research directions, such as exploring the potential of transfer learning methods and appropriate modelling of cross-frequency interactions.
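Differential entropy, one of the two most popular node features identified by the survey, is straightforward to compute per channel and band under a Gaussianity assumption. A minimal sketch; the band edges, filter order, and sampling rate below are illustrative, not taken from any surveyed method:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def differential_entropy(x):
    """DE of an (assumed Gaussian) signal: 0.5 * log(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_node_features(eeg, fs=200, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """Per-channel DE in each band -> GNN node feature matrix (channels x bands)."""
    feats = np.empty((eeg.shape[0], len(bands)))
    for j, (lo, hi) in enumerate(bands):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, eeg, axis=1)
        feats[:, j] = [differential_entropy(ch) for ch in filtered]
    return feats

rng = np.random.default_rng(1)
nodes = de_node_features(rng.standard_normal((62, 4 * 200)))  # 62 channels, 4 s
print(nodes.shape)
```

Each EEG electrode becomes one graph node; the resulting rows would serve as that node's input features to the graph convolutional layers.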

    EEG and ECoG features for Brain Computer Interface in Stroke Rehabilitation

    The ability of a non-invasive Brain-Computer Interface (BCI) to control an exoskeleton has been used for motor rehabilitation in stroke patients and as an assistive device for the paralyzed. However, there is still a need for a more reliable BCI that can control several degrees of freedom (DoFs) and thereby improve rehabilitation results. Decoding different movements of the same limb with high accuracy and reliability is one of the main difficulties of conventional EEG-based BCIs, and it is the challenge we tackled in this thesis. In this PhD thesis, we demonstrated that classifying several functional hand reaching movements of the same limb using EEG is possible with acceptable accuracy. Moreover, we investigated how recalibration affects the classification results. For this reason, we tested multi-class decoding within session, between sessions with recalibration, and between sessions without recalibration. The results showed the strong influence of recalibrating the generated classifier with data from the current session on improving the stability and reliability of the decoding. Moreover, we used a multi-class extension of Filter Bank Common Spatial Patterns (FBCSP) to improve the feature-based decoding accuracy and compared it to our previous study using CSP. Sensorimotor-rhythm-based BCI systems have been used within the same frequency ranges as a way to influence brain plasticity or to control external devices. However, neural oscillations have been shown to synchronize activity according to motor and cognitive functions. Accordingly, cross-frequency interactions produce oscillations of different frequencies in neural networks. In this PhD, we investigated for the first time the existence of cross-frequency coupling during rest and movement using ECoG in chronic stroke patients.
    We found an exaggerated phase-amplitude coupling between the phase of the alpha frequency and the amplitude of the gamma frequency, which can be used as a feature or target for neurofeedback interventions using BCIs. This coupling has also been reported in other neurological disorders affecting motor function (Parkinson's disease and dystonia) but, to date, it had not been investigated in stroke patients. This finding might change the future design of assistive or therapeutic BCI systems for motor restoration in stroke patients.
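Alpha-gamma phase-amplitude coupling of this kind is commonly quantified with a mean-vector-length measure (Canolty-style); the thesis's exact estimator is not specified here. A minimal sketch on synthetic data, with band edges and filter order chosen for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs):
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def pac_mvl(x, fs, phase_band=(8, 12), amp_band=(60, 90)):
    """Mean-vector-length phase-amplitude coupling estimate."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))  # alpha phase
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))        # gamma envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
alpha = np.sin(2 * np.pi * 10 * t)
# Coupled signal: gamma amplitude follows the alpha waveform
coupled = alpha + (1 + alpha) * 0.5 * np.sin(2 * np.pi * 75 * t) + 0.1 * rng.standard_normal(t.size)
uncoupled = alpha + 0.5 * np.sin(2 * np.pi * 75 * t) + 0.1 * rng.standard_normal(t.size)
print(round(pac_mvl(coupled, fs), 3), round(pac_mvl(uncoupled, fs), 3))
```

A larger value for the coupled signal indicates that the gamma envelope is systematically organized by the alpha phase, which is the phenomenon reported as exaggerated in the stroke patients.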

    An embedding for EEG signals learned using a triplet loss

    Neurophysiological time series recordings like the electroencephalogram (EEG) or local field potentials are obtained from multiple sensors. They can be decoded by machine learning models in order to estimate the ongoing brain state of a patient or healthy user. In a brain-computer interface (BCI), this decoded brain state information can be used with minimal time delay to either control an application, e.g., for communication or for rehabilitation after stroke, or to passively monitor the ongoing brain state of the subject, e.g., in a demanding work environment. A specific challenge in such decoding tasks is posed by the small dataset sizes in BCI compared to other domains of machine learning like computer vision or natural language processing. One possibility to tackle classification or regression problems in BCI despite small training datasets is transfer learning, which utilizes data from other sessions, subjects or even datasets to train a model. In this exploratory study, we propose novel domain-specific embeddings for neurophysiological data. Our approach is based on metric learning and builds upon the recently proposed ladder loss. Using embeddings allowed us to benefit both from the good generalisation abilities and robustness of deep learning and from the fast training of classical machine learning models for subject-specific calibration. In offline analyses using EEG data of 14 subjects, we tested the embeddings' feasibility and compared their efficiency with state-of-the-art deep learning models and conventional machine learning pipelines.
    In summary, we propose the use of metric learning to obtain pre-trained embeddings of EEG-BCI data as a means to incorporate domain knowledge and to reach competitive performance on novel subjects with minimal calibration requirements.
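The metric-learning core of such an approach is the triplet objective, which the ladder loss generalizes by imposing ranked margins over several relevance levels. A minimal numpy sketch of the plain triplet loss; the embedding network that produces these vectors is omitted, and the example points are made up:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on the distance gap: max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.5, 0.0])   # same-class embedding, close to the anchor
n = np.array([3.0, 0.0])   # different-class embedding, far away
print(triplet_loss(a, p, n))  # 0.0: margin already satisfied, no gradient signal
print(triplet_loss(a, n, p))  # 3.5: violating triplet incurs a loss
```

Minimizing this loss over many (anchor, positive, negative) triplets of EEG epochs pulls same-class trials together and pushes different-class trials apart, so a simple classifier trained on the embeddings needs little subject-specific calibration data.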

    Enhancement of Robot-Assisted Rehabilitation Outcomes of Post-Stroke Patients Using Movement-Related Cortical Potential

    Post-stroke rehabilitation is essential for stroke survivors to help them regain independence and improve their quality of life. Among various rehabilitation strategies, robot-assisted rehabilitation is an efficient method that is used more and more in clinical practice for motor recovery of post-stroke patients. However, excessive assistance from robotic devices during rehabilitation sessions can make patients perform motor training passively, with minimal outcome. Towards the development of an efficient rehabilitation strategy, it is necessary to ensure the active participation of subjects during training sessions. This thesis uses the electroencephalography (EEG) signal to extract the Movement-Related Cortical Potential (MRCP) pattern as an indicator of the active engagement of stroke patients during rehabilitation training sessions. The MRCP pattern is also utilized in designing an adaptive rehabilitation training strategy that maximizes patients' engagement. This project focuses on the hand motor recovery of post-stroke patients using the AMADEO rehabilitation device (Tyromotion GmbH, Austria), which is specifically developed for patients with finger and hand motor deficits. The variations in brain activity are analyzed by extracting the MRCP pattern from the EEG data acquired during training sessions. Physical improvement in hand motor abilities, in turn, is determined by two methods. The first is clinical tests, namely the Fugl-Meyer Assessment (FMA) and the Motor Assessment Scale (MAS), which include the FMA-wrist, FMA-hand, MAS-hand movements, and MAS-advanced hand movements tests. The second is the measurement of hand-kinematic parameters using the AMADEO assessment tool, which captures hand strength during flexion (force-flexion) and extension (force-extension) as well as Hand Range of Movement (HROM).
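Because the MRCP is a slow cortical potential preceding and accompanying movement, a standard way to estimate it, plausibly the kind of processing meant here though the thesis's exact pipeline is not given, is to low-pass filter the EEG and average epochs time-locked to movement onset. A sketch with an assumed 3 Hz cutoff and illustrative epoch window:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def extract_mrcp(eeg, onsets, fs=250, pre=2.0, post=1.0):
    """Average low-frequency EEG epochs around movement onsets (MRCP estimate).

    eeg: 1-D signal from one channel; onsets: sample indices of movement onset.
    The MRCP is a slow potential, so frequencies above ~3 Hz are removed first.
    """
    sos = butter(4, 3.0, btype="low", fs=fs, output="sos")
    slow = sosfiltfilt(sos, eeg)
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [slow[o - n_pre:o + n_post] for o in onsets
              if o - n_pre >= 0 and o + n_post <= len(slow)]
    return np.mean(epochs, axis=0)  # average over repetitions

rng = np.random.default_rng(3)
fs = 250
sig = rng.standard_normal(60 * fs)            # 60 s of synthetic single-channel EEG
onsets = np.arange(5, 55, 5) * fs             # one movement attempt every 5 s
mrcp = extract_mrcp(sig, onsets, fs)
print(mrcp.shape)
```

The depth of the resulting negative deflection before onset is what such a system could track as an engagement indicator across training sessions.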

    Deep Learning in EEG: Advance of the Last Ten-Year Critical Period

    Deep learning has achieved excellent performance in a wide range of domains, especially in speech recognition and computer vision. Relatively less work has been done for EEG, yet significant progress has been attained in the last decade. Due to the lack of a comprehensive survey with wide topic coverage of deep learning in EEG, we attempt to summarize recent progress to provide an overview, as well as perspectives for future developments. We first briefly mention artifact removal for EEG signals and then introduce the deep learning models that have been utilized in EEG processing and classification. Subsequently, the applications of deep learning in EEG are reviewed by categorizing them into groups such as brain-computer interfaces, disease detection, and emotion recognition. They are followed by the discussion, in which the pros and cons of deep learning are presented and future directions and challenges for deep learning in EEG are proposed. We hope that this paper can serve as a summary of past work on deep learning in EEG and as the beginning of further developments and achievements of EEG studies based on deep learning.

    Brain Music: A generative system for creating symbolic music from affective neural responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer between subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces.
    Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music and in applying artificial intelligence to music generation from brain signals. They offer new perspectives and tools for musical creation and for research in emotional neuroscience. To conduct our experiments, we used public databases such as GigaScience, Affective Music Listening, and the DEAP dataset.
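Both the "Kernel Matching CKA" and "Deep Centered Kernel Alignment" steps build on Centered Kernel Alignment, a similarity measure between two representations of the same trials. A minimal sketch of the linear variant on random placeholder features; the deep, learned variant used in the thesis is not reproduced here:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices with matched rows (trials x features)."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(4)
eeg_feats = rng.standard_normal((100, 32))    # hypothetical EEG-derived features
audio_feats = rng.standard_normal((100, 16))  # hypothetical music features
print(round(linear_cka(eeg_feats, eeg_feats), 3))   # 1.0 for identical representations
print(round(linear_cka(eeg_feats, audio_feats), 3)) # low for unrelated random features
```

CKA ranges from 0 to 1 and is invariant to rotations and isotropic scaling of either feature space, which is why it is a natural objective for matching EEG representations against musical or questionnaire-derived representations of the same trials.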