
    The cost of space independence in P300-BCI spellers.

    Background: Though non-invasive EEG-based brain-computer interfaces (BCIs) have been researched extensively over the last two decades, most designs require control of spatial attention and/or gaze on the part of the user. Methods: In healthy adults, we compared the offline performance of a space-independent P300-based BCI for spelling words using Rapid Serial Visual Presentation (RSVP) to the well-known space-dependent Matrix P300 speller. Results: EEG classifiability with the RSVP speller was as good as with the Matrix speller. While the Matrix speller's performance relied significantly on early, gaze-dependent Visual Evoked Potentials (VEPs), the RSVP speller depended only on the space-independent P300b. However, true spatial independence came at a cost: the RSVP speller was less efficient in terms of spelling speed. Conclusions: The advantage of space independence in the RSVP speller was concomitant with a marked reduction in spelling efficiency. Nevertheless, with key improvements to the RSVP design, truly space-independent BCIs could approach efficiencies on par with the Matrix speller. With sufficiently high letter spelling rates fused with predictive language modelling, they would be viable for applications with patients unable to direct overt visual gaze or covert attentional focus.
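    The efficiency trade-off described here is commonly quantified with the Wolpaw information transfer rate (ITR). The sketch below is a minimal illustration, not taken from the paper: the accuracies and selection rates are hypothetical, and it only shows how a slower presentation scheme (RSVP-like) yields a lower ITR than a faster one (Matrix-like) even at identical classification accuracy.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate (bits/min) via the standard Wolpaw formula.

    n_classes: number of selectable symbols (e.g. 36 for a 6x6 matrix).
    accuracy: probability of a correct selection, in (1/n_classes, 1].
    selections_per_min: selections the speller completes per minute.
    """
    if accuracy >= 1.0:
        bits = math.log2(n_classes)
    else:
        bits = (math.log2(n_classes)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))
    return bits * selections_per_min

# Hypothetical numbers: equal accuracy, different selection rates.
print(wolpaw_itr(36, 0.90, 4.0))  # Matrix-like: ~16.8 bits/min
print(wolpaw_itr(36, 0.90, 1.5))  # RSVP-like:   ~6.3 bits/min
```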

    Critical issues in state-of-the-art brain–computer interface signal processing

    This paper reviews several critical issues facing signal processing for brain–computer interfaces (BCIs) and suggests several recent approaches that should be further examined. The topics were selected based on discussions held during the 4th International BCI Meeting at a workshop organized to review and evaluate the current state of, and issues relevant to, feature extraction and translation of field potentials for BCIs. The topics presented in this paper include the relationship between electroencephalography and electrocorticography, novel features for performance prediction, time-embedded signal representations, phase information, signal non-stationarity, and unsupervised adaptation.
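    To make one of the listed topics concrete, here is a minimal sketch of a time-embedded signal representation: each channel is stacked with lagged copies of itself, so that spatial filters learned downstream also see temporal context. The shapes and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def time_embed(x: np.ndarray, order: int, lag: int) -> np.ndarray:
    """Time-delay embedding of a multichannel signal.

    x: array of shape (n_channels, n_samples).
    Returns (n_channels * order, n_samples - (order - 1) * lag):
    each channel is stacked with `order` lagged copies of itself,
    so temporal context appears as extra "channels".
    """
    n_ch, n_s = x.shape
    n_out = n_s - (order - 1) * lag
    rows = [x[:, i * lag : i * lag + n_out] for i in range(order)]
    return np.concatenate(rows, axis=0)

# Example: 8-channel window, 2 s at 250 Hz, embedding order 3, lag 2 samples.
window = np.random.randn(8, 500)
embedded = time_embed(window, order=3, lag=2)
print(embedded.shape)  # (24, 496)
```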

    Reading Your Own Mind: Dynamic Visualization of Real-Time Neural Signals

    Brain-computer interface (BCI) systems, which allow humans to control external devices directly from brain activity, are becoming increasingly popular due to dramatic advances in the ability to both capture and interpret brain signals. Further advancing BCI systems is a compelling goal, both because of the neurophysiological insights gained from deriving a control signal from brain activity and because of the potential for direct brain control of external devices in applications such as brain injury recovery, human prosthetics, and robotics. The dynamic and adaptive nature of the brain makes it difficult to create classifiers or control systems that remain effective over time. However, it is precisely these qualities that offer the potential to use feedback to build on simple features and create complex control features that are robust over time. This dissertation presents work that addresses these opportunities for the specific case of electrocorticography (ECoG) recordings from clinical epilepsy patients. First, cued patient tasks were used to explore the predictive nature of both local and global features of the ECoG signal. Second, an algorithm was developed and tested for estimating the most informative features from naive observations of the ECoG signal. Third, a software system was built and tested that facilitates real-time visualization of ECoG signals for patients and allows ECoG epilepsy patients to engage in an interactive BCI control-feature screening process.
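    As a rough illustration of the kind of control-feature screening described above (not the dissertation's actual system), the sketch below ranks ECoG channels by task-versus-rest band-power modulation, a common choice of ECoG control feature. The data, sampling rate, and frequency band are hypothetical.

```python
import numpy as np
from scipy.signal import welch

def band_power(x: np.ndarray, fs: float, band: tuple) -> np.ndarray:
    """Mean spectral power of each channel within a frequency band.

    x: (n_channels, n_samples) window of ECoG; fs: sampling rate in Hz.
    """
    freqs, psd = welch(x, fs=fs, nperseg=min(x.shape[1], 256), axis=-1)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=-1)

# Hypothetical screening: rank channels by high-gamma modulation
# between a task window and a rest window.
fs = 1000.0
rest = np.random.randn(32, 1000)
task = np.random.randn(32, 1000)
modulation = band_power(task, fs, (70, 110)) / band_power(rest, fs, (70, 110))
best_channels = np.argsort(modulation)[::-1][:5]
print(best_channels)  # candidate control channels for interactive screening
```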

    Brain Music: A generative system for creating symbolic music from affective neural responses

    This master's thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer between subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature-extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in the application of artificial intelligence to music generation from brain signals. They offer new perspectives and tools for musical creation and for research in affective neuroscience. To conduct our experiments, we used public databases such as GigaScience, Affective Music Listening, and the DEAP dataset.
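    The thesis repeatedly leans on Centered Kernel Alignment (CKA) to match EEG features with features of a second modality. Below is a minimal sketch of the underlying similarity measure in its standard linear form; it is not the thesis's "Kernel Matching CKA" or "Deep Centered Kernel Alignment" pipeline, and the feature dimensions are hypothetical.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representations.

    X: (n_samples, d1), e.g. EEG feature vectors; Y: (n_samples, d2),
    e.g. features of the paired stimulus (questionnaire items, audio
    descriptors). Returns a similarity in [0, 1].
    """
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Toy check: a representation is maximally aligned with itself.
Z = np.random.randn(100, 16)
print(linear_cka(Z, Z))                        # 1.0
print(linear_cka(Z, np.random.randn(100, 8)))  # near 0 for unrelated features
```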

    Lightweight Machine Learning with Brain Signals

    Electroencephalography (EEG) signals are gaining popularity in brain-computer interface (BCI) systems and neural engineering applications thanks to their portability and availability. Inevitably, sensory electrodes across the entire scalp collect signals irrelevant to the particular BCI task, increasing the risk of overfitting in machine learning-based predictions. While this issue is commonly addressed by scaling up EEG datasets and handcrafting complex predictive models, doing so also increases computation costs. Moreover, a model trained on one set of subjects cannot easily be adapted to another due to inter-subject variability, which creates even higher overfitting risks. Meanwhile, although previous studies have used either convolutional neural networks (CNNs) or graph neural networks (GNNs) to determine spatial correlations between brain regions, they fail to capture brain functional connectivity beyond physical proximity. To this end, we propose 1) removing task-irrelevant noise instead of merely complicating models; 2) extracting subject-invariant discriminative EEG encodings by taking functional connectivity into account; 3) navigating and training deep learning models with the most critical EEG channels; and 4) detecting the EEG segments most similar to the target subject, to reduce both computation cost and inter-subject variability. Specifically, we construct a task-adaptive graph representation of the brain network based on topological functional connectivity rather than distance-based connections. Further, non-contributory EEG channels are excluded by selecting only the functional regions relevant to the corresponding intention. Lastly, contributory EEG segments are detected with several similarity estimation metrics; we then evaluate and train our proposed framework on the detected EEG segments to compare the performance of the different metrics in EEG BCI tasks. We empirically show that our proposed approach, SIFT-EEG, outperforms the state of the art, with around 4% and 7% improvements over CNN-based and GNN-based models on motor imagery predictions. Also, task-adaptive channel selection achieves similar predictive performance with only 20% of the raw EEG data. Moreover, the best-performing metric can achieve a high level of accuracy with less than 9% of the training data, suggesting a possible shift in direction for future work beyond simply scaling up the model.
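    As a hedged illustration of a functional (rather than distance-based) brain graph with channel pruning, not the SIFT-EEG implementation itself, the sketch below connects each channel to its most correlated peers and keeps only the most strongly connected 20% of channels. The data, neighbor count, and retention ratio are hypothetical.

```python
import numpy as np

def functional_adjacency(eeg: np.ndarray, top_k: int = 4) -> np.ndarray:
    """Task-adaptive graph from functional (not spatial) connectivity.

    eeg: (n_channels, n_samples). Edges connect each channel to its
    top_k most correlated peers, wherever they sit on the scalp.
    """
    corr = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(corr, 0.0)
    adj = np.zeros_like(corr)
    for ch in range(corr.shape[0]):
        neighbors = np.argsort(corr[ch])[::-1][:top_k]
        adj[ch, neighbors] = corr[ch, neighbors]
    return np.maximum(adj, adj.T)  # symmetrize

# Keep only channels participating in strong functional edges,
# mimicking the "exclude non-contributory channels" idea.
eeg = np.random.randn(64, 1000)
adj = functional_adjacency(eeg)
strength = adj.sum(axis=1)
kept = np.argsort(strength)[::-1][: int(0.2 * len(strength))]  # ~20% of channels
print(kept.shape)
```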

    Motor Imagery Decoding Using Ensemble Curriculum Learning and Collaborative Training

    Objective: In this work, we study the problem of cross-subject motor imagery (MI) decoding from electroencephalography (EEG) data. Multi-subject EEG datasets present several kinds of domain shifts due to various inter-individual differences (e.g. brain anatomy, personality and cognitive profile). These domain shifts render multi-subject training a challenging task and also impede robust cross-subject generalization. Method: We propose a two-stage model ensemble architecture, built with multiple feature extractors (first stage) and a shared classifier (second stage), which we train end-to-end with two loss terms. The first loss applies curriculum learning, forcing each feature extractor to specialize to a subset of the training subjects and promoting feature diversity. The second loss is an intra-ensemble distillation objective that allows collaborative exchange of knowledge between the models of the ensemble. Results: We compare our method against several state-of-the-art techniques, conducting subject-independent experiments on two large MI datasets, namely Physionet and OpenBMI. Our algorithm outperforms all of these methods in both 5-fold cross-validation and leave-one-subject-out evaluation settings, using a substantially lower number of trainable parameters. Conclusion: We demonstrate that our model ensembling approach, combining the strengths of curriculum learning and collaborative training, leads to high learning capacity and robust performance. Significance: Our work addresses the issue of domain shifts in multi-subject EEG datasets, paving the way for calibration-free BCI systems.
    Code: https://github.com/gzoumpourlis/Ensemble-M
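    To illustrate how the two loss terms could interact in such an ensemble, here is a minimal PyTorch sketch under stated assumptions: a per-subject weighting matrix stands in for the curriculum loss, and pairwise KL divergence between ensemble members stands in for intra-ensemble distillation. Shapes, weights, and architecture are illustrative, not the authors' code (see their repository above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_models, n_classes, n_features = 3, 4, 32

# First stage: multiple feature extractors. Second stage: shared classifier.
extractors = nn.ModuleList(
    [nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, n_features), nn.ReLU())
     for _ in range(n_models)]
)
classifier = nn.Linear(n_features, n_classes)
opt = torch.optim.Adam(
    list(extractors.parameters()) + list(classifier.parameters()), lr=1e-3
)

def step(x, y, subj, subject_weights):
    """subject_weights: (n_models, n_subjects) curriculum matrix that makes
    each extractor focus on its own subset of training subjects."""
    logits = [classifier(f(x)) for f in extractors]  # one prediction per model
    ce = torch.stack([F.cross_entropy(lg, y, reduction="none") for lg in logits])
    curriculum = (ce * subject_weights[:, subj]).mean()  # subject-weighted CE
    distill = 0.0
    for i in range(n_models):
        for j in range(n_models):
            if i != j:  # ensemble members teach each other
                distill = distill + F.kl_div(
                    F.log_softmax(logits[i], dim=-1),
                    F.softmax(logits[j], dim=-1).detach(),
                    reduction="batchmean",
                )
    loss = curriculum + 0.1 * distill
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch: 8 trials of 64 channels x 100 samples, from 2 subjects.
x = torch.randn(8, 64, 100)
y = torch.randint(0, n_classes, (8,))
subj = torch.randint(0, 2, (8,))
weights = torch.tensor([[1.0, 0.2], [0.2, 1.0], [0.6, 0.6]])
print(step(x, y, subj, weights))
```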