
    A Novel Synergistic Model Fusing Electroencephalography and Functional Magnetic Resonance Imaging for Modeling Brain Activities

    Study of the human brain is an important and very active area of research. Unraveling the way the human brain works would allow us to better understand, predict, and prevent brain-related diseases that affect a significant part of the population. Studying the brain's response to certain input stimuli can help us determine the brain areas involved and understand the mechanisms that characterize behavioral and psychological traits. In this research work, two methods used for monitoring brain activity, Electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI), have been studied for their fusion, in an attempt to bring together the advantages of each. In particular, this work has focused on the analysis of a specific type of EEG and fMRI recordings that are related to certain events and capture the brain's response under specific experimental conditions. Using spatial features of the EEG, we can describe the temporal evolution of the electrical field recorded on the scalp. This work introduces the use of Hidden Markov Models (HMMs) for modeling these EEG dynamics. This novel approach is applied to the discrimination of normal and progressive Mild Cognitive Impairment patients, with significant results. EEG alone cannot provide the spatial localization needed to uncover and understand the neural mechanisms and processes of the human brain. fMRI provides the means of localizing functional activity, without, however, providing the timing details of these activations. Although at first glance it is apparent that the strengths of these two modalities complement each other, fusing the information provided by each one is a challenging task. A novel methodology for fusing EEG spatiotemporal features and fMRI features, based on Canonical Partial Least Squares (CPLS), is presented in this work.
An HMM modeling approach is used to derive a novel feature-based representation of the EEG signal that characterizes the topographic information of the EEG. We use the HMM to project the EEG data into the Fisher score space and use the Fisher score to describe the dynamics of the EEG topography sequence. The correspondence between this new feature and the fMRI is studied using CPLS. This methodology is applied to extract features for the classification of a visual task. The results indicate that the proposed methodology is able to capture task-related activations that can be used for the classification of mental tasks. Extensions of the proposed models are examined, along with future research directions and applications.
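The HMM likelihood that the Fisher-score representation differentiates is computed with the standard forward recursion over a sequence of discrete EEG topography (microstate) labels. A minimal numpy sketch of that recursion; the two-state, two-symbol model and all probabilities here are illustrative assumptions, not parameters from the thesis:

```python
import numpy as np
from itertools import product

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete HMM.
    obs: sequence of symbol indices; pi: initial state probs (K,);
    A: state transition matrix (K, K); B: emission matrix (K, M)."""
    alpha = pi * B[:, obs[0]]           # initialise forward variable
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate through A, then emit o
    return np.log(alpha.sum())

# Illustrative 2-state, 2-symbol model (all numbers invented).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.9, 0.1],
               [0.3, 0.7]])

# Sanity check: probabilities of all length-3 label sequences sum to 1.
total = sum(np.exp(hmm_log_likelihood(o, pi, A, B))
            for o in product(range(2), repeat=3))
```

The Fisher score used in the thesis is the gradient of this log-likelihood with respect to the model parameters; the forward recursion above is the common building block on which that gradient is evaluated.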

    Neurolinguistics Research Advancing Development of a Direct-Speech Brain-Computer Interface

    A direct-speech brain-computer interface (DS-BCI) acquires neural signals corresponding to imagined speech, then processes and decodes these signals to produce a linguistic output in the form of phonemes, words, or sentences. Recent research has shown the potential of neurolinguistics to enhance decoding approaches to imagined speech with the inclusion of semantics and phonology in experimental procedures. As neurolinguistics research findings are beginning to be incorporated within the scope of DS-BCI research, it is our view that a thorough understanding of imagined speech, and its relationship with overt speech, must be considered an integral feature of research in this field. With a focus on imagined speech, we provide a review of the most important neurolinguistics research informing the field of DS-BCI and suggest how this research may be utilized to improve current experimental protocols and decoding techniques. Our review of the literature supports a cross-disciplinary approach to DS-BCI research, in which neurolinguistics concepts and methods are utilized to aid development of a naturalistic mode of communication. Subject Areas: Cognitive Neuroscience, Computer Science, Hardware Interface.

    Event-related potential studies of somatosensory detection and discrimination.

    This thesis contains five studies: the first examining methodology issues and four subsequent ones examining somatosensory cortical processing using event-related potentials (ERPs). The methodology section consists of 2 experiments. The first compared the latency variability in stimulus presentation between 3 computers. The second monitored the applied force of the vibration stimuli under experimental conditions to ensure that the chosen method for somatosensory stimulus presentation was consistent and reliable. The next study involved 3 experiments that aimed to characterize the mid- to long-latency somatosensory event-related potentials to different-duration vibratory stimuli using both intracranial and scalp recording. The results revealed differences in the waveform morphology of the responses to the different stimulus durations, as well as on-off responses that had not previously been noted in the somatosensory system. The third and fourth studies each consisted of 2 experiments. These examined the discrimination between vibratory stimuli using an odd-ball paradigm to try to obtain a possible 'mismatch' response, similar to that reported in the auditory system. The aim of this study was to clarify some of the discrepancies in the literature surrounding the somatosensory mismatch response and to further characterize this response. The results from intracranial and scalp ERP recordings showed a two-component, negative-positive mismatch response over the anterior parietal region and a negative component over the superior pre-frontal region in response to changes in both frequency and duration. The negative component over the frontal region had never before been described. The last study explored possible interactions between somatosensory and auditory cortical potentials in response to spatially and temporally synchronized auditory and vibratory stimuli.
The results showed clear interactions in the cortical responses to combined auditory and somatosensory stimuli in both standard and mismatch conditions.
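The oddball analyses above rest on the standard difference-wave construction: average the deviant epochs, average the standard epochs, and subtract. A sketch on synthetic single-channel data; the epoch counts, epoch length, and injected deflection are all invented for illustration, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_std, n_dev, n_samples = 80, 20, 100   # hypothetical epoch counts / length

# Simulated single-channel epochs: deviants carry an extra negativity
# around sample 50 (purely synthetic data for illustration).
standards = rng.normal(0.0, 1.0, (n_std, n_samples))
deviants  = rng.normal(0.0, 1.0, (n_dev, n_samples))
deviants[:, 45:55] -= 2.0               # injected "mismatch" deflection

# The mismatch response is estimated as the deviant-minus-standard
# difference of the averaged ERPs.
erp_standard = standards.mean(axis=0)
erp_deviant  = deviants.mean(axis=0)
mismatch = erp_deviant - erp_standard
```

Averaging across epochs suppresses the trial-to-trial noise, so the injected deflection survives in the difference wave while the shared stimulus-locked activity cancels out.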

    Electroencephalography brain computer interface using an asynchronous protocol

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science. October 31, 2016. Brain Computer Interface (BCI) technology is a promising new channel for communication between humans and computers, and consequently other humans. This technology has the potential to form the basis for a paradigm shift in communication for people with disabilities or neuro-degenerative ailments. The objective of this work is to create an asynchronous BCI that is based on a commercial-grade electroencephalography (EEG) sensor. The BCI is intended to allow a user of possibly low income means to issue control signals to a computer by using modulated cortical activation patterns as a control signal. The user achieves this modulation by performing a mental task, such as imagining waving the left arm, until the computer performs the action intended by the user. In our work, we make use of the Emotiv EPOC headset to perform the EEG measurements. We validate our models by assessing their performance when the experimental data is collected using clinical-grade EEG technology. We make use of a publicly available data-set in the validation phase. We apply signal processing concepts to extract the power spectrum of each electrode from the EEG time-series data. In particular, we make use of the fast Fourier transform (FFT). Specific bands in the power spectra are used to construct a vector that represents an abstract state the brain is in at that particular moment. The selected bands are motivated by insights from neuroscience. The state vector is used in conjunction with a model that performs classification. The exact purpose of the model is to associate the input data with an abstract classification result, which can then be used to select the appropriate set of instructions to be executed by the computer. In our work, we make use of probabilistic graphical models to perform this association.
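The feature pipeline described above (FFT, band selection, state vector) can be sketched as follows. The sampling rate, band edges, and synthetic test signal are assumptions for illustration and may differ from the dissertation's exact choices:

```python
import numpy as np

fs = 128                                  # assumed sampling rate (Hz)
t = np.arange(fs * 4) / fs                # 4 s of single-channel EEG
# Synthetic signal with a dominant 10 Hz (alpha-band) component plus noise.
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# Power spectrum via the FFT, as in the dissertation's feature pipeline.
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / x.size

# Band powers form the abstract "state vector" (band edges here are the
# conventional EEG bands; the dissertation's exact selection may differ).
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
state_vector = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in bands.values()])
```

Per electrode, one such vector is produced per analysis window; the concatenated vectors are what the probabilistic graphical models classify.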
The performance of two probabilistic graphical models is evaluated in this work. As a preliminary step, we perform classification on pre-segmented data and we assess the performance of the hidden conditional random fields (HCRF) model. The pre-segmented data has a trial structure such that each data file contains the power spectra measurements associated with only one mental task. The objective of the assessment is to determine how well the HCRF models the spatio-spectral and temporal relationships in the EEG data when mental tasks are performed in the aforementioned manner. In other words, the HCRF is to model the internal dynamics of the data corresponding to the mental task. The performance of the HCRF is assessed over three and four classes. We find that the HCRF can model the internal structure of the data corresponding to different mental tasks. As the final step, we perform classification on continuous data that is not segmented and assess the performance of the latent dynamic conditional random fields (LDCRF). The LDCRF is used to perform sequence segmentation and labeling at each time-step so as to allow the program to determine which action should be taken at that moment. The sequence segmentation and labeling is the primary capability that we require in order to facilitate an asynchronous BCI protocol. The continuous data has a trial structure such that each data file contains the power spectra measurements associated with three different mental tasks. The mental tasks are randomly selected at 15 second intervals. The objective of the assessment is to determine how well the LDCRF models the spatio-spectral and temporal relationships in the EEG data, both within each mental task and in the transitions between mental tasks. The performance of the LDCRF is assessed over three classes for both the publicly available data and the data we obtained using the Emotiv EPOC headset.
We find that the LDCRF produces a true positive classification rate of 82.31%, averaged over three subjects, on the validation data from the publicly available data-set. On the data collected using the Emotiv EPOC, we find that the LDCRF produces a true positive classification rate of 42.55%, averaged over two subjects. In the two assessments involving the LDCRF, a random classification strategy would produce a true positive classification rate of 33.34%. It is thus clear that our classification strategy provides above-random performance on the two groups of data-sets. We conclude that our results indicate that creating low-cost EEG-based BCI technology holds potential for future development. However, as discussed in the final chapter, further work on both the software and low-cost hardware aspects is required in order to improve the performance of the technology as it relates to the low-cost context.

    Leveraging Artificial Intelligence to Improve EEG-fNIRS Data Analysis

    Functional near-infrared spectroscopy (fNIRS) has emerged as a neuroimaging technique that allows for non-invasive and long-term monitoring of cortical hemodynamics.
Multimodal neuroimaging technologies in clinical settings allow for the investigation of acute and chronic neurological diseases. In this work, we focus on epilepsy—a chronic disorder of the central nervous system affecting almost 50 million people world-wide predisposing affected individuals to recurrent seizures. Seizures are transient aberrations in the brain's electrical activity that lead to disruptive physical symptoms such as acute or chronic changes in cognitive skills, sensory hallucinations, or whole-body convulsions. Approximately a third of epileptic patients are recalcitrant to pharmacological treatment and these intractable seizures pose a serious risk for injury and decrease overall quality of life. In this work, we study 1) the utility of hemodynamic information derived from fNIRS signals in a seizure detection task and the benefit they provide in a multimodal setting as compared to electroencephalographic (EEG) signals alone, and 2) the ability of neural signals, derived from EEG, to predict hemodynamics in the brain in an effort to better understand the epileptic brain. Based on retrospective EEG-fNIRS data collected from 40 epileptic patients and utilizing novel deep learning models, the first study in this thesis suggests that fNIRS signals offer increased sensitivity and specificity metrics for seizure detection when compared to EEG alone. Model validation was performed using the documented open source and well referenced CHBMIT dataset before using our in-house multimodal EEG-fNIRS dataset. The results from this study demonstrated that fNIRS improves seizure detection as compared to EEG alone and motivated the subsequent experiments which determined the predictive capacity of an in-house developed deep learning model to decode hemodynamic resting state signals from full spectrum and specific frequency band encoded neural resting state signals (seizure free signals). 
These results suggest that a multimodal autoencoder can learn multimodal relations to predict resting state signals. Findings further suggested that higher EEG frequency ranges predict hemodynamics with lower reconstruction error in comparison to lower EEG frequency ranges. Furthermore, functional connections show similar spatial patterns between the experimental resting state and the model's fNIRS predictions. This demonstrates for the first time that intermodal autoencoding from neural signals can predict cerebral hemodynamics to a certain extent. The results of this thesis advance the potential of using EEG-fNIRS for practical clinical tasks (seizure detection, hemodynamic prediction) as well as for examining fundamental relationships present in the brain using deep learning models. If the number of available datasets increases in the future, these models may be able to generalize their predictions, which could eventually lead to EEG-fNIRS technology being routinely used as a viable clinical tool in a wide variety of neuropathological disorders.
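The core evaluation idea here — predict hemodynamic features from EEG-derived features and score the held-out reconstruction error — can be made concrete with a drastically simplified stand-in for the deep multimodal autoencoder: a linear least-squares map on synthetic data. All dimensions, the linear mixing, and the noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_win, n_eeg, n_nirs = 200, 8, 4          # hypothetical window/feature counts

# Synthetic stand-in data: fNIRS features are a noisy linear mixture of
# EEG band-power features. (The thesis uses a deep autoencoder; a linear
# map is shown here only to make the evaluation scheme concrete.)
eeg = rng.standard_normal((n_win, n_eeg))
mixing = rng.standard_normal((n_eeg, n_nirs))
nirs = eeg @ mixing + 0.1 * rng.standard_normal((n_win, n_nirs))

# Fit the EEG -> fNIRS map on the first half of the windows, then
# evaluate reconstruction error on the held-out second half.
W, *_ = np.linalg.lstsq(eeg[:100], nirs[:100], rcond=None)
pred = eeg[100:] @ W
rmse = np.sqrt(np.mean((pred - nirs[100:]) ** 2))
```

Comparing such reconstruction errors across inputs restricted to different EEG frequency bands is the same comparison the thesis reports for its autoencoder, just with a far more expressive model.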

    Understanding and Decoding Imagined Speech using Electrocorticographic Recordings in Humans

    Certain brain disorders, resulting from brainstem infarcts, traumatic brain injury, stroke, and amyotrophic lateral sclerosis, limit verbal communication despite the patient being fully aware. People who cannot communicate due to neurological disorders would benefit from a system that can infer internal speech directly from brain signals. Investigating how the human cortex encodes imagined speech remains a difficult challenge, due to the lack of behavioral and observable measures. As a consequence, the fine temporal properties of speech cannot be synchronized precisely with brain signals during internal subjective experiences, like imagined speech. This thesis aims at understanding and decoding the neural correlates of imagined speech (also called internal speech or covert speech), targeting speech neuroprostheses. In this exploratory work, various imagined speech features, such as acoustic sound features, phonetic representations, and individual words, were investigated and decoded from electrocorticographic signals recorded in epileptic patients in three different studies. This recording technique provides high spatiotemporal resolution, via electrodes placed beneath the skull, but without penetrating the cortex. In the first study, we reconstructed continuous spectrotemporal acoustic features from brain signals recorded during imagined speech using cross-condition linear regression. Using this technique, we showed that significant acoustic features of imagined speech could be reconstructed in seven patients. In the second study, we decoded continuous phoneme sequences from brain signals recorded during imagined speech using hidden Markov models. This technique allowed incorporating a language model that defined phoneme transition probabilities. In this preliminary study, decoding accuracy was significant across eight phonemes in one patient.
In the third study, we classified individual words from brain signals recorded during an imagined speech word repetition task, using support-vector machines. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the classification framework. Classification accuracy was significant across five patients. In order to compare speech representations across conditions and integrate imagined speech into the general speech network, we investigated imagined speech in parallel with overt speech production and/or speech perception. Results shared across the three studies showed partial overlap between imagined speech and speech perception/production in speech areas, such as the superior temporal lobe, anterior frontal gyrus, and sensorimotor cortex. In an attempt to understand higher-level cognitive processing of auditory processes, we also investigated the neural encoding of acoustic features during music imagery using linear regression. Although this study was not directly related to speech representations, it provided a unique opportunity to quantitatively study features of inner subjective experiences, similar to speech imagery. These studies demonstrated the potential of using predictive models for basic decoding of speech features. Despite low performance, the results show the feasibility of direct decoding of natural speech. In this respect, we highlighted numerous challenges that were encountered and suggested new avenues to improve performance.
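The non-linear time alignment mentioned for the third study is commonly realized with dynamic time warping (DTW). The thesis's exact formulation may differ; the sketch below is the textbook recursion, which tolerates sequences of unequal length and tempo:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences:
    the cheapest monotone alignment path through the pairwise cost grid."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a waveform stays close under DTW even though
# a pointwise comparison is undefined for sequences of unequal length.
a = np.sin(np.linspace(0, 2 * np.pi, 50))
b = np.sin(np.linspace(0, 2 * np.pi, 80))
d = dtw_distance(a, b)
```

Plugging such an alignment-based distance into an SVM (e.g. via a kernel built from it) is one way to make word classification robust to trial-to-trial differences in speaking rate.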