
    ESTformer: Transformer Utilizing Spatiotemporal Dependencies for EEG Super-resolution

    Towards practical applications of electroencephalography (EEG), lightweight acquisition devices equipped with only a few electrodes leave analysis methods with EEG data of extremely low spatial resolution. Recent approaches to EEG super-resolution (SR) mainly rely on mathematical interpolation or Convolutional Neural Networks, but they suffer from high computational cost, added bias, and limited insight into spatiotemporal dependency modeling. To this end, we propose the ESTformer, a Transformer-based EEG SR framework that exploits spatiotemporal dependencies. The ESTformer applies positional encoding and multi-head self-attention to the spatial and temporal dimensions, learning spatial structural information and temporal functional variation. Using a fixed masking strategy, the ESTformer adopts a mask token to up-sample low-resolution (LR) EEG data, avoiding the disturbance introduced by mathematical interpolation. On this basis, we design various Transformer blocks to construct a Spatial Interpolation Module (SIM) and a Temporal Reconstruction Module (TRM). Finally, the ESTformer cascades the SIM and the TRM to capture and model spatiotemporal dependencies for faithful EEG SR. Extensive experiments on two EEG datasets show the effectiveness of the ESTformer against previous state-of-the-art methods and verify the superiority of SR data over LR data in the EEG-based downstream tasks of person identification and emotion recognition. The proposed ESTformer demonstrates the versatility of the Transformer for EEG SR tasks.
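    A minimal PyTorch sketch of the mask-token upsampling idea described above; the module names, dimensions, and electrode indices are illustrative assumptions, not the authors' implementation:

        import torch
        import torch.nn as nn

        class SpatialInterpolationSketch(nn.Module):
            """Toy sketch: place a learned mask token at missing-electrode
            positions, add a spatial positional encoding, and let multi-head
            self-attention over the channel (space) axis fill the gaps."""

            def __init__(self, n_channels_hr, d_model=64, n_heads=4):
                super().__init__()
                self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
                self.pos_embed = nn.Parameter(torch.zeros(1, n_channels_hr, d_model))
                self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
                self.norm = nn.LayerNorm(d_model)

            def forward(self, lr_tokens, present_idx):
                # lr_tokens: (batch, n_lr_channels, d_model) embeddings of observed channels
                b, _, d = lr_tokens.shape
                n_hr = self.pos_embed.shape[1]
                tokens = self.mask_token.expand(b, n_hr, d).clone()
                tokens[:, present_idx] = lr_tokens   # observed channels at their HR positions
                tokens = tokens + self.pos_embed     # spatial positional encoding
                out, _ = self.attn(tokens, tokens, tokens)
                return self.norm(tokens + out)       # residual + norm, standard Transformer block

        # usage: 8 observed electrodes up-sampled toward a hypothetical 32-channel montage
        x = torch.randn(2, 8, 64)
        present = torch.tensor([0, 4, 8, 12, 16, 20, 24, 28])
        hr = SpatialInterpolationSketch(n_channels_hr=32)(x, present)   # (2, 32, 64)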

    Assigning channel weights using an attention mechanism: an EEG interpolation algorithm

    During the acquisition of electroencephalographic (EEG) signals, various factors can corrupt the data and produce one or more bad channels. Bad-channel interpolation uses data from good channels to reconstruct a bad channel, preserving the original dimensionality of the data for subsequent analysis. Mainstream interpolation algorithms assign channel weights based on the physical distance between electrodes and ignore the effect of physiological factors on the EEG signal. The algorithm proposed in this study assigns channel weights using an attention mechanism (AMACW). The model learns the correlations among channels from good-channel data, and interpolation assigns weights based on these learned correlations without requiring electrode location information, which lets it interpolate bad channels at unknown locations, something traditional methods cannot do. To prevent the model from concentrating its weights on too few channels when generating data, we designed a channel-masking (CM) method, which spreads attention and lets the model draw on data from multiple channels. We evaluate the model's reconstruction performance using EEG data with 1 to 5 bad channels. With EEGLAB's interpolation method as a performance reference, tests show that AMACW models can effectively reconstruct bad channels.
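    A minimal sketch of the attention-weighting and channel-masking (CM) ideas described above, assuming a single bad channel and treating each channel's raw samples as its feature vector; the names, sizes, and masking scheme are assumptions, not the paper's implementation:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AttentionInterpolator(nn.Module):
            """Toy sketch: a learned query attends over good-channel signals to
            produce channel weights; electrode positions are never used."""

            def __init__(self, n_samples):
                super().__init__()
                self.query = nn.Parameter(torch.randn(n_samples))
                self.key_proj = nn.Linear(n_samples, n_samples)

            def forward(self, good, cm_prob=0.0):
                # good: (n_good, n_samples) signals from the intact channels
                keys = self.key_proj(good)                          # (n_good, n_samples)
                scores = keys @ self.query / good.shape[1] ** 0.5   # one score per good channel
                if self.training and cm_prob > 0:
                    # channel masking: randomly hide good channels so the
                    # attention weights do not collapse onto a few neighbors
                    drop = torch.rand_like(scores) < cm_prob
                    scores = scores.masked_fill(drop, float("-inf"))
                weights = F.softmax(scores, dim=0)                  # learned channel weights
                return weights @ good                               # reconstructed bad channel

        # usage: reconstruct one bad channel from 31 good channels of 256 samples
        recon = AttentionInterpolator(n_samples=256)(torch.randn(31, 256), cm_prob=0.2)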

    Towards a data-driven treatment of epilepsy: computational methods to overcome low-data regimes in clinical settings

    Epilepsy is the most common neurological disorder, affecting around 1% of the population. One-third of patients with epilepsy are drug-resistant. If the epileptogenic zone (EZ) can be localized precisely, curative resective surgery may be performed. However, only 40 to 70% of patients remain seizure-free after surgery. Presurgical evaluation, which in part aims to localize the EZ, is a complex multimodal process that requires subjective clinical decisions, often relying on a multidisciplinary team's experience. Thus, the clinical pathway could benefit from data-driven methods for clinical decision support. In the last decade, deep learning has advanced greatly thanks to improved graphics processing units (GPUs), the development of new algorithms, and the large amounts of data that have become available for training. However, using deep learning in clinical settings is challenging, as large datasets are rare due to privacy concerns and expensive annotation processes. Methods to overcome the lack of data are especially important in the presurgical evaluation of epilepsy, as only a small proportion of patients with epilepsy undergo surgery, which limits the data available to learn from. This thesis introduces computational methods that pave the way towards integrating data-driven methods into the clinical pathway for the treatment of epilepsy, overcoming the challenge posed by the relatively small datasets available. We used transfer learning from general-domain human action recognition to characterize epileptic seizures from video-telemetry data. We developed a software framework to predict the location of the epileptogenic zone from seizure semiologies, based on retrospective information from the literature. We trained deep learning models using self-supervised and semi-supervised learning to perform quantitative analysis of resective surgery by segmenting resection cavities on brain magnetic resonance images (MRIs). Throughout this work, we shared datasets and software tools that will accelerate research in medical image computing, particularly in the field of epilepsy.
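    The transfer-learning step mentioned above (reusing general-domain human action recognition for seizure characterization) follows a standard fine-tuning pattern; a hypothetical sketch with a Kinetics-pretrained torchvision backbone, where the model choice, clip shape, and binary labels are assumptions:

        import torch
        import torch.nn as nn
        from torchvision.models.video import r3d_18, R3D_18_Weights

        # Load an action-recognition backbone pretrained on Kinetics-400 and
        # swap its classification head for a small clinical task (labels assumed).
        backbone = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
        for p in backbone.parameters():
            p.requires_grad = False                            # freeze general-domain features
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # new head, trained from scratch

        clips = torch.randn(4, 3, 16, 112, 112)                # (batch, channels, frames, H, W)
        logits = backbone(clips)                               # only the new head is trainable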

    Visual Exploration And Information Analytics Of High-Dimensional Medical Images

    Data visualization has transformed how we analyze increasingly large and complex data sets. Advanced visual tools represent data logically, in a way that communicates the most important information inherent in it, and culminate the analysis in an insightful conclusion. Automated analysis disciplines such as data mining, machine learning, and statistics have traditionally dominated data analysis, complemented by a near-ubiquitous adoption of specialized hardware and software environments that handle the storage, retrieval, and pre- and postprocessing of digital data. Adding interactive visualization tools puts an active human participant in the model creation process. The advantage is a data-driven approach in which the constraints and assumptions of the model can be explored and chosen based on human insight and confirmed on demand by the analytic system. This translates into a better understanding of the data and more effective knowledge discovery. This trend has become popular across domains including machine learning, simulation, computer vision, genetics, the stock market, data mining, and geography. In this dissertation, we highlight the role of visualization in medical image analysis in the field of neuroimaging. The analysis of brain images has uncovered remarkable traits of the brain's underlying dynamics. Multiple image modalities capture qualitatively different internal brain mechanisms and abstract them within the information space of each modality. Computational studies based on these modalities help correlate high-level measurements of brain function with abnormal human behavior. These functional maps are easily projected into physical space through accurate 3-D brain reconstructions and visualized in excellent detail from different anatomical vantage points. Statistical models built for comparative analysis across subject groups test for significant variance within the features and localize abnormal behaviors, contextualizing high-level brain activity. Currently, the task of identifying the features is based on empirical evidence, and preparing data for testing is time-consuming. Correlations among features are usually ignored due to lack of insight. With a multitude of features available, and with new modalities emerging, identifying the salient features and their interdependencies becomes harder. This limits the analysis to certain discernible features, constraining human judgment about the most important process governing a symptom and hindering prediction. These shortcomings can be addressed by an analytical system that leverages data-driven techniques to guide the user toward relevant hypotheses. The research contributions of this dissertation span multiple fields of study, including geometry processing, computer vision, and 3-D visualization. The principal achievement, however, is the design and development of an interactive system for multimodality integration of medical images. The research proceeds in two main stages. First, we develop a rigorous geometry-computation framework for brain surface matching. The brain is a highly convoluted structure of closed topology. Surface parameterization explicitly captures the non-Euclidean geometry of the cortical surface and helps derive a more accurate registration of brain surfaces. We describe a technique based on conformal parameterization that creates a bijective mapping to a canonical domain, where surface operations can be performed with improved efficiency and feasibility. Subdividing the brain into a finite set of anatomical elements provides the structural basis for a categorical division of anatomical viewpoints and a spatial context for statistical analysis. We present statistically significant results from our analysis of functional and morphological features for a variety of brain disorders. Second, we design and develop an intelligent, interactive system for visual analysis of brain disorders that utilizes the complete feature space across all modalities. Each subdivided anatomical unit is characterized by a vector of the features that overlap within that element. The analytical framework provides the interactivity needed to explore salient features and discover relevant hypotheses. It offers visualization tools for confirming model results; an easy-to-use interface for manipulating feature selection and filtering parameters; coordinated display views for visualizing multiple features across multiple subject groups; visual representations for highlighting interdependencies and correlations between features; and an efficient data-management solution for maintaining provenance and issuing formal data queries to the back end.
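    A toy sketch of the statistical core implied above, assuming each anatomical element carries a feature vector and two subject groups are compared per region; the parcel count, feature count, and choice of test are illustrative assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_regions, n_features = 68, 5                           # hypothetical parcels x features
        controls = rng.normal(size=(20, n_regions, n_features))
        patients = rng.normal(size=(18, n_regions, n_features))

        # per-region, per-feature two-sample test: the "localize abnormal
        # behavior" step, here with a Bonferroni-corrected significance mask
        t, p = stats.ttest_ind(controls, patients, axis=0)      # both (n_regions, n_features)
        significant = p < 0.05 / (n_regions * n_features)

        # feature-feature correlations pooled over subjects and regions: the
        # interdependency view the analytical system visualizes
        pooled = np.concatenate([controls, patients]).reshape(-1, n_features)
        corr = np.corrcoef(pooled, rowvar=False)                # (n_features, n_features)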

    Multivariate word properties in fluency tasks reveal markers of Alzheimer’s dementia

    INTRODUCTION: Verbal fluency tasks are common in Alzheimer's disease (AD) assessments. Yet, standard valid-response counts fail to reveal disease-specific semantic memory patterns. Here, we leveraged automated word-property analysis to capture neurocognitive markers of AD vis-à-vis behavioral variant frontotemporal dementia (bvFTD). METHODS: Patients and healthy controls completed two fluency tasks. We counted valid responses and computed each word's frequency, granularity, neighborhood, length, familiarity, and imageability. These features were used for group-level discrimination, patient-level identification, and correlations with executive and neural (magnetic resonance imaging [MRI], functional MRI [fMRI], electroencephalography [EEG]) patterns. RESULTS: Valid responses revealed deficits in both disorders. Conversely, frequency, granularity, and neighborhood yielded robust group- and subject-level discrimination only in AD, also predicting executive outcomes. Disease-specific cortical thickness patterns were predicted by frequency in both disorders. Default-mode and salience network hypoconnectivity, and EEG beta hypoconnectivity, were predicted by frequency and granularity only in AD. DISCUSSION: Word-property analysis of fluency can boost AD characterization and diagnosis. HIGHLIGHTS: We report novel word-property analyses of verbal fluency in AD and bvFTD. Standard valid-response counts captured deficits and brain patterns in both groups. Specific word properties (e.g., frequency, granularity) were altered only in AD. Such properties predicted cognitive and neural (MRI, fMRI, EEG) patterns in AD. Word-property analysis of fluency can boost AD characterization and diagnosis. Funding: National Institutes of Health, National Institute on Aging (R01AG057234, R01AG075775); ANID FONDECYT Regular (1210176, 1210195, 1220995); FONDAP (15150012); PIA/ANILLOS (ACT210096); FONDEF (ID20I10152); GBHI, Alzheimer's Association, and Alzheimer's Society (Alzheimer's Association GBHI ALZ UK-22-865742); Alzheimer's Association (SG-20-725707); Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibáñez, Santiago, Chile (#BL-SRGP2021-01); Programa Interdisciplinario de Investigación Experimental en Comunicación y Cognición (PIIECC), Facultad de Humanidades, USACH; Takeda (CW2680521); Rainwater Charitable Foundation; Tau Consortium; European Commission H2020-MSCA-IF-GF MULTI-LAND (10102581).
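    A toy sketch of the pipeline described in METHODS, with placeholder word-property norms, per-participant aggregation, and a simple subject-level classifier; the property values, feature set, and model are illustrative assumptions, not the study's lexical databases or analysis code:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # placeholder norms; the study used properties such as frequency,
        # granularity, neighborhood, length, familiarity, and imageability
        word_props = {
            "dog":    {"log_frequency": 4.6, "length": 3, "imageability": 6.1},
            "ferret": {"log_frequency": 2.1, "length": 6, "imageability": 5.2},
        }

        def participant_features(responses):
            # mean of each property over valid responses, plus the standard count
            props = [word_props[w] for w in responses if w in word_props]
            keys = ["log_frequency", "length", "imageability"]
            return [float(np.mean([p[k] for p in props])) for k in keys] + [len(props)]

        responses = [["dog", "ferret"], ["dog", "dog"], ["ferret"], ["ferret", "ferret"]]
        labels = [0, 0, 1, 1]                                   # 0 = control, 1 = AD (toy)
        X = np.array([participant_features(r) for r in responses])
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, np.array(labels), cv=2)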

    Body into Narrative: Behavioral and Neurophysiological Signatures of Action Text Processing After Ecological Motor Training

    Embodied cognition research indicates that sensorimotor training can influence action-concept processing. Yet, most studies employ isolated, (pseudo)randomized stimuli and require repetitive single-effector responses, thus lacking ecological validity. Moreover, the neural signatures of these effects remain poorly understood. Here, we examined whether immersive bodily training can modulate behavioral and functional connectivity correlates of action-verb processing in naturalistic narratives. The study involved three phases. First, in the Pre-training phase, 32 healthy persons listened to an action text (rich in movement descriptions) and a non-action text (focused on its characters' perceptual and mental processes), completed comprehension questionnaires, and underwent resting-state electroencephalogram (EEG) recordings. Second, in the four-day Training phase, half of the participants completed an exergaming intervention (eliciting full-body movements for 60 min a day) while the remaining half played static videogames (requiring no bodily engagement other than button presses). Finally, in the Post-training phase, all participants repeated the Pre-training protocol with different action and non-action texts and a new EEG session. We found that exergaming selectively reduced action-verb outcomes and frontoposterior functional connectivity in the motor-sensitive 10–20 Hz range, with the two patterns positively correlated. Conversely, static videogame playing had no specific effect on any linguistic category and did not modulate functional connectivity. Together, these findings suggest that action-verb processing and its key neural correlates can be focally influenced by full-body motor training in a highly ecological setting. Our study illuminates the role of situated experience and sensorimotor circuits in action-concept processing, addressing calls for naturalistic insights into language embodiment. Acknowledgments: Sabrina Cervetto acknowledges the support of Centro Interdisciplinario en Cognición para la Enseñanza y el Aprendizaje and Centro de Investigación Básica en Psicología. Lucía Amoruso is supported by funding from the European Commission (H2020-MSCA-IF-GF-2020; Grant 101025814), the Ikerbasque Foundation, and the Spanish Ministry of Economy and Competitiveness through Plan Nacional RTI2018-096216-A-I00. Adolfo García is an Atlantic Fellow at the Global Brain Health Institute (GBHI) and is supported by funding from GBHI, the Alzheimer's Association, and the Alzheimer's Society (Alzheimer's Association GBHI ALZ UK-22-865742); ANID FONDECYT Regular (1210176); and Programa Interdisciplinario de Investigación Experimental en Comunicación y Cognición (PIIECC), Facultad de Humanidades, USACH.
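    One analysis step above, frontoposterior functional connectivity in the motor-sensitive 10–20 Hz range, can be sketched as follows; the sampling rate, the random channel stand-ins, and the use of plain correlation as the connectivity metric are assumptions (the abstract does not specify the measure):

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 250.0                                   # assumed sampling rate (Hz)
        b, a = butter(4, [10 / (fs / 2), 20 / (fs / 2)], btype="band")

        rng = np.random.default_rng(1)
        frontal = rng.normal(size=5000)              # stand-in for a frontal channel
        posterior = rng.normal(size=5000)            # stand-in for a posterior channel

        # band-pass both channels into 10-20 Hz, then correlate the filtered signals
        connectivity = np.corrcoef(filtfilt(b, a, frontal), filtfilt(b, a, posterior))[0, 1]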

    Novel Methods to Incorporate Physiological Prior Knowledge into the Inverse Problem of Electrocardiography - Application to Localization of Ventricular Excitation Origins

    17 million deaths each year are attributed to cardiovascular disease. Sudden cardiac death occurs in roughly 25% of patients with cardiovascular disease and can be linked to ventricular tachycardia. An important step in the treatment of ventricular tachycardia is the detection of so-called exit points, i.e., the spatial origin of the excitation. Since this process is very time-consuming and can only be performed by skilled cardiologists, there is a need for assistive localization tools, ideally automatic and non-invasive. Electrocardiographic imaging attempts to meet these clinical requirements by reconstructing the electrical activity of the heart from measurements of potentials on the body surface. The resulting information can be used to detect the origin of excitation. However, current methods for solving the inverse problem exhibit either low accuracy or low robustness, limiting their clinical utility. This work first analyzes the forward problem in connection with two source models: transmembrane voltages and extracellular potentials. The mathematical properties of the relation between the cardiac sources and the body surface potentials are systematically analyzed and their influence on the inverse problem is illustrated. This knowledge is then used to solve the inverse problem. To this end, three new methods are introduced: a delay-based regularization, a method based on regression of body surface potentials, and a deep-learning-based localization method. These three methods are evaluated against four established methods in one simulated and two clinical setups. On the simulated dataset and on one of the two clinical datasets, one of the new methods outperformed the conventional approaches, while Tikhonov regularization achieved the best results on the remaining clinical dataset. Potential causes of these results are discussed and related to properties of the forward problem.
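    Of the established baselines mentioned above, Tikhonov regularization has a closed form; a minimal sketch of the zeroth-order case, with matrix sizes and the regularization weight chosen for illustration only:

        import numpy as np

        # Forward model: A maps cardiac sources x to body-surface potentials y.
        # The regularized inverse solves min ||Ax - y||^2 + lam * ||x||^2, i.e.
        # x_hat = (A^T A + lam I)^{-1} A^T y.
        rng = np.random.default_rng(2)
        A = rng.normal(size=(120, 500))        # 120 surface electrodes, 500 sources (assumed)
        x_true = rng.normal(size=500)
        y = A @ x_true + 0.01 * rng.normal(size=120)

        lam = 1e-2                             # in practice chosen e.g. via the L-curve
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(500), A.T @ y)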