A general dual-pathway network for EEG denoising
Introduction: Scalp electroencephalogram (EEG) analysis and interpretation are crucial for tracking and analyzing brain activity. The collected scalp EEG signals, however, are weak and frequently contaminated with various kinds of artifacts. Deep learning models achieve performance comparable to that of traditional techniques; however, current deep learning networks applied to scalp EEG noise reduction are large in scale and prone to overfitting. Methods: Here, we propose a dual-pathway autoencoder modeling framework named DPAE for scalp EEG signal denoising and demonstrate the superiority of the model on multi-layer perceptron (MLP), convolutional neural network (CNN), and recurrent neural network (RNN) backbones, respectively. We validate the denoising performance on benchmark scalp EEG artifact datasets. Results: The experimental results show that our model architecture not only significantly reduces the computational effort but also outperforms existing deep learning denoising algorithms on the relative root mean squared error (RRMSE) metric, in both the time and frequency domains. Discussion: The DPAE architecture neither requires a priori knowledge of the noise distribution nor is limited by the network layer structure; it is a general network model oriented toward blind source separation.
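The RRMSE metric reported in the abstract can be stated concretely. The sketch below is illustrative, not the paper's exact implementation: the function names are ours, and the FFT-periodogram form of the spectral variant is an assumption.

```python
import numpy as np

def rms(x):
    """Root mean square of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def rrmse_time(denoised, clean):
    """Relative root mean squared error in the time domain."""
    return rms(denoised - clean) / rms(clean)

def rrmse_freq(denoised, clean):
    """RRMSE on power spectra (illustrative: plain FFT periodograms)."""
    psd = lambda x: np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return rms(psd(denoised) - psd(clean)) / rms(psd(clean))

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 10 * t)                  # 10 Hz "EEG" component
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(256)
```

A perfect reconstruction gives an RRMSE of 0 in both domains; larger values mean a larger residual relative to the clean signal's energy.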
Computational techniques to interpret the neural code underlying complex cognitive processes
Advances in large-scale neural recording technology have significantly improved the
capacity to further elucidate the neural code underlying complex cognitive processes.
This thesis aimed to investigate two research questions in rodent models. First, what
is the role of the hippocampus in memory, and specifically, what is the underlying
neural code that contributes to spatial memory and navigational decision-making?
Second, how is social cognition represented in the medial prefrontal cortex at the
level of individual neurons? The thesis begins by investigating memory and
social cognition in the context of healthy and diseased states, using non-invasive
methods (i.e. fMRI and animal behavioural studies). The main body of the thesis
then shifts to developing our fundamental understanding of the neural mechanisms
underpinning these cognitive processes by applying computational techniques to analyse stable large-scale neural recordings. To achieve this, tailored calcium imaging
and behaviour preprocessing computational pipelines were developed and optimised
for use in social interaction and spatial navigation experimental analysis. In parallel,
a review was conducted on methods for multivariate/neural population analysis. A
comparison of multiple neural manifold learning (NML) algorithms identified that nonlinear algorithms such as UMAP are more adaptable across datasets of varying noise
and behavioural complexity. Furthermore, the review visualises how NML can be
applied to disease states in the brain and introduces the secondary analyses that
can be used to enhance or characterise a neural manifold. Lastly, the preprocessing
and analytical pipelines were combined to investigate the neural mechanisms involved in social cognition and spatial memory. The social cognition study explored
how neural firing in the medial prefrontal cortex changed as a function of the social
dominance paradigm, the "Tube Test". The univariate analysis identified an ensemble
of behaviourally tuned neurons that fire preferentially during specific behaviours,
such as "pushing" or "retreating", for the animal's own behaviour and/or the competitor's
behaviour. Furthermore, in dominant animals, the neural population exhibited greater
average firing than that of subordinate animals. Next, to investigate spatial memory,
a spatial recency task was used, where rats learnt to navigate towards one of three
reward locations and then recall the rewarded location of the session. During the
task, over 1000 neurons were recorded from the hippocampal CA1 region for five rats
over multiple sessions. Multivariate analysis revealed that the sequence of neurons encoding an animal's spatial position leading up to a rewarded location was also active
in the decision period before the animal navigates to the rewarded location. The result
posits that prospective replay of neural sequences in the hippocampal CA1 region
could provide a mechanism by which decision-making is supported.
Design and Evaluation of a Hardware System for Online Signal Processing within Mobile Brain-Computer Interfaces
Brain-Computer Interfaces (BCIs) are innovative systems that enable direct communication between the brain and external devices. These interfaces have emerged as a transformative solution not only for individuals with neurological injuries, but also for a broader range of individuals, encompassing both medical and non-medical applications. Historically, the challenge of neurological injury remaining static after an initial recovery phase has driven researchers to explore innovative avenues. Since the 1970s, BCIs have been at the forefront of these efforts. As research has progressed, BCI applications have expanded, showing potential in a wide range of applications, including those for less severely disabled (e.g. in the context of hearing aids) and completely healthy individuals (e.g. the entertainment industry). However, the future of BCI research also depends on the availability of reliable BCI hardware to ensure real-world application.
The CereBridge system designed and implemented in this work represents a significant leap forward in brain-computer interface technology by integrating all EEG signal acquisition and processing hardware into a mobile system. The processing hardware architecture is centered around an FPGA with an ARM Cortex-M3 within a heterogeneous IC, ensuring flexibility and efficiency in EEG signal processing. The modular design of the system, consisting of three individual boards, ensures adaptability to different requirements. With a focus on full mobility, the complete system is mounted on the scalp, can operate autonomously, requires no external interaction, and weighs approximately 56 g, including the 16-channel EEG sensors.
The proposed customizable dataflow concept facilitates the exploration and seamless integration of algorithms, increasing the flexibility of the system. This is further underscored by the ability to apply different algorithms to recorded EEG data to meet different application goals. High-Level Synthesis (HLS) was used to port algorithms to the FPGA, accelerating the algorithm development process and facilitating rapid implementation of algorithm variants. Evaluations have shown that the CereBridge system is capable of integrating the complete signal processing chain required for various BCI applications. Furthermore, it can operate continuously for more than 31 hours on a 1800 mAh battery, making it a viable solution for long-term mobile EEG recording and real-world BCI studies.
Compared to existing research platforms, the CereBridge system offers unprecedented performance and features for a mobile BCI. It not only meets the relevant requirements for a mobile BCI system, but also paves the way for the rapid transition of algorithms from the laboratory to real-world applications. In essence, this work provides a comprehensive blueprint for the development and implementation of a state-of-the-art mobile EEG-based BCI system, setting a new benchmark in BCI hardware for real-world applicability.
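The endurance figures quoted above admit a quick arithmetic check; this is our back-of-the-envelope estimate, not a measurement from the thesis.

```python
# Plausibility check on the reported endurance: the average current an
# 1800 mAh battery can sustain over a 31 h continuous discharge.
capacity_mah = 1800
runtime_h = 31
avg_current_ma = capacity_mah / runtime_h   # ~58 mA average system draw
```

An average draw of roughly 58 mA for a scalp-mounted FPGA-plus-MCU system is the figure implied by the stated capacity and runtime.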
Research progress of CTC, ctDNA, and EVs in cancer liquid biopsy
Circulating tumor cells (CTCs), circulating tumor DNA (ctDNA), and extracellular vesicles (EVs) have received significant attention in recent years as emerging biomarkers and subjects of transformational studies. The three main branches of liquid biopsy have evolved from these three primary detection targets, each with distinct benefits. CTCs are derived from circulating cancer cells shed by the original tumor or metastases and may display global features of the tumor. ctDNA has been extensively analyzed and has been used to aid in the diagnosis, treatment, and prognosis of neoplastic diseases. EVs contain tumor-derived material such as DNA, RNA, proteins, lipids, sugar structures, and metabolites. The three provide different detection contents but are, to a large extent, strongly complementary. Even though they have already been employed in several clinical trials, the clinical utility of the three biomarkers is still being studied, with promising initial findings. This review thoroughly overviews established and emerging technologies for the isolation, characterization, and content detection of CTCs, ctDNA, and EVs. It also discusses the most recent developments in the study of CTCs, ctDNA, and EVs as potential liquid biopsy biomarkers for cancer diagnosis, therapeutic monitoring, and prognosis prediction. Finally, the potential and challenges of employing liquid biopsy based on CTCs, ctDNA, and EVs for precision medicine are evaluated.
Unveiling the alterations in the frequency-dependent connectivity structure of MEG signals in mild cognitive impairment and Alzheimer's disease
Mild cognitive impairment (MCI) and dementia due to Alzheimer's disease (AD) are neurological disorders that affect cognition, brain function, and memory. Magnetoencephalography (MEG) is a neuroimaging technique used to study changes in brain oscillations caused by neural pathologies. However, MEG studies often use fixed frequency bands, assuming a common frequency structure and overlooking both subject-specific variations and the potential influence of pathologies on frequency distribution. To address this issue, a novel methodology called Connectivity-based Meta-Bands (CMB) was applied to obtain a subject-specific, functional-connectivity-based frequency band segmentation. Resting-state MEG activity was acquired from 161 participants: 67 healthy controls, 44 MCI patients, and 50 AD patients. The CMB algorithm was used to identify "meta-bands" (i.e., recurrent network topologies across frequencies), from which an individualised frequency band segmentation was extracted. The network topology of the meta-bands and their sequencing were analysed to identify alterations associated with MCI and AD in the underlying frequency-dependent connectivity structure. We found that MCI and AD alter the neural network topology, leading to connectivity patterns that are both more widespread across the frequency spectrum and more heterogeneous. Furthermore, the meta-band frequency sequencing was modified, with MCI and AD patients exhibiting sequences of increased complexity, suggesting a progressive dilution of the frequency structure. The study highlights the relevance of considering the impact of neural pathologies on the frequency-dependent connectivity structure and the potential bias introduced by using fixed frequency bands in MEG studies. Funding: "Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN)" through "Instituto de Salud Carlos III" - FEDER; ERA-Net FLAG-ERA JTC2021 project ModelDXConsciousness (Human Brain Project Partnering Project).
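The core idea of comparing network topologies across frequencies can be illustrated with a toy computation. This numpy sketch is not the published CMB algorithm: the envelope-correlation connectivity measure, uniform bin edges, and cosine-similarity comparison are all simplifying assumptions of ours.

```python
import numpy as np

def band_topology_similarity(signals, fs, n_bins=8):
    """Toy analogue of the meta-band idea: build an amplitude-correlation
    connectivity matrix per frequency bin, then compare network topologies
    across bins with cosine similarity."""
    n_ch, n_samp = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samp, 1 / fs)
    edges = np.linspace(1.0, fs / 2, n_bins + 1)   # uniform bins (assumption)
    patterns = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectra * mask, n=n_samp, axis=1)
        conn = np.corrcoef(np.abs(band))           # crude envelope coupling
        patterns.append(conn[np.triu_indices(n_ch, 1)])
    p = np.stack(patterns)
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    return p @ p.T      # (n_bins, n_bins) topology-similarity matrix

rng = np.random.default_rng(2)
sim = band_topology_similarity(rng.standard_normal((6, 512)), fs=128)
```

Blocks of high similarity along the diagonal of such a matrix would correspond to contiguous frequency ranges sharing one network topology, which is the intuition behind a subject-specific band segmentation.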
Deep Learning Techniques for Electroencephalography Analysis
In this thesis we design deep learning techniques for training deep neural networks on electroencephalography (EEG) data and in particular on two problems, namely EEG-based motor imagery decoding and EEG-based affect recognition, addressing challenges associated with them. Regarding the problem of motor imagery (MI) decoding, we first consider the various kinds of domain shifts in the EEG signals, caused by inter-individual differences (e.g. brain anatomy, personality and cognitive profile). These domain shifts render multi-subject training a challenging task and impede robust cross-subject generalization. We build a two-stage model ensemble architecture and propose two objectives to train it, combining the strengths of curriculum learning and collaborative training. Our subject-independent experiments on the large Physionet and OpenBMI datasets verify the effectiveness of our approach. Next, we explore the utilization of the spatial covariance of EEG signals through alignment techniques, with the goal of learning domain-invariant representations. We introduce a Riemannian framework that concurrently performs covariance-based signal alignment and data augmentation, while training a convolutional neural network (CNN) on EEG time-series. Experiments on the BCI IV-2a dataset show that our method outperforms traditional alignment by inducing regularization on the weights of the CNN. We also study the problem of EEG-based affect recognition, inspired by works suggesting that emotions can be expressed in relative terms, i.e. through ordinal comparisons between different affective state levels. We propose treating data samples in a pairwise manner to infer the ordinal relation between their corresponding affective state labels, as an auxiliary training objective. We incorporate our objective in a deep network architecture which we jointly train on the tasks of sample-wise classification and pairwise ordinal ranking.
We evaluate our method on the affective datasets of DEAP and SEED and obtain performance improvements over deep networks trained without the additional ranking objective.
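The covariance-based alignment the thesis builds on has a well-known baseline, Euclidean Alignment, which whitens each subject's trials by the inverse square root of their mean covariance. The sketch below shows that baseline, not the thesis's Riemannian framework.

```python
import numpy as np

def euclidean_align(trials):
    """Align one subject's EEG trials (n_trials, n_channels, n_samples)
    by whitening with the inverse square root of the mean covariance."""
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])
    r = covs.mean(axis=0)                       # reference covariance
    vals, vecs = np.linalg.eigh(r)              # r is symmetric PSD
    r_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.stack([r_inv_sqrt @ x for x in trials])

rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 8, 128))      # toy subject: 20 trials
aligned = euclidean_align(trials)
# After alignment the mean covariance equals the identity matrix,
# so different subjects' data share a common reference frame.
mean_cov = np.mean([x @ x.T / x.shape[1] for x in aligned], axis=0)
```

Mapping every subject's mean covariance to the identity is what makes representations comparable across subjects before CNN training.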
Evaluation of Data Processing and Artifact Removal Approaches Used for Physiological Signals Captured Using Wearable Sensing Devices during Construction Tasks
Wearable sensing devices (WSDs) hold enormous promise for monitoring construction worker safety. They can track workers and send safety-related information in real time, allowing for more effective and preventative decision making. WSDs are particularly useful on construction sites since they can track workers' health, safety, and activity levels, among other metrics that could help optimize their daily tasks. WSDs may also assist workers in recognizing health-related safety risks (such as physical fatigue) and taking appropriate action to mitigate them. The data produced by these WSDs, however, are highly noisy and contaminated with artifacts that may have been introduced by the surroundings, the experimental apparatus, or the subject's physiological state. These artifacts are often strong and frequently encountered during field experiments, and when many artifacts are present, signal quality degrades. Recent developments in signal processing have greatly improved artifact removal. Thus, this review aimed to provide an in-depth analysis of the approaches currently used to analyze data and remove artifacts from physiological signals obtained via WSDs during construction-related tasks. First, this study provides an overview of the physiological signals that are likely to be recorded from construction workers to monitor their health and safety. Second, this review identifies the most prevalent artifacts that have the most detrimental effect on the utility of the signals. Third, a comprehensive review of existing artifact-removal approaches is presented. Fourth, each identified artifact detection and removal approach is analyzed for its strengths and weaknesses. Finally, this review offers suggestions for future research on improving the quality of captured physiological signals for monitoring the health and safety of construction workers using artifact-removal approaches.
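As a minimal illustration of the amplitude-threshold family of artifact-detection approaches such a review covers, the sketch below flags windows with outlying peak amplitude. The window length and z-score threshold are arbitrary choices of ours, not values from the review.

```python
import numpy as np

def flag_artifact_windows(sig, fs, win_s=1.0, z_thresh=4.0):
    """Crude amplitude-threshold detector: flag fixed-length windows whose
    peak absolute amplitude is a statistical outlier across the recording."""
    win = int(win_s * fs)
    n_win = len(sig) // win
    peaks = np.array([np.abs(sig[i * win:(i + 1) * win]).max()
                      for i in range(n_win)])
    z = (peaks - peaks.mean()) / peaks.std()
    return z > z_thresh            # True marks a likely contaminated window

fs = 100
sig = np.sin(2 * np.pi * np.arange(30 * fs) / fs)   # 30 s of clean 1 Hz signal
sig[10 * fs:11 * fs] += 25.0                        # inject a motion-like spike
mask = flag_artifact_windows(sig, fs)               # only window 10 is flagged
```

Flagged windows would then either be discarded or passed to a reconstruction method, which is where the more sophisticated approaches the review surveys come in.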
Sound Event Detection by Exploring Audio Sequence Modelling
Everyday sounds in real-world environments are a powerful source of information by which humans can interact with their environments. Humans can infer what is happening around them by listening to everyday sounds. At the same time, it is a challenging task for a computer algorithm in a smart device to automatically recognise, understand, and interpret everyday sounds. Sound event detection (SED) is the process of transcribing an audio recording into sound event tags with onset and offset time values. This involves classification and segmentation of sound events in the given audio recording. SED has numerous applications in everyday life, including security and surveillance, automation, healthcare monitoring, multimedia information retrieval, and assisted living technologies. SED is to everyday sounds what automatic speech recognition (ASR) is to speech and automatic music transcription (AMT) is to music. The fundamental questions in designing a sound recognition system are: which portion of a sound event should the system analyse, and what proportion of a sound event should the system process in order to claim a confident detection of that particular sound event? While the classification of sound events has improved considerably in recent years, the temporal segmentation of sound events has not improved to the same extent. The aim of this thesis is to propose and develop methods to improve the segmentation and classification of everyday sound events in SED models. In particular, this thesis explores the segmentation of sound events by investigating audio sequence encoding-based and audio sequence modelling-based methods, in an effort to improve the overall sound event detection performance. In the first phase of this thesis, efforts are put towards improving sound event detection by explicitly conditioning the audio sequence representations of an SED model using sound activity detection (SAD) and onset detection.
To achieve this, we propose multi-task learning-based SED models in which SAD and onset detection are used as auxiliary tasks for the SED task. The next part of this thesis explores self-attention-based audio sequence modelling, which aggregates audio representations based on temporal relations within and between sound events, scored on the basis of the similarity of sound event portions in audio event sequences. We propose SED models that include memory-controlled, adaptive, dynamic, and source separation-induced self-attention variants, with the aim of improving overall sound recognition.
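The similarity-based aggregation described above is, at its core, scaled dot-product self-attention over audio frames. A minimal single-head numpy sketch follows; the memory-controlled, adaptive, and other variants the thesis proposes add further mechanisms on top of this basic operation.

```python
import numpy as np

def self_attention(q, k, v):
    """Aggregate frame representations by similarity-weighted sums:
    frames with similar content attend strongly to one another."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # pairwise frame similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over frames
    return w @ v, w

rng = np.random.default_rng(1)
frames = rng.standard_normal((50, 16))             # 50 frames, 16-dim features
out, weights = self_attention(frames, frames, frames)
```

In a full SED model, learned projections produce the queries, keys, and values from frame embeddings, and the attention weights let distant portions of the same sound event reinforce each other across the sequence.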