ICLabel: An automated electroencephalographic independent component classifier, dataset, and website
The electroencephalogram (EEG) provides a non-invasive, minimally
restrictive, and relatively low cost measure of mesoscale brain dynamics with
high temporal resolution. Although signals recorded in parallel by multiple,
near-adjacent EEG scalp electrode channels are highly-correlated and combine
signals from many different sources, biological and non-biological, independent
component analysis (ICA) has been shown to isolate the various source generator
processes underlying those recordings. Independent components (IC) found by ICA
decomposition can be manually inspected, selected, and interpreted, but doing
so requires both time and practice as ICs have no particular order or intrinsic
interpretations and therefore require further study of their properties.
Alternatively, sufficiently-accurate automated IC classifiers can be used to
classify ICs into broad source categories, speeding the analysis of EEG studies
with many subjects and enabling the use of ICA decomposition in near-real-time
applications. While many such classifiers have been proposed recently, this
work presents the ICLabel project, comprising (1) an IC dataset containing
spatiotemporal measures for over 200,000 ICs from more than 6,000 EEG
recordings, (2) a website for collecting crowdsourced IC labels and educating
EEG researchers and practitioners about IC interpretation, and (3) the
automated ICLabel classifier. The classifier improves upon existing methods in
two ways: by improving the accuracy of the computed label estimates and by
enhancing its computational efficiency. The ICLabel classifier outperforms or
performs comparably to the previous best publicly available method for all
measured IC categories while computing those labels ten times faster than that
classifier as shown in a rigorous comparison against all other publicly
available EEG IC classifiers.
Comment: Intended for NeuroImage. Updated from version one with minor editorial and figure changes.
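The core ICA idea in this abstract, unmixing linearly combined channel signals into independent source processes, can be sketched with scikit-learn's FastICA on synthetic data. The sources, mixing matrix, and sizes below are invented for illustration; a real EEG pipeline would use an EEG-specific ICA implementation on actual recordings:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two hypothetical "source generator processes"
s1 = np.sin(2 * np.pi * 1.5 * t)          # oscillatory source
s2 = 2 * (t % 1.0) - 1.0                  # sawtooth source (e.g. an artifact)
S = np.c_[s1, s2]

# Linear mixing, as overlapping near-adjacent scalp channels would see the sources
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T                               # observed "channel" data

# ICA recovers the sources up to order, sign, and scale
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)         # shape (2000, 2): the ICs
```

The recovered ICs have no intrinsic order or polarity, which is exactly why the abstract notes that ICs "have no particular order or intrinsic interpretations" and must be classified after the fact.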
Design and Analysis of a True Random Number Generator Based on GSR Signals for Body Sensor Networks
Today, medical equipment or general-purpose devices such as smart-watches or smart-textiles can acquire a person's vital signs. Regardless of the type of device and its purpose, they are all equipped with one or more sensors and often have wireless connectivity. Because sensitive data is transmitted over an insecure radio channel and access must be restricted to authorised entities, security mechanisms and cryptographic primitives must be incorporated onboard these devices. Random number generators are one such necessary cryptographic primitive. Motivated by this, we propose a True Random Number Generator (TRNG) that makes use of the galvanic skin response (GSR) signal measured by a sensor on the body. After an exhaustive analysis of both the entropy source and the randomness of the output, we conclude that the output of the proposed TRNG behaves like that of a random variable. Moreover, its performance is much higher than that of earlier proposals.
This work was supported by the Spanish Ministry of Economy and Competitiveness under the contract ESP-2015-68245-C4-1-P, by the MINECO grant TIN2016-79095-C2-2-R (SMOG-DEV), and by the Comunidad de Madrid (Spain) under the project CYNAMON (P2018/TCS-4566), co-financed by European Structural Funds (ESF and FEDER). This research was also supported by the Interdisciplinary Research Funds (HTC, United Arab Emirates) under the grant No. 103104.
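The extraction stage of such a biosignal TRNG can be sketched generically; the article's exact conditioning scheme is not given here, so this minimal example keeps the noisy least-significant bit of each (synthetic, invented) GSR sample and debiases the bit stream with a Von Neumann corrector, a standard post-processing step for raw entropy sources:

```python
import numpy as np

def von_neumann(bits):
    """Von Neumann corrector: map pair 01 -> 0, 10 -> 1, drop 00 and 11.
    Removes bias from a stream of independent but possibly biased bits."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# Hypothetical GSR samples (ADC codes): slow physiological drift plus sensor noise.
# A real implementation would read these from the body sensor.
rng = np.random.default_rng(42)
gsr = (2000 + 50 * np.sin(np.linspace(0, 20, 4000))
       + rng.normal(0, 3, 4000)).astype(int)

raw_bits = [int(v) & 1 for v in gsr]      # keep only the noisy LSB of each sample
random_bits = von_neumann(raw_bits)       # debiased output bit stream
```

In a deployed generator the output would additionally be validated with statistical test suites, as the abstract's "exhaustive analysis of the randomness of the output" implies.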
The Sound of Soul: Biofeedback Controlled Music Generation and Sound Design
The aim of this project is to develop a system that allows biofeedback to be used as a creative musical tool through music generation and sound design. The system uses brainwave data from an electroencephalogram (EEG) as well as electromyography (EMG) muscle activity. This biofeedback is interpreted by a machine-learning neural network that can be trained to classify the user's psychological or physiological state. The resulting classification is then used to control a generative, artificially intelligent MIDI generator or other MIDI Continuous Controller signals. The raw biofeedback, averaged brainwave amplitudes, and the Fast Fourier Transform of the EEG signal can also be used to control MIDI generation and Digital Audio Workstation parameters directly, and for synthesizer generation and timbral control.
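The "FFT of the EEG signal to control MIDI" path can be sketched in two steps: measure band power, then rescale it into the 0-127 Continuous Controller range. The function names, scaling bounds, and synthetic signal below are assumptions for illustration, not the project's actual code:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean FFT magnitude of `signal` within the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def to_midi_cc(value, vmin, vmax):
    """Scale a measurement into the 0-127 MIDI Continuous Controller range."""
    scaled = (value - vmin) / (vmax - vmin)
    return int(round(float(np.clip(scaled, 0.0, 1.0)) * 127))

fs = 256                                   # Hz, hypothetical EEG sampling rate
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(1)
# Synthetic "EEG": a 10 Hz alpha rhythm plus broadband noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

alpha = band_power(eeg, fs, 8, 12)         # alpha-band (8-12 Hz) strength
cc_value = to_midi_cc(alpha, vmin=0.0, vmax=100.0)
```

In a live system the resulting CC value would be sent each analysis frame to the DAW or synthesizer parameter it modulates.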
Self-supervised Learning for Electroencephalogram: A Systematic Survey
Electroencephalogram (EEG) is a non-invasive technique to record
bioelectrical signals. Integrating supervised deep learning techniques with EEG
signals has recently facilitated automatic analysis across diverse EEG-based
tasks. However, label scarcity has constrained the development of EEG-based
deep models: obtaining EEG annotations is difficult, requiring domain experts
to guide collection and labeling, and the variability of EEG signals across
subjects causes significant label shifts. To address these challenges,
self-supervised learning (SSL) has been proposed to extract representations
from unlabeled samples through well-designed pretext tasks. This paper focuses
on integrating SSL frameworks with temporal EEG signals to achieve efficient
representation learning and presents a systematic review of SSL for EEG
signals. In this paper, 1) we
introduce the concept and theory of self-supervised learning and typical SSL
frameworks. 2) We provide a comprehensive review of SSL for EEG analysis,
including taxonomy, methodology, and technique details of the existing
EEG-based SSL frameworks, and discuss the difference between these methods. 3)
We investigate the adaptation of the SSL approach to various downstream tasks,
including the task description and related benchmark datasets. 4) Finally, we
discuss potential directions for future SSL-EEG research.
Comment: 35 pages, 12 figures.
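The "well-designed pretext tasks" the survey covers can be made concrete with a small sketch of the sampling step of one common temporal pretext task for EEG, relative positioning: window pairs close in time are labeled positive, and pairs far apart negative, so an encoder can be trained without any expert labels. The thresholds and sizes here are illustrative, not taken from any surveyed work:

```python
import numpy as np

def relative_positioning_pairs(n_windows, tau_pos, tau_neg, n_pairs, seed=0):
    """Sample (anchor, other, label) index triples for the relative-positioning
    pretext task over a sequence of EEG windows:
      label 1 if the two windows are at most tau_pos windows apart,
      label 0 if they are at least tau_neg windows apart,
      pairs in between are discarded."""
    rng = np.random.default_rng(seed)
    pairs = []
    while len(pairs) < n_pairs:
        i, j = rng.integers(0, n_windows, size=2)
        gap = abs(int(i) - int(j))
        if 0 < gap <= tau_pos:
            pairs.append((int(i), int(j), 1))
        elif gap >= tau_neg:
            pairs.append((int(i), int(j), 0))
    return pairs

pairs = relative_positioning_pairs(n_windows=500, tau_pos=3, tau_neg=30,
                                   n_pairs=200)
```

The self-supervised labels come entirely from the data's temporal structure, which is the defining property of a pretext task.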
Generating Visual Stimuli from EEG Recordings using Transformer-encoder based EEG encoder and GAN
In this study, we tackle a modern research challenge within the field of
perceptual brain decoding, which revolves around synthesizing images from EEG
signals using an adversarial deep learning framework. The specific objective is
to recreate images belonging to various object categories by leveraging EEG
recordings obtained while subjects view those images. To achieve this, we
employ a Transformer-encoder based EEG encoder to produce EEG encodings, which
serve as inputs to the generator component of the GAN network. Alongside the
adversarial loss, we also incorporate perceptual loss to enhance the quality of
the generated images.
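The training objective sketched in this abstract, an adversarial term plus a perceptual term on the generator, is commonly written as follows; the weighting factor \(\lambda\), the fixed feature extractor \(\phi\), and the symbols \(E\) (EEG encoder) and \(G\) (generator) are notational assumptions here, not necessarily the paper's exact formulation:

```latex
% G: GAN generator, D: discriminator, E: Transformer-based EEG encoder
% x: the viewed image, \phi: a fixed feature extractor, \lambda: a weight
\mathcal{L}_G \;=\;
  \underbrace{-\,\mathbb{E}\!\left[\log D\!\left(G\!\left(E(\mathrm{eeg})\right)\right)\right]}_{\text{adversarial loss}}
  \;+\;
  \lambda\,\underbrace{\left\lVert \phi(x) - \phi\!\left(G\!\left(E(\mathrm{eeg})\right)\right)\right\rVert_2^2}_{\text{perceptual loss}}
```

The perceptual term compares generated and target images in a feature space rather than pixel space, which is what "enhance the quality of the generated images" refers to.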
A Dynamical Systems Approach to Characterizing Brain–Body Interactions during Movement: Challenges, Interpretations, and Recommendations
Brain–body interactions (BBIs) have been the focus of intense scrutiny since the inception of the scientific method, playing a foundational role in the earliest debates over the philosophy of science. Contemporary investigations of BBIs to elucidate the neural principles of motor control have benefited from advances in neuroimaging, device engineering, and signal processing. However, these studies generally suffer from two major limitations. First, they rely on interpretations of ‘brain’ activity that are behavioral in nature, rather than neuroanatomical or biophysical. Second, they employ methodological approaches that are inconsistent with a dynamical systems approach to neuromotor control. These limitations represent a fundamental challenge to the use of BBIs for answering basic and applied research questions in neuroimaging and neurorehabilitation. Thus, this review is written as a tutorial to address both limitations for those interested in studying BBIs through a dynamical systems lens. First, we outline current best practices for acquiring, interpreting, and cleaning scalp-measured electroencephalography (EEG) acquired during whole-body movement. Second, we discuss historical and current theories for modeling EEG and kinematic data as dynamical systems. Third, we provide worked examples from both canonical model systems and from empirical EEG and kinematic data collected from two subjects during an overground walking task
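One classic example of a "canonical model system" in the dynamical-systems literature the review draws on is a pair of coupled phase oscillators; whether the review uses this particular model is not stated here, so the sketch below (Kuramoto-style coupling, Euler integration, invented parameters) is only an illustration of the phase-locking behavior that motivates treating brain and body signals as one coupled system:

```python
import numpy as np

def kuramoto_pair(w1, w2, k, dt=0.01, steps=5000):
    """Simulate two coupled phase oscillators with natural frequencies w1, w2
    and coupling strength k (simple Euler integration).
    Returns the phase difference theta1 - theta2 over time."""
    th1, th2 = 0.0, np.pi / 2
    diffs = np.empty(steps)
    for n in range(steps):
        d1 = w1 + k * np.sin(th2 - th1)
        d2 = w2 + k * np.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
        diffs[n] = th1 - th2
    return diffs

# With sufficient coupling (2k > |w1 - w2|) the pair phase-locks;
# with no coupling the phase difference drifts without bound.
locked = kuramoto_pair(w1=1.0, w2=1.2, k=0.5)
unlocked = kuramoto_pair(w1=1.0, w2=1.2, k=0.0)
```

The same qualitative question, whether two measured signals settle into a stable phase relation, is what BBI analyses ask of empirical EEG and kinematic time series.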
Monitoring the Depth of Anaesthesia
One of the current challenges in medicine is monitoring the patients’ depth of general anaesthesia (DGA). Accurate assessment of the depth of anaesthesia contributes to tailoring drug administration to the individual patient, thus preventing awareness or excessive anaesthetic depth and improving patient outcomes. In the past decade, there has been a significant increase in the number of studies on the development, comparison and validation of commercial devices that estimate the DGA by analyzing the electrical activity of the brain (i.e., evoked potentials or brain waves). In this paper we review the most frequently used sensors and mathematical methods for monitoring the DGA and their validation in clinical practice, and discuss the central question of whether these approaches can, compared to other conventional methods, reduce the risk of patient awareness during surgical procedures.
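One spectral measure used in commercial DGA monitors is spectral entropy: a broadband, noise-like EEG (typical of wakefulness) has high entropy, while a spectrum dominated by slow waves under deep anaesthesia has low entropy. The sketch below uses synthetic signals and illustrative band limits, not any vendor's actual algorithm:

```python
import numpy as np

def spectral_entropy(signal, fs, fmax=32.0):
    """Normalized Shannon entropy of the power spectrum up to fmax Hz.
    Returns a value in [0, 1]: near 1 for a flat, noise-like spectrum,
    near 0 for power concentrated at few frequencies."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = psd[(freqs > 0) & (freqs <= fmax)]
    p = psd / psd.sum()
    h = -np.sum(p * np.log(p + 1e-12))
    return h / np.log(len(p))

fs = 128
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(7)
awake_like = rng.normal(size=t.size)              # broadband, noise-like EEG
anaesthetized_like = np.sin(2 * np.pi * 2 * t)    # slow-wave-dominated EEG
```

A monitor would compute this index over sliding windows and map it onto a clinician-facing depth scale.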
SimBCI-A framework for studying BCI methods by simulated EEG
Brain-computer interface (BCI) methods are commonly studied using electroencephalogram (EEG) data recorded from human experiments. For understanding and developing BCI signal processing techniques, real data is costly to obtain and its composition is a priori unknown. The brain mechanisms generating the EEG are not directly observable and their states cannot be uniquely identified from the EEG; consequently, we have no generative ground truth for real data. In this paper, we propose a novel convenience framework called SimBCI to facilitate testing and studying BCI signal processing methods under simulated, controlled conditions. The framework can be used to generate artificial BCI data and to test classification pipelines with such data. Models and parameters on both the data generation and the signal processing side can be iterated over to examine the interplay of different combinations. The framework provides the first open-source implementations of several models and methods, and we invite researchers to contribute more advanced ones. The proposed system is not intended to replace human experiments. Instead, it can be used to discover hypotheses, study algorithms, educate about BCI, and debug signal processing pipelines of other BCI systems. The proposed framework is modular, extensible, and freely available as open source. It currently requires MATLAB.
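The workflow such a framework supports, generating data from a known ground-truth model and then scoring a classification pipeline on it, can be sketched compactly. SimBCI itself is MATLAB; the Python sketch below invents a trivial generative model (one class carries a 10 Hz rhythm, the other is pure noise) purely to show the simulate-then-evaluate loop:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 128, 200, 128

# Ground-truth generative model (known, unlike with real EEG):
# class 1 trials contain a 10 Hz rhythm, class 0 trials are pure noise.
t = np.arange(n_samples) / fs
y = rng.integers(0, 2, n_trials)
X = rng.normal(0.0, 1.0, (n_trials, n_samples))
X[y == 1] += 1.5 * np.sin(2 * np.pi * 10 * t)

# Candidate pipeline: log band power around 10 Hz -> logistic regression.
spec = np.abs(np.fft.rfft(X, axis=1))
freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
feat = np.log(spec[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1, keepdims=True))

acc = cross_val_score(LogisticRegression(), feat, y, cv=5).mean()
```

Because the generative model is known, a low accuracy here would unambiguously indict the pipeline rather than the data, which is exactly the debugging use case the abstract describes.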