An introduction to time-resolved decoding analysis for M/EEG
The human brain is constantly processing and integrating information in order
to make decisions and interact with the world, for tasks from recognizing a
familiar face to playing a game of tennis. These complex cognitive processes
require communication between large populations of neurons. The non-invasive
neuroimaging methods of electroencephalography (EEG) and magnetoencephalography
(MEG) provide population measures of neural activity with millisecond precision
that allow us to study the temporal dynamics of cognitive processes. However,
multi-sensor M/EEG data is inherently high dimensional, making it difficult to
parse important signal from noise. Multivariate pattern analysis (MVPA) or
"decoding" methods offer vast potential for understanding high-dimensional
M/EEG neural data. MVPA can be used to distinguish between different conditions
and map the time courses of various neural processes, from basic sensory
processing to high-level cognitive processes. In this chapter, we discuss the
practical aspects of performing decoding analyses on M/EEG data, as well as the
limitations of the method, and then turn to applications for understanding
representational dynamics in the human brain
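The time-resolved decoding approach described above can be sketched in a few lines: a classifier is trained and tested separately at each timepoint of the epoched data, yielding an accuracy time course. Below is a minimal illustration on simulated data using a nearest-centroid classifier; the array shapes, the simulated effect, and the classifier choice are assumptions for illustration, not taken from the chapter (in practice one would typically use a toolbox such as MNE-Python or CoSMoMVPA).

```python
import numpy as np

def time_resolved_decoding(X, y, n_folds=5):
    """Decode condition labels at each timepoint.

    X: (n_trials, n_sensors, n_times) epoched M/EEG data
    y: (n_trials,) binary condition labels (0 or 1)
    Returns cross-validated accuracy per timepoint.
    """
    rng = np.random.default_rng(0)
    n_trials, _, n_times = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    acc = np.zeros(n_times)
    for t in range(n_times):
        Xt = X[:, :, t]                       # sensor pattern at time t
        correct = 0
        for test_idx in folds:
            train_idx = np.setdiff1d(order, test_idx)
            # nearest-centroid classifier: one mean pattern per class
            c0 = Xt[train_idx][y[train_idx] == 0].mean(axis=0)
            c1 = Xt[train_idx][y[train_idx] == 1].mean(axis=0)
            d0 = np.linalg.norm(Xt[test_idx] - c0, axis=1)
            d1 = np.linalg.norm(Xt[test_idx] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test_idx] == 1))
        acc[t] = correct / n_trials
    return acc

# Simulated data: a class difference appears only after "stimulus onset" (t >= 20)
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 32, 50))
y = np.repeat([0, 1], 40)
X[y == 1, :, 20:] += 1.0                      # condition effect from t = 20 onward
acc = time_resolved_decoding(X, y)
print(acc[:20].mean(), acc[20:].mean())       # near chance before onset, high after
```

The resulting accuracy time course is what is typically plotted against chance level to map when condition information becomes available in the signal.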
A large-scale evaluation framework for EEG deep learning architectures
EEG is the most common signal source for noninvasive BCI applications. For
such applications, the EEG signal needs to be decoded and translated into
appropriate actions. A recently emerging EEG decoding approach is deep learning
with Convolutional or Recurrent Neural Networks (CNNs, RNNs), for which many
different architectures have already been published. Here we present a novel framework
for the large-scale evaluation of different deep-learning architectures on
different EEG datasets. This framework comprises (i) a collection of EEG
datasets currently including 100 examples (recording sessions) from six
different classification problems, (ii) a collection of different EEG decoding
algorithms, and (iii) a wrapper linking the decoders to the data as well as
handling structured documentation of all settings and (hyper-) parameters and
statistics, designed to ensure transparency and reproducibility. As an
application example, we used our framework to compare publicly available CNN
architectures: the Braindecode Deep4 ConvNet, the Braindecode Shallow ConvNet,
and two versions of EEGNet. We also show how our framework can be used to study
similarities and differences in the performance of different decoding methods
across tasks. We argue that the deep learning EEG framework described here
could help to tap the full potential of deep learning for BCI applications.
Comment: 7 pages, 3 figures, final version accepted for presentation at the IEEE
SMC 2018 conference
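The three components the abstract lists (a dataset collection, a decoder collection, and a wrapper that crosses them while logging all settings) can be sketched as follows. All class, field, and function names here are illustrative assumptions, not the actual API of the published framework or of Braindecode.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Run:
    """One decoder applied to one recording session, with settings logged."""
    dataset: str
    decoder: str
    hyperparams: dict
    accuracy: float

def evaluate_all(datasets, decoders, results_path="results.json"):
    """Cross every dataset with every decoder and log structured results.

    datasets: mapping name -> (X, y) recording session
    decoders: mapping name -> (fit_predict callable, hyperparams dict)
    """
    runs = []
    for dname, (X, y) in datasets.items():
        for cname, (fit_predict, hp) in decoders.items():
            y_pred = fit_predict(X, y, **hp)
            acc = sum(p == t for p, t in zip(y_pred, y)) / len(y)
            runs.append(Run(dname, cname, hp, acc))
    # structured documentation of all settings and results, for reproducibility
    with open(results_path, "w") as f:
        json.dump([asdict(r) for r in runs], f, indent=2)
    return runs

# Toy example: a trivial "majority class" decoder on a tiny fake session
def majority(X, y, **hp):
    most_common = max(set(y), key=y.count)
    return [most_common] * len(y)

runs = evaluate_all(
    datasets={"session01": ([[0.1], [0.2], [0.3]], [1, 1, 0])},
    decoders={"majority": (majority, {})},
)
print(runs[0].accuracy)
```

The point of the wrapper is that every (dataset, decoder, hyperparameter) combination leaves a machine-readable record, which is what makes large-scale comparisons across tasks reproducible.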
Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach
This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations
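The across-language prediction logic described above (train a pattern classifier on trials from one language, test it on trials from the other) can be sketched as follows. The data simulation, voxel counts, and nearest-centroid classifier are assumptions for illustration; the study itself used MVPA on real fMRI patterns, not this toy setup.

```python
import numpy as np

def cross_decode(X_train, y_train, X_test, y_test):
    """Train a nearest-centroid concept classifier on trials from one
    language and evaluate it on trials from the other language."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    # assign each test trial to the nearest class centroid
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    y_pred = classes[np.argmin(dists, axis=1)]
    return np.mean(y_pred == y_test)

# Simulated voxel patterns: each concept evokes a pattern shared across languages
rng = np.random.default_rng(0)
n_voxels = 100
concepts = np.repeat(np.arange(4), 20)        # 4 concepts, 20 trials each
patterns = rng.normal(size=(4, n_voxels))     # language-independent concept patterns
X_dutch = patterns[concepts] + rng.normal(scale=0.8, size=(80, n_voxels))
X_french = patterns[concepts] + rng.normal(scale=0.8, size=(80, n_voxels))
acc = cross_decode(X_dutch, concepts, X_french, concepts)
print(acc)    # well above the 0.25 chance level
```

Above-chance cross-language accuracy of this kind is the evidence pattern the study uses to argue for language-independent semantic representations.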
Parametric Representation of Tactile Numerosity in Working Memory
Estimated numerosity perception is processed in an approximate number system (ANS) that resembles the perception of a continuous magnitude. The ANS consists of a right-lateralized frontoparietal network comprising the lateral prefrontal cortex (LPFC) and the intraparietal sulcus. Although the ANS has been extensively investigated, only a few studies have focused on the mental representation of retained numerosity estimates. Specifically, the mechanisms underlying working memory (WM) for estimated numerosities remain unclear. Besides numerosities, WM studies of vibrotactile frequency, another form of abstract quantity, provide initial evidence that the right LPFC takes a central role in maintaining magnitudes. In the present fMRI multivariate pattern analysis study, we designed a delayed match-to-numerosity paradigm to test which brain regions retain approximate numerosity memoranda. In line with parametric WM results, our study found numerosity-specific WM representations in the right LPFC as well as in the supplementary motor area and the left premotor cortex extending into the superior frontal gyrus, thus bridging the gap in the abstract-quantity WM literature
Assessing the feasibility of online SSVEP decoding in human walking using a consumer EEG headset
Background: Bridging the gap between laboratory brain-computer interface (BCI) demonstrations and real-life applications has gained increasing attention in translational neuroscience. An urgent need is to explore the feasibility of using a low-cost, easy-to-use electroencephalogram (EEG) headset for monitoring individuals' EEG signals in their natural head/body positions and movements. This study aimed to assess the feasibility of using a consumer-level EEG headset to realize an online steady-state visual-evoked potential (SSVEP)-based BCI during human walking.
Methods: This study adopted a 14-channel Emotiv EEG headset to implement a four-target online SSVEP decoding system, and included treadmill walking at speeds of 0.45, 0.89, and 1.34 meters per second (m/s) to induce walking locomotion. Seventeen participants were instructed to perform the online BCI tasks while standing or walking on the treadmill. To maintain a constant viewing distance to the visual targets, participants held the hand-grip of the treadmill during the experiment. Along with online BCI performance, the concurrent SSVEP signals were recorded for offline assessment.
Results: Despite walking-related attenuation of SSVEPs, the online BCI obtained an information transfer rate (ITR) over 12 bits/min during slow walking (below 0.89 m/s).
Conclusions: SSVEP-based BCI systems are deployable to users in treadmill walking that mimics natural walking, rather than only in highly-controlled laboratory settings. This study considerably promotes the use of a consumer-level EEG headset towards real-life BCI applications
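The information transfer rate quoted in the Results is the standard Wolpaw ITR for an N-target BCI: bits per selection are log2(N) + P·log2(P) + (1−P)·log2((1−P)/(N−1)) for selection accuracy P, scaled by selections per minute. The example numbers below are illustrative, not the study's actual data.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate for an n-target BCI.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    scaled to bits per minute by the selection rate.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy: log2(N) bits per selection
    elif p <= 1.0 / n:
        bits = 0.0                   # at or below chance: no information transferred
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / selection_time_s

# A hypothetical four-target SSVEP system selecting every 6 s at 85% accuracy:
print(round(itr_bits_per_min(4, 0.85, 6.0), 2))   # roughly 11.5 bits/min
```

This makes the reported figure concrete: exceeding 12 bits/min with four targets requires both high selection accuracy and a short selection time.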