Covariance-domain Dictionary Learning for Overcomplete EEG Source Identification
We propose an algorithm targeting the identification of more sources than
channels for electroencephalography (EEG). Our overcomplete source
identification algorithm, Cov-DL, leverages dictionary learning methods applied
in the covariance-domain. Assuming that EEG sources are uncorrelated within
moving time-windows and the scalp mixing is linear, the forward problem can be
transferred to the covariance domain which has higher dimensionality than the
original EEG channel domain. This allows for learning the overcomplete mixing
matrix that generates the scalp EEG even when more sources than sensors may be
active in any time segment, i.e., when the sources are non-sparse.
This contrasts with straightforward dictionary learning methods, which rely on
a sparsity assumption that is not satisfied by low-density EEG systems. We
present two different learning strategies for
Cov-DL, determined by the size of the target mixing matrix. We demonstrate that
Cov-DL outperforms existing overcomplete ICA algorithms under various scenarios
of EEG simulations and real EEG experiments.
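The dimensionality argument behind Cov-DL can be sketched numerically. Under the stated assumptions (linear mixing, sources uncorrelated within a window), the channel covariance is a quadratic function of the mixing matrix, and its upper triangle has M(M+1)/2 entries versus M channels. The matrices and sizes below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 8, 16                             # 8 channels, 16 sources (overcomplete)
A = rng.standard_normal((M, N))          # hypothetical mixing matrix

# Sources uncorrelated within the window: diagonal source covariance
powers = rng.uniform(0.5, 2.0, size=N)

# Channel covariance implied by the linear forward model x = A s
C = A @ np.diag(powers) @ A.T

# Vectorize the upper triangle: the covariance-domain "observation"
iu = np.triu_indices(M)
y = C[iu]

print(M, len(y))   # 8 channel dimensions vs 36 covariance-domain dimensions
```

With 8 channels the covariance domain has 36 dimensions, so a dictionary with 16 columns can in principle be identified there even though it is overcomplete in the channel domain.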
ICLabel: An automated electroencephalographic independent component classifier, dataset, and website
The electroencephalogram (EEG) provides a non-invasive, minimally
restrictive, and relatively low cost measure of mesoscale brain dynamics with
high temporal resolution. Although signals recorded in parallel by multiple,
near-adjacent EEG scalp electrode channels are highly correlated and combine
signals from many different sources, biological and non-biological, independent
component analysis (ICA) has been shown to isolate the various source generator
processes underlying those recordings. Independent components (ICs) found by ICA
decomposition can be manually inspected, selected, and interpreted, but doing
so requires both time and practice as ICs have no particular order or intrinsic
interpretations and therefore require further study of their properties.
Alternatively, sufficiently-accurate automated IC classifiers can be used to
classify ICs into broad source categories, speeding the analysis of EEG studies
with many subjects and enabling the use of ICA decomposition in near-real-time
applications. While many such classifiers have been proposed recently, this
work presents the ICLabel project, comprising (1) an IC dataset containing
spatiotemporal measures for over 200,000 ICs from more than 6,000 EEG
recordings, (2) a website for collecting crowdsourced IC labels and educating
EEG researchers and practitioners about IC interpretation, and (3) the
automated ICLabel classifier. The classifier improves upon existing methods in
two ways: by improving the accuracy of the computed label estimates and by
enhancing its computational efficiency. The ICLabel classifier outperforms or
performs comparably to the previous best publicly available method for all
measured IC categories while computing those labels ten times faster than that
classifier, as shown in a rigorous comparison against all other publicly
available EEG IC classifiers.
Comment: Intended for NeuroImage. Updated from version one with minor editorial and figure changes.
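The classifier's job can be pictured as mapping each IC's feature vector to a probability distribution over the broad source categories ICLabel reports (brain, muscle, eye, heart, line noise, channel noise, other). The single softmax layer below is a toy stand-in, not the published network; the feature dimension and weights are made up for illustration:

```python
import numpy as np

# ICLabel's seven broad IC categories
CATEGORIES = ["Brain", "Muscle", "Eye", "Heart",
              "Line Noise", "Channel Noise", "Other"]

def classify_ic(features, W, b):
    """Map an IC feature vector to per-category probabilities via a
    softmax layer (a schematic stand-in for the real classifier)."""
    scores = W @ features + b
    e = np.exp(scores - scores.max())     # numerically stable softmax
    return dict(zip(CATEGORIES, e / e.sum()))

rng = np.random.default_rng(1)
features = rng.standard_normal(32)        # toy spatiotemporal measures
W = 0.1 * rng.standard_normal((len(CATEGORIES), 32))
b = np.zeros(len(CATEGORIES))

probs = classify_ic(features, W, b)
print(max(probs, key=probs.get))          # most likely category for this IC
```

In practice one would threshold or argmax these probabilities to select, say, brain ICs for further analysis and reject artifactual ones.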
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly used
training algorithm known as Contrastive Divergence (CD) are based on discrete
updates and exact arithmetic, which do not map directly onto a dynamical neural
substrate. Here, we present an event-driven variation of CD to train an RBM
constructed with Integrate & Fire (I&F) neurons that is constrained by the
limitations of existing and near future neuromorphic hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike Time Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
Comment: (Under review - …
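For reference, the discrete, synchronous CD update that the event-driven spiking variant replaces can be sketched for a small binary RBM. This is the standard CD-1 algorithm, not the paper's spiking implementation; network sizes and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))  # weights
a = np.zeros(n_vis)                             # visible biases
b = np.zeros(n_hid)                             # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One step of standard (discrete) CD-1 on a binary RBM. The paper
    replaces these exact, synchronous steps with recurrent spiking
    dynamics, and the weight update with online STDP."""
    global W, a, b
    # Positive phase: sample hidden units given the data
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(n_hid) < ph0).astype(float)
    # Negative phase: one Gibbs step of reconstruction
    pv1 = sigmoid(h0 @ W.T + a)
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    # Contrastive update: data correlations minus model correlations
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)

v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(10):
    cd1_update(v)
print(W.shape)
```

The event-driven scheme in the paper effectively performs the positive and negative phases through the network's own spiking activity, with STDP accumulating the correlation difference asynchronously instead of via the explicit outer products above.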