Neural Connectivity with Hidden Gaussian Graphical State-Model
Noninvasive procedures for estimating neural connectivity have come under
question. Theoretical models hold that the electromagnetic field registered at
external sensors is elicited by currents in neural space. What we observe at
the sensor space, however, is a superposition of fields projected from the
whole gray matter. This is the source of a major pitfall of noninvasive
electrophysiology: distorted reconstruction of neural activity and of its
connectivity, known as leakage. Current methods have been shown to produce
incorrect connectomes. Related to this incorrect connectivity modelling, they
disregard both Systems Theory and Bayesian Information Theory. We introduce a
new formalism that addresses these issues, the Hidden Gaussian Graphical
State-Model (HIGGS): a neural Gaussian Graphical Model (GGM) hidden by the
observation equation of magneto/electroencephalographic (MEEG) signals. HIGGS
is equivalent to a frequency-domain Linear State Space Model (LSSM) with a
sparse connectivity prior. The mathematical contribution here is the theory
for high-dimensional, frequency-domain HIGGS solvers. We demonstrate that
HIGGS can attenuate the leakage effect in the most critical case: the
distortion of the EEG signal due to head volume conduction heterogeneities.
Its application in EEG is illustrated with connectivity patterns retrieved
from human Steady State Visual Evoked Potentials (SSVEP). We provide, for the
first time, confirmatory evidence for noninvasive procedures of neural
connectivity via concurrent EEG and Electrocorticography (ECoG) recordings in
monkeys. Open-source packages are freely available online to reproduce the
results presented in this paper and to analyze external MEEG databases.
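The leakage problem this abstract targets can be illustrated with a toy simulation (the lead-field matrix and all numbers below are our own assumptions for illustration, not the HIGGS model itself): two statistically independent sources appear strongly correlated at the sensors purely because each sensor picks up a mixture of both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not the HIGGS solver): two independent neural
# sources projected to two sensors through an overlapping "lead field".
n = 5000
sources = rng.standard_normal((2, n))      # independent source activity
L = np.array([[1.0, 0.6],                  # each sensor sees a mixture
              [0.6, 1.0]])                 # of both sources (leakage)
sensors = L @ sources + 0.1 * rng.standard_normal((2, n))

src_corr = np.corrcoef(sources)[0, 1]      # near zero by construction
sen_corr = np.corrcoef(sensors)[0, 1]      # inflated by volume conduction
print(round(abs(src_corr), 2), round(sen_corr, 2))
```

The spurious sensor-level correlation is exactly the kind of "incorrect connectome" the abstract warns about; HIGGS addresses it by inferring the sparse source-level GGM behind the observation equation rather than interpreting sensor statistics directly.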
Dynamic Decomposition of Spatiotemporal Neural Signals
Neural signals are characterized by rich temporal and spatiotemporal dynamics
that reflect the organization of cortical networks. Theoretical research has
shown how neural networks can operate at different dynamic ranges that
correspond to specific types of information processing. Here we present a data
analysis framework that uses a linearized model of these dynamic states in
order to decompose the measured neural signal into a series of components that
capture both rhythmic and non-rhythmic neural activity. The method is based on
stochastic differential equations and Gaussian process regression. Through
computer simulations and analysis of magnetoencephalographic data, we
demonstrate the efficacy of the method in identifying meaningful modulations of
oscillatory signals corrupted by structured temporal and spatiotemporal noise.
These results suggest that the method is particularly suitable for the analysis
and interpretation of complex temporal and spatiotemporal neural signals.
Tensor Analysis and Fusion of Multimodal Brain Images
Current high-throughput data acquisition technologies probe dynamical systems
with different imaging modalities, generating massive data sets at different
spatial and temporal resolutions, posing challenging problems in multimodal data
fusion. A case in point is the attempt to parse out the brain structures and
networks that underpin human cognitive processes by analysis of different
neuroimaging modalities (functional MRI, EEG, NIRS etc.). We emphasize that the
multimodal, multi-scale nature of neuroimaging data is well reflected by a
multi-way (tensor) structure where the underlying processes can be summarized
by a relatively small number of components or "atoms". We introduce
Markov-Penrose diagrams - an integration of Bayesian DAG and tensor network
notation in order to analyze these models. These diagrams not only clarify
matrix and tensor EEG and fMRI time/frequency analysis and inverse problems,
but also help understand multimodal fusion via Multiway Partial Least Squares
and Coupled Matrix-Tensor Factorization. We show here, for the first time, that
Granger causal analysis of brain networks is a tensor regression problem, thus
allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI
recordings shows the potential of the methods and suggests their use in other
scientific domains.
Comment: 23 pages, 15 figures, submitted to Proceedings of the IEEE
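The "atomic" decomposition the abstract refers to can be illustrated with a minimal rank-2 CP (canonical polyadic) decomposition of a space x time x frequency tensor via alternating least squares (a generic sketch on synthetic data, not the paper's coupled matrix-tensor factorization):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy low-rank 3-way tensor (dimensions and rank are our assumptions).
I, J, K, R = 6, 7, 8, 2
A0, B0, C0 = (rng.standard_normal((d, R)) for d in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)   # noise-free rank-2 tensor

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product, matching the unfolding above."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

# Alternating least squares: update each factor holding the others fixed.
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
for _ in range(100):
    A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
    B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
    C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)

Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
print(round(rel_err, 6))
```

Each recovered column triple (a_r, b_r, c_r) is one "atom": a spatial map, a time course, and a spectral profile whose outer product summarizes one underlying process, which is the sense in which the paper speaks of an atomic decomposition of brain networks.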
Development of a Group Dynamic Functional Connectivity Pipeline for Magnetoencephalography Data and its Application to the Human Face Processing Network
Since its inception, functional neuroimaging has focused on identifying sources of neural activity. Recently, interest has turned to the analysis of connectivity between neural sources in dynamic brain networks. This new interest calls for the development of appropriate investigative techniques. A problem occurs in connectivity studies when the differing networks of individually analyzed subjects must be reconciled. One solution, the estimation of group models, has become common in fMRI, but is largely untried with electromagnetic data. Additionally, the assumption of stationarity has crept into the field, precluding the analysis of dynamic systems. Group extensions are applied to the sparse irMxNE localizer of MNE-Python. Spectral estimation requires individual source trials, and a multivariate multiple regression procedure is established to accomplish this based on the irMxNE output. A program based on the Fieldtrip software is created to estimate conditional Granger causality spectra in the time-frequency domain based on these trials. End-to-end simulations support the correctness of the pipeline with single and multiple subjects. Group-irMxNE makes no attempt to generalize a solution between subjects with clearly distinct patterns of source connectivity, but shows signs of doing so when subjects' patterns of activity are similar. The pipeline is applied to MEG data from the facial emotion protocol in an attempt to validate the Adolphs model. Both irMxNE and Group-irMxNE place numerous sources during post-stimulus periods of high evoked power but neglect those of low power. This identifies a conflict between power-based localizations and information-centric processing models. It is also noted that neural processing is more diffuse than the neatly specified Adolphs model indicates. Individual and group results generally support early processing in the occipital, parietal, and temporal regions, but later-stage frontal localizations are missing.
The morphing of individual subjects' brain topology to a common source space is currently inoperable in MNE. MEG data is therefore co-registered directly onto an average brain, resulting in loss of accuracy. For this as well as reasons related to uneven power and computational limitations, the early stages of the Adolphs model are only generally validated. Encouraging results indicate, however, that genuinely non-stationary group connectivity estimates are produced.
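The Granger causality at the core of this pipeline can be illustrated in its simplest bivariate, time-domain form (a sketch on simulated data with a hypothetical lag-1 coupling, not the Fieldtrip spectral estimator used in the thesis): x Granger-causes y if x's past reduces the error of predicting y beyond what y's own past achieves.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated pair where x drives y at lag 1 (coupling values are ours).
T = 4000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.3 * y[t - 1] + 0.5 * rng.standard_normal()

p = 1  # autoregressive model order

def resid_var(target, regressors):
    """OLS residual variance of target regressed on lag-1 regressors."""
    Z = np.column_stack([r[:-p] for r in regressors])
    t = target[p:]
    beta, *_ = np.linalg.lstsq(Z, t, rcond=None)
    return np.var(t - Z @ beta)

# Granger causality: log ratio of restricted to full residual variance.
gc_x_to_y = np.log(resid_var(y, [y]) / resid_var(y, [y, x]))
gc_y_to_x = np.log(resid_var(x, [x]) / resid_var(x, [x, y]))
print(round(gc_x_to_y, 2), round(gc_y_to_x, 2))
```

The measure is large in the driving direction and near zero in the reverse direction; the pipeline's conditional, time-frequency version generalizes this same variance-ratio idea across frequencies and additional conditioning signals.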
The Surface Laplacian Technique in EEG: Theory and Methods
This paper reviews the method of surface Laplacian differentiation to study
EEG. We focus on topics that are helpful for a clear understanding of the
underlying concepts and its efficient implementation, which is especially
important for EEG researchers unfamiliar with the technique. The popular
methods of finite difference and splines are reviewed in detail. The former has
the advantage of simplicity and low computational cost, but its estimates are
prone to a variety of errors due to discretization. The latter eliminates all
issues related to discretization and incorporates a regularization mechanism to
reduce spatial noise, but at the cost of increasing mathematical and
computational complexity. These and several other issues deserving further
development are highlighted, some of which we address to the extent possible.
Here we develop a set of discrete approximations for Laplacian estimates at
peripheral electrodes and a possible solution to the problem of multiple-frame
regularization. We also provide the mathematical details of finite difference
approximations that are missing in the literature, and discuss the problem of
computational performance, which is particularly important in the context of
EEG splines where data sets can be very large. Along this line, the matrix
representation of the surface Laplacian operator is carefully discussed and
some figures are given illustrating the advantages of this approach. In the
final remarks, we briefly sketch a possible way to incorporate finite-size
electrodes into Laplacian estimates that could guide further developments.
Comment: 43 pages, 8 figures
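The finite-difference approach reviewed above can be sketched with the standard five-point stencil on a hypothetical regular electrode grid (real montages are irregular, so this conveys only the idea; the grid and potential below are our own toy example):

```python
import numpy as np

def surface_laplacian_fd(V, h=1.0):
    """Five-point finite-difference Laplacian on a regular grid of
    potentials V with electrode spacing h; peripheral electrodes are
    left undefined (NaN), illustrating the boundary problem the paper
    addresses with special peripheral approximations."""
    lap = np.full_like(V, np.nan, dtype=float)
    lap[1:-1, 1:-1] = (V[:-2, 1:-1] + V[2:, 1:-1] +
                       V[1:-1, :-2] + V[1:-1, 2:] -
                       4 * V[1:-1, 1:-1]) / h**2
    return lap

# Sanity check: for V = x^2 + y^2 the Laplacian is exactly 4 everywhere,
# and the five-point stencil is exact for quadratics.
xs, ys = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing='ij')
V = xs**2 + ys**2
lap = surface_laplacian_fd(V)
print(np.allclose(lap[1:-1, 1:-1], 4.0))
```

The NaN border makes concrete why the paper devotes special attention to discrete approximations at peripheral electrodes: the naive stencil simply has no estimate there.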