
    Inverse Modeling for MEG/EEG data

    We provide an overview of the state of the art in mathematical methods used to reconstruct brain activity from neurophysiological data. After a brief introduction to the mathematics of the forward problem, we discuss standard and recently proposed regularization methods, as well as Monte Carlo techniques for Bayesian inference. We classify the inverse methods based on the underlying source model and discuss their advantages and disadvantages. Finally, we describe an application to the pre-surgical evaluation of epileptic patients.
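
    In the linear setting, many of the regularization methods surveyed reduce to Tikhonov-type estimators applied to the forward model b = L j + noise, where L is the lead-field matrix. The snippet below is a minimal numerical sketch of the classical l2 (minimum-norm) solution; the lead field L, data b, and regularization weight lam are generic placeholders, not quantities from the paper.

import numpy as np

def minimum_norm_estimate(L, b, lam):
    """l2-regularized (Tikhonov / minimum-norm) inverse estimate.

    L   : (n_sensors, n_sources) lead-field matrix from the forward model
    b   : (n_sensors,) measured MEG/EEG data at one time point
    lam : regularization parameter balancing data fit against source norm
    """
    n_sensors = L.shape[0]
    # j_hat = L^T (L L^T + lam I)^{-1} b  -- the classical minimum-norm solution
    gram = L @ L.T + lam * np.eye(n_sensors)
    return L.T @ np.linalg.solve(gram, b)

# Toy usage with random numbers (illustration only).
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))   # 32 sensors, 500 candidate sources
b = rng.standard_normal(32)
j_hat = minimum_norm_estimate(L, b, lam=1e-2)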

    Kalman-filter-based EEG source localization

    This thesis uses the Kalman filter (KF) to solve the electroencephalographic (EEG) inverse problem and image the underlying neuronal sources. Chapter 1 introduces EEG source localization and the KF and discusses how it can solve the inverse problem. Chapter 2 introduces an EEG inverse solution using a spatially whitened KF (SWKF) to reduce the computational burden. Likelihood maximization is used to fit spatially uniform neural model parameters to simulated and clinical EEGs. The SWKF accurately reconstructs source dynamics. Filter performance is analyzed by computing the statistical properties of the innovations and by identifying spatial variations in performance that could be improved by using spatially varying parameters. Chapter 3 investigates the SWKF via one-dimensional (1D) simulations. Motivated by Chapter 2, two model parameters are given Gaussian spatial profiles to better reflect brain dynamics. Constrained optimization ensures the estimated parameters have clear biophysical interpretations. Inverse solutions are also computed using the optimal linear KF. Both filters produce accurate state estimates. Spatially varying parameters are correctly identified from datasets with transient dynamics, but estimates for driven datasets are degraded by the unmodeled drive term. Chapter 4 treats the whole-brain EEG inverse problem and applies features of the 1D simulations to the SWKF of Chapter 2. Spatially varying parameters are used to model spatial variation of the alpha rhythm. The simulated EEG exhibits wave-like patterns and spatially varying dynamics. As in Chapter 3, optimization constrains model parameters to appropriate ranges. State estimation is again reliable for simulated and clinical EEG, although spatially varying parameters do not improve accuracy and parameter estimation is unreliable, with wave velocity underestimated. Contributing factors are identified and approaches to overcome them are discussed. Chapter 5 summarizes the main findings and outlines future work.
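
    For readers unfamiliar with the filter itself, the sketch below shows one predict/update cycle of a standard linear Kalman filter for a state-space source model x_t = A x_{t-1} + w_t, y_t = H x_t + v_t. All matrices are generic placeholders; the thesis's spatially whitened variant and its neural-model parameterization are not reproduced here.

import numpy as np

def kalman_step(x, P, y, A, H, Q, R):
    """One predict/update cycle of a generic linear Kalman filter.

    Generic linear-Gaussian sketch, not the thesis's spatially whitened filter.
    x, P : prior state mean and covariance (e.g., source amplitudes)
    y    : current measurement (one EEG sample across channels)
    A, Q : state-transition matrix and process-noise covariance
    H, R : observation matrix (lead field) and measurement-noise covariance
    """
    # Predict step
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update step
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    innovation = y - H @ x_pred           # its statistics diagnose filter performance
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new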

    A statistical approach to the inverse problem in magnetoencephalography

    Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed: there are infinitely many mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. Here, we reformulate the MEG inverse problem by considering time-varying locations for the sources and their electrical moments, and we model their time evolution using a state space model. Based on our predictive model, we investigate the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time, rather than fitting fixed source parameters. Our new model is more realistic than common models and allows us to estimate the variation of the source strength, orientation, and position. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, our new methods work in this setting without needing to tune a high-dimensional transition kernel, which would be very costly. The dimensionality of the unknown parameters is extremely large, and the size of the data is even larger. We use Parallel Virtual Machine (PVM) to speed up the computation. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/14-AOAS716.
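
    As a rough illustration of sequential importance sampling applied to a time-varying source state, one bootstrap-filter step with a random-walk state model and a Gaussian measurement likelihood is sketched below. The paper's two proposed samplers are more elaborate; the forward function, noise levels, and resampling rule here are illustrative assumptions.

import numpy as np

def sis_step(particles, weights, y, forward, sigma_state, sigma_obs, rng):
    """One sequential-importance-sampling step with optional resampling.

    Illustrative bootstrap-filter sketch, not the paper's samplers.
    particles : (n_particles, state_dim) hypothesized source states
                (e.g., dipole positions and moments)
    weights   : (n_particles,) normalized importance weights
    y         : current multichannel measurement
    forward   : maps a source state to predicted sensor data
    """
    n, d = particles.shape
    # Propagate each particle through a random-walk state model.
    particles = particles + sigma_state * rng.standard_normal((n, d))
    # Reweight by the Gaussian measurement likelihood.
    pred = np.array([forward(p) for p in particles])
    log_w = -0.5 * np.sum((y - pred) ** 2, axis=1) / sigma_obs ** 2
    weights = weights * np.exp(log_w - log_w.max())
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights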

    MEG and fMRI Fusion for Non-Linear Estimation of Neural and BOLD Signal Changes

    The combined analysis of magnetoencephalography (MEG)/electroencephalography and functional magnetic resonance imaging (fMRI) measurements can improve the description of the dynamical and spatial properties of brain activity. In this paper we empirically demonstrate this improvement using simulated and recorded task-related MEG and fMRI activity. Neural activity estimates were derived using a dynamic Bayesian network with continuous real-valued parameters by means of a sequential Monte Carlo technique. In synthetic data, we show that MEG and fMRI fusion improves estimation of the indirectly observed neural activity and smooths tracking of the blood oxygenation level dependent (BOLD) response. In recordings of task-related neural activity, the combination of MEG and fMRI produces a result with a greater signal-to-noise ratio, confirming the expectation arising from the nature of the experiment. The highly non-linear model of the BOLD response poses a difficult inference problem for neural activity estimation; computational requirements are also high due to the time and space complexity. We show that joint analysis of the data improves the system's behavior by stabilizing the system of differential equations and by requiring fewer computational resources.
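
    The fusion described above amounts to scoring each hypothesis about the hidden neural activity against both modalities at once. A very small sketch of that joint weighting (not the authors' dynamic Bayesian network, whose BOLD observation model is non-linear and considerably richer) is given below; the Gaussian noise levels are illustrative assumptions.

import numpy as np

def joint_log_likelihood(meg_obs, bold_obs, meg_pred, bold_pred,
                         sigma_meg, sigma_bold):
    """Combine MEG and fMRI evidence about the same hidden neural state.

    Illustrative Gaussian noise model, not the paper's observation equations.
    Each modality contributes a Gaussian log-likelihood; fusion simply adds
    them, which is how a sequential Monte Carlo scheme would reweight its
    particles by both data streams at each time step.
    """
    ll_meg = -0.5 * np.sum((meg_obs - meg_pred) ** 2) / sigma_meg ** 2
    ll_bold = -0.5 * np.sum((bold_obs - bold_pred) ** 2) / sigma_bold ** 2
    return ll_meg + ll_bold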

    The impact of MEG source reconstruction method on source-space connectivity estimation: A comparison between minimum-norm solution and beamforming.

    Despite numerous important contributions, the investigation of brain connectivity with magnetoencephalography (MEG) still faces multiple challenges. One critical aspect of source-level connectivity, largely overlooked in the literature, is the putative effect of the choice of the inverse method on the subsequent cortico-cortical coupling analysis. We set out to investigate the impact of three inverse methods on source coherence detection using simulated MEG data. To this end, thousands of randomly located pairs of sources were created. Several parameters were manipulated, including inter- and intra-source correlation strength, source size, and spatial configuration. The simulated pairs of sources were then used to generate sensor-level MEG measurements at varying signal-to-noise ratios (SNR). Next, the source-level power and coherence maps were calculated using three methods: (a) the L2 Minimum-Norm Estimate (MNE), (b) Linearly Constrained Minimum Variance (LCMV) beamforming, and (c) Dynamic Imaging of Coherent Sources (DICS) beamforming. The performance of the methods was evaluated using Receiver Operating Characteristic (ROC) curves. The results indicate that beamformers perform better than MNE for coherence reconstructions if the interacting cortical sources consist of point-like sources. On the other hand, MNE provides better connectivity estimation than beamformers if the interacting sources are simulated as extended cortical patches, where each patch consists of dipoles with identical time series (high intra-patch coherence). However, the performance of the beamformers for interacting patches improves substantially if each patch of active cortex is simulated with only partly coherent time series (partial intra-patch coherence). These results demonstrate that the choice of the inverse method impacts the results of MEG source-space coherence analysis, and that the optimal choice of the inverse solution depends on the spatial and synchronization profile of the interacting cortical sources. The insights revealed here can guide method selection and help improve data interpretation regarding MEG connectivity estimation.
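
    For reference, the LCMV beamformer in this comparison builds, for each candidate source with lead-field vector l, a spatial filter w = C^{-1} l / (l^T C^{-1} l) from the sensor covariance C. The sketch below shows that construction for a fixed-orientation source; the arrays are generic placeholders rather than the paper's simulation setup.

import numpy as np

def lcmv_filter(C, l):
    """Linearly Constrained Minimum Variance (LCMV) spatial filter.

    Generic sketch, not the paper's simulation pipeline.
    C : (n_sensors, n_sensors) sensor-level data covariance
    l : (n_sensors,) lead-field vector of one fixed-orientation source
    Returns w such that w @ sensor_data estimates the source time series
    with unit gain (w @ l == 1) while minimizing output variance.
    """
    Ci_l = np.linalg.solve(C, l)      # C^{-1} l
    return Ci_l / (l @ Ci_l)

def lcmv_source_power(C, l):
    """Source variance estimate under the LCMV filter: 1 / (l^T C^{-1} l)."""
    return 1.0 / (l @ np.linalg.solve(C, l))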

    Sparse EEG Source Localization Using Bernoulli Laplacian Priors

    Source localization in electroencephalography has received an increasing amount of interest in the last decade. Solving the underlying ill-posed inverse problem usually requires choosing an appropriate regularization. The usual l2 norm has been considered and provides solutions with low computational complexity. However, in several situations, realistic brain activity is believed to be focused in a few focal areas. In these cases, the l2 norm is known to overestimate the activated spatial areas. One solution to this problem is to promote sparse solutions, for instance based on the l1 norm, which are easy to handle with optimization techniques. In this paper, we consider the use of an l0 + l1 norm to enforce sparse source activity (by ensuring the solution has few nonzero elements) while regularizing the nonzero amplitudes of the solution. More precisely, the l0 pseudo-norm handles the positions of the nonzero elements while the l1 norm constrains the values of their amplitudes. We use a Bernoulli–Laplace prior to introduce this combined l0 + l1 norm in a Bayesian framework. The proposed Bayesian model is shown to favor sparsity while jointly estimating the model hyperparameters using a Markov chain Monte Carlo sampling technique. We apply the model to both simulated and real EEG data, showing that the proposed method provides better results than the l2- and l1-norm regularizations in the presence of pointwise sources. A comparison with a recent method based on multiple sparse priors is also conducted.
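
    The Bernoulli–Laplace prior above is sampled with a dedicated MCMC algorithm, which is not reproduced here. As a simpler, hedged illustration of why sparsity-promoting penalties concentrate activity in a few sources, the sketch below solves the related l1-regularized (LASSO-type) inverse problem with plain iterative soft thresholding; the lead field, data, and regularization weight are placeholders.

import numpy as np

def ista_sparse_inverse(L, b, lam, n_iter=200):
    """l1-regularized source estimate via iterative soft thresholding (ISTA).

    Illustrative sketch only; the paper uses a Bernoulli-Laplace MCMC sampler.
    Minimizes 0.5 * ||b - L j||^2 + lam * ||j||_1, which drives most source
    amplitudes exactly to zero, unlike the l2 (minimum-norm) solution.
    """
    step = 1.0 / np.linalg.norm(L, 2) ** 2   # 1 / Lipschitz constant of the gradient
    j = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = L.T @ (L @ j - b)             # gradient of the data-fit term
        z = j - step * grad
        j = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return j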