
    Overlearning in marginal distribution-based ICA: analysis and solutions

    The present paper is written as a word of caution, with users of independent component analysis (ICA) in mind, about overlearning phenomena that are often observed. We consider two types of overlearning, typical of ICA based on higher-order statistics. These algorithms can be seen as maximising the negentropy of the source estimates. The first kind of overlearning results in the generation of spike-like signals if there are not enough samples in the data or a considerable amount of noise is present. It is argued that, if the data has a power spectrum characterised by a 1/f curve, we face a more severe problem, one that cannot be solved within the strict ICA model. This type of overlearning is better characterised by bumps than by spikes. Both overlearning types are demonstrated on artificial signals as well as on magnetoencephalograms (MEG). Several methods are suggested to circumvent both types, either by making the estimation of the ICA model more robust or by including further modelling of the data.
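
    The spike-type overlearning above is easy to reproduce. Below is a minimal sketch, not the authors' code, under the assumption that scikit-learn's FastICA can stand in for a negentropy-maximising ICA algorithm: applied to pure noise with deliberately few samples, the estimated components tend to become spike-like, which shows up as large positive excess kurtosis even though the data contain no genuine super-Gaussian sources.

        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_channels, n_samples = 20, 150                   # deliberately few samples

        # Pure noise: there are no true independent sources to recover.
        X = rng.standard_normal((n_samples, n_channels))

        ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
        S = ica.fit_transform(X)                          # estimated "sources"

        # Spike-like overlearned components typically show large excess kurtosis.
        print("median excess kurtosis:", np.median(kurtosis(S, axis=0)))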

    Modeling sparse connectivity between underlying brain sources for EEG/MEG

    We propose a novel technique to assess functional brain connectivity in EEG/MEG signals. Our method, called Sparsely-Connected Sources Analysis (SCSA), can overcome the problem of volume conduction by modeling neural data in a novel way, with the following ingredients: (a) the EEG is assumed to be a linear mixture of correlated sources following a multivariate autoregressive (MVAR) model, (b) the demixing is estimated jointly with the source MVAR parameters, and (c) overfitting is avoided by using the Group Lasso penalty. This approach allows us to extract the appropriate level of cross-talk between the extracted sources, and in this manner we obtain a sparse, data-driven model of functional connectivity. We demonstrate the usefulness of SCSA on simulated data and compare it to a number of existing algorithms, with excellent results.
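
    As a concrete reference point for ingredient (a), the sketch below simulates the assumed generative model only: sources following a sparse MVAR process are mixed through a forward matrix to give the sensor signals. All sizes and coefficients are illustrative assumptions, and SCSA itself, i.e. the joint estimation of the demixing and the MVAR parameters under the Group Lasso penalty, is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        n_src, n_sens, p, T = 4, 16, 2, 2000      # sources, sensors, MVAR order, samples

        # Sparse MVAR coefficients: stable self-connections plus a single
        # cross-connection from source 0 to source 1 at lag 1.
        B = np.zeros((p, n_src, n_src))
        for k in range(p):
            B[k] += np.diag(rng.uniform(0.2, 0.4, n_src))
        B[0, 1, 0] = 0.5

        s = np.zeros((T, n_src))
        for t in range(p, T):
            s[t] = sum(B[k] @ s[t - k - 1] for k in range(p)) \
                   + rng.standard_normal(n_src)

        M = rng.standard_normal((n_sens, n_src))  # forward (mixing) matrix
        x = s @ M.T                               # simulated sensor channels
        print(x.shape)                            # (2000, 16): the data SCSA would start from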

    Regional coherence evaluation in mild cognitive impairment and Alzheimer's disease based on adaptively extracted magnetoencephalogram rhythms

    This study assesses the connectivity alterations caused by Alzheimer's disease (AD) and mild cognitive impairment (MCI) in magnetoencephalogram (MEG) background activity. Moreover, a novel methodology to adaptively extract brain rhythms from the MEG is introduced. This methodology relies on the ability of empirical mode decomposition to isolate local signal oscillations and on constrained blind source separation to extract the activity that jointly represents a subset of channels. Inter-regional MEG connectivity was analysed for 36 AD, 18 MCI and 26 control subjects in the δ, θ, α and β bands over left and right central, anterior, lateral and posterior regions with the magnitude squared coherence, c(f). For the sake of comparison, c(f) was calculated both from the original MEG channels and from the adaptively extracted rhythms. The results indicated that AD and MCI cause slight alterations in MEG connectivity. Computed from the extracted rhythms, c(f) distinguished AD and MCI subjects from controls with 69.4% and 77.3% accuracies, respectively, in a full leave-one-out cross-validation evaluation. These values were higher than those obtained without the proposed extraction methodology.
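
    For reference, the magnitude squared coherence itself is straightforward to compute. The sketch below is an illustrative stand-alone example, not the study's pipeline: it uses scipy on two synthetic channels sharing a 10 Hz rhythm and averages c(f) over an assumed α band of 8-13 Hz; the adaptive rhythm extraction by empirical mode decomposition and constrained blind source separation is not reproduced.

        import numpy as np
        from scipy.signal import coherence

        fs = 250.0                                # assumed sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(0)

        shared = np.sin(2 * np.pi * 10 * t)       # shared 10 Hz rhythm
        ch1 = shared + rng.standard_normal(t.size)
        ch2 = 0.8 * shared + rng.standard_normal(t.size)

        f, cxy = coherence(ch1, ch2, fs=fs, nperseg=512)
        band = (f >= 8) & (f <= 13)
        print("mean alpha-band c(f):", cxy[band].mean())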

    Dynamic Decomposition of Spatiotemporal Neural Signals

    Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals.
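
    As a rough illustration of the kind of linearized dynamic component such a framework can build on, the sketch below integrates a damped, noise-driven harmonic oscillator with the Euler-Maruyama scheme. This is an assumed stand-in for a single rhythmic component, not the paper's model, and the Gaussian process regression stage is not reproduced.

        import numpy as np

        # SDE of a stochastically driven, damped oscillator:
        #   dx = v dt
        #   dv = (-2*zeta*omega*v - omega**2 * x) dt + sigma dW
        rng = np.random.default_rng(0)
        fs, dur = 500.0, 10.0                     # sampling rate (Hz), duration (s)
        dt = 1.0 / fs
        omega = 2.0 * np.pi * 10.0                # 10 Hz rhythmic component
        zeta, sigma = 0.05, 50.0                  # damping ratio, noise strength

        n = int(dur * fs)
        x = np.zeros(n)                           # displacement: the observed rhythm
        v = np.zeros(n)                           # velocity
        for i in range(1, n):
            x[i] = x[i - 1] + v[i - 1] * dt
            v[i] = (v[i - 1]
                    + (-2.0 * zeta * omega * v[i - 1] - omega**2 * x[i - 1]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())

        print("simulated rhythmic component:", x.shape)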

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. The measure of information transfer in particular, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires the observation of multiple realizations of these processes in order to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of the processes, which allows observations to be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as the observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. In this work we therefore combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscientific data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
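
    To make the ensemble idea concrete, the sketch below estimates transfer entropy at a single time point from repeated trials rather than by pooling over time, using a crude binned plug-in estimator with a history length of one. Everything here is an illustrative assumption: the nearest-neighbour estimator the paper actually combines with the ensemble method, and its GPU implementation, are not reproduced.

        import numpy as np

        def ensemble_te(x, y, t, n_bins=4):
            """Binned TE x->y at time t; x and y have shape (n_trials, n_samples)."""
            def binned(a):
                # Discretise across trials into (roughly) equiprobable bins.
                edges = np.quantile(a, np.linspace(0, 1, n_bins + 1)[1:-1])
                return np.digitize(a, edges)
            yf, yp, xp = binned(y[:, t + 1]), binned(y[:, t]), binned(x[:, t])

            def H(*variables):
                # Joint Shannon entropy (bits) of discrete variables over trials.
                _, counts = np.unique(np.stack(variables, axis=1), axis=0,
                                      return_counts=True)
                prob = counts / counts.sum()
                return -(prob * np.log2(prob)).sum()

            # TE_{X->Y} = H(Yf,Yp) - H(Yp) - H(Yf,Yp,Xp) + H(Yp,Xp)
            return H(yf, yp) - H(yp) - H(yf, yp, xp) + H(yp, xp)

        # Toy ensemble: y is driven by x with a one-sample delay, plus noise.
        rng = np.random.default_rng(0)
        n_trials, n_samples = 500, 50
        x = rng.standard_normal((n_trials, n_samples))
        y = np.roll(x, 1, axis=1) + 0.5 * rng.standard_normal((n_trials, n_samples))
        print("TE x->y:", ensemble_te(x, y, t=20),
              "TE y->x:", ensemble_te(y, x, t=20))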

    Volunteer studies replacing animal experiments in brain research - Report and recommendations of a Volunteers in Research and Testing workshop

    Markers of criticality in phase synchronization

    The concept of the brain as a critical dynamical system is very attractive, because systems close to criticality are thought to maximize their dynamic range of information processing and communication. To date, there have been two key experimental observations in support of this hypothesis: (i) neuronal avalanches with a power-law distribution of sizes and (ii) long-range temporal correlations (LRTCs) in the amplitude of neural oscillations. The case for how these maximize the dynamic range of information processing and communication is still being made, and because neural synchrony is a significant substrate for information coding and transmission, it is of interest to link measures of synchronization with those of criticality. We propose a framework for characterizing criticality in synchronization based on an analysis of the moment-to-moment fluctuations of phase synchrony in terms of the presence of LRTCs. This framework relies on an estimation of the rate of change of the phase difference and on a set of methods we have developed to detect LRTCs. We test this framework against two classical models of criticality (Ising and Kuramoto) and against recently described variants of these models aimed at representing human brain dynamics more closely. From these simulations we determine the parameters at which these systems show evidence of LRTCs in phase synchronization. We demonstrate proof of principle by analysing pairs of simultaneously recorded human EEG and EMG time series, suggesting that LRTCs of corticomuscular phase synchronization can be detected in the resting state and experimentally manipulated. The existence of LRTCs in fluctuations of phase synchronization suggests that these fluctuations are governed by non-local behavior, with all scales contributing to system behavior. This has important implications regarding the conditions under which one should expect to see LRTCs in phase synchronization. Specifically, brain resting states may exhibit LRTCs reflecting a state of readiness facilitating rapid task-dependent shifts toward and away from synchronous states that abolish LRTCs.
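
    One minimal way to test for LRTCs in phase synchronization is sketched below, under stated assumptions rather than with the authors' full set of methods: extract instantaneous phases with the Hilbert transform, form the rate of change of the phase difference between two signals, and estimate its scaling exponent with detrended fluctuation analysis (DFA). Exponents near 0.5 indicate an absence of LRTCs, which is what the white-noise surrogates used here should give; exponents clearly above 0.5 would indicate LRTCs.

        import numpy as np
        from scipy.signal import hilbert

        def dfa_exponent(signal, scales=(64, 128, 256, 512, 1024)):
            """Detrended fluctuation analysis scaling exponent of a 1-D signal."""
            profile = np.cumsum(signal - np.mean(signal))   # integrated profile
            flucts = []
            for n in scales:
                n_seg = len(profile) // n
                segments = profile[: n_seg * n].reshape(n_seg, n)
                t = np.arange(n)
                rms = []
                for seg in segments:                        # linear detrend per window
                    coef = np.polyfit(t, seg, 1)
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
                flucts.append(np.mean(rms))
            slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return slope

        rng = np.random.default_rng(0)
        sig1 = rng.standard_normal(20000)                   # white-noise surrogates
        sig2 = rng.standard_normal(20000)
        phase_diff = np.angle(hilbert(sig1)) - np.angle(hilbert(sig2))
        rate = np.diff(np.unwrap(phase_diff))               # rate of change of phase difference
        print("DFA exponent:", dfa_exponent(rate))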