29 research outputs found

    Spatiotemporal information transfer pattern differences in motor selection

    Get PDF
    Analysis of information transfer between variables in brain images is currently a popular topic, e.g. [1]. Such work typically focuses on average information transfer (i.e. transfer entropy [2]), yet the dynamics of transfer from a source to a destination can also be quantified at individual time points using the local transfer entropy (TE) [3]. This local perspective is known to reveal dynamical structure that the average cannot. We present a method to quantify local TE values in time between source and destination regions of variables in brain-imaging data, combining: (a) computation of inter-regional transfer between two regions of variables (e.g. voxels) [1] with (b) the local perspective on the dynamics of such transfer in time [3]. Transfer is computed over samples from all variables; there is no training on, or subset selection of, the variables used. We apply this method to a set of fMRI measurements in which we could expect to see differences in local information transfer between two conditions at specific time steps. The fMRI data set analyzed (from [4]) contains brain activity recorded from 7 localized regions while 12 subjects (who gave informed written consent) were asked to freely decide whether to push …
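
    As a concrete illustration of the local perspective, the sketch below estimates local transfer entropy at every time step for a pair of binary-discretized time series using a simple plug-in estimator with destination history length k. The discretization, the estimator choice, and all names are illustrative assumptions; the paper's method instead pools samples over all variables (voxels) of the two regions.

    import numpy as np
    from collections import Counter

    def local_transfer_entropy(source, dest, k=1):
        """Plug-in estimate of local transfer entropy (bits) from a binary source
        to a binary destination at each time step, with destination history length
        k. A minimal sketch for illustration; names and estimator are assumptions."""
        n = len(dest)
        # Collect (destination history, next destination sample, source sample).
        triples = [(tuple(dest[t - k + 1:t + 1]), dest[t + 1], source[t])
                   for t in range(k, n - 1)]
        c_hxy = Counter(triples)                          # counts for (hist, next, src)
        c_hy = Counter((h, y) for h, _, y in triples)     # counts for (hist, src)
        c_hx = Counter((h, x) for h, x, _ in triples)     # counts for (hist, next)
        c_h = Counter(h for h, _, _ in triples)           # counts for hist alone
        local_te = [np.log2((c_hxy[(h, x, y)] / c_hy[(h, y)]) /
                            (c_hx[(h, x)] / c_h[h]))
                    for h, x, y in triples]
        return np.array(local_te)

    # Toy example: the destination copies the source with a one-step delay, plus noise.
    rng = np.random.default_rng(0)
    src = rng.integers(0, 2, 5000)
    dst = np.roll(src, 1)
    flip = rng.random(5000) < 0.1
    dst[flip] = 1 - dst[flip]
    lte = local_transfer_entropy(src, dst, k=1)
    print("average TE (bits):", lte.mean())   # close to 1 - H(0.1), about 0.53 bits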

    Transfer Entropy as a Log-likelihood Ratio

    Full text link
    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic chi-squared distribution is established for the transfer entropy estimator. The result generalises the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
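
    A minimal sketch of the Gaussian case referred to above: transfer entropy computed as half the log ratio of restricted to full residual variances in order-1 linear autoregressions (the Granger form), with a simulation under the null hypothesis of zero transfer to illustrate the asymptotic chi-squared behaviour. The function name and the model order are assumptions for illustration, not the paper's code.

    import numpy as np

    def gaussian_te(source, dest):
        """Transfer entropy (nats) from source to dest for jointly Gaussian,
        order-1 autoregressive processes: half the log ratio of restricted to
        full residual variances (the Granger / log-likelihood-ratio form)."""
        x_next, x_past, y_past = dest[1:], dest[:-1], source[:-1]
        def resid_var(*predictors):
            design = np.column_stack((np.ones(len(x_next)),) + predictors)
            beta, *_ = np.linalg.lstsq(design, x_next, rcond=None)
            return np.mean((x_next - design @ beta) ** 2)
        return 0.5 * np.log(resid_var(x_past) / resid_var(x_past, y_past))

    # Under the null hypothesis of zero transfer, 2*N*TE_hat is asymptotically
    # chi-squared with one degree of freedom (one source-history term here).
    rng = np.random.default_rng(1)
    N, trials = 2000, 500
    stats = [2 * N * gaussian_te(rng.standard_normal(N), rng.standard_normal(N))
             for _ in range(trials)]
    print("mean of 2*N*TE under the null:", np.mean(stats))   # ~1, the chi^2_1 mean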

    Discrimination of Mild Cognitive Impairment and Alzheimer's Disease Using Transfer Entropy Measures of Scalp EEG

    Get PDF
    Mild cognitive impairment (MCI) is a neurological condition related to the early stages of dementia, including Alzheimer's disease (AD). This study investigates the potential of transfer entropy measures in scalp EEG for effectively discriminating between normal aging, MCI, and AD participants. Resting EEG records are examined from 48 age-matched participants (mean age 75.7 years): 15 normal controls, 16 MCI, and 17 early AD. The mean temporal delays corresponding to peaks in inter-regional transfer entropy are computed and used as features to discriminate between the three groups of participants. Three-way classification schemes based on binary support vector machine models demonstrate overall discrimination accuracies of 91.7–93.8%, depending on the protocol condition. These results demonstrate the potential of EEG transfer entropy measures as biomarkers for identifying early MCI and AD. Moreover, because the analyses are based on short data segments (two minutes), the method is practical for a primary care setting.
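
    The feature-extraction step described above (the delay at which inter-regional transfer entropy peaks) might look roughly like the sketch below, here with a linear-Gaussian TE stand-in and scikit-learn's one-vs-one SVM for the three-way classification; the estimator, the placeholder feature matrix, and all names are assumptions, not the study's implementation.

    import numpy as np
    from sklearn.svm import SVC

    def gaussian_te_at_delay(source, dest, delay):
        """Linear-Gaussian transfer entropy (nats) from source to dest, with the
        source lagged by `delay` samples and destination history length 1; a
        stand-in for whichever TE estimator is applied to the EEG records."""
        n = len(dest)
        start = max(1, delay)
        x_next = dest[start:]
        x_hist = dest[start - 1:n - 1]
        y_lag = source[start - delay:n - delay]
        def resid_var(*predictors):
            design = np.column_stack((np.ones(len(x_next)),) + predictors)
            beta, *_ = np.linalg.lstsq(design, x_next, rcond=None)
            return np.mean((x_next - design @ beta) ** 2)
        return 0.5 * np.log(resid_var(x_hist) / resid_var(x_hist, y_lag))

    def peak_te_delay(source, dest, max_delay=50):
        """Delay (in samples) at which inter-regional TE peaks; such peak delays,
        averaged over epochs and region pairs, are the kind of feature described."""
        te = [gaussian_te_at_delay(source, dest, d) for d in range(1, max_delay + 1)]
        return int(np.argmax(te)) + 1

    # Toy check: the destination follows the source with a 7-sample delay plus noise.
    rng = np.random.default_rng(2)
    src = rng.standard_normal(3000)
    dst = np.roll(src, 7) + 0.5 * rng.standard_normal(3000)
    print("recovered peak delay:", peak_te_delay(src, dst))   # 7

    # Three-way discrimination via binary (one-vs-one) SVMs, as in the abstract;
    # `features` would hold peak-delay features per participant (placeholder here).
    features = rng.standard_normal((48, 10))
    labels = np.repeat(["control", "MCI", "AD"], 16)
    clf = SVC(kernel="linear", decision_function_shape="ovo").fit(features, labels)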

    Typical and aberrant functional brain flexibility: lifespan development and aberrant organization in traumatic brain injury and dyslexia

    Get PDF
    Intrinsic functional connectivity networks derived from different neuroimaging methods and connectivity estimators have revealed robust developmental trends linked to behavioural and cognitive maturation. The present study employed a dynamic functional connectivity approach to determine dominant intrinsic coupling modes in resting-state neuromagnetic data from 178 healthy participants aged 8–60 years. Results revealed significant developmental trends in three types of dominant intra- and inter-hemispheric neuronal population interactions (amplitude envelope, phase coupling, and phase-amplitude synchronization) involving frontal, temporal, and parieto-occipital regions. Multi-class support vector machines achieved 89% correct classification of participants according to their chronological age using dynamic functional connectivity indices. Moreover, systematic temporal variability in functional connectivity profiles, which was used to empirically derive a composite flexibility index, displayed an inverted U-shaped curve among healthy participants. Lower flexibility values were found among age-matched children with reading disability and adults who had suffered mild traumatic brain injury. The importance of these results for normal and abnormal brain development is discussed in light of the recently proposed role of cross-frequency interactions in the fine-grained coordination of neuronal population activity.
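
    One plausible way to express a flexibility index based on the temporal variability of dominant coupling modes is sketched below: the fraction of consecutive sliding windows in which a region pair's dominant interaction type changes. This particular formula and the array layout are assumptions for illustration; the study derives its composite index empirically from connectivity profiles.

    import numpy as np

    def flexibility_index(dominant_modes):
        """Fraction of consecutive sliding windows in which the dominant coupling
        mode of a region pair changes, averaged over pairs. An illustrative
        assumption, not the study's definition. `dominant_modes`: integer array of
        shape (n_windows, n_pairs), coding the dominant interaction type per window
        (e.g. 0 = amplitude envelope, 1 = phase coupling, 2 = phase-amplitude)."""
        changes = dominant_modes[1:] != dominant_modes[:-1]
        return changes.mean()

    # Toy usage: 100 sliding windows, 21 region pairs, three possible coupling modes.
    rng = np.random.default_rng(3)
    modes = rng.integers(0, 3, size=(100, 21))
    print("flexibility index:", flexibility_index(modes))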

    Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity

    No full text
    The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. The method is distinguished by its use of asymmetric, multivariate, information-theoretic analysis, which captures not only directional and non-linear relationships but also collective interactions. Importantly, the method can estimate these multivariate information measures from relatively little data. We demonstrate the method by analyzing functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. This analysis reveals a tiered structure, with known movement-planning regions driving visual and motor control regions. We also examine how this structure changes as the difficulty of the tracking task increases. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning and between the motor cortex and the cerebellum, which is involved in the fine-tuning of motor control. It is likely that these methods will find utility in identifying interregional structure (and experimentally induced changes in that structure) in other cognitive tasks and data modalities.
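
    A sketch of the multivariate, asymmetric measure described above, under a linear-Gaussian assumption: region-to-region transfer entropy computed from the log-determinants of residual covariance matrices, with whole regions of voxels as source and destination. Names and the lag-1 model are illustrative; the paper's estimator may differ in detail.

    import numpy as np

    def multivariate_gaussian_te(source_region, dest_region, lag=1):
        """Multivariate (linear-Gaussian) transfer entropy, in nats, from one
        region's voxel time series to another's: half the difference of the
        log-determinants of the residual covariances with and without the source
        region's past. Arrays are shaped (time, voxels). A sketch only."""
        Xf = dest_region[lag:]                          # destination future
        Xp = dest_region[:-lag]                         # destination past
        Yp = source_region[:-lag]                       # source past
        def resid_cov(predictors):
            design = np.column_stack([np.ones(len(Xf)), predictors])
            beta, *_ = np.linalg.lstsq(design, Xf, rcond=None)
            resid = Xf - design @ beta
            return np.atleast_2d(np.cov(resid, rowvar=False))
        _, logdet_restricted = np.linalg.slogdet(resid_cov(Xp))
        _, logdet_full = np.linalg.slogdet(resid_cov(np.column_stack([Xp, Yp])))
        return 0.5 * (logdet_restricted - logdet_full)

    # Toy example: a 3-voxel source region drives a 2-voxel destination with lag 1.
    rng = np.random.default_rng(4)
    T = 2000
    src = rng.standard_normal((T, 3))
    coupling = 0.5 * rng.standard_normal((3, 2))
    dst = np.vstack([np.zeros((1, 2)), src[:-1] @ coupling]) + rng.standard_normal((T, 2))
    print("region-to-region TE (nats):", multivariate_gaussian_te(src, dst))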

    A framework for the local information dynamics of distributed computation in complex systems

    Full text link
    The nature of distributed computation has often been described in terms of the component operations of universal computation: information storage, transfer and modification. We review the first complete framework that quantifies each of these individual information dynamics on a local scale within a system, and describes the manner in which they interact to create non-trivial computation where "the whole is greater than the sum of the parts". We describe the application of the framework to cellular automata, a simple yet powerful model of distributed computation. This is an important application, because the framework is the first to provide quantitative evidence for several important conjectures about distributed computation in cellular automata: that blinkers embody information storage, particles are information transfer agents, and particle collisions are information modification events. The framework is also shown to contrast the computations conducted by several well-known cellular automata, highlighting the importance of information coherence in complex computation. The results reviewed here provide important quantitative insights into the fundamental nature of distributed computation and the dynamics of complex systems, as well as impetus for the framework to be applied to the analysis and design of other systems.
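
    To make the local-information-dynamics idea concrete, the sketch below evolves an elementary cellular automaton (rule 110) and computes local active information storage at every space-time point with a plug-in estimator pooled over cells. The rule choice, history length, and estimator are illustrative assumptions; the framework also defines local transfer and modification terms not shown here.

    import numpy as np
    from collections import Counter

    def run_eca(rule, width=200, steps=400, seed=5):
        """Evolve an elementary cellular automaton (periodic boundary) from a
        random initial row; returns a (steps, width) array of 0/1 states."""
        table = [(rule >> i) & 1 for i in range(8)]
        rng = np.random.default_rng(seed)
        grid = np.zeros((steps, width), dtype=int)
        grid[0] = rng.integers(0, 2, width)
        for t in range(steps - 1):
            l, c, r = np.roll(grid[t], 1), grid[t], np.roll(grid[t], -1)
            grid[t + 1] = [table[4 * a + 2 * b + cc] for a, b, cc in zip(l, c, r)]
        return grid

    def local_active_info_storage(grid, k=8):
        """Local active information storage (bits) at each space-time point:
        log2 p(next | k-step past) / p(next), with probabilities pooled over all
        cells under a spatial-homogeneity assumption. A plug-in sketch."""
        steps, width = grid.shape
        pairs = [(tuple(grid[t - k:t, i]), grid[t, i])
                 for t in range(k, steps) for i in range(width)]
        c_pair = Counter(pairs)
        c_past = Counter(p for p, _ in pairs)
        c_next = Counter(x for _, x in pairs)
        n = len(pairs)
        ais = np.zeros((steps - k, width))
        for idx, (p, x) in enumerate(pairs):
            t, i = divmod(idx, width)
            ais[t, i] = np.log2((c_pair[(p, x)] / c_past[p]) / (c_next[x] / n))
        return ais

    grid = run_eca(110)
    ais = local_active_info_storage(grid)
    # Regular domains and blinkers carry positive local storage, while moving
    # particles (gliders) tend to show up as negative values.
    print("mean local AIS (bits):", ais.mean(), "min:", ais.min())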