Synergetic and redundant information flow detected by unnormalized Granger causality: application to resting state fMRI
Objectives: We develop a framework for the analysis of synergy and redundancy
in the pattern of information flow between subsystems of a complex network.
Methods: The presence of redundancy and/or synergy in multivariate time series
data makes it difficult to estimate the net flow of information from each
driver variable to a given target. We show that, by adopting an unnormalized
definition of Granger causality, one can reveal redundant multiplets of
variables influencing the target by maximizing the total Granger causality to a
given target over all possible partitions of the set of driving variables.
Consequently, we introduce a pairwise index of synergy which, unlike previous
definitions of synergy, is zero when two independent sources additively
influence the future state of the system. Results: We report the
application of the proposed approach to resting state fMRI data from the Human
Connectome Project, showing that redundant pairs of regions arise mainly due to
space contiguity and interhemispheric symmetry, whilst synergy occurs mainly
between non-homologous pairs of regions in opposite hemispheres. Conclusions:
Redundancy and synergy, in healthy resting brains, display characteristic
patterns, revealed by the proposed approach. Significance: The pairwise synergy
index, here introduced, maps the informational character of the system at hand
into a weighted complex network: the same approach can be applied to other
complex systems whose normal state corresponds to a balance between redundant
and synergetic circuits.
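The contrast between the standard and unnormalized definitions can be sketched numerically. The snippet below is an illustration, not the authors' code: it assumes, as one plausible reading, that the unnormalized index is the plain difference of residual variances between the restricted and full autoregressive models, whereas the standard Granger causality is their log-ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy bivariate system in which x drives y (coupling 0.5).
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()

def residual_var(target, regressors):
    """Least-squares residual variance of target regressed on lagged predictors."""
    X = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.var(target - X @ beta)

# Restricted model: y[t] on its own past; full model: y[t] on past of y and x.
v_restricted = residual_var(y[1:], [y[:-1]])
v_full = residual_var(y[1:], [y[:-1], x[:-1]])

gc_normalized = np.log(v_restricted / v_full)  # standard (log-ratio) Granger causality
gc_unnormalized = v_restricted - v_full        # assumed unnormalized variant: variance difference
```

Both indices are positive here, since x genuinely helps predict y; they differ in how they scale with the target's residual variance, which is what the maximization over partitions exploits.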
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Exploiting the theory of state space models, we derive the exact expressions
of the information transfer, as well as redundant and synergistic transfer, for
coupled Gaussian processes observed at multiple temporal scales. All of the
terms, constituting the frameworks known as interaction information
decomposition and partial information decomposition, can thus be analytically
obtained for different time scales from the parameters of the VAR model that
fits the processes. We report the application of the proposed methodology
firstly to benchmark Gaussian systems, showing that this class of systems may
generate patterns of information decomposition characterized by mainly
redundant or synergistic information transfer persisting across multiple time
scales or even by the alternating prevalence of redundant and synergistic
source interaction depending on the time scale. Then, we apply our method to an
important topic in neuroscience, i.e., the detection of causal interactions in
human epilepsy networks, for which we show the relevance of partial information
decomposition to the detection of multiscale information transfer spreading
from the seizure onset zone
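For jointly Gaussian variables, every mutual information term follows in closed form from the covariance matrix, which is what makes exact computation possible. The sketch below is a simplification: it uses a static toy covariance rather than the paper's lagged state space representation of processes, and it adopts the minimum mutual information (MMI) redundancy commonly used for Gaussian systems, which may differ from the paper's exact choice.

```python
import numpy as np

def gaussian_mi(cov, idx_a, idx_b):
    """Mutual information (nats) between two blocks of a jointly Gaussian vector."""
    a = np.ix_(idx_a, idx_a)
    b = np.ix_(idx_b, idx_b)
    ab = np.ix_(idx_a + idx_b, idx_a + idx_b)
    return 0.5 * np.log(np.linalg.det(cov[a]) * np.linalg.det(cov[b])
                        / np.linalg.det(cov[ab]))

# Hypothetical covariance for (target T, source S1, source S2).
cov = np.array([[1.0, 0.5, 0.5],
                [0.5, 1.0, 0.2],
                [0.5, 0.2, 1.0]])

i_t_s1 = gaussian_mi(cov, [0], [1])
i_t_s2 = gaussian_mi(cov, [0], [2])
i_t_s12 = gaussian_mi(cov, [0], [1, 2])

redundancy = min(i_t_s1, i_t_s2)                       # MMI redundancy
unique_1 = i_t_s1 - redundancy                         # unique information of S1
unique_2 = i_t_s2 - redundancy                         # unique information of S2
synergy = i_t_s12 - i_t_s1 - i_t_s2 + redundancy       # synergistic information
```

By construction the four atoms sum back to the joint mutual information I(T; S1, S2), mirroring the consistency constraint of the partial information decomposition.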
A New Framework for Decomposing Multivariate Information
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much-criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed.
This thesis presents a new framework for information decomposition that is based upon the decomposition of pointwise mutual information rather than mutual information. The framework is derived in two separate ways. The first of these derivations is based upon a modified version of the original axiomatic approach taken by Williams and Beer. However, to overcome the difficulty associated with signed pointwise mutual information, the decomposition is applied separately to the unsigned entropic components of pointwise mutual information, which are referred to as the specificity and ambiguity. This yields a separate redundancy lattice for each component.
Based upon an operational interpretation of redundancy, measures of redundant specificity and redundant ambiguity are defined, which enables one to evaluate the partial information atoms separately for each lattice. These separate atoms can then be recombined to yield the sought-after multivariate information decomposition. This framework is applied to canonical examples from the literature and the results and various properties of the decomposition are discussed. In particular, the pointwise decomposition using specificity and ambiguity is shown to satisfy a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
The second approach begins by considering the distinct ways in which two marginal observers can share their information with a non-observing third party. Several novel measures of information content are introduced, namely the union, intersection and unique information contents. Next, the algebraic structure of these new measures of shared marginal information is explored, and it is shown that the structure of shared marginal information is that of a distributive lattice. Furthermore, by using the fundamental theorem of distributive lattices, it is shown that these new measures are isomorphic to a ring of sets. Finally, by combining this structure with the semi-lattice of joint information, the redundancy lattice from partial information decomposition is found to be embedded within this larger algebraic structure. However, since this structure considers information contents, it is actually equivalent to the specificity lattice from the first derivation of pointwise partial information decomposition. The thesis then closes with a discussion about whether or not one should combine the information contents from the specificity and ambiguity lattices.
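The specificity/ambiguity split rests on writing the pointwise mutual information as a difference of two unsigned terms, i(s;t) = h(s) - h(s|t), where the specificity h(s) = -log p(s) and the ambiguity h(s|t) = -log p(s|t) are both non-negative even when i(s;t) itself is signed. A minimal sketch on a hypothetical binary joint distribution (the distribution is an invented example, not one from the thesis):

```python
import numpy as np

# Hypothetical joint distribution p(s, t) over a binary source and target.
p_st = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_s = p_st.sum(axis=1)  # marginal p(s)
p_t = p_st.sum(axis=0)  # marginal p(t)

def decompose(s, t):
    """Split pointwise mutual information into specificity and ambiguity."""
    specificity = -np.log2(p_s[s])                  # h(s)   = -log p(s)
    ambiguity = -np.log2(p_st[s, t] / p_t[t])       # h(s|t) = -log p(s|t)
    pmi = np.log2(p_st[s, t] / (p_s[s] * p_t[t]))   # i(s;t) = h(s) - h(s|t)
    return specificity, ambiguity, pmi
```

For instance, for the realization (s=0, t=0) the specificity is 1 bit and the ambiguity is -log2(0.8) bits, and their difference recovers the (here positive) pointwise mutual information.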
Stroke-related alterations in inter-areal communication
Beyond causing local ischemia and cell damage at the site of injury, stroke strongly affects long-range anatomical connections, perturbing the functional organization of brain networks. Several studies have reported functional connectivity (FC) abnormalities paralleling both behavioral deficits and functional recovery across different cognitive domains. These FC alterations suggest that long-range communication in the brain is altered after stroke. However, standard FC analyses cannot reveal the directionality and time scale of inter-areal information transfer. We used resting-state fMRI and covariance-based Granger causality analysis to quantify network-level information transfer and its alteration in stroke. Two main large-scale anomalies were observed in stroke patients. First, inter-hemispheric information transfer was significantly decreased with respect to healthy controls. Second, stroke caused inter-hemispheric asymmetries: information transfer within the affected hemisphere, and from the affected to the intact hemisphere, was significantly reduced. Both anomalies were more prominent in resting-state networks related to attention and language, and they correlated with impaired performance in several behavioral domains. Overall, our findings support the hypothesis that stroke provokes asymmetries between the affected and spared hemispheres, with different functional consequences depending on which hemisphere is lesioned.
The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed.
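The redundancy measure Williams and Beer originally proposed, I_min, is the expected minimum specific information over sources. On the two-bit-copy example (a target equal to the pair of two independent uniform bits) it reports a full bit of redundancy even though the sources share no information, which is the kind of behaviour the identity axiom was meant to rule out. A sketch of that computation (illustration only, not code from the paper):

```python
import numpy as np
from itertools import product

# Two-bit-copy: T = (S1, S2), with S1 and S2 independent uniform bits.
states = list(product([0, 1], [0, 1]))   # the four values of t = (s1, s2)
p_t = {t: 0.25 for t in states}

def specific_info(t, source):
    """I(T=t; S_i): average surprise reduction about T=t from observing source bit i."""
    total = 0.0
    for s in (0, 1):
        p_s_given_t = 1.0 if t[source] == s else 0.0  # S_i is determined by T
        if p_s_given_t == 0.0:
            continue
        p_t_given_s = 0.5  # each bit value is consistent with two equiprobable t values
        total += p_s_given_t * np.log2(p_t_given_s / p_t[t])
    return total

# Williams-Beer redundancy: expectation over t of the minimum specific information.
i_min = sum(p_t[t] * min(specific_info(t, 0), specific_info(t, 1)) for t in states)
```

Each source pins down one bit about every outcome of T, so the minimum is 1 bit for every t and I_min = 1 bit, despite S1 and S2 being independent.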
Synergy, redundancy and unnormalized Granger causality
We analyze, by means of Granger causality, the effect of synergy and redundancy on the inference (from time series data) of the information flow between subsystems of a complex network. Whilst fully conditioned Granger causality is not affected by synergy, pairwise analysis fails to reveal synergetic effects. We show that maximizing the total Granger causality to a given target, over all possible partitions of the set of driving variables, reveals redundant multiplets of variables influencing the target, provided that an unnormalized definition of Granger causality is adopted. Along the same lines, we also introduce a pairwise index of synergy (with respect to the information flow to a third variable) which is zero when two independent sources additively influence a common target; in this, our definition differs from previous definitions of synergy.