
    Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes

    Exploiting the theory of state space models, we derive exact expressions for the information transfer, as well as for the redundant and synergistic transfer, in coupled Gaussian processes observed at multiple temporal scales. All terms constituting the frameworks known as interaction information decomposition and partial information decomposition can thus be obtained analytically, for different time scales, from the parameters of the VAR model that fits the processes. We first apply the proposed methodology to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by predominantly redundant or synergistic information transfer persisting across multiple time scales, or even by an alternating prevalence of redundant and synergistic source interaction depending on the time scale. We then apply our method to an important topic in neuroscience, the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to detecting multiscale information transfer spreading from the seizure onset zone.
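For jointly Gaussian variables, such decomposition terms indeed reduce to closed-form functions of the covariance matrix, since all entropies are determinants. A minimal sketch of this analytic tractability (not the paper's state-space machinery; the toy covariance below, a noisy sum Z = X + Y, is an illustrative assumption) computes the interaction information, whose negative sign here indicates net synergy under the co-information convention:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance `cov`."""
    cov = np.atleast_2d(cov)
    d = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

def mutual_information(cov, ix, iy):
    """I(X;Y) from the joint covariance, for index lists ix, iy."""
    return (gaussian_entropy(cov[np.ix_(ix, ix)])
            + gaussian_entropy(cov[np.ix_(iy, iy)])
            - gaussian_entropy(cov[np.ix_(ix + iy, ix + iy)]))

def interaction_information(cov, ix, iy, iz):
    """Co-information I(X;Y) - I(X;Y|Z); negative values indicate net
    synergy under this sign convention (conventions differ across papers)."""
    # I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)
    cmi = (gaussian_entropy(cov[np.ix_(ix + iz, ix + iz)])
           + gaussian_entropy(cov[np.ix_(iy + iz, iy + iz)])
           - gaussian_entropy(cov[np.ix_(iz, iz)])
           - gaussian_entropy(cov))
    return mutual_information(cov, ix, iy) - cmi

# Toy system: independent unit-variance X, Y and Z = X + Y + small noise.
# X and Y are individually uninformative about each other but jointly
# determine Z, so their interaction with target Z is synergistic.
cov = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0],
                [1.0, 1.0, 2.1]])  # covariance of (X, Y, Z)

print(interaction_information(cov, [0], [1], [2]))  # negative: synergy
```

The same determinant-based entropies, evaluated on covariances of the process at coarser time scales, are what makes the multiscale decomposition exact for VAR-fitted Gaussian processes.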

    Quantifying information transfer and mediation along causal pathways in complex systems

    Measures of information transfer have become a popular approach to analyzing interactions in complex systems such as the Earth or the human brain from measured time series. Recent work has focused on causal definitions of information transfer, aimed at decompositions of predictive information about a target variable while excluding effects of common drivers and indirect influences. While common drivers clearly constitute a spurious causality, the aim of the present article is to develop measures quantifying different notions of the strength of information transfer along indirect causal paths, based on first reconstructing the multivariate causal network (the \emph{Tigramite} approach). Another class of novel measures quantifies to what extent different intermediate processes on causal paths contribute to an interaction mechanism, in order to determine pathways of causal information transfer. The proposed framework complements predictive decomposition schemes by focusing more on the interaction mechanism between multiple processes. A rigorous mathematical framework allows for a clear information-theoretic interpretation that can also be related to the underlying dynamics, as proven for certain classes of processes. Generally, however, estimates of information transfer remain hard to interpret for nonlinearly intertwined complex systems. But if experiments or mathematical models are not available, measuring pathways of information transfer within the causal dependency structure at least allows for an abstraction of the dynamics. The measures are illustrated on a climatological example to disentangle pathways of atmospheric flow over Europe. Comment: 20 pages, 6 figures

    Information-theoretic causality and applications to turbulence: energy cascade and inner/outer layer interactions

    We introduce an information-theoretic method for quantifying causality in chaotic systems. The approach, referred to as IT-causality, quantifies causality by measuring the information gained about future events conditioned on knowledge of past events. The causal interactions are classified into redundant, unique, and synergistic contributions depending on their nature. The formulation is non-intrusive, invariant under invertible transformations of the variables, and accounts for the missing causality due to unobserved variables. The method only requires pairs of past-future events of the quantities of interest, making it convenient for both computational simulations and experimental investigations. IT-causality is validated in four scenarios representing basic causal interactions among variables: mediator, confounder, redundant collider, and synergistic collider. The approach is then leveraged to address two questions relevant to turbulence research: (i) the scale locality of the energy cascade in isotropic turbulence, and (ii) the interactions between inner- and outer-layer flow motions in wall-bounded turbulence. In the former case, we demonstrate that causality in the energy cascade flows sequentially from larger to smaller scales without requiring intermediate scales. Conversely, the flow of information from small to large scales is shown to be redundant. In the second problem, we observe a unidirectional causality flow, with causality predominantly originating in the outer layer and propagating towards the inner layer, but not vice versa. The decomposition of IT-causality into intensities also reveals that the causality is primarily associated with high-velocity streaks.
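Measuring "information gained about future events conditioned on past events" is, in its simplest one-lag form, a transfer-entropy-style conditional mutual information over past-future event pairs. A minimal plug-in sketch on binary sequences (an illustrative estimator, not the paper's IT-causality decomposition into redundant/unique/synergistic terms), where y copies x with a one-step delay so information flows only from x to y:

```python
import numpy as np
from collections import Counter

def transfer_entropy(src, tgt):
    """Plug-in estimate (bits) of I(tgt_t ; src_{t-1} | tgt_{t-1}):
    information the source's past adds about the target's next value
    beyond the target's own past. One-lag sketch, not a full estimator."""
    fsp = list(zip(tgt[1:], src[:-1], tgt[:-1]))  # (future, source, past)
    n = len(fsp)
    c_fsp, c_sp, c_fp, c_p = Counter(fsp), Counter(), Counter(), Counter()
    for f, s, p in fsp:
        c_sp[(s, p)] += 1
        c_fp[(f, p)] += 1
        c_p[p] += 1
    te = 0.0
    for (f, s, p), c in c_fsp.items():
        te += (c / n) * np.log2(c * c_p[p] / (c_sp[(s, p)] * c_fp[(f, p)]))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]               # y copies x with a one-step delay
x, y = x.tolist(), y.tolist()

print(transfer_entropy(x, y))  # close to 1 bit: x's past determines y's future
print(transfer_entropy(y, x))  # close to 0: no information flows back to x
```

Conditioning on the target's own past is what distinguishes genuine transfer from shared history, the same concern the mediator/confounder validation scenarios probe.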

    Bits from Biology for Computational Intelligence

    Computational intelligence is broadly defined as biologically inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information-theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment uniquely, redundantly, or synergistically together with others. Finally, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
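The "local" in local information dynamics means evaluating information measures pointwise, per event, rather than as averages: each sample gets a value whose mean recovers the usual average measure, and individual values can be negative (misinformative events). A minimal sketch for local mutual information on a noisy binary channel (the flip probability and sample size are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def local_mutual_information(xs, ys):
    """Pointwise (local) MI values log2 p(x,y)/(p(x)p(y)) for each sample.
    Their average is the plug-in mutual information, but individual values
    can be negative ('misinformative' events) -- the core idea behind
    local information dynamics."""
    n = len(xs)
    c_xy, c_x, c_y = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return [np.log2(c_xy[(x, y)] * n / (c_x[x] * c_y[y]))
            for x, y in zip(xs, ys)]

# Noisy copy: y usually equals x, but flips with probability 0.1.
rng = np.random.default_rng(1)
xs = rng.integers(0, 2, 2000).tolist()
ys = [x ^ int(rng.random() < 0.1) for x in xs]

lmi = local_mutual_information(xs, ys)
print(np.mean(lmi))   # average local MI = plug-in mutual information (> 0)
print(min(lmi))       # flip events yield negative local values
```

Local information storage and local transfer entropy follow the same pattern, with joint states built from a variable's own past or another variable's past.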

    Higher-order mutual information reveals synergistic sub-networks for multi-neuron importance

    Quantifying which neurons are important with respect to the classification decision of a trained neural network is essential for understanding its inner workings. Previous work primarily attributed importance to individual neurons. In this work, we study which groups of neurons contain synergistic or redundant information using a multivariate mutual information method called the O-information. We observe that the first layer is dominated by redundancy, suggesting general shared features (i.e. detecting edges), while the last layer is dominated by synergy, indicating local class-specific features (i.e. concepts). Finally, we show that the O-information can be used for multi-neuron importance. We demonstrate this by re-training a synergistic sub-network, which results in a minimal change in performance. These results suggest our method can be used for pruning and unsupervised representation learning. Comment: Paper presented at InfoCog @ NeurIPS 202
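The O-information of n variables is Ω(X) = (n-2) H(X) + Σ_j [H(X_j) - H(X_{-j})]: positive values indicate redundancy-dominated groups, negative values synergy-dominated ones. A minimal plug-in sketch on two canonical discrete systems (illustrative toy distributions, not the paper's neuron activations), where a triple of identical copies is purely redundant and an XOR triple is purely synergistic:

```python
import numpy as np
from collections import Counter
from itertools import product

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a list of symbols or tuples."""
    n = len(samples)
    return -sum((c / n) * np.log2(c / n) for c in Counter(samples).values())

def o_information(data):
    """O-information (bits) of the columns of `data` (list of sample tuples):
    (n-2) H(X) + sum_j [H(X_j) - H(X_{-j})].
    Positive -> redundancy-dominated, negative -> synergy-dominated."""
    nvar = len(data[0])
    omega = (nvar - 2) * entropy(data)
    for j in range(nvar):
        omega += entropy([row[j] for row in data])            # H(X_j)
        omega -= entropy([row[:j] + row[j+1:] for row in data])  # H(X_{-j})
    return omega

# Uniform distributions given by exhaustively listing equally likely outcomes:
xor = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]  # synergistic
copy = [(a, a, a) for a in [0, 1]]                           # redundant

print(o_information(xor))   # -1.0: pure synergy
print(o_information(copy))  # +1.0: pure redundancy
```

On real activations, the entropies would be estimated from (discretized or Gaussian-approximated) neuron outputs, but the sign interpretation is the same.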