1,052 research outputs found

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. The measure of information transfer in particular, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires multiple observed realizations of these processes to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of the processes, which allows pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. In this work we therefore combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the most computationally demanding aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscientific data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems. Comment: 27 pages, 7 figures, submitted to PLOS ONE
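
    For orientation (standard notation in this literature, not quoted from the abstract): transfer entropy from a source X to a target Y, with embedded past states X_t^(l) and Y_t^(k), is the conditional mutual information

        TE(X -> Y) = I( Y_{t+1} ; X_t^(l) | Y_t^(k) ).

    The ensemble method estimates the underlying densities at each time point t by pooling the realizations (y_{t+1}^r, y_t^(k),r, x_t^(l),r) across trials r = 1, ..., R rather than across time, yielding a time-resolved TE(t) without assuming stationarity over t.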

    Causal conditioning and instantaneous coupling in causality graphs

    The paper investigates the link between Granger causality graphs, recently formalized by Eichler, and the directed information theory developed by Massey and Kramer. We focus in particular on two notions of causality that may occur in physical systems. It is well accepted that dynamical causality is assessed by the conditional transfer entropy, a measure that appears naturally as a part of directed information. Surprisingly, the notion of instantaneous causality is often overlooked, even though it was clearly understood in early works. In the bivariate case, instantaneous coupling is measured adequately by the instantaneous information exchange, a measure that supplements the transfer entropy in the decomposition of directed information. In this paper, the focus is on the multivariate case and on conditional graph modeling issues. In this framework, we show that the decomposition of directed information into the sum of transfer entropy and information exchange no longer holds. Nevertheless, the discussion allows us to put forward the two measures as pillars for the inference of causality graphs. We illustrate this on two synthetic examples which allow us to discuss not only the theoretical concepts but also the practical estimation issues. Comment: submitted
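
    As a pointer to the bivariate decomposition the abstract refers to (standard notation, not reproduced from the paper): Massey's directed information is

        I(X^N -> Y^N) = sum_{n=1}^{N} I( X^n ; Y_n | Y^{n-1} ),

    and by the chain rule of mutual information each summand splits as

        I( X^n ; Y_n | Y^{n-1} ) = I( X^{n-1} ; Y_n | Y^{n-1} ) + I( X_n ; Y_n | X^{n-1}, Y^{n-1} ),

    i.e. a transfer-entropy term plus an instantaneous information-exchange term. It is the multivariate, causally conditioned analogue of this identity that the paper shows no longer holds.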

    Information-theoretic approach for the characterization of interactions in nonlinear dynamical systems

    Symbolic time series analysis provides a solid and broadly used toolkit for characterizing interactions between nonlinear dynamical systems. In this thesis, information-theoretic measures are evaluated with respect to their capability to characterize interactions between dynamical systems. We investigate several important limitations of these measures which may appear when experimental data exhibit strong correlations. It is demonstrated that a high degree of static and/or long-range temporal correlations can, in general, lead to incorrect inference of the directionality of interactions between the underlying dynamical systems. We propose two complementary information-theoretic measures which can provide a better characterization of the directionality of interactions in cases where the influence of such correlations in the data cannot be neglected. The proposed measures are first applied to characterize interactions between dynamical model systems with known equations of motion, and finally to characterize interactions between multi-channel electroencephalographic recordings from epilepsy patients undergoing presurgical diagnostics.

    [German abstract, translated:] Time series analysis makes it possible to characterize interactions between natural dynamical systems on the basis of experimental data. In recent years, a number of measures have been proposed that aim to determine not only the direction but also the strength of interactions. The transfer entropy, designed to characterize the direction of interactions, stands out from other measures through its particularly high noise tolerance. The aim of the present work is to investigate and overcome two limitations that restrict the interpretability of characterizations obtained with the transfer entropy as proposed so far. First, a method is developed and implemented with which long-range correlations can be observed more effectively; second, corrections are proposed that account for the influence of so-called static correlations. When characterizing interaction directions with the transfer entropy, long-range correlations could previously be taken into account only by estimating high-dimensional probability spaces. This estimation requires very many data points within the observation interval, which, for field data measured from unknown systems, conflicts with the assumption of stationarity within an observation interval. To circumvent this restriction, this dissertation transfers a generalization of the Lempel-Ziv concept of entropy to the transfer entropy measure. Long-range correlations can thereby be captured without estimating a high-dimensional probability space. Simultaneous correlations of the underlying signals, so-called static correlations, can limit the interpretability of the characterization. Accounting for static correlations with the measures proposed so far likewise required a computationally expensive estimation of high-dimensional probabilities. The present dissertation proposes a correction of the transfer entropy that accounts for static correlations without the need to compute higher-dimensional terms. The measures and corrections presented in this work improve the characterization of the direction of interactions. Using prototypical model systems with chaotic dynamics, it is demonstrated that characterizations based on the proposed measures and corrections are more readily interpretable, particularly for systems that interact without a time lag. Furthermore, interaction strength and direction were determined from time series of brain electrical activity of epilepsy patients and compared with transfer entropy characterizations. In summary, the measures presented in this work resolve contrasts between different interaction directions more clearly.
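
    To make the "entropy in the sense of Lempel-Ziv" idea concrete, the sketch below computes the classic LZ76 phrase count of a symbol sequence; entropy-rate estimates built on this parsing (of the form c(n) log n / n) need no explicit high-dimensional probability estimates, which is the property the thesis carries over to transfer entropy. This is a generic illustration, not the thesis's own implementation:

        def lz76_complexity(s):
            """Number of distinct phrases in the Lempel-Ziv (1976) parsing of s.

            s is a string of symbols, e.g. a discretized time series such as
            "0110100...". (Standard LZ76 sketch for illustration only.)
            """
            i, c, n = 0, 0, len(s)
            while i < n:
                l = 1
                # Grow the current phrase while it still occurs in the prefix
                # (excluding the phrase's own last symbol).
                while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                    l += 1
                c += 1   # phrase s[i:i+l] is new
                i += l
            return c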

    Assessing coupling dynamics from an ensemble of time series

    Finding interdependency relations between (possibly multivariate) time series provides valuable knowledge about the processes that generate the signals. Information theory sets a natural framework for non-parametric measures of several classes of statistical dependencies. However, reliable estimation of information-theoretic functionals is hampered when the dependency to be assessed is brief or evolves in time. Here, we show that these limitations can be overcome when we have access to an ensemble of independent repetitions of the time series. In particular, we gear a data-efficient estimator of probability densities to make use of the full structure of trial-based measures. By doing so, we can obtain time-resolved estimates for a family of entropy combinations (including mutual information, transfer entropy, and their conditional counterparts) which are more accurate than the simple average of individual estimates over trials. We show with simulated and real data that the proposed approach allows us to recover the time-resolved dynamics of the coupling between different subsystems.
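
    A minimal sketch of the trial-pooling idea, shown for mutual information with a simple binned (plug-in) estimator rather than the data-efficient nearest-neighbour estimator the paper actually gears to this setting; the function name and the histogram approach are illustrative assumptions:

        import numpy as np

        def time_resolved_mi(x, y, bins=8):
            """Time-resolved mutual information from an ensemble of trials.

            x, y: arrays of shape (n_trials, n_samples). At each time point
            the estimate pools observations across trials rather than across
            time, so no stationarity in time is assumed.
            """
            n_trials, n_samples = x.shape
            mi = np.empty(n_samples)
            for t in range(n_samples):
                # Pool the ensemble at this time point: one (x, y) pair per trial.
                joint, _, _ = np.histogram2d(x[:, t], y[:, t], bins=bins)
                pxy = joint / n_trials
                px = pxy.sum(axis=1, keepdims=True)
                py = pxy.sum(axis=0, keepdims=True)
                nz = pxy > 0
                mi[t] = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
            return mi

    For trial counts typical of neurophysiology this naive binning is noisy; the paper's point is precisely that a data-efficient density estimator pooled over trials yields more accurate time-resolved estimates than averaging per-trial estimates.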

    Partial Transfer Entropy on Rank Vectors

    For the evaluation of information flow in bivariate time series, several information measures have been employed: the transfer entropy (TE); the symbolic transfer entropy (STE), defined like TE but on the ranks of the components of the reconstructed vectors; and the transfer entropy on rank vectors (TERV), similar to STE but forming the ranks of the future samples of the response system with respect to the current reconstructed vector. Here we extend TERV to multivariate time series, accounting for the presence of confounding variables, and call the resulting measure the partial transfer entropy on rank vectors (PTERV). We investigate the asymptotic properties of PTERV and of the partial STE (PSTE), construct parametric significance tests under approximations with Gaussian and gamma null distributions, and show that the parametric tests cannot achieve the power of the randomization test using time-shifted surrogates. Using simulations of known coupled dynamical systems and applying both parametric and randomization significance tests, we show that PTERV performs better than PSTE but worse than the partial transfer entropy (PTE). However, PTERV, unlike PTE, is robust to the presence of drifts in the time series, and it is also unaffected by the level of detrending. Comment: 21 pages, 6 figures, 3 tables, accepted in EPJ ST
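
    A minimal sketch of the rank-vector representation on which STE/TERV-style measures operate; the embedding parameters m and tau are illustrative:

        import numpy as np

        def rank_vectors(x, m=3, tau=1):
            """Map a scalar series to rank (ordinal-pattern) vectors.

            Each delay-embedded vector [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}]
            is replaced by the ranks of its components, i.e. the ordinal
            pattern of the embedding. (Illustrative sketch only.)
            """
            x = np.asarray(x)
            n = len(x) - (m - 1) * tau
            emb = np.stack([x[i * tau : i * tau + n] for i in range(m)], axis=1)
            # Double argsort converts each embedding vector to its rank vector.
            return np.argsort(np.argsort(emb, axis=1), axis=1)

    TERV then differs from STE in forming the ranks of the future response samples with respect to the current reconstructed vector, as described in the abstract above.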

    Disentangling causal webs in the brain using functional Magnetic Resonance Imaging: A review of current approaches

    In the past two decades, functional Magnetic Resonance Imaging (fMRI) has been used to relate neuronal network activity to cognitive processing and behaviour. Recently, this approach has been augmented by algorithms that allow us to infer causal links between component populations of neuronal networks. Multiple inference procedures have been proposed to address this research question, but so far each method has limitations when it comes to establishing whole-brain connectivity patterns. In this work, we discuss eight approaches to inferring causality in fMRI research: Bayesian Nets, Dynamical Causal Modelling, Granger Causality, Likelihood Ratios, LiNGAM, Patel's Tau, Structural Equation Modelling, and Transfer Entropy. We conclude by formulating recommendations for future directions in this area.

    Measuring information-transfer delays

    In complex networks such as gene networks, traffic systems or brain circuits, it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener's principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
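
    In standard notation (assumed here, not quoted from the paper), the delay-sensitive extension scans a source delay u while keeping the conditioning on the immediately preceding embedded target state:

        TE(X -> Y, u) = I( Y_t ; X_{t-u}^(l) | Y_{t-1}^(k) ),    delta_hat = argmax_u TE(X -> Y, u),

    so the reconstructed interaction delay delta_hat is the value of u that maximizes the delayed transfer entropy.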

    Investigating Information Flows in Spiking Neural Networks With High Fidelity

    The brains of many organisms are capable of a wide variety of complex computations. This capability must be undergirded by a more general-purpose computational capacity. The exact nature of this capacity, how it is distributed across the brains of organisms and how it arises throughout the course of development are open topics of scientific investigation. Individual neurons are widely considered to be the fundamental computational units of brains. Moreover, the finest scale at which large-scale recordings of brain activity can be performed is the spiking activity of neurons, and our ability to perform these recordings over large numbers of neurons and with fine spatial resolution is increasing rapidly. This makes the spiking activity of individual neurons a highly attractive data modality on which to study neural computation. The framework of information dynamics has proven to be a successful approach to interrogating the capacity for general-purpose computation. It does this by revealing the atomic information processing operations of information storage, transfer and modification. Unfortunately, the study of information flows and other information processing operations from the spiking activity of neurons has been severely hindered by the lack of effective tools for estimating these quantities on this data modality. This thesis remedies this situation by presenting an estimator for information flows, as measured by Transfer Entropy (TE), that operates in continuous time on event-based data such as spike trains. Unlike the previous approach to the estimation of this quantity, which discretised the process into time bins, this estimator operates on the raw inter-spike intervals. It is demonstrated to be far superior to the previous discrete-time approach in terms of consistency, rate of convergence and bias. Most importantly, unlike the discrete-time approach, which requires a hard tradeoff between capturing fine temporal precision and capturing history effects occurring over reasonable time intervals, this estimator can capture history effects occurring over relatively large intervals without any loss of temporal precision. This estimator is applied to developing dissociated cultures of cortical rat neurons, thereby providing the first high-fidelity study of information flows on spiking data. It is found that the spatial structure of the flows locks in, to a significant extent, at the point of their emergence, and that certain nodes occupy specialised computational roles as transmitters, receivers or mediators of information flow. Moreover, these roles are also found to lock in early. In order to fully understand the structure of neural information flows, however, we are required to go beyond pairwise interactions; indeed, multivariate information flows have become an important tool in the inference of effective networks from neuroscience data. These are directed networks in which each node is connected to a minimal set of sources that maximally reduce the uncertainty in its present state. However, the application of multivariate information flows to the inference of effective networks from spiking data has been hampered by the above-mentioned issues with preexisting estimation techniques.
    Here, a greedy algorithm which iteratively builds a set of parents for each target node using multivariate transfer entropies, and which has already been well validated in the context of traditional discretely sampled time series, is adapted for use in conjunction with the newly developed estimator for event-based data. The combination of the greedy algorithm and the continuous-time estimator is then validated on simulated examples for which the ground truth is known. The new capabilities in the estimation of information flows and the inference of effective networks on event-based data presented in this work represent a substantial step forward in our ability to perform these analyses on the ever-growing set of high-resolution, large-scale recordings of interacting neurons. As such, this work promises to enable substantial quantitative insights regarding how neurons interact, how they process information, and how this changes under different conditions such as disease.
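
    A sketch of the greedy parent-selection loop described above. Here cond_te is a hypothetical stand-in for a conditional transfer entropy estimator with a surrogate-based significance test (for example, the continuous-time estimator developed in the thesis), and alpha is an assumed significance threshold:

        def greedy_parent_selection(target, candidates, cond_te, alpha=0.05):
            """Iteratively build a minimal parent set for one target node.

            cond_te(source, target, parents) -> (te, p_value) is assumed to
            return the transfer entropy from source to target conditioned on
            the already selected parents, plus a surrogate-test p-value.
            """
            parents, remaining = [], list(candidates)
            while remaining:
                # Score every remaining candidate against the current parent
                # set; keep the one contributing the most conditional TE.
                (te, p), best = max(
                    ((cond_te(s, target, parents), s) for s in remaining),
                    key=lambda scored: scored[0][0],
                )
                if p >= alpha:  # strongest remaining source not significant: stop
                    break
                parents.append(best)
                remaining.remove(best)
            return parents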