On directed information theory and Granger causality graphs
Directed information theory deals with communication channels with feedback.
When applied to networks, a natural extension based on causal conditioning is
needed. We show here that measures built from directed information theory in
networks can be used to assess Granger causality graphs of stochastic
processes. We show that directed information theory encompasses measures such as
the transfer entropy, and that it provides the appropriate information-theoretic
framework for neuroscience applications such as connectivity inference
problems.
Comment: accepted for publication, Journal of Computational Neuroscience
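As a concrete illustration of the kind of measure referred to above, a minimal plug-in estimator of the transfer entropy for discrete-valued series can be sketched as follows (history length fixed to one; the function name and interface are illustrative, not taken from the paper):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of the transfer entropy T_{X->Y}
    for discrete-valued series, with history length 1:
    T = sum p(y_{t+1}, y_t, x_t) * log2[ p(y_{t+1}|y_t,x_t) / p(y_{t+1}|y_t) ].
    """
    x, y = np.asarray(x), np.asarray(y)
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles = Counter(y[:-1])                       # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]        # p(y_{t+1} | y_t, x_t)
        p_cond_marg = pairs_yy[(y1, y0)] / singles[y0]  # p(y_{t+1} | y_t)
        te += (c / n) * np.log2(p_cond_full / p_cond_marg)
    return te
```

With x an i.i.d. bit stream and y a one-step-delayed copy of x, the estimate approaches 1 bit in the x→y direction and 0 in the reverse direction, consistent with the directional character of the measure.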
Hierarchy of neural organization in the embryonic spinal cord: Granger-causality graph analysis of in vivo calcium imaging data
The recent development of genetically encoded calcium indicators enables
monitoring the activity of neuronal populations in vivo. Most analyses of these
calcium transients rely on linear regression based on the sensory
stimulus applied or the behavior observed. To estimate the basic properties of
the functional neural circuitry, we propose a network-based approach built on
calcium imaging recorded at single-cell resolution. In contrast to previous
analyses based on cross-correlation, we used Granger-causality estimates to
infer the propagation of activity between neurons. The
resulting functional networks were then modeled as directed graphs and
characterized in terms of connectivity and node centralities. We applied our
approach to calcium transients recorded at low frequency (4 Hz) in ventral
neurons of the zebrafish spinal cord at the embryonic stage when spontaneous
coiling of the tail occurs. Our analysis of population calcium imaging data
revealed strong ipsilateral connectivity and a characteristic hierarchical
organization of the network hubs that supported the established rostrocaudal
propagation of activity in the spinal cord. Our method could be used to detect
functional defects in neuronal circuitry during development and in pathological
conditions.
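A toy version of the pipeline described above — pairwise linear Granger tests assembled into a directed graph whose hubs are ranked by out-degree — might look like the sketch below. The lag and threshold are arbitrary illustrative choices, not the paper's settings, and the threshold is not a calibrated p-value:

```python
import numpy as np

def _lags(v, lag):
    # design columns v[t-1], ..., v[t-lag], aligned with targets v[lag:]
    n = len(v)
    return np.column_stack([v[lag - i : n - i] for i in range(1, lag + 1)])

def granger_stat(x, y, lag=2):
    """F-like statistic: improvement in predicting y when past values
    of x are added to an autoregressive model of y."""
    n = len(y)
    target = y[lag:]
    Xr = np.column_stack([np.ones(n - lag), _lags(y, lag)])  # restricted model
    Xf = np.column_stack([Xr, _lags(x, lag)])                # full model
    rss_r = np.sum((target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]) ** 2)
    dof = (n - lag) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / dof)

def causality_graph(traces, lag=2, thresh=4.0):
    """Directed adjacency matrix over a list of 1-D activity traces:
    adj[i, j] is True when trace i Granger-causes trace j at the
    (hypothetical, uncalibrated) threshold. Hubs can then be ranked
    by out-degree, adj.sum(axis=1)."""
    m = len(traces)
    adj = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(m):
            if i != j:
                adj[i, j] = granger_stat(traces[i], traces[j], lag) > thresh
    return adj
```

On a pair where y is a noisy delayed copy of x, the statistic is large in the x→y direction and near its null level in the reverse direction, so the recovered graph contains only the x→y edge.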
Permutation Complexity and Coupling Measures in Hidden Markov Models
In [Haruna, T. and Nakajima, K., 2011. Physica D 240, 1370-1377], the authors
introduced the duality between values (words) and orderings (permutations) as a
basis to discuss the relationship between information theoretic measures for
finite-alphabet stationary stochastic processes and their permutation
analogues. This duality has been used to give a simple proof of the equality between the
entropy rate and the permutation entropy rate for any finite-alphabet
stationary stochastic process, and to show some results on the excess entropy and
the transfer entropy for finite-alphabet stationary ergodic Markov processes.
In this paper, we extend our previous results to hidden Markov models and show
the equalities between various information theoretic complexity and coupling
measures and their permutation analogues. In particular, we show the following
two results within the realm of hidden Markov models with ergodic internal
processes: the two permutation analogues of the transfer entropy, namely the
symbolic transfer entropy and the transfer entropy on rank vectors, are both
equivalent to the transfer entropy when considered as rates, and directed
information theory can be captured by the permutation entropy approach.
Comment: 26 pages
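For readers unfamiliar with the permutation approach, the basic quantity involved — the permutation entropy, i.e. the Shannon entropy of the ordinal patterns of consecutive samples (the Bandt–Pompe symbolization) — can be computed with a generic sketch such as this (not the paper's construction):

```python
import numpy as np
from collections import Counter
from math import log2

def permutation_entropy(series, m=3):
    """Permutation entropy (bits) of order m: the Shannon entropy of
    the distribution of ordinal patterns formed by m consecutive
    samples."""
    s = np.asarray(series)
    # each window of m samples is mapped to the permutation that sorts it
    patterns = Counter(
        tuple(np.argsort(s[i : i + m])) for i in range(len(s) - m + 1)
    )
    total = sum(patterns.values())
    return -sum((c / total) * log2(c / total) for c in patterns.values())
```

A monotone series produces a single ordinal pattern and hence zero entropy, while i.i.d. noise approaches the maximum of log2(m!) bits; the permutation entropy *rate* discussed in the abstract is obtained by letting the pattern length grow.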
Supervised estimation of Granger-based causality between time series
Brain effective connectivity aims to detect causal interactions between distinct brain units, and it is typically studied through the analysis of direct measurements of neural activity, e.g., magneto/electroencephalography (M/EEG) signals. The literature on methods for causal inference is vast. It includes model-based methods, in which a generative model of the data is assumed, and model-free methods, which directly infer causality from the probability distribution of the underlying stochastic process. Here, we first focus on the model-based methods developed from the Granger criterion of causality, which assumes an autoregressive model of the data. Second, we introduce a new perspective that looks at the problem in a way typical of the machine learning literature, formulating causality detection as a supervised learning task through a classification-based approach: a classifier is trained to identify causal interactions between time series for the chosen model, by means of a proposed feature space. In this paper, we compare this classification-based approach with the standard Geweke measure of causality in the time domain through a simulation study. To this end, we customized our approach to the case of a multivariate autoregressive (MAR) model and designed a feature space containing causality measures based on the ideas of precedence and predictability in time. Two variations of the supervised method are proposed and compared to a standard Granger causal analysis method. The simulation results show that the supervised method outperforms the standard approach; in particular, it is more robust to noise. As evidence of the efficacy of the proposed method, we report the details of our submission to the causality detection competition of Biomag2014, where the proposed method reached second place.
Moreover, as an empirical application, we applied the supervised approach to a dataset of neural recordings from rats, obtaining an important reduction in the false positive rate.
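The classification idea can be illustrated with a toy version: simulated pairs of series labeled causal/non-causal, a two-dimensional feature space of lagged cross-correlations (a crude stand-in for the paper's precedence/predictability features), and a small logistic-regression classifier. All names and settings here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def causality_features(x, y, lag=1):
    """Feature vector for the ordered pair (x, y): the lagged
    cross-correlations corr(x_{t-lag}, y_t) and corr(y_{t-lag}, x_t)."""
    f1 = np.corrcoef(x[:-lag], y[lag:])[0, 1]
    f2 = np.corrcoef(y[:-lag], x[lag:])[0, 1]
    return np.array([f1, f2])

def fit_logistic(F, labels, lr=0.5, steps=2000):
    """Minimal logistic-regression trainer via batch gradient descent."""
    Fb = np.column_stack([np.ones(len(F)), F])  # prepend bias column
    w = np.zeros(Fb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Fb @ w))
        w += lr * Fb.T @ (labels - p) / len(F)
    return w

def predict(w, F):
    Fb = np.column_stack([np.ones(len(F)), F])
    return (Fb @ w > 0).astype(int)
```

Trained on feature vectors from pairs where x drives y (label 1) versus independent pairs (label 0), the classifier separates the two classes on held-out pairs; the paper's feature space and competition submission are, of course, considerably richer than this sketch.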
Transfer entropy rate through Lempel-Ziv complexity
The transfer entropy and the transfer entropy rate are closely related concepts that measure information exchange between two dynamical systems. These measures allow us to study linear and nonlinear causality relations and can be estimated through different methodologies; however, some of these assume a data model and/or are computationally expensive. This article presents a methodology to estimate the transfer entropy rate between two systems through the Lempel-Ziv complexity. This methodology offers a set of advantages: it estimates the transfer entropy rate from two single discrete series of measurements, it is not computationally expensive, and it does not assume any data model. The simulation results over three different unidirectionally coupled dynamical systems suggest that this methodology can be used to assess the direction and strength of the information flow between systems. Moreover, it provides good estimates for short-length time series.
Fil: Restrepo Rinckoar, Juan Felipe. Consejo Nacional de Investigaciones Científicas y Técnicas. Universidad Nacional de Entre Ríos. Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática. Laboratorio de Señales y Dinámicas no Lineales; Argentina. Universidad Nacional de Entre Ríos. Facultad de Ingeniería. Departamento de Matemática e Informática. Laboratorio de Señales y Dinámicas no Lineales; Argentina.
Fil: Mateos, Diego Martín. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Santa Fe. Instituto de Matemática Aplicada del Litoral. Universidad Nacional del Litoral. Instituto de Matemática Aplicada del Litoral; Argentina. Universidad Autónoma de Entre Ríos. Facultad de Ciencia y Tecnología; Argentina.
Fil: Schlotthauer, Gaston. Consejo Nacional de Investigaciones Científicas y Técnicas. Universidad Nacional de Entre Ríos. Instituto de Investigación y Desarrollo en Bioingeniería y Bioinformática. Laboratorio de Señales y Dinámicas no Lineales; Argentina. Universidad Nacional de Entre Ríos. Facultad de Ingeniería. Departamento de Matemática e Informática. Laboratorio de Señales y Dinámicas no Lineales; Argentina.
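The core ingredient of such an approach — estimating an entropy rate from the Lempel-Ziv (1976) parsing of a single discrete series — can be sketched as follows; how the paper combines these per-sequence estimates into a transfer entropy rate is not reproduced here:

```python
import math

def lz76_phrases(symbols):
    """Number of phrases in the Lempel-Ziv (1976) exhaustive-history
    parsing of a symbol sequence."""
    s = ''.join(map(str, symbols))
    n, i, c = len(s), 0, 0
    while i < n:
        k = 1
        # grow the current phrase while it already occurs earlier
        while i + k <= n and s[i : i + k] in s[: i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def lz_entropy_rate(symbols):
    """Entropy-rate estimate in bits/symbol: c(n) * log2(n) / n,
    which converges to the true entropy rate for stationary ergodic
    sources."""
    n = len(symbols)
    return lz76_phrases(symbols) * math.log2(n) / n
```

A periodic sequence compresses into a handful of phrases and yields an estimate near zero, while an i.i.d. fair-bit sequence yields an estimate near 1 bit/symbol, matching the model-free, low-cost behavior the abstract emphasizes.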