On directed information theory and Granger causality graphs
Directed information theory deals with communication channels with feedback.
When applied to networks, a natural extension based on causal conditioning is
needed. We show here that measures built from directed information theory in
networks can be used to assess Granger causality graphs of stochastic
processes. We show that directed information theory includes measures such as
the transfer entropy, and that it is the adequate information theoretic
framework needed for neuroscience applications, such as connectivity inference
problems. Comment: accepted for publication, Journal of Computational Neuroscience.
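As a concrete illustration of the kind of measure the abstract refers to (a toy sketch, not the paper's construction): order-1 transfer entropy from X to Y is the conditional mutual information TE(X→Y) = I(Y_t ; X_{t-1} | Y_{t-1}), which a plug-in estimator can compute directly from empirical counts on binary time series:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Order-1 plug-in transfer entropy TE(x -> y) in bits, binary data."""
    triples = list(zip(y[1:], x[:-1], y[:-1]))  # (Y_t, X_{t-1}, Y_{t-1})
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yt, yp) for yt, _, yp in triples)
    c_xz = Counter((xp, yp) for _, xp, yp in triples)
    c_z = Counter(yp for _, _, yp in triples)
    return sum(
        (c / n) * log2(c * c_z[yp] / (c_yz[(yt, yp)] * c_xz[(xp, yp)]))
        for (yt, xp, yp), c in c_xyz.items()
    )

# x drives y with a one-step lag, so the measure is strongly asymmetric.
x = [0, 1, 1, 0, 1, 0, 0, 1] * 50
y = [0] + x[:-1]
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True
```

The asymmetry is the point: unlike mutual information, this directed measure distinguishes the driver from the driven process.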
Measuring information-transfer delays
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
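The core idea above can be sketched in a few lines (a hedged plug-in version on binary data; the paper's estimator is more sophisticated): scan candidate delays u, estimate the delayed transfer entropy TE_u(X→Y) = I(Y_t ; X_{t-u} | Y_{t-1}) while keeping the conditioning on the immediately previous target state, and take the maximizing u as the delay estimate:

```python
from collections import Counter
from math import log2
import random

def delayed_te(x, y, u):
    """Plug-in estimate of I(Y_t ; X_{t-u} | Y_{t-1}) in bits, binary data."""
    triples = [(y[t], x[t - u], y[t - 1]) for t in range(max(u, 1), len(y))]
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yt, yp) for yt, _, yp in triples)
    c_xz = Counter((xu, yp) for _, xu, yp in triples)
    c_z = Counter(yp for _, _, yp in triples)
    return sum(
        (c / n) * log2(c * c_z[yp] / (c_yz[(yt, yp)] * c_xz[(xu, yp)]))
        for (yt, xu, yp), c in c_xyz.items()
    )

random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
# y copies x with a true delay of 3 steps, flipped 10% of the time.
y = x[:3] + [xi if random.random() < 0.9 else 1 - xi for xi in x[:-3]]

best = max(range(1, 6), key=lambda u: delayed_te(x, y, u))
print(best)  # the true interaction delay, 3
```

All parameters (delay 3, 10% flip noise, the scan range 1..5) are illustrative choices for this toy demonstration.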
Information Flow in Computational Systems
We develop a theoretical framework for defining and identifying flows of
information in computational systems. Here, a computational system is assumed
to be a directed graph, with "clocked" nodes that send transmissions to each
other along the edges of the graph at discrete points in time. We are
interested in a definition that captures the dynamic flow of information about
a specific message, and which guarantees an unbroken "information path" between
appropriately defined inputs and outputs in the directed graph. Prior measures,
including those based on Granger Causality and Directed Information, fail to
provide clear assumptions and guarantees about when they correctly reflect
information flow about a message. We take a systematic approach---iterating
through candidate definitions and counterexamples---to arrive at a definition
for information flow that is based on conditional mutual information, and which
satisfies desirable properties, including the existence of information paths.
Finally, we describe how information flow might be detected in a noiseless
setting, and provide an algorithm to identify information paths on the
time-unrolled graph of a computational system. Comment: Significantly revised version, accepted for publication in the IEEE Transactions on Information Theory.
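The definition the abstract arrives at is built on conditional mutual information. A minimal plug-in estimator of I(A ; B | C) for discrete samples (the paper's conditioning sets and path construction are more involved) illustrates why conditioning matters: two independent transmissions can become fully dependent once a downstream value is conditioned on.

```python
from collections import Counter
from math import log2

def cmi(samples):
    """Plug-in conditional mutual information I(A ; B | C) in bits."""
    n = len(samples)
    c_abc = Counter(samples)
    c_ac = Counter((a, c) for a, _, c in samples)
    c_bc = Counter((b, c) for _, b, c in samples)
    c_c = Counter(c for _, _, c in samples)
    return sum(
        (k / n) * log2(k * c_c[c] / (c_ac[(a, c)] * c_bc[(b, c)]))
        for (a, b, c), k in c_abc.items()
    )

# XOR example: A and B are independent (I(A;B) = 0), yet conditioned on
# C = A xor B they are fully dependent: I(A;B|C) = 1 bit.
samples = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
print(cmi(samples))  # 1.0
```

This non-monotone behavior under conditioning is one reason simple pairwise measures can misreport information flow in a graph.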
Justification of Logarithmic Loss via the Benefit of Side Information
We consider a natural measure of relevance: the reduction in optimal
prediction risk in the presence of side information. For any given loss
function, this relevance measure captures the benefit of side information for
performing inference on a random variable under this loss function. When such a
measure satisfies a natural data processing property, and the random variable
of interest has alphabet size greater than two, we show that it is uniquely
characterized by the mutual information, and the corresponding loss function
coincides with logarithmic loss. In doing so, our work provides a new
characterization of mutual information, and justifies its use as a measure of
relevance. When the alphabet is binary, we characterize the only admissible
forms the measure of relevance can assume while obeying the specified data
processing property. Our results naturally extend to measuring causal influence
between stochastic processes, where we unify different causal-inference
measures in the literature as instantiations of directed information.
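The identity underlying the abstract can be checked numerically: under logarithmic loss the optimal expected prediction risk for X is H(X), with side information Y it is H(X|Y), so the reduction in risk is exactly the mutual information I(X;Y). A small check on an illustrative joint pmf:

```python
from math import log2

p = {  # joint pmf p(x, y); the numbers are illustrative
    (0, 0): 0.3, (0, 1): 0.2,
    (1, 0): 0.1, (1, 1): 0.4,
}
px = {x: sum(v for (xx, _), v in p.items() if xx == x) for x in (0, 1)}
py = {y: sum(v for (_, yy), v in p.items() if yy == y) for y in (0, 1)}

H_X = -sum(v * log2(v) for v in px.values())                      # risk without Y
H_X_given_Y = -sum(v * log2(v / py[y]) for (x, y), v in p.items())  # risk with Y
I_XY = sum(v * log2(v / (px[x] * py[y])) for (x, y), v in p.items())

assert abs((H_X - H_X_given_Y) - I_XY) < 1e-12  # reduction equals I(X;Y)
```

The abstract's contribution runs in the other direction: it shows that, given the data processing property and alphabet size greater than two, log loss is the only loss for which such a relevance measure can arise.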
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
It is crucial to ask how agents can achieve goals by generating action plans
using only partial models of the world acquired through habituated
sensory-motor experiences. Although many existing robotics studies use a
forward model framework, there are generalization issues with high degrees of
freedom. The current study shows that the predictive coding (PC) and active
inference (AIF) frameworks, which employ a generative model, can develop better
generalization by learning a prior distribution in a low dimensional latent
state space representing probabilistic structures extracted from well
habituated sensory-motor trajectories. In our proposed model, learning is
carried out by inferring optimal latent variables as well as synaptic weights
for maximizing the evidence lower bound, while goal-directed planning is
accomplished by inferring latent variables for maximizing the estimated lower
bound. Our proposed model was evaluated with both simple and complex robotic
tasks in simulation, which demonstrated sufficient generalization in learning
with limited training data by setting an intermediate value for a
regularization coefficient. Furthermore, comparative simulation results show
that the proposed model outperforms a conventional forward model in
goal-directed planning, due to the learned prior confining the search of motor
plans within the range of habituated trajectories. Comment: 30 pages, 19 figures.
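A toy sketch of the "planning as inference" idea described above (not the paper's variational RNN; the linear decoder, dimensions, and step sizes are all illustrative assumptions): with a fixed generative map g(z), goal-directed planning amounts to inferring the latent z that maximizes a lower-bound-style objective, here log N(goal | g(z), σ²) + log N(z | 0, I), by gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))            # toy decoder: g(z) = W @ z
goal = W @ np.array([0.7, -0.4])       # a reachable goal state
sigma2 = 0.5                           # observation noise variance

z = np.zeros(2)
for _ in range(2000):
    # Gradient of log N(goal | W z, sigma2 I) + log N(z | 0, I) w.r.t. z
    grad = W.T @ (goal - W @ z) / sigma2 - z
    z += 0.01 * grad

# The inferred latent decodes close to the goal, up to the prior's pull.
print(np.linalg.norm(goal - W @ z))
```

The prior term keeps the inferred latent near the region the model has actually learned, which is the toy analogue of the abstract's point that planning stays confined to habituated trajectories.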
Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles
Reconstructing transcriptional regulatory networks is an important task in
functional genomics. Data obtained from experiments that perturb genes by
knockouts or RNA interference contain useful information for addressing this
reconstruction problem. However, such data can be limited in size and/or are
expensive to acquire. On the other hand, observational data of the organism in
steady state (e.g. wild-type) are more readily available, but their
informational content is inadequate for the task at hand. We develop a
computational approach to appropriately utilize both data sources for
estimating a regulatory network. The proposed approach is based on a three-step
algorithm to estimate the underlying directed but cyclic network, that uses as
input both perturbation screens and steady state gene expression data. In the
first step, the algorithm determines causal orderings of the genes that are
consistent with the perturbation data, by combining an exhaustive search method
with a fast heuristic that in turn couples a Monte Carlo technique with a fast
search algorithm. In the second step, for each obtained causal ordering, a
regulatory network is estimated using a penalized likelihood based method,
while in the third step a consensus network is constructed from the highest
scored ones. Extensive computational experiments show that the algorithm
performs well in reconstructing the underlying network and clearly outperforms
competing approaches that rely only on a single data source. Further, it is
established that the algorithm produces a consistent estimate of the regulatory
network. Comment: 24 pages, 4 figures, 6 tables.
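A stripped-down sketch of the second step described above: given one causal ordering (here assumed already recovered from perturbation screens), estimate each gene's regulators by penalized regression of its expression on the genes preceding it in the ordering. Ridge regression with a hard coefficient threshold stands in for the paper's penalized-likelihood method, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
g0 = rng.normal(size=n)
g1 = 2.0 * g0 + 0.1 * rng.normal(size=n)    # g0 regulates g1
g2 = -1.5 * g1 + 0.1 * rng.normal(size=n)   # g1 regulates g2
X = np.column_stack([g0, g1, g2])
order = [0, 1, 2]                           # causal ordering (step 1's output)

lam, thresh = 0.1, 0.5                      # illustrative tuning values
edges = set()
for i in range(1, len(order)):
    target, preds = order[i], order[:i]
    A = X[:, preds]
    # Ridge fit of the target gene on its potential regulators
    beta = np.linalg.solve(A.T @ A + lam * np.eye(len(preds)),
                           A.T @ X[:, target])
    edges |= {(p, target) for p, b in zip(preds, beta) if abs(b) > thresh}

print(sorted(edges))  # recovers g0 -> g1 and g1 -> g2
```

In the paper this is repeated per candidate ordering, with the consensus step keeping edges supported by the highest-scoring networks.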