Neuronal assembly dynamics in supervised and unsupervised learning scenarios
The dynamic formation of groups of neurons—neuronal assemblies—is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system’s variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimulus modulation of the sensorimotor interactions.
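The Kuramoto model of coupled phase oscillators mentioned in the abstract can be illustrated with a minimal sketch of its classic form; the network size, coupling strength K, frequency spread, and integration settings below are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + (K / n) * coupling)

def order_parameter(theta):
    """Global synchrony r in [0, 1]: r = |mean(exp(i * theta))|."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 50)   # random initial phases
omega = rng.normal(0.0, 0.5, 50)        # heterogeneous natural frequencies
for _ in range(5000):
    theta = kuramoto_step(theta, omega, K=2.0)
r = order_parameter(theta)              # high r: phases have synchronized
```

With coupling well above the critical value for this frequency spread, the oscillators lock into a synchronized cluster; tuning K (or restricting coupling to subsets of oscillators) is the kind of control over synchronization and assembly configuration the abstract refers to.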
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Exploiting the theory of state space models, we derive the exact expressions
of the information transfer, as well as redundant and synergistic transfer, for
coupled Gaussian processes observed at multiple temporal scales. All of the
terms, constituting the frameworks known as interaction information
decomposition and partial information decomposition, can thus be analytically
obtained for different time scales from the parameters of the VAR model that
fits the processes. We report the application of the proposed methodology
firstly to benchmark Gaussian systems, showing that this class of systems may
generate patterns of information decomposition characterized by mainly
redundant or synergistic information transfer persisting across multiple time
scales or even by the alternating prevalence of redundant and synergistic
source interaction depending on the time scale. Then, we apply our method to an
important topic in neuroscience, i.e., the detection of causal interactions in
human epilepsy networks, for which we show the relevance of partial information
decomposition to the detection of multiscale information transfer spreading
from the seizure onset zone.
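For jointly Gaussian variables, the terms of the interaction information decomposition follow in closed form from the covariance matrix. The sketch below uses a static covariance as a simplification of the paper's state-space treatment; the covariance values are illustrative assumptions:

```python
import numpy as np

def gaussian_mi(cov, ix, iy):
    """Exact mutual information (nats) between Gaussian subsets ix and iy:
    I(X;Y) = 0.5 * log( det(C_X) * det(C_Y) / det(C_XY) )."""
    c = np.asarray(cov)
    cx = c[np.ix_(ix, ix)]
    cy = c[np.ix_(iy, iy)]
    cxy = c[np.ix_(ix + iy, ix + iy)]
    return 0.5 * np.log(np.linalg.det(cx) * np.linalg.det(cy)
                        / np.linalg.det(cxy))

# Toy covariance for (S1, S2, T); values are illustrative only.
cov = np.array([[1.0, 0.2, 0.5],
                [0.2, 1.0, 0.4],
                [0.5, 0.4, 1.0]])

i_joint = gaussian_mi(cov, [0, 1], [2])   # I(S1,S2 ; T)
i_s1 = gaussian_mi(cov, [0], [2])         # I(S1 ; T)
i_s2 = gaussian_mi(cov, [1], [2])         # I(S2 ; T)
# Interaction information: > 0 indicates net synergy, < 0 net redundancy.
interaction = i_joint - i_s1 - i_s2
```

In this toy configuration the two sources carry overlapping information about the target, so the interaction term comes out negative (redundancy-dominated); the paper's multiscale analysis tracks how this balance changes with time scale.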
The information transmitted by spike patterns in single neurons
Spike patterns have been reported to encode sensory information in several
brain areas. Here we assess the role of specific patterns in the neural code,
by comparing the amount of information transmitted with different choices of
the readout neural alphabet. This allows us to rank several alternative
alphabets depending on the amount of information that can be extracted from
them. One can thereby identify the specific patterns that constitute the most
prominent ingredients of the code. We finally discuss the interplay of
categorical and temporal information in the amount of synergy or redundancy in
the neural code. Comment: To be published in Journal of Physiology Paris 200
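Ranking readout alphabets by transmitted information can be sketched with a plug-in estimate of the mutual information between a stimulus label and two candidate spike-train readouts; the toy spike data and the two alphabets (spike count versus full temporal word) are assumptions for illustration:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(S;R) in bits from (stimulus, response) samples."""
    n = len(pairs)
    pj = Counter(pairs)
    ps = Counter(s for s, _ in pairs)
    pr = Counter(r for _, r in pairs)
    return sum(c / n * log2((c / n) / ((ps[s] / n) * (pr[r] / n)))
               for (s, r), c in pj.items())

# Toy spike trains: binary words over 4 time bins, recorded per stimulus.
data = [("A", (1, 0, 1, 0)), ("A", (1, 0, 1, 0)), ("A", (0, 1, 1, 0)),
        ("B", (1, 1, 0, 0)), ("B", (1, 1, 0, 0)), ("B", (0, 0, 1, 1))]

# Alphabet 1: total spike count (temporal structure discarded).
mi_count = mutual_information([(s, sum(w)) for s, w in data])
# Alphabet 2: the full temporal word.
mi_word = mutual_information([(s, w) for s, w in data])
```

Here every train has the same spike count, so the count alphabet transmits zero bits while the temporal-word alphabet recovers the full stimulus identity; comparing such estimates across alphabets is how specific patterns can be ranked as ingredients of the code.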
Network information and connected correlations
Entropy and information provide natural measures of correlation among
elements in a network. We construct here the information theoretic analog of
connected correlation functions: irreducible N-point correlation is measured
by a decrease in entropy for the joint distribution of N variables relative
to the maximum entropy allowed by all the observed (N-1)-variable
distributions. We calculate the "connected information" terms for several
examples, and show that it also enables the decomposition of the information
that is carried by a population of elements about an outside source. Comment: 4 pages, 3 figures
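The lowest-order comparison in this scheme, between the joint distribution and the maximum-entropy (independent) distribution fixed by the single-variable marginals, is the multi-information sum_i H(X_i) - H(X_1, ..., X_N). A minimal sketch over binary variables, with an assumed toy distribution:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def multi_information(samples):
    """Sum of marginal entropies minus joint entropy: the entropy drop
    relative to the max-entropy (independent) distribution."""
    k = len(samples[0])
    marginal = sum(entropy([x[i] for x in samples]) for i in range(k))
    return marginal - entropy(samples)

# Three perfectly correlated binary variables: each marginal is uniform
# (1 bit) but the joint has only two equiprobable states (1 bit),
# so the multi-information is 3 * 1 - 1 = 2 bits.
corr = [(0, 0, 0), (1, 1, 1)] * 4
total_corr = multi_information(corr)
```

The connected-information decomposition of the paper goes further, splitting this total into contributions irreducible at each order by comparing maximum-entropy fits constrained by successively higher-order marginals.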
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically-inspired
computing. Usually, inspiration is drawn from neural systems. This article
shows how to analyze neural systems using information theory to obtain
constraints that help identify the algorithms run by such systems and the
information they represent. Algorithms and representations identified
information-theoretically may then guide the design of biologically inspired
computing systems (BICS). The material covered includes the necessary
introduction to information theory and the estimation of information theoretic
quantities from neural data. We then show how to analyze the information
encoded in a system about its environment, and also discuss recent
methodological developments on the question of how much information each agent
carries about the environment either uniquely, or redundantly or
synergistically together with others. Last, we introduce the framework of local
information dynamics, where information processing is decomposed into component
processes of information storage, transfer, and modification -- locally in
space and time. We close by discussing example applications of these measures
to neural data and other complex systems.
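The information-transfer component of the dynamics described above can be illustrated with a plug-in estimate of transfer entropy between two binary time series (history length 1); the toy coupled process, in which the target copies the source with one step of delay, is an assumption for illustration:

```python
from collections import Counter
from math import log2

def transfer_entropy(src, tgt):
    """Plug-in transfer entropy TE(src -> tgt) in bits, history length 1:
    sum over (x', x, y) of p(x', x, y) * log2[ p(x'|x, y) / p(x'|x) ]."""
    triples = list(zip(tgt[1:], tgt[:-1], src[:-1]))  # (x', x, y)
    n = len(triples)
    c_xxy = Counter(triples)
    c_xy = Counter((x, y) for _, x, y in triples)
    c_xx = Counter((xn, x) for xn, x, _ in triples)
    c_x = Counter(x for _, x, _ in triples)
    te = 0.0
    for (xn, x, y), c in c_xxy.items():
        p_full = c / c_xy[(x, y)]          # p(x' | x, y)
        p_self = c_xx[(xn, x)] / c_x[x]    # p(x' | x)
        te += c / n * log2(p_full / p_self)
    return te

# Toy coupled process: tgt copies src with a one-step delay.
src = [0, 1, 1, 0, 1, 0, 0, 1] * 50
tgt = [0] + src[:-1]
te_fwd = transfer_entropy(src, tgt)   # strong transfer src -> tgt
te_bwd = transfer_entropy(tgt, src)   # weaker in the reverse direction
```

Because the source fully determines the target's next state beyond what the target's own past explains, the forward estimate is large while the reverse one is smaller; localizing such terms in space and time is the core idea of the local information dynamics framework.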