How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation
This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how can one characterize the statistical properties of sequences of action potentials ("spike trains") produced by neuronal networks, and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. On this basis, we use the thermodynamic formalism from ergodic theory to show that Gibbs distributions are the natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics.
Comment: 39 pages, 3 figures
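The maximum-entropy principle behind Gibbs distributions can be illustrated in miniature. The sketch below is not the paper's gIF construction: it assumes a single binary spike variable and one prescribed empirical firing rate, and solves in closed form for the Gibbs parameter that reproduces that average.

```python
import math

def gibbs_beta(m):
    """Solve for beta such that the expected spike rate under the
    maximum-entropy distribution P(s) = exp(beta*s)/Z, s in {0, 1},
    matches the empirical average m (0 < m < 1)."""
    return math.log(m / (1.0 - m))

def gibbs_mean(beta):
    """Expected value of s under the Gibbs distribution with parameter beta."""
    z = 1.0 + math.exp(beta)
    return math.exp(beta) / z

# The constraint is recovered exactly: a rate of 0.2 maps to a unique beta.
beta = gibbs_beta(0.2)
print(round(gibbs_mean(beta), 6))  # prints 0.2
```

With several constrained quantities the same construction gives a multi-parameter Gibbs measure, but the parameters must then be fitted numerically rather than in closed form.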
On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses
We present a mathematical analysis of a network of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that the spike time is only known within some \textit{finite} precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale $\delta$, where $\delta$ can be \textit{arbitrarily} small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weight space, traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.
Comment: 36 pages, 9 figures
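The idea of counting stable periodic orbits and relating their number to an entropy-like order parameter can be tried on a toy system. The following is a minimal illustration, not the paper's conductance-based model: a three-unit binary threshold network (the weight matrix is an arbitrary invented example) whose attractors are enumerated exhaustively from every initial state.

```python
import itertools
import math

def step(state, W, theta=0.5):
    """One synchronous update of a binary threshold network:
    unit i fires iff its weighted input exceeds theta."""
    n = len(state)
    return tuple(int(sum(W[i][j] * state[j] for j in range(n)) > theta)
                 for i in range(n))

def find_cycle(state, W):
    """Iterate until a state repeats; return the periodic orbit reached,
    sorted into a canonical tuple so identical orbits compare equal."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, W)
        t += 1
    cycle = [s for s, i in seen.items() if i >= seen[state]]
    return tuple(sorted(cycle))

# Invented synaptic weights; this W happens to yield two attractors:
# the fixed point (0,0,0) and a period-2 orbit {(0,1,0), (1,0,1)}.
W = [[0.0, 1.0, -0.5], [1.0, 0.0, 0.3], [-0.2, 0.8, 0.0]]
cycles = {find_cycle(s, W) for s in itertools.product([0, 1], repeat=3)}
order_param = math.log(len(cycles))  # grows with the number of attractors
print(len(cycles), round(order_param, 3))  # prints: 2 0.693
```

In the paper's setting the state space is continuous and the count can explode near the edge of chaos; the exhaustive enumeration here only works because the toy state space has 8 elements.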
Computational physics of the mind
In the XIX century and earlier such physicists as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in the research on psychophysics, trying to relate psychological sensations to intensities of physical stimuli. Computational physics allows to simulate complex neural processes giving a chance to answer not only the original psychophysical questions but also to create models of mind. In this paper several approaches relevant to modeling of mind are outlined. Since direct modeling of the brain functions is rather limited due to the complexity of such models a number of approximations is introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding of the mind seem to arise. From computational point of view realistic models require massively parallel architectures
Radar signal categorization using a neural network
Neural networks were used to analyze a complex simulated radar environment containing noisy radar pulses generated by many different emitters. The neural network used is an energy-minimizing network (the BSB model) which forms energy minima, attractors in the network's dynamical system, based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small database that could potentially make emitter identifications.
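The BSB ("Brain-State-in-a-Box") dynamics can be sketched compactly: the state vector is repeatedly fed back through the weight matrix and clipped to the box [-1, 1]^n, so learned patterns become corner attractors. The pattern and starting state below are illustrative stand-ins, not the radar-pulse parameter vectors of the study.

```python
def bsb_step(x, W, alpha=0.5):
    """One BSB update: add feedback through W, then clip every
    component to the box [-1, 1] (the 'box' in Brain-State-in-a-Box)."""
    n = len(x)
    y = [x[i] + alpha * sum(W[i][j] * x[j] for j in range(n))
         for i in range(n)]
    return [max(-1.0, min(1.0, v)) for v in y]

# Hebbian outer-product storage of one pattern (an invented example).
p = [1.0, -1.0, 1.0, -1.0]
n = len(p)
W = [[p[i] * p[j] / n for j in range(n)] for i in range(n)]

x = [0.4, -0.1, 0.3, -0.2]   # a noisy, attenuated version of p
for _ in range(20):
    x = bsb_step(x, W)
print(x)  # settles in the corner [1.0, -1.0, 1.0, -1.0]
```

Distinct stored patterns carve out distinct corners, which is what lets separate emitters map to separate stable attractors.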
Number Processing Pathways in Human Parietal Cortex
Numerous studies have identified the intraparietal sulcus (IPS) as an area critically involved in numerical processing. IPS neurons in macaques are tuned to a preferred numerosity, hence neurally coding numerosity in a number-selective way. Neuroimaging studies in humans have demonstrated number-selective processing in the anterior parts of the IPS. Nevertheless, the processes that convert visual input into a number-selective neural code remain unknown. Computational studies have suggested that a neural coding stage that is sensitive, but not selective to number, precedes number-selective coding when processing nonsymbolic quantities but not when processing symbolic quantities. In Experiment 1, we used functional magnetic resonance imaging to localize number-sensitive areas in the human brain by searching for areas exhibiting increasing activation with increasing number, carefully controlling for nonnumerical parameters. An area in posterior superior parietal cortex was identified as a substrate for the intermediate number-sensitive steps required for processing nonsymbolic quantities. In Experiment 2, the interpretation of Experiment 1 was confirmed with a connectivity analysis showing that a shared number-selective representation in IPS is reached through different pathways for symbolic versus nonsymbolic quantities. The preferred pathway for processing nonsymbolic quantities included the number-sensitive area in superior parietal cortex, whereas the pathway for processing symbolic quantities did not
Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression
In order to ensure an optimal quality of experience for end users during video streaming, automatic video quality assessment has become an important field of interest for video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it yields reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
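Genetic programming evolves and recombines expression trees scored by fit to the data. As a minimal stand-in for that search, the sketch below exhaustively scores a tiny linear grammar over two hypothetical bitstream features (a quantizer step `qp` and a packet-loss ratio `plr`; the sample points are invented, not the paper's data) and returns a readable, white-box model.

```python
# Invented (qp, plr, MOS) samples standing in for real subjective scores.
samples = [(26, 0.00, 4.5), (30, 0.01, 3.6), (34, 0.02, 2.8), (38, 0.05, 1.7)]

def candidates():
    """A tiny expression grammar: MOS ~ a - b*qp - c*plr, with
    coefficients drawn from small fixed grids."""
    for a in range(0, 11):
        for b in (0.0, 0.1, 0.2, 0.3):
            for c in (0.0, 10.0, 20.0):
                yield (a, b, c)

def mse(model):
    """Mean squared error of a candidate model over the samples."""
    a, b, c = model
    return sum((a - b * qp - c * plr - mos) ** 2
               for qp, plr, mos in samples) / len(samples)

best = min(candidates(), key=mse)
a, b, c = best
print(f"MOS ~ {a} - {b}*qp - {c}*plr")  # a human-readable white-box model
```

Unlike a black-box regressor, the winning formula directly exposes which parameters matter and by how much, which is the "human insight" benefit the abstract refers to; real GP replaces the exhaustive grid with evolutionary search over much richer trees.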
Bits from Biology for Computational Intelligence
Computational intelligence is broadly defined as biologically inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information-theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment uniquely, redundantly, or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
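A first step in such analyses is estimating how much information a response carries about the environment. A minimal sketch, using the plug-in (histogram) estimator of mutual information on invented binary data rather than real recordings:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples:
    I = sum_{x,y} p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

stim = [0, 1, 0, 1, 1, 0, 1, 0]      # a toy binary "environment"
copy = stim[:]                        # a unit that copies the stimulus
noise = [0, 0, 1, 1, 0, 0, 1, 1]      # a unit independent of it
print(mutual_information(stim, copy))   # prints 1.0 (one full bit)
print(mutual_information(stim, noise))  # prints 0.0
```

The same estimator applied to lagged variables gives the storage and transfer terms of local information dynamics, though the plug-in estimate is biased upward for small samples and real analyses use bias corrections.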
Optimal network topologies for information transmission in active networks
This work clarifies the relation between network circuit (topology) and behavior (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how to determine a network topology that is optimal for information transmission. By optimal, we mean that the network is able to transmit a large amount of information, possesses a large number of communication channels, and is robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
Comment: 20 pages, 12 figures
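The Laplacian-spectrum viewpoint can be tried on the simplest possible case. The sketch below uses a ring graph, whose Laplacian eigenvalues are known in closed form, and scores it with the eigenratio λ_N/λ_2 familiar from master-stability-function analyses of synchronization; the ring is an illustrative choice, not a topology the paper claims is optimal.

```python
import math

def ring_laplacian_eigenvalues(n):
    """Eigenvalues of the graph Laplacian of an n-node ring, which are
    known in closed form: lambda_k = 2 - 2*cos(2*pi*k/n), k = 0..n-1."""
    return sorted(2.0 - 2.0 * math.cos(2.0 * math.pi * k / n)
                  for k in range(n))

def eigenratio(eigs):
    """lambda_N / lambda_2: the smaller this ratio, the wider the range
    of coupling strengths for which the network can synchronize."""
    return eigs[-1] / eigs[1]

# Longer rings have a larger eigenratio, i.e. are harder to synchronize.
for n in (6, 12, 24):
    print(n, round(eigenratio(ring_laplacian_eigenvalues(n)), 2))
```

For an arbitrary topology the eigenvalues would come from a numerical eigensolver applied to the Laplacian built from the coupling matrix; the closed-form ring spectrum just keeps the example self-contained.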
Culture and Cancer
Genetic mechanisms, since they broadly involve information transmission, should be translatable into an information dynamics formalism. From this perspective we reconsider the adaptive mutator, one possible means of 'second order selection' by which a highly structured 'language' of environment and development writes itself onto the variation upon which evolutionary selection and tumorigenesis operate. Our approach uses recent results in the spirit of the Large Deviations Program of applied probability that permit the transfer of phase-transition approaches from statistical mechanics to information theory, generating evolutionary and developmental punctuation in what we claim to be a highly natural manner.