Spiking Neural P Systems with Addition/Subtraction Computing on Synapses
Spiking neural P systems (SN P systems, for short) are a class of distributed
and parallel computing models inspired by biological spiking neurons. In this paper,
we introduce a variant called SN P systems with addition/subtraction computing on
synapses (CSSN P systems). CSSN P systems are motivated by the shunting
inhibition of biological synapses, while incorporating ideas from dynamic graphs and
networks. We consider addition and subtraction operations on synapses, and prove that
CSSN P systems are computationally universal as number generators, under a normal
form (i.e., a simplifying set of restrictions).
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize results, and
point to relevant references in the literature.
Non-perturbative renormalization group analysis of nonlinear spiking networks
The critical brain hypothesis posits that neural circuits may operate close
to critical points of a phase transition, which has been argued to have
functional benefits for neural computation. Theoretical and computational
studies arguing for or against criticality in neural dynamics largely rely on
establishing power laws or scaling functions of statistical quantities, while a
proper understanding of critical phenomena requires a renormalization group
(RG) analysis. However, neural activity is typically non-Gaussian, nonlinear,
and non-local, rendering models that capture all of these features difficult to
study using standard statistical physics techniques. Here, we overcome these
issues by adapting the non-perturbative renormalization group (NPRG) to work on
(symmetric) network models of stochastic spiking neurons. By deriving a pair of
Ward-Takahashi identities and making a "local potential approximation," we
are able to calculate non-universal quantities such as the effective firing
rate nonlinearity of the network, allowing improved quantitative estimates of
network statistics. We also derive the dimensionless flow equation that admits
universal critical points in the renormalization group flow of the model, and
identify two important types of critical points: in networks with an absorbing
state there is a Directed Percolation (DP) fixed point corresponding to a
non-equilibrium phase transition between sustained activity and extinction of
activity, and in spontaneously active networks there is a complex-valued
critical point, corresponding to a spinodal transition observed, e.g., in the
Lee-Yang model of Ising magnets with explicitly broken symmetry. Our
Ward-Takahashi identities imply trivial dynamical exponents in both cases,
rendering it unclear whether these critical points fall into the known DP or
Ising universality classes.
The Kinetic Basis of Self-Organized Pattern Formation
In his seminal paper on morphogenesis (1952), Alan Turing demonstrated that
different spatio-temporal patterns can arise due to instability of the
homogeneous state in reaction-diffusion systems, but at least two species are
necessary to produce even the simplest stationary patterns. This paper aims
to propose a novel model of the analog (continuous state) kinetic automaton and
to show that stationary and dynamic patterns can arise in one-component
networks of kinetic automata. Possible applicability of kinetic networks to
modeling real-world phenomena is also discussed.
Comment: 8 pages, submitted to the 14th International Conference on the
Synthesis and Simulation of Living Systems (Alife 14) on 23.03.2014, accepted
09.05.201
Stochastic neural field theory and the system-size expansion
We analyze a master equation formulation of stochastic neurodynamics for a network of synaptically coupled homogeneous neuronal populations, each consisting of N identical neurons. The state of the network is specified by the fraction of active or spiking neurons in each population, and transition rates are chosen so that in the thermodynamic or deterministic limit (N → ∞) we recover standard activity-based or voltage-based rate models. We derive the lowest order corrections to these rate equations for large but finite N using two different approximation schemes, one based on the Van Kampen system-size expansion and the other based on path integral methods. Both methods yield the same series expansion of the moment equations, which at O(1/N) can be truncated to form a closed system of equations for the first and second order moments. Taking a continuum limit of the moment equations whilst keeping the system size N fixed generates a system of integrodifferential equations for the mean and covariance of the corresponding stochastic neural field model. We also show how the path integral approach can be used to study large deviation or rare event statistics underlying escape from the basin of attraction of a stable fixed point of the mean-field dynamics; such an analysis is not possible using the system-size expansion, since the latter cannot accurately determine exponentially small transitions.
Metastability, Criticality and Phase Transitions in the Brain and its Models
This essay extends the previously deposited paper "Oscillations, Metastability and Phase Transitions" to incorporate the theory of Self-organizing Criticality. The twin concepts of Scaling and Universality from the theory of nonequilibrium phase transitions are applied to the role of reentrant activity in neural circuits of the cerebral cortex and subcortical neural structures.