One-Dimensional Population Density Approaches to Recurrently Coupled Networks of Neurons with Noise
Mean-field systems have previously been derived for networks of coupled,
two-dimensional integrate-and-fire neurons, such as the Izhikevich, adaptive
exponential (AdEx), and quartic integrate-and-fire (QIF) models, among others.
Unfortunately, the mean-field systems have a degree of frequency error and the
networks analyzed often do not include noise when there is adaptation. Here, we
derive a one-dimensional partial differential equation (PDE) approximation for
the marginal voltage density under a first order moment closure for coupled
networks of integrate-and-fire neurons with white noise inputs. The PDE has
substantially less frequency error than the mean-field system, and provides a
great deal more information, at the cost of analytical tractability. The
convergence properties of the mean-field system in the low noise limit are
elucidated. A novel method for the analysis of the stability of the
asynchronous tonic firing solution is also presented and implemented. Unlike
previous attempts at stability analysis with these network types, information
about the marginal densities of the adaptation variables is used. This method
can in principle be applied to other systems with nonlinear partial
differential equations.
Comment: 26 pages, 6 figures
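The marginal voltage density that the PDE approximates can be illustrated with a minimal Monte Carlo sketch for an uncoupled leaky integrate-and-fire ensemble driven by white noise; all parameter values below are illustrative choices, not values from the paper.

```python
import numpy as np

# Empirical marginal voltage density of a leaky integrate-and-fire (LIF)
# ensemble with white-noise input; this is the quantity the 1D PDE tracks.
# Parameters (mu, sigma, v_th, v_reset) are illustrative.

rng = np.random.default_rng(0)

n = 5000            # ensemble size
dt = 1e-4           # time step (s)
tau = 0.02          # membrane time constant (s)
mu = 1.2            # mean drive (suprathreshold, so the ensemble fires)
sigma = 0.3         # noise amplitude
v_th, v_reset = 1.0, 0.0

v = np.zeros(n)
spikes = 0
steps = int(0.5 / dt)
for _ in range(steps):
    # Euler-Maruyama step for tau dv = (mu - v) dt + sigma sqrt(tau) dW
    v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal(n)
    fired = v >= v_th
    spikes += fired.sum()
    v[fired] = v_reset

# histogram of membrane voltages: the empirical marginal density
density, edges = np.histogram(v, bins=50, range=(v_reset - 0.5, v_th), density=True)
rate = spikes / (n * steps * dt)    # mean population firing rate (Hz)
print(f"population rate = {rate:.1f} Hz")
```

In the coupled setting of the paper the drive would itself depend on the population rate, and the first-order moment closure replaces the joint voltage-adaptation density by a marginal density of this kind.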
Theoretical connections between mathematical neuronal models corresponding to different expressions of noise
Identifying the right tools to express the stochastic aspects of neural
activity has proven to be one of the biggest challenges in computational
neuroscience. Although there is no definitive answer to this issue, the most
common procedure for expressing this randomness is the use of stochastic
models. According to the origin of the variability, sources of randomness are
classified as intrinsic or extrinsic, and they give rise to distinct mathematical
frameworks to track down the dynamics of the cell. While the external
variability is generally treated by the use of a Wiener process in models such
as the Integrate-and-Fire model, the internal variability is mostly expressed
via a random firing process. In this paper, we investigate how those distinct
expressions of variability can be related. To do so, we examine the probability
density functions of the corresponding stochastic models and investigate in
what way they can be mapped onto one another via integral transforms. Our
theoretical findings offer new insight into these particular categories of
variability and confirm that, despite their contrasting nature, the
mathematical formalizations of internal and external variability are strikingly
similar.
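The two noise formalisms the abstract contrasts can be put side by side in a small simulation: an integrate-and-fire neuron driven by a Wiener process (extrinsic noise) versus a deterministic membrane with a random firing, or escape-rate, process (intrinsic noise). The exponential hazard and all parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau, mu = 1e-4, 0.02, 0.8      # subthreshold mean drive
v_th, v_reset, T = 1.0, 0.0, 10.0
steps = int(T / dt)

# 1) extrinsic noise: dv = (mu - v) dt/tau + sigma sqrt(dt/tau) dW, hard threshold
sigma = 0.4
v, n_spk = 0.0, 0
for _ in range(steps):
    v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal()
    if v >= v_th:
        v, n_spk = v_reset, n_spk + 1
rate_wiener = n_spk / T

# 2) intrinsic noise: deterministic membrane, spikes drawn from an
#    inhomogeneous Poisson process with hazard rho(v) = rho0 exp((v - v_th)/beta)
rho0, beta = 100.0, 0.1
v, n_spk = 0.0, 0
for _ in range(steps):
    v += (mu - v) * dt / tau
    if rng.random() < rho0 * np.exp((v - v_th) / beta) * dt:
        v, n_spk = v_reset, n_spk + 1
rate_escape = n_spk / T

print(f"Wiener-noise rate: {rate_wiener:.1f} Hz; escape-rate model: {rate_escape:.1f} Hz")
```

Both mechanisms produce irregular firing from a subthreshold drive; the paper's contribution is to relate the probability density descriptions of the two via integral transforms.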
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and an increase in reliability by usage of the same
simulation code and the same network model specifications for both model
classes. While most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation.
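A minimal sketch of what time-continuous interactions mean in this setting: Euler integration of a delayed rate network, with past activity kept in a ring buffer, much as a spiking simulator buffers delayed events. All names and parameters below are illustrative, not the simulator's API.

```python
import numpy as np

# Euler integration of a rate network with a transmission delay d:
#   tau dx_i/dt = -x_i + sum_j W_ij phi(x_j(t - d))
# Past activity is kept in a ring buffer, mimicking how a spiking
# simulator buffers delayed interactions. Parameters are illustrative.

rng = np.random.default_rng(2)
n, tau, d, dt, T = 10, 0.01, 0.002, 1e-4, 0.2
W = rng.normal(0.0, 0.5 / np.sqrt(n), (n, n))   # weak random coupling
phi = np.tanh

delay_steps = max(1, int(round(d / dt)))        # steps of history to keep
buf = np.zeros((delay_steps, n))                # zero initial history
x = rng.normal(0.0, 0.1, n)
for k in range(int(T / dt)):
    idx = k % delay_steps
    x_delayed = buf[idx].copy()                 # activity from d seconds ago
    buf[idx] = x                                # store current state
    x = x + dt / tau * (-x + W @ phi(x_delayed))

print("final activity:", np.round(x, 3))
```

With this weak coupling the activity decays to the quiescent state; stronger or structured weights would sustain dynamics, and the framework's point is that the same connection infrastructure serves both model classes.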
Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities
In this paper we consider instabilities of localised solutions in planar neural field firing rate models of Wilson-Cowan or Amari type. Importantly we show that angular perturbations can destabilise spatially localised solutions. For a scalar model with Heaviside firing rate function we calculate symmetric one-bump and ring solutions explicitly and use an Evans function approach to predict the point of instability and the shapes of the dominant growing modes. Our predictions are shown to be in excellent agreement with direct numerical simulations. Moreover, beyond the instability our simulations demonstrate the emergence of multi-bump and labyrinthine patterns.
With the addition of spike-frequency adaptation, numerical simulations of the resulting vector model show that it is possible for structures without rotational symmetry, and in particular multi-bumps, to undergo an instability to a rotating wave. We use a general argument, valid for smooth firing rate functions, to establish the conditions necessary to generate such a rotational instability. Numerical continuation of the rotating wave is used to quantify the emergent angular velocity as a bifurcation parameter is varied. Wave stability is found via the numerical evaluation of an associated eigenvalue problem.
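A one-dimensional analogue of the Amari-type field makes the bump construction concrete; the paper's analysis is planar, and the kernel, threshold, and domain below are illustrative choices.

```python
import numpy as np

# 1D Amari-type field with Heaviside firing rate:
#   du/dt = -u + (w * H(u - h)),  with w a Mexican-hat kernel.
# A localized initial condition relaxes toward the stationary one-bump solution.

L, n, dt, h = 20.0, 256, 0.05, 0.3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

def kernel(r):
    # local excitation minus broader lateral inhibition
    return 2.0 * np.exp(-r**2) - 0.9 * np.exp(-r**2 / 4.0)

dist = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dist, L - dist)               # periodic domain
W = kernel(dist) * dx                           # discretized convolution

u = 1.5 * np.exp(-x**2)                         # superthreshold initial bump
for _ in range(400):                            # integrate to t = 20
    u = u + dt * (-u + W @ (u > h))

bump_width = dx * np.count_nonzero(u > h)
print(f"stationary bump width = {bump_width:.2f}")
```

In two dimensions the analogous radially symmetric bump can lose stability to angular perturbations, which is the splitting instability the paper analyzes with an Evans function.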
Stochastic neural field theory and the system-size expansion
We analyze a master equation formulation of stochastic neurodynamics for a network of synaptically coupled homogeneous neuronal populations, each consisting of N identical neurons. The state of the network is specified by the fraction of active or spiking neurons in each population, and transition rates are chosen so that in the thermodynamic or deterministic limit (N → ∞) we recover standard activity-based or voltage-based rate models. We derive the lowest order corrections to these rate equations for large but finite N using two different approximation schemes, one based on the Van Kampen system-size expansion and the other based on path integral methods. Both methods yield the same series expansion of the moment equations, which at O(1/N) can be truncated to form a closed system of equations for the first and second order moments. Taking a continuum limit of the moment equations whilst keeping the system size N fixed generates a system of integrodifferential equations for the mean and covariance of the corresponding stochastic neural field model. We also show how the path integral approach can be used to study large deviation or rare event statistics underlying escape from the basin of attraction of a stable fixed point of the mean-field dynamics; such an analysis is not possible using the system-size expansion, since the latter cannot accurately determine exponentially small transitions.
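The master-equation setup can be mimicked with a small Gillespie simulation; the transition rates and gain function below are illustrative stand-ins for those in the paper, chosen so that the deterministic limit is a standard activity-based rate equation.

```python
import numpy as np

# Gillespie sketch of a one-population master equation: n of N neurons are
# active, with activation rate (N - n) f(n/N) and inactivation rate n, so
# that as N -> infinity the active fraction x = n/N obeys
#   dx/dt = (1 - x) f(x) - x.
# The gain f and all parameters are illustrative.

rng = np.random.default_rng(3)

def f(x):
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.3)))   # sigmoidal gain

def gillespie(N, T, burn=5.0):
    t, n, acc, wsum = 0.0, 0, 0.0, 0.0
    while t < T:
        up, down = (N - n) * f(n / N), float(n)
        step = rng.exponential(1.0 / (up + down))   # time to next event
        if t > burn:                                # time-weighted average
            acc += (n / N) * step
            wsum += step
        t += step
        n += 1 if rng.random() < up / (up + down) else -1
    return acc / wsum

# deterministic fixed point (1 - x) f(x) = x, by bisection
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if (1.0 - mid) * f(mid) - mid > 0:
        lo = mid
    else:
        hi = mid
x_star = 0.5 * (lo + hi)

fracs = {}
for N in (50, 1000):
    fracs[N] = gillespie(N, 20.0)
    print(f"N={N}: mean active fraction = {fracs[N]:.3f} (mean-field {x_star:.3f})")
```

Fluctuations of the active fraction about the mean-field fixed point shrink roughly like 1/sqrt(N), which is the regime the system-size expansion quantifies; escapes between basins of attraction are exponentially rare and need the path integral treatment.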
Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. When the underlying
system is fixed, we derive expressions relating the change in gain with
respect to both mean and variance to the receptive fields obtained from
reverse correlation on a white noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.
Comment: 24 pages, 4 figures, 1 supporting information
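The gain modulation by background noise can be reproduced with a plain leaky integrate-and-fire neuron rather than the conductance-based models of the paper; the Monte Carlo sketch below estimates the f-I slope at low and high input variance (all parameters are illustrative).

```python
import numpy as np

# f-I slope (gain) of a leaky integrate-and-fire neuron at two levels of
# input variance. Around threshold, added noise smooths the f-I curve and
# lowers its slope. Parameters are illustrative.

rng = np.random.default_rng(4)
dt, tau, v_th, v_reset = 1e-4, 0.02, 1.0, 0.0

def rate(mu, sigma, n=400, T=2.0):
    """Mean firing rate (Hz) of an ensemble of n LIF neurons."""
    v = np.zeros(n)
    spikes = 0
    steps = int(T / dt)
    for _ in range(steps):
        v += (mu - v) * dt / tau + sigma * np.sqrt(dt / tau) * rng.standard_normal(n)
        fired = v >= v_th
        spikes += fired.sum()
        v[fired] = v_reset
    return spikes / (n * T)

mu_lo, mu_hi = 0.95, 1.25    # inputs straddling the deterministic threshold
gains = {}
for sigma in (0.05, 0.5):
    gains[sigma] = (rate(mu_hi, sigma) - rate(mu_lo, sigma)) / (mu_hi - mu_lo)
    print(f"sigma={sigma}: f-I slope = {gains[sigma]:.1f} Hz per unit drive")
```

In the paper's framework this variance-dependent slope change is related, via reverse correlation, to changes of the empirical linear/nonlinear model.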
Existence and Stability of Standing Pulses in Neural Networks: I. Existence
We consider the existence of standing pulse solutions of a neural network
integro-differential equation. These pulses are bistable with the zero state
and may be an analogue for short term memory in the brain. The network consists
of a single-layer of neurons synaptically connected by lateral inhibition. Our
work extends the classic Amari result by considering a non-saturating gain
function. We consider a specific connectivity function where the existence
conditions for single-pulses can be reduced to the solution of an algebraic
system. In addition to the two localized pulse solutions found by Amari, we
find that three or more pulses can coexist. We also show the existence of
nonconvex "dimpled" pulses and double pulses. We map out the pulse shapes and
maximum firing rates for different connection weights and gain functions.
Comment: 31 pages, 29 figures, submitted to SIAM Journal on Applied Dynamical Systems
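For the classic saturating (Heaviside) case that this paper extends, the reduction to an algebraic condition is explicit: a standing pulse of width a exists when W(a) = h, where W is the integral of the connectivity function. A numerical sketch with an illustrative Mexican-hat kernel recovers Amari's two coexisting pulses.

```python
import numpy as np

# Amari's existence condition for a standing pulse of width a under a
# Heaviside firing rate: W(a) = h, where W(x) = integral_0^x w(y) dy and
# w is the lateral-inhibition kernel. Kernel and threshold are illustrative.

def w(y):
    return 2.0 * np.exp(-y**2) - 1.0 * np.exp(-y**2 / 4.0)

def W(x, n=4000):
    y = np.linspace(0.0, x, n)
    fy = w(y)
    return float(np.sum(0.5 * (fy[1:] + fy[:-1]) * np.diff(y)))   # trapezoid rule

h = 0.15   # firing threshold

# bracket every sign change of W(a) - h on a grid, then bisect to the root
a_grid = np.linspace(0.02, 8.0, 400)
vals = np.array([W(a) - h for a in a_grid])
widths = []
for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
    lo, hi = a_grid[i], a_grid[i + 1]
    s_lo = W(lo) - h
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if (W(mid) - h) * s_lo > 0:
            lo = mid
        else:
            hi = mid
    widths.append(0.5 * (lo + hi))

print("coexisting pulse widths:", [round(a, 3) for a in widths])
```

The smaller pulse is the unstable one in Amari's analysis; the paper's non-saturating gain replaces this scalar condition by an algebraic system whose solutions include the additional dimpled and double pulses.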