Spiking Dynamics during Perceptual Grouping in the Laminar Circuits of Visual Cortex
Grouping of collinear boundary contours is a fundamental process during visual perception. Illusory contour completion vividly illustrates how stable perceptual boundaries interpolate between pairs of contour inducers, but do not extrapolate from a single inducer. Neural models have simulated how perceptual grouping occurs in laminar visual cortical circuits. These models predicted the existence of grouping cells that obey a bipole property whereby grouping can occur inwardly between pairs or greater numbers of similarly oriented and co-axial inducers, but not outwardly from individual inducers. These models have not, however, incorporated spiking dynamics. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity to inducer configurations occur despite irregularities in spike timing across all the interacting cells. Other models have demonstrated spiking dynamics in laminar neocortical circuits, but not how perceptual grouping occurs. The current model begins to unify these two modeling streams by implementing a laminar cortical network of spiking cells whose intracellular temporal dynamics interact with recurrent intercellular spiking interactions to quantitatively simulate data from neurophysiological experiments about perceptual grouping, the structure of non-classical visual receptive fields, and gamma oscillations.
CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001); Defense Advanced Research Projects Agency (HR001-09-C-0011)
Simulation of networks of spiking neurons: A review of tools and strategies
We review different aspects of the simulation of spiking neural networks. We
start by reviewing the different types of simulation strategies and algorithms
that are currently implemented. We next review the precision of those
simulation strategies, in particular in cases where plasticity depends on the
exact timing of the spikes. We overview different simulators and simulation
environments presently available (restricted to those freely available, open
source and documented). For each simulation tool, its advantages and pitfalls
are reviewed, with an aim to allow the reader to identify which simulator is
appropriate for a given task. Finally, we provide a series of benchmark
simulations of different types of networks of spiking neurons, including
Hodgkin-Huxley type, integrate-and-fire models, interacting with current-based
or conductance-based synapses, using clock-driven or event-driven integration
strategies. The same set of models are implemented on the different simulators,
and the codes are made available. The ultimate goal of this review is to
provide a resource to facilitate identifying the appropriate integration
strategy and simulation tool to use for a given modeling problem related to
spiking neural networks.
Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
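The clock-driven integration strategy benchmarked in this review can be illustrated with a minimal fixed-step simulation of a single leaky integrate-and-fire neuron. This is a sketch only: the parameter values below are illustrative and are not taken from the paper's benchmark suite.

```python
def simulate_lif(I=1.5, tau=20.0, v_th=1.0, v_reset=0.0, dt=0.1, t_max=200.0):
    """Clock-driven (fixed-step Euler) simulation of a leaky
    integrate-and-fire neuron driven by a constant current I.
    Times are in ms; all parameter values are illustrative."""
    n_steps = int(t_max / dt)
    v = v_reset
    spike_times = []
    for step in range(n_steps):
        # Membrane equation: tau * dv/dt = -v + I
        v += dt / tau * (-v + I)
        if v >= v_th:  # threshold crossing: emit a spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

spikes = simulate_lif()
```

An event-driven strategy would instead jump directly from one spike to the next using the analytic solution of the membrane equation, which is exact for simple models like this one but harder to combine with arbitrary dynamics.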
Hardware design of LIF with Latency neuron model with memristive STDP synapses
In this paper, the hardware implementation of a neuromorphic system is
presented. This system is composed of a Leaky Integrate-and-Fire with Latency
(LIFL) neuron and a Spike-Timing Dependent Plasticity (STDP) synapse. The LIFL
neuron model can encode more information than the common Integrate-and-Fire
models typically considered for neuromorphic implementations. In our system,
the LIFL neuron is implemented using CMOS circuits, while a memristor is used
to implement the STDP synapse. A description of the entire circuit is provided.
Finally, the capabilities of the proposed architecture have been evaluated by
simulating a motif composed of three neurons and two synapses. The simulation
results confirm the validity of the proposed system and its suitability for the
design of more complex spiking neural networks.
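For reference, the pair-based STDP rule that a memristive synapse of this kind approximates can be sketched as a simple function of the pre/post spike-time difference. The amplitudes and time constants below are generic textbook values, not the hardware's measured curve.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms). Parameter values are illustrative
    placeholders, not the device characteristics from the paper."""
    if dt > 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt / tau_plus)
    else:        # post before (or with) pre: depression
        return -a_minus * math.exp(dt / tau_minus)
```

In the memristive implementation, the exponential decay of this curve is realized physically by the overlap of the pre- and post-synaptic voltage waveforms across the device rather than computed explicitly.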
Macroscopic equations governing noisy spiking neuronal populations
At functional scales, cortical behavior results from the complex interplay of
a large number of excitable cells operating in noisy environments. Such systems
resist mathematical analysis, and computational neuroscience has largely
relied on heuristic, partial (and partially justified) macroscopic models, which
successfully reproduce a number of relevant phenomena. Clarifying the
relationship between these macroscopic models and the noisy spiking dynamics of
the underlying cells has been a longstanding endeavor. Based on recent
mean-field reductions for such spiking neurons, we present here a principled
reduction of large biologically plausible neuronal networks to firing-rate
models, providing a rigorous relationship between the macroscopic activity of
populations of spiking neurons and popular macroscopic models, under a few
assumptions (mainly linearity of the synapses). The reduced model we derive
consists of simple, low-dimensional ordinary differential equations with
parameters and nonlinearities derived from the underlying properties of the
cells, and in particular the noise level. These simple reduced models are
shown to accurately reproduce the dynamics of large networks in numerical
simulations. Appropriate parameters and functions are made available online
for different neuron models: McKean, FitzHugh-Nagumo, and Hodgkin-Huxley.
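The flavor of such a reduced description can be conveyed with a generic one-population firing-rate ODE. The paper derives the transfer function and parameters rigorously from the spiking cells and their noise level; the sigmoid and the constants below are placeholders standing in for those derived quantities.

```python
import math

def simulate_rate_model(T=500.0, dt=0.1, tau=10.0, w=1.2, I_ext=0.5):
    """Euler integration of a one-population firing-rate ODE,
        tau * dr/dt = -r + phi(w*r + I_ext),
    with a sigmoidal transfer function phi. In the paper, phi and the
    parameters are derived from the underlying cells and their noise
    level; here both are generic placeholders."""
    phi = lambda x: 1.0 / (1.0 + math.exp(-4.0 * (x - 1.0)))
    r = 0.0
    for _ in range(int(T / dt)):
        r += dt / tau * (-r + phi(w * r + I_ext))
    return r

r_star = simulate_rate_model()
```

The appeal of the reduction is that a low-dimensional system like this one can be analyzed and simulated at negligible cost while tracking the population activity of thousands of spiking cells.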
Gain control network conditions in early sensory coding
Gain control is essential for the proper function of any sensory system. However, the precise mechanisms for achieving effective gain control in the brain are unknown. Based on our understanding of the existence and strength of connections in the insect olfactory system, we analyze the conditions that lead to controlled gain in a randomly connected network of excitatory and inhibitory neurons. We consider two scenarios for the variation of input into the system. In the first case, the intensity of the sensory input controls the input currents to a fixed proportion of neurons of the excitatory and inhibitory populations. In the second case, increasing intensity of the sensory stimulus will both recruit an increasing number of neurons that receive input and change the input current that they receive. Using a mean-field approximation for the network activity, we derive relationships between the parameters of the network that ensure that the overall level of activity
of the excitatory population remains unchanged for increasing intensity of the external stimulation. We find that, first, the main parameters that regulate network gain are the probabilities of connections from the inhibitory population to the excitatory population and of the connections within the inhibitory population. Second, we show that strict gain control is not achievable in a random network in the second case, when the input recruits an increasing number of neurons. Finally, we confirm that the gain control conditions derived from the mean-field approximation are valid in simulations of firing-rate
models and Hodgkin-Huxley conductance-based models.
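A toy linear version of this mean-field picture: in a two-population excitatory/inhibitory rate model, a single algebraic condition on the inhibitory weights makes the excitatory steady state independent of stimulus intensity. The weights and the exact form of the condition below are illustrative stand-ins for the connection-probability conditions derived in the paper.

```python
import numpy as np

def steady_state(s, w_ee=0.5, w_ie=1.0, w_ii=1.0, b_e=1.0):
    """Steady state of a linear two-population (E/I) rate model,
        r_e = w_ee*r_e - w_ei*r_i + s + b_e
        r_i = w_ie*r_e - w_ii*r_i + s,
    where s is the stimulus intensity and b_e a constant background
    drive to E. Choosing w_ei = 1 + w_ii cancels the stimulus term from
    r_e: a toy analogue of the paper's gain-control conditions on the
    I-to-E and I-to-I connections (all weights here are illustrative)."""
    w_ei = 1.0 + w_ii  # gain-control condition (illustrative form)
    A = np.array([[1.0 - w_ee, w_ei],
                  [-w_ie, 1.0 + w_ii]])
    b = np.array([s + b_e, s])
    r_e, r_i = np.linalg.solve(A, b)
    return r_e, r_i
```

Under this condition the inhibitory rate r_i still grows with the stimulus s, which is precisely how the excitatory population's activity is held fixed: stronger drive is absorbed by proportionally stronger inhibition.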
Bifurcation analysis in a silicon neuron
In this paper, we describe an analysis of the nonlinear dynamical phenomena associated with a silicon neuron. Our silicon neuron integrates the Hodgkin-Huxley (HH) model formalism, including the membrane-voltage dependency of the temporal dynamics. Analysis of the bifurcation conditions allows us to identify different regimes in the parameter space that are desirable for biasing our silicon neuron. This approach of studying bifurcations is useful because it is believed that the computational properties of neurons are based on the bifurcations exhibited by these dynamical systems in response to some changing stimulus. We describe numerical simulations and measurements of the Hopf bifurcation, which is characteristic of class 2 excitability in the HH model. We also show a phenomenon observed in biological neurons and termed excitation block. Hence, by showing that this silicon neuron has bifurcations similar to those of a certain class of biological neurons, we can claim that the silicon neuron can also perform similar computations.
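The Hopf bifurcation and excitation block described above can be reproduced in simulation with any class 2 excitable model. The sketch below uses the FitzHugh-Nagumo equations with standard textbook parameters as a stand-in for the silicon neuron's HH-type dynamics: at moderate drive the model oscillates, and at high drive the limit cycle disappears again (excitation block).

```python
def v_range(I, a=0.7, b=0.8, eps=0.08, dt=0.02, t_max=400.0):
    """Euler simulation of the FitzHugh-Nagumo model,
        dv/dt = v - v**3/3 - w + I
        dw/dt = eps * (v + a - b*w),
    returning the peak-to-peak range of v over the second half of the
    run (transient discarded). A large range indicates sustained
    spiking; a near-zero range indicates rest or excitation block.
    Parameters are standard textbook values, not the chip's."""
    n = int(t_max / dt)
    v, w = -1.0, -0.5
    vmin, vmax = float("inf"), float("-inf")
    for i in range(n):
        v_new = v + dt * (v - v**3 / 3.0 - w + I)
        w_new = w + dt * eps * (v + a - b * w)
        v, w = v_new, w_new
        if i >= n // 2:  # measure only after the transient
            vmin = min(vmin, v)
            vmax = max(vmax, v)
    return vmax - vmin
```

For these parameters the model fires for intermediate currents (e.g. I = 0.8) but settles to a depolarized fixed point for strong drive (e.g. I = 1.8), mirroring the excitation block measured in the silicon neuron.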