3,433 research outputs found
A new approach to optimal control of conductance-based spiking neurons
This paper presents an algorithm for solving the minimum-energy optimal control problem of conductance-based spiking neurons. The basic procedure is (1) to construct a conductance-based spiking neuron oscillator as an affine nonlinear system, (2) to formulate the optimal control problem of the affine nonlinear system as a boundary value problem based on Pontryagin's maximum principle, and (3) to solve the boundary value problem using the homotopy perturbation method. The construction of the minimum-energy optimal control in the framework of the homotopy perturbation technique is novel and valid for a broad class of nonlinear conductance-based neuron models. The applicability of our method to the FitzHugh-Nagumo and Hindmarsh-Rose models is validated by simulations.
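Step (1) can be illustrated on the FitzHugh-Nagumo model, which is naturally control-affine: dx/dt = f(x) + g(x)u, with the control entering only the voltage equation. The sketch below uses common textbook parameter values (I, a, b, eps are illustrative choices, not taken from the paper) and a plain forward-Euler rollout rather than the paper's homotopy perturbation solver:

```python
import numpy as np

def fhn_affine(x, u, I=0.5, a=0.7, b=0.8, eps=0.08):
    """FitzHugh-Nagumo dynamics in control-affine form dx/dt = f(x) + g(x)*u."""
    v, w = x
    f = np.array([v - v**3 / 3.0 - w + I,   # drift f(x): fast voltage variable
                  eps * (v + a - b * w)])   # slow recovery variable
    g = np.array([1.0, 0.0])                # control enters only the voltage equation
    return f + g * u

def simulate(u_seq, x0=(-1.0, 1.0), dt=0.01):
    """Forward-Euler rollout of the controlled system for a given input sequence."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for u in u_seq:
        x = x + dt * fhn_affine(x, u)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate(np.zeros(5000))  # uncontrolled (u = 0) rollout over 50 time units
```

In the paper's setting, the minimum-energy input u(t) would come from solving the Pontryagin boundary value problem; here the zero-input rollout only demonstrates the affine system construction.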
A neural circuit for navigation inspired by C. elegans chemotaxis
We develop an artificial neural circuit for contour tracking and navigation
inspired by the chemotaxis of the nematode Caenorhabditis elegans. In order to
harness the computational advantages spiking neural networks promise over their
non-spiking counterparts, we develop a network comprising seven spiking neurons
with non-plastic synapses, which we show is extremely robust in tracking a range
of concentrations. Our worm uses information regarding local temporal gradients
in sodium chloride concentration to decide the instantaneous path for foraging,
exploration and tracking. A key pair in the C. elegans chemotaxis
network is the ASEL and ASER neurons, which capture the gradient of
concentration sensed by the worm in their graded membrane potentials. The
primary sensory neurons for our network are a pair of artificial spiking
neurons that function as gradient detectors whose design is adapted from a
computational model of the ASE neuron pair in C. elegans. Simulations show that
our worm is able to detect the set-point with approximately four times higher
probability than the optimal memoryless Lévy foraging model. We also show that
our spiking neural network is much more efficient and noise-resilient while
navigating and tracking a contour, as compared to an equivalent non-spiking
network. We demonstrate that our model is extremely robust to noise and with
slight modifications can be used for other practical applications such as
obstacle avoidance. Our network model could also be extended for use in
three-dimensional contour tracking or obstacle avoidance.
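The ASE-inspired gradient detection described above can be caricatured as a pair of leaky integrate-and-fire units, one driven by the positive part and one by the negative part of the temporal concentration gradient. This is a hypothetical simplification for illustration (the thresholds, gains, and time constants are invented, and the paper's actual detector design is adapted from a computational ASE model):

```python
import numpy as np

def lif_pair(concentration, dt=0.01, tau=0.1, v_th=1.0, gain=50.0):
    """Hypothetical ASEL/ASER-style detectors: two leaky integrate-and-fire
    units driven by the positive and negative parts of the temporal
    concentration gradient. Returns the spike times of each unit."""
    grad = np.gradient(concentration, dt)
    v = np.zeros(2)                        # membrane potentials [up, down]
    spikes = [[], []]
    for k, g in enumerate(grad):
        drive = np.array([max(g, 0.0), max(-g, 0.0)]) * gain
        v += dt * (-v / tau + drive)       # leaky integration of the drive
        for i in range(2):
            if v[i] >= v_th:               # threshold crossing: spike and reset
                spikes[i].append(k * dt)
                v[i] = 0.0
    return spikes

t = np.arange(0, 2, 0.01)
c = np.where(t < 1, t, 2 - t)              # concentration ramps up, then down
up, down = lif_pair(c)                     # "up" fires on the rising phase only
```

A downstream circuit could then steer toward or away from the gradient depending on which detector is firing, which is the essence of the contour-tracking behavior.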
Simulation of networks of spiking neurons: A review of tools and strategies
We review different aspects of the simulation of spiking neural networks. We
start by reviewing the different types of simulation strategies and algorithms
that are currently implemented. We next review the precision of those
simulation strategies, in particular in cases where plasticity depends on the
exact timing of the spikes. We overview different simulators and simulation
environments presently available (restricted to those freely available, open
source and documented). For each simulation tool, its advantages and pitfalls
are reviewed, with an aim to allow the reader to identify which simulator is
appropriate for a given task. Finally, we provide a series of benchmark
simulations of different types of networks of spiking neurons, including
Hodgkin-Huxley type, integrate-and-fire models, interacting with current-based
or conductance-based synapses, using clock-driven or event-driven integration
strategies. The same set of models are implemented on the different simulators,
and the codes are made available. The ultimate goal of this review is to
provide a resource to facilitate identifying the appropriate integration
strategy and simulation tool to use for a given modeling problem related to
spiking neural networks.
Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
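The clock-driven versus event-driven distinction reviewed above can be illustrated on a leaky integrate-and-fire neuron with constant input, where the next spike time has a closed form. This is a minimal sketch with illustrative parameters, not code from any of the reviewed simulators:

```python
import math

def clock_driven(I=1.5, tau=1.0, v_th=1.0, dt=1e-3, t_end=5.0):
    """Clock-driven strategy: advance the state on a fixed time grid
    (forward Euler) and test the threshold at every step."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt * (I - v) / tau
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = 0.0
    return spikes

def event_driven(I=1.5, tau=1.0, v_th=1.0, t_end=5.0):
    """Event-driven strategy: jump directly to the next spike. For constant
    input the membrane follows v(t) = I*(1 - exp(-t/tau)) after each reset,
    so the interspike interval is exact."""
    isi = tau * math.log(I / (I - v_th))   # analytic interspike interval
    spikes, t = [], 0.0
    while t + isi < t_end:
        t += isi
        spikes.append(t)
    return spikes

cd, ed = clock_driven(), event_driven()
```

The event-driven variant is exact and touches the state only at spikes, while the clock-driven one accumulates discretization error at each step; this is precisely the precision trade-off that matters when plasticity depends on exact spike timing.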
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and its stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical Spike-Timing-Dependent
Plasticity (STDP) rules. This chapter reviews recent developments
in learning with spiking neural network models and their possible
implementation with memristor-based hardware.
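For contrast with the multifactor rules discussed above, the classical pair-based STDP window has a simple closed form: potentiation decaying exponentially with the pre-before-post delay, depression with the post-before-pre delay. A minimal sketch with illustrative amplitudes and time constants (not values from the chapter):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP weight update as a function of the spike-time
    difference delta_t = t_post - t_pre (in seconds).
    delta_t >= 0 (pre before post) -> potentiation,
    delta_t < 0  (post before pre) -> depression."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

dw = stdp_dw([0.005, -0.005])  # pre-before-post vs. post-before-pre
```

Multifactor rules extend this two-factor form with additional signals (e.g. a modulatory third factor, or device-level state and stochasticity), which is what lets them be matched to memristor dynamics.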
Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding
The role of synchronous firing in sensory coding and cognition remains
controversial. While studies, focusing on its mechanistic consequences in
attentional tasks, suggest that synchronization dynamically boosts sensory
processing, others failed to find significant synchronization levels in such
tasks. We attempt to understand both lines of evidence within a coherent
theoretical framework. We conceptualize synchronization as an independent
control parameter to study how the postsynaptic neuron transmits the average
firing activity of a presynaptic population, in the presence of
synchronization. We apply the Berger-Levy theory of energy efficient
information transmission to interpret simulations of a Hodgkin-Huxley-type
postsynaptic neuron model, where we varied the firing rate and synchronization
level in the presynaptic population independently. We find that for a fixed
presynaptic firing rate the simulated postsynaptic interspike interval
distribution depends on the synchronization level and is well-described by a
generalized extreme value distribution. For synchronization levels of 15% to
50%, we find that the mutual information per unit cost, under the optimal
distribution of presynaptic firing rates, is maximized at a synchronization
level of ~30%. These results suggest that the statistics and energy
efficiency of neuronal communication channels, through which the input rate is
communicated, can be dynamically adapted by the synchronization level.
Comment: 47 pages, 14 figures, 2 tables
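Fitting a generalized extreme value distribution to an interspike-interval sample, as in the analysis above, can be sketched with scipy.stats.genextreme. Here the "ISI" sample is synthetic stand-in data drawn from a known GEV (in a real analysis it would be intervals measured from the simulated postsynaptic neuron), and the parameter values are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in ISI sample drawn from a known GEV; replace with measured
# interspike intervals in an actual analysis.
isi = stats.genextreme.rvs(c=-0.1, loc=0.05, scale=0.01,
                           size=5000, random_state=rng)

# Maximum-likelihood fit of a GEV to the ISI sample.
shape, loc, scale = stats.genextreme.fit(isi)

# Goodness of fit via a Kolmogorov-Smirnov test against the fitted GEV.
ks = stats.kstest(isi, 'genextreme', args=(shape, loc, scale))
```

Comparing such fits across synchronization levels is how one would check the paper's claim that the ISI distribution is well described by a GEV whose parameters track the synchronization level.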
GeNN: a code generation framework for accelerated brain simulations
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ.
GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials,
Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/