8,627 research outputs found
Spiking Neurons Learning Phase Delays
Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations in interaural time differences of about 10 μs. The classical explanation of such neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and instead favor a phase delay. By means of a biophysical model we show how spike-timing-dependent synaptic learning explains the precise interplay of excitation and inhibition and, hence, accounts for a physical realization of a phase delay.
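The spike-timing-dependent learning invoked above can be illustrated with a minimal, generic pair-based STDP rule. This is a standard textbook form, not the paper's specific biophysical model; the parameter values (`a_plus`, `a_minus`, `tau`) are illustrative assumptions:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update.

    dt = t_post - t_pre (ms). A presynaptic spike shortly before a
    postsynaptic spike (dt > 0) potentiates the synapse; the reverse
    timing depresses it. The weight is clipped to [w_min, w_max].
    """
    if dt > 0:
        w = w + a_plus * np.exp(-dt / tau)
    else:
        w = w - a_minus * np.exp(dt / tau)
    return float(np.clip(w, w_min, w_max))
```

Repeated application of such a rule selectively strengthens inputs whose timing matches the postsynaptic spike, which is the generic mechanism by which timing-sensitive tuning can be learned.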
Spike-based control monitoring and analysis with Address Event Representation
Neuromorphic engineering tries to mimic biological information processing. Address-Event Representation (AER) is a neuromorphic communication protocol for transmitting spikes between different chips. We present a new way to drive robotic platforms using spiking neurons. We have simulated spiking control models for DC motors and developed a mobile robot (Eddie) controlled only by spikes. We apply AER to robot control, monitoring and measuring the spike activity inside the robot. The mobile robot is controlled by the AER-Robot tool, and the AER information is sent to a PC using the USBAERmini2 interface.
Junta de Andalucía P06-TIC-01417; Ministerio de Educación y Ciencia TEC2006-11730-C03-0
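The core idea of AER is that each spike is transmitted as an event carrying the address of the neuron that fired, with timing given by when the event appears on the bus. A minimal sketch of encoding spike trains into such an event stream (the `Event` structure and `encode_spikes` helper are illustrative, not part of the AER-Robot tool):

```python
from collections import namedtuple

# An AER event: which neuron fired (address) and when (timestamp in µs).
Event = namedtuple("Event", ["address", "timestamp_us"])

def encode_spikes(spike_trains):
    """Flatten per-neuron spike times into a single time-ordered AER stream.

    spike_trains: dict mapping neuron address -> list of spike times (µs).
    Returns a list of Event tuples sorted by timestamp, i.e. the order in
    which they would be transmitted on a shared event bus.
    """
    events = [Event(addr, t)
              for addr, times in spike_trains.items()
              for t in times]
    return sorted(events, key=lambda e: e.timestamp_us)
```

Because only addresses of spiking neurons are sent, bus traffic scales with activity rather than with the number of neurons, which is what makes the scheme attractive for spike-based robot control and monitoring.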
An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing
The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory clues with delayed feedback signals. However, the existing aggregate-label learning algorithms only work for single spiking neurons and suffer from low learning efficiency, which limits their real-world applicability. To address these limitations, we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes that match the magnitude of delayed feedback signals and to learn useful multimodal sensory clues embedded within spontaneous spiking activities. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over the existing aggregate-label learning algorithms. It, therefore, provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks.
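The aggregate-label idea of matching an output spike count to the magnitude of a delayed feedback signal can be caricatured with a very simple update rule. This is a hypothetical sketch of the general principle only, not the ETDP algorithm itself; `count_spikes` stands in for whatever model-specific simulation produces the observed spike count:

```python
import numpy as np

def aggregate_label_step(w, x, target_spikes, count_spikes, lr=0.01):
    """One hedged sketch of an aggregate-label weight update.

    w: synaptic weight vector; x: input activity vector.
    count_spikes(w, x) -> observed output spike count (model-specific).
    If the neuron fires too few spikes, active inputs are strengthened;
    if too many, they are weakened; otherwise weights are left alone.
    """
    observed = count_spikes(w, x)
    if observed < target_spikes:
        w = w + lr * x   # too few spikes: potentiate active inputs
    elif observed > target_spikes:
        w = w - lr * x   # too many spikes: depress active inputs
    return w
```

The delayed feedback only supplies the target count, not the timing of individual spikes, which is exactly why the paradigm addresses the temporal credit assignment problem.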
Efficient Computation in Adaptive Artificial Spiking Neural Networks
Artificial Neural Networks (ANNs) are bio-inspired models of neural
computation that have proven highly effective. Still, ANNs lack a natural
notion of time, and neural units in ANNs exchange analog values in a
frame-based manner, a computationally and energetically inefficient form of
communication. This contrasts sharply with biological neurons that communicate
sparingly and efficiently using binary spikes. While artificial Spiking Neural
Networks (SNNs) can be constructed by replacing the units of an ANN with
spiking neurons, the current performance is far from that of deep ANNs on hard
benchmarks and these SNNs use much higher firing rates compared to their
biological counterparts, limiting their efficiency. Here we show how spiking
neurons that employ an efficient form of neural coding can be used to construct
SNNs that match high-performance ANNs and exceed state-of-the-art in SNNs on
important benchmarks, while requiring much lower average firing rates. For
this, we use spike-time coding based on the firing rate limiting adaptation
phenomenon observed in biological spiking neurons. This phenomenon can be
captured in adapting spiking neuron models, for which we derive the effective
transfer function. Neural units in ANNs trained with this transfer function can
be substituted directly with adaptive spiking neurons, and the resulting
Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up
to an order of magnitude fewer spikes compared to previous SNNs. Adaptive
spike-time coding additionally allows for the dynamic control of neural coding
precision: we show how a simple model of arousal in AdSNNs further halves the
average required firing rate and this notion naturally extends to other forms
of attention. AdSNNs thus hold promise as a novel and efficient model for
neural computation that naturally fits temporally continuous and
asynchronous applications.
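The firing-rate-limiting adaptation the abstract builds on can be captured by a leaky integrate-and-fire neuron whose threshold rises after each spike and decays back. This is a generic adaptive-threshold model for illustration, not the paper's derived transfer function; all parameter values are assumptions:

```python
def adaptive_lif(inp, dt=1.0, tau_m=20.0, tau_th=100.0, th0=1.0, dth=0.5):
    """Leaky integrate-and-fire neuron with an adaptive threshold.

    Each spike raises the threshold by dth; between spikes the threshold
    decays back toward its resting value th0 with time constant tau_th.
    Under sustained input this lowers the steady-state firing rate.
    Returns the list of spike times (time-step indices).
    """
    v, th, spikes = 0.0, th0, []
    for i, x in enumerate(inp):
        v += dt * (-v / tau_m + x)          # leaky membrane integration
        th += dt * (th0 - th) / tau_th      # threshold decays toward th0
        if v >= th:
            spikes.append(i)
            v = 0.0                         # reset membrane potential
            th += dth                       # spike-triggered adaptation
    return spikes
```

For a constant input, the adaptive neuron (`dth > 0`) emits markedly fewer spikes than the same neuron with a fixed threshold (`dth = 0`), which is the mechanism the AdSNN coding scheme exploits to cut average firing rates.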
Exact computation of the Maximum Entropy Potential of spiking neural network models
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitted parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
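For small networks, the pairwise Maximum Entropy (Ising-like) distribution over binary spike patterns can be computed exactly by enumerating all patterns. This is a standard textbook construction for illustration, not the paper's mapping; `h` are the fields and `J` the pairwise couplings:

```python
import itertools
import numpy as np

def maxent_distribution(h, J):
    """Exact pairwise Maximum Entropy distribution over binary patterns.

    P(omega) ∝ exp( sum_i h_i omega_i + sum_{i<j} J_ij omega_i omega_j )
    for omega in {0,1}^n. Feasible only for small n, since all 2^n
    patterns are enumerated to compute the partition function Z.
    """
    n = len(h)
    patterns = list(itertools.product([0, 1], repeat=n))
    log_weights = [sum(h[i] * w[i] for i in range(n))
                   + sum(J[i][j] * w[i] * w[j]
                         for i in range(n) for j in range(i + 1, n))
                   for w in patterns]
    weights = np.exp(log_weights)
    Z = weights.sum()                      # partition function
    return {w: float(p) for w, p in zip(patterns, weights / Z)}
```

With zero fields and couplings the distribution is uniform, while a positive coupling between two neurons raises the probability of their joint firing, matching the intuition that the fitted parameters encode observed correlations.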
- …