
    Fractionally Predictive Spiking Neurons

    Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the neural spike train itself can be considered a fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, such as long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for a similar SNR compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.
    Comment: 13 pages, 5 figures, in Advances in Neural Information Processing 201
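
    A minimal sketch of the encoding idea described above: a thresholding neuron emits a spike whenever the residual between the signal and the running kernel-sum approximation exceeds a threshold, and each spike adds one power-law kernel. The kernel shape, exponent, and threshold below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def power_law_kernel(length, beta=0.8, t0=1.0):
    """Decaying kernel k(t) ~ (t0 + t)^(-beta), normalized to peak 1."""
    t = np.arange(length, dtype=float)
    k = (t0 + t) ** (-beta)
    return k / k[0]

def encode(signal, threshold=0.5, kernel_len=200):
    """Online approximation: spike (and add a kernel) when the residual
    between signal and approximation exceeds the threshold."""
    kernel = power_law_kernel(kernel_len)
    approx = np.zeros(len(signal) + kernel_len)
    spikes = []
    for t, x in enumerate(signal):
        if x - approx[t] > threshold:          # simple thresholding neuron
            approx[t:t + kernel_len] += threshold * kernel
            spikes.append(t)
    return spikes, approx[:len(signal)]

# Example: encode a slowly varying, long-memory-like signal.
rng = np.random.default_rng(0)
sig = np.cumsum(rng.normal(0, 0.02, 2000))     # random walk
sig -= sig.min()
spikes, approx = encode(sig)
print(len(spikes), "spikes; SNR (dB):",
      10 * np.log10(np.var(sig) / np.var(sig - approx)))
```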

    Spiking Neurons Learning Phase Delays

    Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations in interaural time differences of about 10 μs. The classical explanation of such neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and instead favor a phase delay. By means of a biophysical model, we show how spike-timing-dependent synaptic learning explains the precise interplay of excitation and inhibition and hence accounts for a physical realization of a phase delay.
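
    For reference, a minimal sketch of the spike-timing-dependent plasticity (STDP) rule such models build on: a synapse is potentiated when a presynaptic spike shortly precedes the postsynaptic one and depressed otherwise, so inputs arriving at the "right" phase are selectively strengthened. The time constants and amplitudes are generic textbook values, not the paper's.

```python
import numpy as np

def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for pre-before-post (delta_t > 0 ms)
    or post-before-pre (delta_t < 0 ms)."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)    # potentiation
    return -a_minus * np.exp(delta_t / tau)       # depression

# Synapses whose (phase-delayed) inputs consistently arrive just before
# the postsynaptic spike gain weight, selecting an effective phase delay.
for dt in (-10.0, -1.0, 1.0, 10.0):
    print(f"delta_t = {dt:+5.1f} ms -> dw = {stdp(dt):+.5f}")
```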

    Information recovery from rank-order encoded images

    The work described in this paper is inspired by SpikeNET, a system developed to test the feasibility of using rank-order codes in modelling large-scale networks of asynchronously spiking neurons. The rank-order code theory proposed by Thorpe concerns the encoding of information by a population of spiking neurons in the primate visual system. The theory proposes using the order of firing across a network of asynchronously firing spiking neurons as a neural code for information transmission. In this paper we aim to measure the perceptual similarity between the image input to a model retina, based on that originally designed and developed by VanRullen and Thorpe, and an image reconstructed from the rank-order encoding of the input image. We use an objective metric originally proposed by Petrovic to estimate perceptual edge preservation in image fusion which, after minor modifications, is well suited to our purpose. The results show that typically 75% of the edge information of the input stimulus is retained in the reconstructed image, and we show how the available information increases with successive spikes in the rank-order code.
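
    A hypothetical sketch of rank-order encoding and decoding in the spirit of Thorpe's scheme: neurons fire in order of decreasing activation, and the decoder assigns each spike a geometrically shrinking value by rank. The modulation factor of 0.9 is an illustrative assumption.

```python
import numpy as np

def rank_order_encode(activations):
    """Return neuron indices sorted by activation (first spike = strongest)."""
    return np.argsort(activations)[::-1]

def rank_order_decode(order, n, mod=0.9):
    """Reconstruct relative activations from the firing order alone."""
    recon = np.zeros(n)
    for rank, idx in enumerate(order):
        recon[idx] = mod ** rank      # earlier spikes carry larger values
    return recon

acts = np.array([0.1, 0.9, 0.4, 0.7])
order = rank_order_encode(acts)           # -> [1, 3, 2, 0]
recon = rank_order_decode(order, len(acts))
print(order, recon)   # reconstruction preserves the input's ordering
```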

    Spike-based control monitoring and analysis with Address Event Representation

    Neuromorphic engineering tries to mimic biological information processing. Address-Event Representation (AER) is a neuromorphic communication protocol for transmitting spikes between chips. We present a new way to drive robotic platforms using spiking neurons. We have simulated spiking control models for DC motors and developed a mobile robot (Eddie) controlled only by spikes. We apply AER to robot control, monitoring and measuring the spike activity inside the robot. The mobile robot is controlled by the AER-Robot tool, and the AER information is sent to a PC using the USBAERmini2 interface.
    Junta de Andalucía P06-TIC-01417; Ministerio de Educación y Ciencia TEC2006-11730-C03-0
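
    An illustrative sketch (not the AER-Robot tool itself) of the basic principle of driving a DC motor from spikes: the motor command is a leaky integration of incoming spike events, so spike frequency maps to speed. The time constant and per-spike gain are assumed values.

```python
import numpy as np

def spikes_to_command(spike_train, dt=1e-3, tau=50e-3, gain=0.1):
    """Leaky integration: converts a binary spike train into a
    continuous motor command (e.g. a PWM duty cycle in [0, 1])."""
    cmd = 0.0
    out = np.empty(len(spike_train))
    for i, s in enumerate(spike_train):
        cmd += dt * (-cmd / tau) + gain * s   # leak plus spike kick
        out[i] = min(cmd, 1.0)
    return out

# 100 Hz Poisson spike train -> roughly constant duty cycle near 0.5.
rng = np.random.default_rng(1)
train = (rng.random(1000) < 100 * 1e-3).astype(float)
print("mean duty cycle:", spikes_to_command(train).mean())
```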

    An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing

    The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory cues with delayed feedback signals. However, existing aggregate-label learning algorithms only work for single spiking neurons and suffer from low learning efficiency, which limits their real-world applicability. To address these limitations, we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes matching the magnitude of delayed feedback signals and to learn useful multimodal sensory cues embedded within spontaneous spiking activities. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over existing aggregate-label learning algorithms. It therefore provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks.
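
    A much-simplified sketch of the aggregate-label setting (not the exact ETDP update): a leaky integrate-and-fire neuron is trained so that its output spike count matches a delayed scalar label. When it fires too few spikes, inputs are potentiated in proportion to their average activity; too many, depressed. All constants here are illustrative assumptions.

```python
import numpy as np

def lif_count(w, inputs, thresh=1.0, leak=0.95):
    """Count output spikes of a leaky integrate-and-fire neuron."""
    v, count = 0.0, 0
    for x in inputs:                 # x: input activations at one time step
        v = leak * v + w @ x
        if v >= thresh:
            count += 1
            v = 0.0                  # reset after each spike
    return count

rng = np.random.default_rng(2)
inputs = rng.random((200, 30)) * (rng.random((200, 30)) < 0.1)  # sparse drive
w = rng.normal(0, 0.05, 30)
desired, lr = 5, 0.002
for _ in range(500):
    err = desired - lif_count(w, inputs)
    if err == 0:
        break
    w += lr * err * inputs.mean(axis=0)  # aggregate, activity-weighted update
print("spike count:", lif_count(w, inputs), "target:", desired)
```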

    Efficient Computation in Adaptive Artificial Spiking Neural Networks

    Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons, which communicate sparingly and efficiently using binary spikes. While artificial Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons, their current performance is far from that of deep ANNs on hard benchmarks, and these SNNs use much higher firing rates than their biological counterparts, limiting their efficiency. Here we show how spiking neurons that employ an efficient form of neural coding can be used to construct SNNs that match high-performance ANNs and exceed the state of the art for SNNs on important benchmarks, while requiring much lower average firing rates. For this, we use spike-time coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out inference in deep neural networks using up to an order of magnitude fewer spikes than previous SNNs. Adaptive spike-time coding additionally allows for dynamic control of neural coding precision: we show how a simple model of arousal in AdSNNs further halves the average required firing rate, and this notion naturally extends to other forms of attention. AdSNNs thus hold promise as a novel and efficient model for neural computation that naturally fits temporally continuous and asynchronous applications.
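
    A hypothetical sketch of the core mechanism: an adaptive spiking neuron whose firing threshold jumps after every spike and then relaxes back, so strong inputs are encoded with relatively few spikes. Sweeping the input current and measuring the firing rate yields an effective transfer function that an ANN could be trained with. The parameter values below are illustrative, not the paper's derivation.

```python
import numpy as np

def adaptive_rate(current, steps=2000, dt=1e-3,
                  theta0=1.0, theta_add=0.5, tau_theta=0.1):
    """Firing rate (Hz) of an integrate-and-fire neuron with a
    spike-triggered, exponentially relaxing adaptive threshold."""
    v, theta, spikes = 0.0, theta0, 0
    for _ in range(steps):
        v += dt * current
        theta += dt * (theta0 - theta) / tau_theta  # threshold relaxes back
        if v >= theta:
            spikes += 1
            v -= theta            # subtractive reset
            theta += theta_add    # spike-triggered adaptation
    return spikes / (steps * dt)

# The rate grows sublinearly with input: an effective transfer function.
for i in (0.0, 2.0, 8.0, 32.0):
    print(f"input {i:5.1f} -> rate {adaptive_rate(i):6.1f} Hz")
```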