
    The chronotron: a neuron that learns to fire temporally-precise spike patterns

    In many cases, neurons process information carried by the precise timing of spikes. Here we show how neurons can learn to generate specific, temporally precise output spikes in response to input spike patterns, thus processing and memorizing information that is fully temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that is analytically derived and highly efficient, and one that has a high degree of biological plausibility. We show how chronotrons can learn to classify their inputs, and we study their memory capacity.
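    The core idea, learning to place an output spike at a target time, can be illustrated with a toy example. The sketch below is not the paper's E-learning or I-learning rules: it is a simplified, gradient-style nudge applied to a spike-response-model neuron so that its first output spike drifts toward a target time. All constants (kernel time constants, threshold, learning rate) are illustrative assumptions.

    ```python
    # Toy sketch of supervised spike-time learning in the spirit of the
    # chronotron. NOT the paper's rules: a simplified, gradient-style nudge.
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, T, dt = 50, 100.0, 0.1           # ms
    tau_m, tau_s, theta = 10.0, 2.5, 1.0       # assumed kernel constants, threshold
    times = np.arange(0.0, T, dt)

    def psp(t):
        """Double-exponential postsynaptic potential kernel."""
        t = np.asarray(t, dtype=float)
        return (np.exp(-np.maximum(t, 0) / tau_m)
                - np.exp(-np.maximum(t, 0) / tau_s)) * (t > 0)

    input_times = rng.uniform(0.0, T, n_inputs)  # one input spike per synapse
    target_time = 60.0                           # desired output spike time
    w = rng.normal(0.05, 0.01, n_inputs)
    t_out = None

    for epoch in range(500):
        # Membrane potential: weighted sum of input PSPs over the whole window.
        u = psp(times[:, None] - input_times[None, :]) @ w
        crossed = np.nonzero(u >= theta)[0]
        if len(crossed) == 0:                    # silent neuron: potentiate a little
            w += 0.01 * psp(target_time - input_times)
            continue
        t_out = times[crossed[0]]
        err = t_out - target_time                # late spike -> strengthen inputs
        if abs(err) < dt:
            break
        w += 0.002 * err * psp(t_out - input_times)

    print(f"first output spike: {t_out} ms (target {target_time} ms)")
    ```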

    Spike-Based Classification of UCI Datasets with Multi-Layer Resume-Like Tempotron

    Spiking neurons are a class of neuron models that represent information in timed sequences called "spikes." Though predominantly used in neuroscientific investigations, spiking neural networks (SNNs) can be applied to machine learning problems such as classification and regression. SNNs are computationally more powerful per neuron than traditional neural networks. Though training is slow on general-purpose computers, spike-based hardware implementations are faster and have shown capability for ultra-low power consumption. Additionally, various SNN training algorithms have achieved performance comparable to the state of the art on the Fisher Iris dataset. Our main contribution is a software implementation of the multilayer ReSuMe algorithm using the Tempotron principle. The XOR problem is solved in only 13.73 epochs on average. However, training time on four different UCI datasets is slow, and, although performance is decent, the accuracy of our SNN mostly underperforms other SNN, SVM, and ANN experiments. Additionally, our results on the UCI datasets are only preliminary, necessitating further tuning.
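    For orientation, the ReSuMe rule this work builds on has a simple form: weights are potentiated around desired output spike times and depressed around actual (erroneous) output spike times, each scaled by an exponential learning window over recent presynaptic spikes. The sketch below is an assumed, simplified discretization, not the thesis implementation; the learning rate, window constant, and non-Hebbian bias `a` are illustrative.

    ```python
    import numpy as np

    def resume_update(w, input_spikes, desired, actual, lr=0.01, tau=5.0, a=0.05):
        """One ReSuMe-style update. input_spikes: one array of spike times per synapse."""
        for i, t_pre in enumerate(input_spikes):
            t_pre = np.asarray(t_pre, dtype=float)
            for t_d in desired:                  # potentiate toward desired spikes
                recent = t_pre[t_pre <= t_d]
                w[i] += lr * (a + np.exp(-(t_d - recent) / tau).sum())
            for t_o in actual:                   # depress around actual output spikes
                recent = t_pre[t_pre <= t_o]
                w[i] -= lr * (a + np.exp(-(t_o - recent) / tau).sum())
        return w

    w = np.zeros(2)
    print(resume_update(w, [[2.0, 8.0], [5.0]], desired=[10.0], actual=[20.0]))
    ```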

    Functional Implications of Synaptic Spike Timing Dependent Plasticity and Anti-Hebbian Membrane Potential Dependent Plasticity

    A central hypothesis of neuroscience is that the change of the strength of synaptic connections between neurons is the basis for learning in the animal brain. However, the rules underlying this activity-dependent change, as well as their functional consequences, are not well understood. This thesis develops and investigates several different quantitative models of synaptic plasticity. In the first part, the Contribution Dynamics model of Spike Timing Dependent Plasticity (STDP) is presented. It is shown to provide a better fit to experimental data than previous models. Additionally, investigation of the model synapse's response to oscillatory neuronal activity shows that synapses are sensitive to theta oscillations (4-10 Hz), which are known to boost learning in behavioral experiments. In the second part, a novel Membrane Potential Dependent Plasticity (MPDP) rule is developed, which can be used to train neurons to fire precisely timed output activity. Previously, this could only be achieved with artificial supervised learning rules, whereas MPDP is a local, activity-dependent mechanism that is supported by experimental results.
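    As a point of reference for the models discussed above, the textbook pair-based STDP window (which the Contribution Dynamics model refines) can be written in a few lines. The amplitudes and time constants below are illustrative, not the thesis's fitted values.

    ```python
    import numpy as np

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=17.0, tau_minus=34.0):
        """Weight change for a pre/post spike pair; dt_ms = t_post - t_pre."""
        dt_ms = np.asarray(dt_ms, dtype=float)
        ltp = a_plus * np.exp(-dt_ms / tau_plus) * (dt_ms > 0)     # pre before post
        ltd = -a_minus * np.exp(dt_ms / tau_minus) * (dt_ms <= 0)  # post before pre
        return ltp + ltd

    print(stdp_dw([-20.0, -5.0, 5.0, 20.0]))
    ```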

    SuperSpike: Supervised learning in multi-layer spiking neural networks

    The vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
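    The central trick is easy to state in code: the forward pass keeps the hard spiking threshold, while the backward pass replaces its ill-defined derivative with a smooth surrogate. The sketch below uses a fast-sigmoid-shaped surrogate of the kind associated with SuperSpike; the steepness `beta` and threshold are assumed values, and the full rule additionally combines this term with an output error signal and a synaptic eligibility trace (the three factors).

    ```python
    import numpy as np

    def spike(u, theta=1.0):
        """Forward pass: hard threshold on the membrane potential."""
        return (u >= theta).astype(float)

    def surrogate_grad(u, theta=1.0, beta=10.0):
        """Backward pass: fast-sigmoid surrogate for the threshold's derivative."""
        return 1.0 / (beta * np.abs(u - theta) + 1.0) ** 2

    u = np.linspace(0.0, 2.0, 5)
    print(spike(u), surrogate_grad(u))
    ```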

    Supervised Learning in Multilayer Spiking Neural Networks

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms, as it can be applied to neurons firing multiple spikes and can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris dataset, as well as to complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike train encoding.
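    Benchmarks like Iris require real-valued features to be turned into spike times first. A common choice, compatible with the encodings mentioned above though not necessarily the paper's exact scheme, is latency (time-to-first-spike) encoding: larger feature values fire earlier. A minimal sketch, with the time window as an assumption:

    ```python
    import numpy as np

    def latency_encode(x, t_max=100.0):
        """Map features in [0, 1] to spike times in [0, t_max]; large value -> early spike."""
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return t_max * (1.0 - x)

    print(latency_encode([0.0, 0.25, 0.9]))   # -> [100., 75., 10.]
    ```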