
    Unsupervised Learning with Self-Organizing Spiking Neural Networks

    We present a system that hybridizes self-organizing map (SOM) properties with spiking neural networks (SNNs), retaining many of the features of SOMs. Networks are trained in an unsupervised manner to learn a self-organized lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test several inhibition strategies, such as inhibition that grows with inter-neuron distance and inhibition with two distinct levels. The quality of the unsupervised learning algorithm is evaluated using examples with known labels. Several biologically inspired classification tools are proposed and compared, including a population-level confidence rating and n-grams using a spike-motif algorithm. With the optimal choice of parameters, our approach produces improvements over state-of-the-art spiking neural networks.
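    As a concrete illustration of one inhibition strategy described above, the sketch below builds inhibitory weights that grow linearly with inter-neuron distance on a 2D lattice. The grid size, maximum inhibition strength, and linear scaling are illustrative assumptions, not parameters from the paper.

```python
# Sketch: inhibition that grows with inter-neuron distance on a 2D lattice.
# Grid size, maximum strength, and the linear profile are assumed values.
import numpy as np

GRID = 10          # assumed 10x10 lattice of neurons
W_INH_MAX = 17.5   # assumed maximum inhibitory weight (arbitrary units)

# Lattice coordinates of each neuron.
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)], dtype=float)

# Pairwise Euclidean distances between all neurons.
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Inhibition grows with distance: nearby neurons compete weakly (allowing
# SOM-like local cooperation), while distant neurons inhibit each other
# strongly, pushing them to learn distinct filters.
w_inh = -W_INH_MAX * dists / dists.max()
np.fill_diagonal(w_inh, 0.0)  # no self-inhibition
```

    A "two distinct levels" strategy would instead threshold the same distance matrix into a weak near-field and a strong far-field inhibitory weight.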

    Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes because input features are processed on a much shorter time scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of FILT in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find FILT to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of FILT to be consistent with that of the highly efficient E-learning Chronotron, but with the distinct advantage that FILT is also implementable as an online method for increased biological realism.
    Comment: 26 pages, 10 figures; this version is published in PLoS ONE and incorporates reviewer comments.
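    The contrast between the two rules can be sketched in a few lines. This is a hedged reconstruction from the abstract's description, assuming discrete time, binary spike trains, and an exponential error filter; the time constants, learning rate, and update forms are illustrative, not the paper's exact equations.

```python
# Sketch: instantaneous (INST-like) vs. filtered (FILT-like) error signals
# driving spike-based weight updates. All constants are assumed values.
import numpy as np

DT, TAU, ETA, T = 1.0, 10.0, 0.01, 100  # step (ms), filter tau (ms), rate, steps
rng = np.random.default_rng(0)

target = (rng.random(T) < 0.05).astype(float)  # desired output spike train
actual = (rng.random(T) < 0.05).astype(float)  # network's output spike train
presyn = (rng.random(T) < 0.10).astype(float)  # one presynaptic spike train

# INST-like: the weight change tracks the raw, instantaneous difference
# between target and actual output spikes at each time step.
dw_inst = ETA * np.sum((target - actual) * presyn)

# FILT-like: the same error is first low-pass filtered with an exponential
# kernel, yielding smoother weight modifications over time.
err_trace, dw_filt = 0.0, 0.0
for t in range(T):
    err_trace = err_trace * np.exp(-DT / TAU) + (target[t] - actual[t])
    dw_filt += ETA * err_trace * presyn[t]
```

    The filtered error remains nonzero between spikes, which is the mechanism the abstract credits for FILT's smoother convergence.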

    Smooth Exact Gradient Descent Learning in Spiking Neural Networks

    Artificial neural networks are trained with great success by backpropagation. For spiking neural networks, however, a similar gradient descent scheme seems prohibitive due to the sudden, disruptive (dis-)appearance of spikes. Here, we demonstrate exact gradient descent learning based on spiking dynamics that change only continuously. These dynamics are generated by neuron models whose spikes vanish and appear at the end of a trial, where they no longer influence other neurons. This also enables gradient-based spike addition and removal. We apply our learning scheme to induce and continuously move spikes to desired times, in single neurons and in recurrent networks. Further, it achieves competitive performance in a benchmark task using deep, initially silent networks. Our results show how non-disruptive learning is possible despite discrete spikes.
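    A toy case makes the idea of a smoothly movable spike concrete: for a leaky integrate-and-fire neuron with constant input current, the first spike time has a closed form and is differentiable in the input, so exact gradient descent can move it to a target time. The neuron model, constants, and loss below are illustrative assumptions, not the paper's trial-end spike mechanism.

```python
# Toy sketch: exact gradient descent on a spike time. For a LIF neuron with
# constant input I (I > theta), V(t) = I*(1 - exp(-t/tau)) crosses threshold
# theta at t* = -tau*log(1 - theta/I), a smooth function of I.
import numpy as np

TAU, THETA = 10.0, 1.0     # membrane time constant (ms), threshold (assumed)
T_TARGET, ETA = 5.0, 0.05  # desired spike time (ms), learning rate (assumed)

def spike_time(I):
    return -TAU * np.log(1.0 - THETA / I)  # valid for I > THETA

I = 3.0  # initial input current, above threshold
for _ in range(200):
    t_s = spike_time(I)
    dts_dI = -TAU * THETA / (I * (I - THETA))  # derivative of the closed form
    grad = (t_s - T_TARGET) * dts_dI           # chain rule on L = (t_s - t_target)^2 / 2
    I -= ETA * grad

print(f"spike time after descent: {spike_time(I):.3f} ms (target {T_TARGET} ms)")
```

    The paper's contribution is making this kind of smoothness hold in full networks, where spikes can also appear and disappear, by letting them do so only at the end of a trial.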

    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding and independent component analysis, and bottom-up models such as spike-timing-dependent plasticity and the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that this variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
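    In its simplest form, the unifying rule updates each synapse in proportion to the presynaptic input times a nonlinear function of the postsynaptic activity, dw ∝ f(w·x) x. The sketch below is a minimal instance, assuming whitened Gaussian surrogate inputs (standing in for natural image patches), a cubic nonlinearity, and explicit weight normalization; all constants are illustrative.

```python
# Minimal sketch of nonlinear Hebbian learning: dw ∝ f(w·x) * x, with weight
# normalization to keep w bounded. Nonlinearity and data are assumed choices.
import numpy as np

N_IN, ETA, STEPS = 64, 1e-3, 20000
rng = np.random.default_rng(0)
X = rng.standard_normal((STEPS, N_IN))  # surrogate whitened inputs

w = rng.standard_normal(N_IN)
w /= np.linalg.norm(w)

f = lambda y: y**3  # one choice of nonlinearity; others behave similarly

for x in X:
    y = w @ x                # postsynaptic activity
    w += ETA * f(y) * x      # nonlinear Hebbian update
    w /= np.linalg.norm(w)   # normalization prevents unbounded growth
```

    On whitened input with f(y) = y^3, this rule ascends the kurtosis of the projection, a projection-pursuit/ICA-like objective; on natural image patches it yields localized filters, consistent with the abstract's claim.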