
    SuperSpike: Supervised learning in multi-layer spiking neural networks

    A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
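The key trick the abstract describes is the surrogate gradient: the hard spike threshold has zero derivative almost everywhere, so a smooth proxy is substituted during the backward pass. A minimal sketch of this idea (NumPy, with illustrative threshold and steepness values; the paper's exact surrogate and network machinery are not reproduced here):

```python
import numpy as np

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Fast-sigmoid-style surrogate for d(spike)/d(membrane potential).

    The spike nonlinearity step(v - threshold) is non-differentiable, so
    surrogate-gradient rules replace its derivative with a smooth, peaked
    function of the membrane potential, letting errors flow through
    hidden integrate-and-fire units.
    """
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

# Forward pass keeps the hard threshold ...
v = np.array([-0.5, 0.9, 1.0, 1.4])   # membrane potentials (toy values)
spikes = (v >= 1.0).astype(float)

# ... while the backward pass uses the surrogate in place of the
# (zero almost everywhere) true derivative.
grads = surrogate_grad(v)
```

The surrogate peaks at the threshold, so credit concentrates on neurons whose potential was close to firing.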

    The chronotron: a neuron that learns to fire temporally-precise spike patterns

    In many cases, neurons process information carried by the precise timing of spikes. Here we show how neurons can learn to generate specific temporally-precise output spikes in response to input spike patterns, thus processing and memorizing information that is fully temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that is analytically derived and highly efficient, and one that has a high degree of biological plausibility. We show how chronotrons can learn to classify their inputs, and we study their memory capacity.

    Fast and robust learning by reinforcement signals: explorations in the insect brain

    We propose a model for pattern recognition in the insect brain. Starting from a well-known body of knowledge about the insect brain, we investigate which of the potentially present features may be useful for learning input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable for both fast learning of new stimuli and reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
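The plasticity described above, in which a reinforcement or error signal gates which stimulus-response associations are strengthened, can be illustrated with a generic three-factor (reward-modulated Hebbian) update. This is a hedged sketch of the general mechanism, not the paper's model; all names and parameter values here are illustrative:

```python
import numpy as np

def reward_modulated_update(w, pre, post, reward, lr=0.05):
    """Three-factor plasticity sketch: the weight change is the product
    of a global reinforcement signal and the pre/post activity
    coincidence. Without reward (reward=0), no association is learned."""
    return w + lr * reward * np.outer(post, pre)

# Toy example: four input units (e.g. mushroom-body inputs), one output.
w = np.zeros((1, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])   # active input pattern
post = np.array([1.0])                 # output unit responded
w = reward_modulated_update(w, pre, post, reward=+1.0)
```

Only synapses from active inputs onto the responding output are strengthened, and only because the reward signal was present.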

    Evolving, dynamic clustering of spatio/spectro-temporal data in 3D spiking neural network models and a case study on EEG data

    Clustering is a fundamental data processing technique. While clustering of static (vector-based) data and of fixed-window-size time series has been well explored, dynamic clustering of spatiotemporal data has received little research attention, especially when patterns of changes (events) in the data across space and time have to be captured and understood. The paper presents novel methods for clustering spatiotemporal data using the NeuCube spiking neural network (SNN) architecture. Clusters of spatiotemporal data are created and modified on-line in a continuous, incremental way: spatiotemporal relationships among changes in variables are incrementally learned in a 3D SNN model, and the model's connectivity and spiking activity are incrementally clustered. Two clustering methods are proposed for SNNs, one applied during unsupervised learning and one during supervised learning. Before being submitted to the models, the data are encoded as spike trains, with a spike representing a change in a variable's value (an event). During unsupervised learning, the cluster centres are predefined by the spatial locations of the input data variables in a 3D SNN model. The clusters then evolve during learning, i.e. they are adapted continuously over time, reflecting the dynamics of the changes in the data. In supervised learning, clusters represent the dynamic sequence of neuron spiking activities in a trained SNN model, specific to a particular class of data or to an individual instance. We illustrate the proposed clustering methods on a real case study of spatiotemporal EEG data, recorded from three groups of subjects during a cognitive task. The clusters were referred back to the brain data for a better understanding of the data and the processes that generated it. The cluster analysis allowed us to discover and understand differences in the temporal sequences and the spatial involvement of brain regions in response to a cognitive task.
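The abstract states that, before modelling, the data are encoded as spike trains in which a spike marks a change in a variable's value. A minimal threshold-based delta encoder illustrates this event-based encoding (the exact encoder used in NeuCube may differ; the threshold and signal here are illustrative):

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Emit a +1 (or -1) spike event whenever the signal rises (or falls)
    by at least `threshold` relative to the last emitted baseline value;
    otherwise emit no event. This turns a continuous signal into a
    sparse train of change events."""
    spikes = np.zeros(len(signal), dtype=int)
    baseline = signal[0]
    for i in range(1, len(signal)):
        diff = signal[i] - baseline
        if diff >= threshold:
            spikes[i] = 1
            baseline = signal[i]
        elif diff <= -threshold:
            spikes[i] = -1
            baseline = signal[i]
    return spikes

sig = np.array([0.0, 0.05, 0.2, 0.25, 0.1, 0.1])
events = delta_encode(sig, threshold=0.1)
```

Flat stretches of the signal produce no events, which is what makes the downstream SNN processing sparse and change-driven.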