6,723 research outputs found

    Local learning algorithms for stochastic spiking neural networks

    Get PDF
    This dissertation focuses on the development of machine learning algorithms for spiking neural networks, with an emphasis on local three-factor learning rules that are in keeping with the constraints imposed by current neuromorphic hardware. Spiking neural networks (SNNs) are an alternative to artificial neural networks (ANNs) that follow a similar graphical structure but use a processing paradigm more closely modeled after the biological brain, in an effort to harness its low-power processing capability. SNNs use an event-based processing scheme that leads to significant power savings when implemented in dedicated neuromorphic hardware such as Intel’s Loihi chip. This work is distinguished by its consideration of stochastic SNNs built from neurons that employ a stochastic spiking process, implementing generalized linear models (GLMs) rather than deterministic thresholded spiking. In this framework, the spiking signals are random variables sampled from a distribution defined by the neurons. The spiking signals may be observed or latent variables; neurons whose outputs are observed are termed visible neurons, and the rest are termed hidden neurons. This choice provides a strong mathematical basis for maximum likelihood optimization of the network parameters via stochastic gradient descent, avoiding the problem of backpropagating gradients through the discontinuity created by the spiking process. Three machine learning algorithms are developed for stochastic SNNs with a focus on power efficiency, learning efficiency, and model adaptability: characteristics that are valuable in resource-constrained settings. They are studied in the context of applications where low-power learning on the edge is key. All of the learning rules derived here involve only local variables along with a global learning signal, making these algorithms amenable to implementation in current neuromorphic hardware.

    First, a stochastic SNN that includes only visible neurons, the simplest case for probabilistic optimization, is considered. A policy gradient reinforcement learning (RL) algorithm is developed in which the stochastic SNN defines the policy, or state-action distribution, of an RL agent. Action choices are sampled directly from the policy by interpreting the outputs of the read-out neurons using a first-to-spike decision rule. This study highlights the power efficiency of the SNN in terms of spike frequency.

    Next, an online meta-learning framework is proposed with the goal of progressively improving the learning efficiency of an SNN over a stream of tasks. In this setting, SNNs including both hidden and visible neurons are considered, posing a more complex maximum likelihood learning problem that is solved using a variational learning method. The meta-learning rule yields a hyperparameter initialization for SNN models that supports fast adaptation of the model to individualized data on edge devices.

    Finally, moving away from the supervised learning paradigm, a hybrid adversarial training framework for SNNs, termed SpikeGAN, is developed. Rather than optimizing for the likelihood of target spike patterns at the SNN outputs, the training is mediated by an auxiliary discriminator that provides a measure of how similar the spiking data is to a target distribution. Because no direct spiking patterns are given, the SNNs considered in adversarial learning include only hidden neurons. A Bayesian adaptation of the SpikeGAN learning rule is developed to broaden the range of temporal data that a single SpikeGAN can estimate. Additionally, the online meta-learning rule is extended to SpikeGAN, enabling efficient generation of data from sequential data distributions.
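    The local-plus-global structure described above has a compact concrete form. Below is a minimal NumPy sketch, not taken from the dissertation: a single Bernoulli-GLM spiking neuron whose log-likelihood gradient factors into a local eligibility trace gated by a global scalar signal (a reward, in the policy gradient case). All shapes, constants, and rates are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # One GLM spiking neuron: the spike probability is a sigmoid of the
        # membrane potential, so spikes are Bernoulli random variables and
        # the log-likelihood gradient has a closed form.
        T, n_in = 100, 20
        w = rng.normal(0.0, 0.1, n_in)      # synaptic weights
        b = -1.0                            # bias (resting excitability)
        tau = 5.0                           # synaptic trace time constant

        x = (rng.random((T, n_in)) < 0.05).astype(float)  # presynaptic spikes
        trace = np.zeros(n_in)              # filtered presynaptic activity
        eligibility = np.zeros(n_in)        # accumulated local gradient term

        for t in range(T):
            trace = (1.0 - 1.0 / tau) * trace + x[t]
            u = b + w @ trace               # membrane potential
            p = sigmoid(u)                  # spiking probability
            s = float(rng.random() < p)     # sampled (stochastic) spike
            # Local log-likelihood gradient: (post spike - expected) * pre trace
            eligibility += (s - p) * trace

        # Three-factor update: a global scalar learning signal gates the
        # purely local eligibility, mirroring the structure described above.
        reward = 1.0                        # stand-in global learning signal
        w += 0.01 * reward * eligibility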

    Learning to Recognize Actions from Limited Training Examples Using a Recurrent Spiking Neural Model

    Full text link
    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that the proposed reservoir achieves 81.3%/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. Comment: 13 figures (includes supplementary information)
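    To make the reservoir idea concrete, here is a toy sketch of the general pattern, not the paper's model: a fixed random recurrent spiking network is driven by input spike trains, and only a linear readout on its time-averaged activity is trained, which is why a handful of labeled examples can suffice. The spike inputs here are random placeholders, not the microsaccade-inspired encoding.

        import numpy as np

        rng = np.random.default_rng(1)

        n_in, n_res, T = 64, 300, 200
        W_in = rng.normal(0.0, 0.5, (n_res, n_in))      # fixed input weights
        W_res = rng.normal(0.0, 1.0, (n_res, n_res))    # fixed recurrent weights
        W_res *= rng.random((n_res, n_res)) < 0.1       # sparse connectivity
        W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # tame dynamics

        def run_reservoir(spike_train):
            """Map a (T, n_in) binary spike train to the mean reservoir state."""
            v = np.zeros(n_res)             # membrane potentials
            s = np.zeros(n_res)             # spikes from the previous step
            states = []
            for x_t in spike_train:
                v = 0.95 * v + W_in @ x_t + W_res @ s
                s = (v > 1.0).astype(float) # threshold spiking
                v *= 1.0 - s                # reset neurons that spiked
                states.append(s)
            return np.mean(states, axis=0)

        # Few-shot training: e.g., 8 (placeholder) examples for each of 2 classes.
        X = np.stack([run_reservoir((rng.random((T, n_in)) < 0.05).astype(float))
                      for _ in range(16)])
        y = np.array([0] * 8 + [1] * 8)
        readout, *_ = np.linalg.lstsq(X, np.eye(2)[y], rcond=None)
        pred = np.argmax(X @ readout, axis=1)  # the only trained component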

    Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks

    Full text link
    Deep Neural Networks (DNNs) have achieved immense success in cognitive applications and greatly pushed today's artificial intelligence forward. The biggest challenge in executing DNNs is their extremely data-intensive computations. Speed and energy efficiency are constrained when traditional computing platforms are employed for such computation-hungry workloads. Spiking neuromorphic computing (SNC) has been widely investigated for deep network implementation owing to its high efficiency in computation and communication. However, the weights and signals of DNNs must be quantized when deploying them on SNC, which can result in unacceptable accuracy loss. Previous works focus mainly on weight discretization while largely neglecting inter-layer signals. In this work, we propose to represent DNNs with fixed integer inter-layer signals and fixed-point weights while maintaining good accuracy. We implement the proposed DNNs on a memristor-based SNC system as a deployment example. With 4-bit data representation, our results show that the accuracy loss can be controlled within 0.02% (2.3%) on MNIST (CIFAR-10). Compared with 8-bit dynamic fixed-point DNNs, our system achieves more than 9.8x speedup, 89.1% energy saving, and 30% area saving. Comment: 6 pages, 4 figures
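    As a rough illustration of the two quantized quantities named above, fixed-point weights and integer inter-layer signals, the following sketch applies a generic 4-bit quantizer inside one toy layer's forward pass. The paper's actual quantization and scaling schemes are not given here; these functions and constants are assumptions.

        import numpy as np

        def quantize_fixed_point(x, n_bits=4, frac_bits=3):
            """Round to a signed fixed-point grid (assumed quantizer)."""
            scale = 2.0 ** frac_bits
            q_min, q_max = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
            return np.clip(np.round(x * scale), q_min, q_max) / scale

        def quantize_integer(x, n_bits=4):
            """Clip a signal to unsigned n-bit integers (assumed quantizer)."""
            return np.clip(np.round(x), 0, 2 ** n_bits - 1)

        # Quantization-aware forward pass of one toy layer: fixed-point
        # weights, integer inter-layer signals.
        rng = np.random.default_rng(2)
        w = quantize_fixed_point(rng.normal(0.0, 0.5, (16, 8)))
        a_in = quantize_integer(rng.integers(0, 16, 8).astype(float))
        a_out = quantize_integer(np.maximum(w @ a_in, 0.0) / 8.0)  # scaled ReLU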

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    Get PDF
    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as a drop-in replacement for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical spike-timing-dependent plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation in memristor-based hardware.
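    The contrast between a classical pair-based STDP rule and a multifactor rule fits in a few lines. The sketch below is a generic illustration, not a rule from the chapter: both updates consume the same local pre/post traces, but the multifactor version additionally gates them with a global modulatory (third) factor such as a reward or error signal. All constants and spike trains are placeholders.

        import numpy as np

        def stdp_update(w, pre_trace, post_trace, pre_spike, post_spike,
                        a_plus=0.01, a_minus=0.012):
            # Potentiate on post spikes paired with recent pre activity,
            # depress on pre spikes paired with recent post activity.
            return (w + a_plus * pre_trace * post_spike
                    - a_minus * post_trace * pre_spike)

        def multifactor_update(w, eligibility, modulator, lr=0.01):
            # Local eligibility (pre/post coincidence) gated by a global
            # modulatory signal: the third factor.
            return w + lr * modulator * eligibility

        rng = np.random.default_rng(3)
        w, pre_tr, post_tr, elig = 0.5, 0.0, 0.0, 0.0
        for _ in range(100):
            pre = float(rng.random() < 0.1)     # placeholder spike trains
            post = float(rng.random() < 0.1)
            pre_tr = 0.9 * pre_tr + pre
            post_tr = 0.9 * post_tr + post
            w = stdp_update(w, pre_tr, post_tr, pre, post)
            elig = 0.95 * elig + pre_tr * post  # coincidence-driven trace
        w = multifactor_update(w, elig, modulator=1.0)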

    Homogeneous Spiking Neuromorphic System for Real-World Pattern Recognition

    Get PDF
    A neuromorphic chip that combines CMOS analog spiking neurons and memristive synapses offers a promising solution to brain-inspired computing, as it can provide massive neural network parallelism and density. Previous hybrid analog CMOS-memristor approaches required extensive CMOS circuitry for training, and thus eliminated most of the density advantages gained by the adoption of memristor synapses. Further, they used different waveforms for pre- and post-synaptic spikes, which added undesirable circuit overhead. Here we describe a hardware architecture that can feature a large number of memristor synapses to learn real-world patterns. We present a versatile CMOS neuron that combines integrate-and-fire behavior, drives passive memristors, implements competitive learning in a compact circuit module, and enables in-situ plasticity in the memristor synapses. We demonstrate handwritten-digit recognition with the proposed architecture using transistor-level circuit simulations. As the described neuromorphic architecture is homogeneous, it provides a fundamental building block for large-scale, energy-efficient, brain-inspired silicon chips that could lead to next-generation cognitive computing. Comment: This is a preprint of an article accepted for publication in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 5, no. 2, June 2015
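    A software analogue of the competitive learning loop described above might look like the following toy sketch, with the circuit behavior reduced to integrate-to-first-spike winner selection and a conductance nudge. All values are illustrative assumptions, not the paper's transistor-level design.

        import numpy as np

        rng = np.random.default_rng(4)

        n_in, n_out = 64, 10
        G = rng.uniform(0.2, 0.8, (n_out, n_in))   # memristor conductances

        def present(pattern, G, lr=0.05, threshold=30.0):
            """Integrate a pattern; the first neuron to fire adapts in place."""
            v = np.zeros(len(G))
            for _ in range(100):                   # integrate input current
                v += G @ pattern
                if v.max() >= threshold:
                    winner = int(np.argmax(v))     # first neuron to threshold
                    # In-situ plasticity: pull only the winner's conductances
                    # toward the input (competitive, winner-take-all learning).
                    G[winner] += lr * (pattern - G[winner])
                    return winner
            return None                            # no neuron reached threshold

        patterns = (rng.random((200, n_in)) < 0.2).astype(float)
        for p in patterns:
            present(p, G)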

    Supervised Learning in Spiking Neural Networks with Phase-Change Memory Synapses

    Full text link
    Spiking neural networks (SNNs) are artificial computational models inspired by the brain's ability to naturally encode and process information in the time domain. The added temporal dimension is believed to render them more computationally efficient than conventional artificial neural networks, though their full computational capabilities are yet to be explored. Recently, computational memory architectures based on non-volatile memory crossbar arrays have shown great promise for implementing parallel computations in artificial and spiking neural networks. In this work, we experimentally demonstrate, for the first time, the feasibility of realizing high-performance event-driven in-situ supervised learning systems using nanoscale and stochastic phase-change synapses. Our SNN is trained to recognize audio signals of alphabets encoded using spikes in the time domain and to generate spike trains at precise time instances to represent the pixel intensities of their corresponding images. Moreover, with a statistical model capturing the experimental behavior of the devices, we investigate architectural and system-level solutions for improving the training and inference performance of our computational memory-based system. Combining the computational potential of supervised SNNs with the parallel compute power of computational memory, this work paves the way for a next generation of efficient brain-inspired systems.
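    As a loose software caricature of this setup, not the paper's experimental system, the sketch below trains a single readout neuron to spike near target times while routing every weight change through a quantized, noisy "programming pulse" model standing in for phase-change-memory granularity and stochasticity. The device model and the delta-rule credit assignment are both assumptions.

        import numpy as np

        rng = np.random.default_rng(5)

        T, n_in = 100, 30
        w = rng.normal(0.0, 0.1, n_in)
        x = (rng.random((T, n_in)) < 0.1).astype(float)   # input spike trains
        target = np.zeros(T)
        target[[20, 50, 80]] = 1.0                        # desired spike times

        def pcm_write(w, dw, step=0.02, noise=0.3):
            """Apply an update as discrete, noisy conductance pulses."""
            pulses = np.round(dw / step)                  # coarse granularity
            return w + step * pulses * (1.0 + noise * rng.normal(size=w.shape))

        tau, lr = 5.0, 0.05
        for epoch in range(50):
            trace, dw = np.zeros(n_in), np.zeros(n_in)
            for t in range(T):
                trace = (1.0 - 1.0 / tau) * trace + x[t]
                p = 1.0 / (1.0 + np.exp(-(w @ trace - 1.0)))  # spike probability
                dw += (target[t] - p) * trace                 # delta-rule credit
            w = pcm_write(w, lr * dw)                         # noisy device write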