Spiking Inception Module for Multi-layer Unsupervised Spiking Neural Networks
Spiking Neural Network (SNN), as a brain-inspired approach, is attracting
attention due to its potential to produce ultra-high-energy-efficient hardware.
Competitive learning based on Spike-Timing-Dependent Plasticity (STDP) is a
popular method to train an unsupervised SNN. However, previous unsupervised
SNNs trained through this method are limited to a shallow network with only one
learnable layer and cannot achieve satisfactory results when compared with
multi-layer SNNs. In this paper, we eased this limitation by: 1)We proposed a
Spiking Inception (Sp-Inception) module, inspired by the Inception module in
the Artificial Neural Network (ANN) literature. This module is trained through
STDP-based competitive learning and outperforms the baseline modules on
learning capability, learning efficiency, and robustness. 2)We proposed a
Pooling-Reshape-Activate (PRA) layer to make the Sp-Inception module stackable.
3)We stacked multiple Sp-Inception modules to construct multi-layer SNNs. Our
algorithm outperforms the baseline algorithms on the hand-written digit
classification task, and reaches state-of-the-art results on the MNIST dataset
among existing unsupervised SNNs.
Comment: Published at the 2020 International Joint Conference on Neural
Networks (IJCNN); extended from arXiv:2001.0168
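The STDP-based competitive learning named in this abstract can be illustrated with a minimal pair-based STDP weight update. This is a sketch only: the learning rates, time constant, and weight bounds below are illustrative defaults, not the paper's parameters.

```python
import math

# Sketch of a pair-based STDP weight update. The learning rates (a_plus,
# a_minus), time constant (tau), and weight bounds are illustrative, not
# taken from the paper.
def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Return the updated weight for one pre/post spike pair.

    A presynaptic spike preceding the postsynaptic spike (causal pairing)
    potentiates the synapse; the reverse ordering depresses it. The change
    decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau)   # potentiation
    else:
        dw = -a_minus * math.exp(dt / tau)  # depression
    return min(w_max, max(w_min, w + dw))

# Causal pairing strengthens the synapse; anti-causal pairing weakens it.
w_up = stdp_update(0.5, t_pre=10.0, t_post=15.0)    # > 0.5
w_down = stdp_update(0.5, t_pre=15.0, t_post=10.0)  # < 0.5
```

In competitive learning, an update like this is typically applied only to the winning (first-spiking) neuron's afferent weights, which is what drives different neurons to specialize on different input patterns.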
Efficient Neuromorphic Computing Enabled by Spin-Transfer Torque: Devices, Circuits and Systems
Present day computers expend orders of magnitude more computational resources to perform various cognitive and perception related tasks that humans routinely perform everyday. This has recently resulted in a seismic shift in the field of computation where research efforts are being directed to develop a neurocomputer that attempts to mimic the human brain by nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this thesis demonstrates the encoding of biological neural and synaptic functionalities in the underlying physics of electron spin. Description of various spin-transfer torque mechanisms that can be potentially utilized for realizing neuro-mimetic device structures is provided. A cross-layer perspective extending from the device to the circuit and system level is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. Device-circuit-algorithm co-simulation framework calibrated to experimental results suggest that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement in comparison to state-of-the-art CMOS implementations
Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks
Spiking neural networks (SNNs) have tremendous potential for energy-efficient
neuromorphic chips due to their binary and event-driven architecture. SNNs have
been applied primarily to classification tasks, with little exploration of
image generation tasks. To fill this gap, we propose a Spiking-Diffusion model, which
is based on the vector quantized discrete diffusion model. First, we develop a
vector quantized variational autoencoder with SNNs (VQ-SVAE) to learn a
discrete latent space for images. With VQ-SVAE, image features are encoded
using both the spike firing rate and postsynaptic potential, and an adaptive
spike generator is designed to restore embedding features in the form of spike
trains. Next, we perform absorbing state diffusion in the discrete latent space
and construct a diffusion image decoder with SNNs to denoise the image. Our
work is the first to build a diffusion model entirely from SNN layers.
Experimental results on MNIST, FMNIST, KMNIST, and Letters demonstrate that
Spiking-Diffusion outperforms existing SNN-based generation models. We
achieve FIDs of 37.50, 91.98, 59.23, and 67.41 on the above datasets,
respectively, with reductions of 58.60%, 18.75%, 64.51%, and 29.75% in FID
compared with the state-of-the-art work.
Comment: Under Review
Training Spiking Neural Networks Using Lessons From Deep Learning
The brain is the perfect place to look for inspiration to develop more
efficient neural networks. The inner workings of our synapses and neurons
provide a glimpse at what the future of deep learning might look like. This
paper serves as a tutorial and perspective showing how to apply the lessons
learnt from several decades of research in deep learning, gradient descent,
backpropagation and neuroscience to biologically plausible spiking neural
networks. We also explore the delicate interplay between encoding data
as spikes and the learning process; the challenges and solutions of applying
gradient-based learning to spiking neural networks; the subtle link between
temporal backpropagation and spike-timing-dependent plasticity; and how deep
learning might move towards biologically plausible online learning. Some ideas
are well accepted and commonly used amongst the neuromorphic engineering
community, while others are presented or justified for the first time here. A
series of companion interactive tutorials complementary to this paper using our
Python package, snnTorch, are also made available:
https://snntorch.readthedocs.io/en/latest/tutorials/index.htm
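The spiking dynamics this tutorial builds on can be sketched with a single leaky integrate-and-fire (LIF) neuron in plain Python. The decay factor and threshold below are illustrative; snnTorch implements these dynamics as differentiable layers and pairs them with surrogate gradients for training, which this sketch does not model.

```python
# Sketch of leaky integrate-and-fire (LIF) dynamics; beta (membrane decay)
# and threshold are illustrative constants, not snnTorch defaults.
def lif_simulate(inputs, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential leaks by a factor beta each step, integrates
    the input, emits a spike when it crosses the threshold, and is then
    soft-reset by subtracting the threshold.
    """
    mem, spikes = 0.0, []
    for i in inputs:
        mem = beta * mem + i              # leaky integration
        spk = 1 if mem >= threshold else 0
        spikes.append(spk)
        mem -= spk * threshold            # reset-by-subtraction
    return spikes

# A constant sub-threshold input still drives periodic firing once the
# integrated membrane potential crosses the threshold.
spikes = lif_simulate([0.5] * 10)
```

The hard threshold in this loop is exactly the non-differentiable step that the gradient-based learning discussed in the paper must work around, typically by substituting a smooth surrogate derivative in the backward pass.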
Neuromorphic Computing between Reality and Future Needs
Neuromorphic computing is a computer-engineering approach that models computational elements on the human brain and nervous system. Many sciences, including biology, mathematics, electronic engineering, computer science, and physics, have been integrated to construct artificial neural systems. This chapter presents the basics of neuromorphic computing together with existing systems, covering their materials, devices, and circuits. The last part covers algorithms and applications in selected fields.