Brain-inspired spiking neural networks (SNNs) have attracted widespread
research interest due to their low power consumption, high biological
plausibility, and strong spatiotemporal information processing capability.
Although adopting a surrogate gradient (SG) makes non-differentiable SNNs
trainable, simultaneously achieving accuracy comparable to ANNs and retaining
low-power operation remains challenging. In this paper, we propose an
energy-efficient spike-train level spiking neural network (SLSSNN) with low
computational cost and high accuracy. In the SLSSNN, spatio-temporal conversion
blocks (STCBs) replace the convolutional and ReLU layers to preserve the
low-power characteristics of SNNs and improve accuracy. However, the SLSSNN
cannot directly adopt backpropagation due to the non-differentiable nature of
spike trains. We therefore derive the equivalent gradient of the STCB and
obtain a suitable learning rule for SLSSNNs. We evaluate the proposed SLSSNN on
static and neuromorphic datasets, including Fashion-MNIST, CIFAR-10, CIFAR-100,
Tiny-ImageNet, and DVS-CIFAR10. The experimental results show that our proposed
SLSSNN achieves state-of-the-art accuracy on nearly all datasets while using
fewer time steps and remaining highly energy-efficient.