Regarded as the third generation of neural networks, event-driven Spiking
Neural Networks (SNNs), combined with bio-plausible local learning rules, are
promising candidates for low-power neuromorphic hardware. However, because of
the non-linearity and the discrete nature of spiking neurons, training SNNs
remains difficult and is still an open problem. Backpropagation, rooted in
gradient descent, has achieved remarkable success in multi-layer SNNs.
Nevertheless, it is generally considered biologically implausible and consumes
relatively high computational resources. In this paper, we propose a
novel learning algorithm inspired by predictive coding theory and show that it
can perform supervised learning fully autonomously, as successfully as
backpropagation, using only local Hebbian plasticity. Furthermore, this method
achieves favorable performance compared with state-of-the-art multi-layer
SNNs: test accuracies of 99.25% on the Caltech Face/Motorbike dataset, 84.25%
on the ETH-80 dataset, 98.1% on the MNIST dataset, and 98.5% on the
neuromorphic N-MNIST dataset. In addition, our work provides a new perspective
on how supervised learning algorithms could be implemented directly in spiking
neural circuitry, which may offer new insights into neuromorphic computation
in neuroscience.