As an emerging network model, spiking neural networks (SNNs) have attracted
significant research attention in recent years. However, their energy-efficient
binary spikes do not lend themselves to gradient-descent-based training.
The surrogate gradient (SG) strategy has been investigated and applied to
circumvent this issue and train SNNs from scratch. Due to the lack of a
well-recognized SG selection rule, most SGs are chosen intuitively. We propose
the parametric surrogate gradient (PSG) method, which iteratively updates the SG and
eventually determines an optimal surrogate gradient parameter that calibrates
the shape of candidate SGs.
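Purely as an illustration of the idea (the abstract does not fix the surrogate family or the update rule), a sigmoid-shaped SG with a learnable shape parameter can be written as a custom autograd function in PyTorch; the names `ParametricSpike` and `alpha` are assumptions for this sketch, not the paper's notation:

```python
import torch

class ParametricSpike(torch.autograd.Function):
    # Forward: hard Heaviside spike. Backward: gradient of a sigmoid
    # surrogate sigma(alpha * v), taken w.r.t. both the membrane
    # potential v and the shape parameter alpha, so alpha can be
    # updated iteratively alongside the weights.
    @staticmethod
    def forward(ctx, v, alpha):
        ctx.save_for_backward(v, alpha)
        return (v >= 0.0).float()  # binary spike on threshold crossing

    @staticmethod
    def backward(ctx, grad_out):
        v, alpha = ctx.saved_tensors
        s = torch.sigmoid(alpha * v)
        grad_v = grad_out * alpha * s * (1.0 - s)          # d sigma(alpha v) / d v
        grad_alpha = (grad_out * v * s * (1.0 - s)).sum()  # d sigma(alpha v) / d alpha
        return grad_v, grad_alpha

# alpha is a learnable parameter, so the optimizer reshapes the SG each step.
alpha = torch.nn.Parameter(torch.tensor(2.0))
v = torch.randn(8, requires_grad=True)  # membrane potential minus threshold
spikes = ParametricSpike.apply(v, alpha)
```

Because `alpha` receives its own gradient, the SG shape is calibrated by the same optimization loop that trains the network, which is the core of the PSG idea as stated above.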
In SNNs, the neural potential distribution tends to deviate unpredictably due to
quantization error. We evaluate this potential shift and propose a potential
distribution adjustment (PDA) methodology to minimize the loss caused by
undesired pre-activations.
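The abstract does not spell out the PDA mechanism. As one hedged sketch of the general idea only, the membrane potential could be re-anchored with running batch statistics so that its distribution does not drift away from the firing threshold; the module name and the normalization choice below are illustrative assumptions, not the paper's method:

```python
import torch

class PotentialAdjust(torch.nn.Module):
    # Illustrative sketch, not the paper's exact method: re-center and
    # re-scale membrane potentials with running batch statistics so their
    # distribution stays anchored near the firing threshold despite
    # quantization-induced drift.
    def __init__(self, momentum=0.1):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("mean", torch.zeros(1))
        self.register_buffer("std", torch.ones(1))

    def forward(self, v):
        if self.training:
            m = v.mean().detach()
            s = v.std().detach().clamp_min(1e-5)
            self.mean.mul_(1 - self.momentum).add_(self.momentum * m)
            self.std.mul_(1 - self.momentum).add_(self.momentum * s)
        return (v - self.mean) / self.std
```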
Experimental results demonstrate that the proposed methods can be readily
integrated with the backpropagation through time (BPTT) algorithm and help the
modulated SNNs achieve state-of-the-art performance on both static and dynamic
datasets with fewer timesteps.
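To illustrate how such components could plug into BPTT, the following sketch unrolls a leaky integrate-and-fire layer over time, reusing the `ParametricSpike` and `PotentialAdjust` sketches above; the layer structure, time constant, threshold, and reset rule are all assumptions for the example:

```python
import torch

class LIFLayer(torch.nn.Module):
    # Illustrative leaky integrate-and-fire layer: the time loop is
    # unrolled, so standard autograd performs BPTT through every step.
    def __init__(self, n_in, n_out, tau=2.0):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out)
        self.alpha = torch.nn.Parameter(torch.tensor(2.0))  # learnable SG shape
        self.pda = PotentialAdjust()
        self.decay = 1.0 - 1.0 / tau

    def forward(self, x_seq):  # x_seq: (T, batch, n_in)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        out = []
        for x in x_seq:                  # unrolled loop => BPTT
            v = self.decay * v + self.fc(x)
            v = self.pda(v)              # keep potentials near threshold
            s = ParametricSpike.apply(v - 1.0, self.alpha)  # threshold at 1.0
            v = v * (1.0 - s)            # hard reset after a spike (assumption)
            out.append(s)
        return torch.stack(out)

layer = LIFLayer(10, 5)
spikes = layer(torch.randn(4, 2, 10))    # T=4 timesteps, batch of 2
loss = spikes.mean()
loss.backward()                          # gradients flow through time
```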