Facilitating online learning in spiking neural networks (SNNs) is a key step
in developing event-based models that can adapt to changing environments and
learn from continuous data streams in real time. Although forward-mode
differentiation enables online learning, its computational requirements
restrict scalability. This is typically addressed through approximations that
limit learning in deep models. In this study, we propose Online Training with
Postsynaptic Estimates (OTPE) for training feed-forward SNNs, which
approximates Real-Time Recurrent Learning (RTRL) by incorporating temporal
dynamics not captured by current approximations, such as Online Training
Through Time (OTTT) and Online Spatio-Temporal Learning (OSTL). We show
improved scaling for multi-layer networks using a novel approximation of
temporal effects on the subsequent layer's activity. This approximation incurs
minimal overhead in time and space complexity compared to similar
algorithms, and the calculation of temporal effects remains local to each
layer. We characterize the learning performance of our proposed algorithms on
multiple SNN model configurations for rate-based and time-based encoding. In deep
networks, OTPE exhibits the highest directional alignment with exact gradients
computed via backpropagation through time (BPTT), and it outperforms other
approximate methods on time-based encoding. We also observe sizeable gains
in average performance over similar algorithms when training offline on the Spiking
Heidelberg Digits dataset with equivalent hyper-parameters (OTTT/OSTL: 70.5%; OTPE:
75.2%; BPTT: 78.1%).