The attention mechanism has proven to be an effective way to improve
spiking neural networks (SNNs). However, although the input data flow of
current SNNs is organized as tensors for processing on GPUs, none of the
previous works exploit the properties of tensors when implementing an
attention module. This inspires us to rethink current SNNs from the
perspective of tensor theory. Using tensor decomposition, we design the
\textit{projected full attention} (PFA) module, which demonstrates excellent
results with a parameter count that grows only linearly. Specifically, PFA is
composed of the \textit{linear projection of spike tensor} (LPST) module and
the \textit{attention map composing} (AMC) module. In LPST, we first compress
the original spike tensor into three projected tensors using a single
property-preserving strategy, with learnable parameters for each dimension.
Then, in AMC, we exploit the inverse procedure of tensor decomposition to
combine the three tensors into the attention map through a so-called
connecting factor. To validate the
effectiveness of the proposed PFA module, we integrate it into the widely used
VGG and ResNet architectures for classification tasks. Our method achieves
state-of-the-art performance on both static and dynamic benchmark datasets,
surpassing the existing SNN models with Transformer-based and CNN-based
backbones.
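
To make the two-stage construction concrete, the following is a minimal
PyTorch sketch of a PFA-style module. All shapes, layer names, the
mode-unfolding projections, and the sigmoid gate are illustrative assumptions,
not the authors' exact implementation; only the overall structure follows the
description above: per-dimension learnable projections (LPST), then a
CP-style rank-$R$ recomposition (AMC) in which the shared rank plays the role
of the connecting factor.

\begin{verbatim}
import torch
import torch.nn as nn

class PFASketch(nn.Module):
    """Illustrative PFA-style attention: LPST followed by AMC.

    Assumption: the spike tensor has shape (T, C, N) for time steps,
    channels, and spatial sites. The unfolding-based projections below
    are a sketch and do not reproduce the paper's linear parameter
    growth.
    """

    def __init__(self, T: int, C: int, N: int, rank: int):
        super().__init__()
        self.rank = rank  # the "connecting factor": shared rank of the factors
        # LPST: one learnable projection per tensor dimension, compressing
        # the corresponding mode-unfolding into rank-R factors.
        self.proj_t = nn.Linear(C * N, rank)  # temporal factors
        self.proj_c = nn.Linear(T * N, rank)  # channel factors
        self.proj_n = nn.Linear(T * C, rank)  # spatial factors

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T, C, N = x.shape
        # LPST: compress each mode-unfolding into a factor matrix.
        U = self.proj_t(x.reshape(T, C * N))                   # (T, R)
        V = self.proj_c(x.permute(1, 0, 2).reshape(C, T * N))  # (C, R)
        W = self.proj_n(x.permute(2, 0, 1).reshape(N, T * C))  # (N, R)
        # AMC: invert a CP-style decomposition -- the attention map is
        # the sum of R rank-1 outer products of the three factors.
        attn = torch.einsum('tr,cr,nr->tcn', U, V, W)          # (T, C, N)
        return x * torch.sigmoid(attn)  # gate the input spike tensor

# Usage: a toy spike tensor with 4 time steps, 64 channels, 8x8 sites.
pfa = PFASketch(T=4, C=64, N=64, rank=2)
spikes = (torch.rand(4, 64, 64) > 0.8).float()
out = pfa(spikes)  # same shape as the input, (4, 64, 64)
\end{verbatim}

In this sketch the three projections realize the LPST step, and the einsum
realizes the inverse-decomposition step of AMC, with the rank $R$ acting as
the connecting factor that ties the three projected tensors together.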