Since its inception, the Vision Transformer (ViT) has emerged as a prevalent
model in the computer vision domain. Nonetheless, its multi-head self-attention
(MHSA) mechanism is computationally expensive because it computes pairwise
relationships among all tokens. Although some techniques mitigate this
computational overhead by discarding tokens, doing so also discards the
potentially useful information carried by those tokens. To tackle these issues, we propose a novel token
pruning method that retains information from non-crucial tokens by merging them
with more crucial tokens, thereby mitigating the impact of pruning on model
performance. Crucial and non-crucial tokens are identified by their importance
scores and merged based on similarity scores. Furthermore, multi-scale features
are exploited to represent images; these features are fused prior to token
pruning to produce richer feature representations. Importantly, our method can be
seamlessly integrated into various ViT architectures, enhancing their adaptability.
Experimental evidence substantiates the efficacy of our approach in reducing
the influence of token pruning on model performance. For instance, on the
ImageNet dataset, it achieves a remarkable 33% reduction in computational costs
while incurring only a 0.1% decrease in accuracy on DeiT-S.
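
To make the prune-and-merge idea concrete, the following is a minimal PyTorch sketch of one possible realization. It assumes per-token importance scores are already available (e.g. derived from class-token attention), keeps the top-scoring tokens as crucial, and merges each non-crucial token into its most similar crucial token by cosine similarity and simple averaging. The keep ratio, the similarity measure, and the averaging rule are illustrative assumptions, not the exact formulation of the method described above.

import torch
import torch.nn.functional as F

def prune_and_merge(tokens, importance, keep_ratio=0.7):
    """Merge non-crucial tokens into crucial ones instead of discarding them.

    tokens:     (B, N, D) patch embeddings entering a pruning stage.
    importance: (B, N) per-token importance scores (assumed given).
    Returns (B, K, D) merged tokens, with K = int(N * keep_ratio).
    """
    B, N, D = tokens.shape
    K = max(1, int(N * keep_ratio))

    # Rank tokens by importance: top-K are "crucial", the rest "non-crucial".
    order = importance.argsort(dim=1, descending=True)
    keep_idx, drop_idx = order[:, :K], order[:, K:]
    crucial = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    noncrucial = torch.gather(tokens, 1, drop_idx.unsqueeze(-1).expand(-1, -1, D))

    # Assign every non-crucial token to its most similar crucial token
    # (cosine similarity), so its content is merged rather than thrown away.
    sim = torch.einsum('bmd,bkd->bmk',
                       F.normalize(noncrucial, dim=-1),
                       F.normalize(crucial, dim=-1))
    assign = sim.argmax(dim=-1)  # (B, N-K) index of the target crucial token

    # Merge by averaging each crucial token with the non-crucial tokens
    # assigned to it (a simple choice; weighted schemes are also possible).
    merged = crucial.clone()
    counts = torch.ones(B, K, 1, dtype=tokens.dtype, device=tokens.device)
    merged.scatter_add_(1, assign.unsqueeze(-1).expand(-1, -1, D), noncrucial)
    counts.scatter_add_(1, assign.unsqueeze(-1),
                        torch.ones_like(assign, dtype=tokens.dtype).unsqueeze(-1))
    return merged / counts

For example, calling prune_and_merge(torch.randn(2, 196, 384), torch.rand(2, 196)) reduces 196 patch tokens to 137 per image; in practice the class token would be kept aside and the importance scores would come from the attention it pays to each patch token.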