Although vision transformers (ViTs) have shown promising results in various
computer vision tasks recently, their high computational cost limits their
practical applications. Previous approaches that prune redundant tokens have
demonstrated a good trade-off between performance and computation costs.
Nevertheless, errors caused by pruning strategies can lead to significant
information loss. Our quantitative experiments reveal that the impact of pruned
tokens on performance should be noticeable. To address this issue, we propose a
novel joint Token Pruning & Squeezing module (TPS) for compressing vision
transformers with higher efficiency. First, TPS applies pruning to split the
tokens into reserved and pruned subsets. Second, TPS squeezes the information of pruned
tokens into partial reserved tokens via the unidirectional nearest-neighbor
matching and similarity-based fusing steps. Compared to state-of-the-art
methods, our approach outperforms them under all token pruning intensities.
In particular, when shrinking the computational budgets of DeiT-tiny and
DeiT-small to 35%, it improves accuracy by 1%-6% over the baselines on ImageNet
classification. The proposed method can accelerate the throughput of DeiT-small
beyond DeiT-tiny, while its accuracy surpasses DeiT-tiny by 4.78%. Experiments
on various transformers demonstrate the effectiveness of our method, and
analysis experiments show its higher robustness to errors in the token
pruning policy. Code is available at
https://github.com/megvii-research/TPS-CVPR2023.

Comment: Accepted to CVPR 2023
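
The prune-then-squeeze idea described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the use of cosine similarity, the argmax matching, and the unit self-weight in the fusing step are all assumptions made for the sketch.

```python
import numpy as np


def token_pruning_squeezing(tokens, scores, keep_ratio=0.5):
    """Illustrative sketch of TPS (assumed formulation, not the paper's code).

    tokens: (n, d) token features; scores: (n,) importance scores.
    Prunes low-score tokens, then squeezes each pruned token into its
    nearest reserved token via similarity-weighted fusing.
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))

    # Pruning step: split tokens into reserved (top-k) and pruned subsets.
    order = np.argsort(-scores)
    reserved_idx, pruned_idx = order[:k], order[k:]
    reserved = tokens[reserved_idx].copy()
    pruned = tokens[pruned_idx]
    if len(pruned_idx) == 0:
        return reserved

    def normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    # Unidirectional nearest-neighbor matching: each pruned token is
    # assigned to its most similar reserved token (cosine similarity).
    sim = normalize(pruned) @ normalize(reserved).T  # (n-k, k)
    match = sim.argmax(axis=1)

    # Similarity-based fusing: each reserved token absorbs its matched
    # pruned tokens, weighted by similarity; the reserved token itself
    # gets weight 1.0 (an assumed convention for this sketch).
    fused = reserved.copy()
    for j in range(k):
        members = np.where(match == j)[0]
        if members.size == 0:
            continue
        w = np.concatenate([[1.0], sim[members, j]])
        feats = np.vstack([reserved[j], pruned[members]])
        fused[j] = (w[:, None] * feats).sum(axis=0) / w.sum()
    return fused
```

Unlike plain pruning, which discards the pruned subset entirely, every output token here may carry information from several discarded tokens, which is the property the abstract credits for robustness to pruning-policy errors.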