Vision transformers have shown great success due to their high model capacity.
However, their remarkable performance is accompanied by heavy
computation costs, which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision transformers named
EfficientViT. We find that the speed of existing transformer models is commonly
bounded by memory-inefficient operations, especially the tensor reshaping and
element-wise functions in multi-head self-attention (MHSA). Therefore, we design a new building block with
a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN
layers, which improves memory efficiency while enhancing channel communication.
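The following is only an illustrative sketch, not part of the paper or its released code: it shows the sandwich layout as a stack of inexpensive FFN layers around a single memory-bound self-attention layer. The FFN and SandwichBlock names and all hyper-parameters are assumptions made for clarity, and the actual EfficientViT block contains further details not shown here.

    import torch
    import torch.nn as nn

    class FFN(nn.Module):
        """Illustrative two-layer feed-forward block with a residual connection."""
        def __init__(self, dim, hidden_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, dim),
            )

        def forward(self, x):
            return x + self.net(x)

    class SandwichBlock(nn.Module):
        """Sandwich layout: cheap FFN layers around a single memory-bound MHSA."""
        def __init__(self, dim, num_heads=4, num_ffn=2):
            super().__init__()
            self.pre_ffn = nn.ModuleList(FFN(dim, 2 * dim) for _ in range(num_ffn))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.post_ffn = nn.ModuleList(FFN(dim, 2 * dim) for _ in range(num_ffn))

        def forward(self, x):                  # x: (batch, tokens, dim)
            for ffn in self.pre_ffn:
                x = ffn(x)
            attn_out, _ = self.attn(x, x, x)   # the only MHSA in the block
            x = x + attn_out
            for ffn in self.post_ffn:
                x = ffn(x)
            return x

    x = torch.randn(2, 196, 64)
    print(SandwichBlock(dim=64)(x).shape)      # torch.Size([2, 196, 64])
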
Moreover, we discover that the attention maps share high similarities across
heads, leading to computational redundancy. To address this, we present a
cascaded group attention module feeding attention heads with different splits
of the full feature (see the sketch at the end of this section), which not only saves computation cost but also improves
attention diversity. Comprehensive experiments demonstrate EfficientViT
outperforms existing efficient models, striking a good trade-off between speed
and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by
1.9% in accuracy, while achieving 40.4% and 45.2% higher throughput on an Nvidia
V100 GPU and an Intel Xeon CPU, respectively. Compared to the recent efficient
model MobileViT-XXS, EfficientViT-M2 achieves 1.8% higher accuracy, while
running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX
format. Code and models are available at
https://github.com/microsoft/Cream/tree/main/EfficientViT.
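As a complement, here is a minimal sketch of the cascaded group attention idea described above: each head attends over its own channel split of the input feature, and the previous head's output is added to the next head's split to form the cascade. The class name, the per-head QKV projections, and the final output projection are illustrative assumptions rather than the paper's exact implementation.

    import torch
    import torch.nn as nn

    class CascadedGroupAttention(nn.Module):
        """Each head attends over its own channel split; heads are cascaded."""
        def __init__(self, dim, num_heads=4):
            super().__init__()
            assert dim % num_heads == 0
            self.num_heads = num_heads
            self.head_dim = dim // num_heads
            self.scale = self.head_dim ** -0.5
            self.qkv = nn.ModuleList(
                nn.Linear(self.head_dim, 3 * self.head_dim) for _ in range(num_heads)
            )
            self.proj = nn.Linear(dim, dim)

        def forward(self, x):                          # x: (batch, tokens, dim)
            splits = x.chunk(self.num_heads, dim=-1)   # one channel split per head
            outs, prev = [], 0
            for qkv, split in zip(self.qkv, splits):
                inp = split + prev                     # cascade: add previous head's output
                q, k, v = qkv(inp).chunk(3, dim=-1)
                attn = (q @ k.transpose(-2, -1)) * self.scale
                prev = attn.softmax(dim=-1) @ v
                outs.append(prev)
            return self.proj(torch.cat(outs, dim=-1))

    x = torch.randn(2, 196, 64)
    print(CascadedGroupAttention(dim=64)(x).shape)     # torch.Size([2, 196, 64])

In this reading, each head operates on only dim / num_heads channels, so the per-head attention is cheaper than full MHSA, while the distinct input splits keep the heads, and hence their attention maps, more diverse, consistent with the trade-off described in the abstract.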