Vision transformers have achieved impressive performance on many vision tasks.
However, they may suffer from high redundancy in capturing local features in
shallow layers. Local self-attention or early-stage convolutions are thus often
adopted, but they sacrifice the capacity to capture long-range dependencies. A
challenge then arises: can we achieve efficient and effective global context
modeling in the early stages of a neural network? To address this issue, we
draw inspiration from the design of superpixels, which reduce the number of
image primitives in subsequent processing, and introduce super tokens into the
vision transformer. Super tokens aim to provide a semantically meaningful
tessellation of visual content, reducing the number of tokens in
self-attention while preserving global modeling. Specifically, we propose
a simple yet strong super token attention (STA) mechanism with three steps: the
first samples super tokens from visual tokens via sparse association learning,
the second performs self-attention on super tokens, and the last maps them back
to the original token space. STA decomposes vanilla global attention into the
product of a sparse association map and a low-dimensional attention matrix,
leading to high efficiency in capturing global dependencies (see the sketch
below). Based on STA, we
develop a hierarchical vision transformer. Extensive experiments demonstrate
its strong performance on various vision tasks. In particular, without any
extra training data or labels, it achieves 86.4% top-1 accuracy on ImageNet-1K
with fewer than 100M parameters. It also achieves 53.9 box AP and 46.8 mask AP
on the COCO detection task, and 51.9 mIoU on the ADE20K semantic segmentation
task. Code will be released at https://github.com/hhb072/SViT.
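To make the STA decomposition concrete, below is a minimal PyTorch sketch, not the released implementation: the function name super_token_attention, the grid-pooled initialization, and the dense softmax association are illustrative assumptions (the paper learns a sparse association), and the usual query/key/value projections and multi-head structure of self-attention are omitted for brevity.

```python
# Minimal sketch (not the authors' code) of the three-step STA idea,
# using a dense soft association for readability; the paper learns a sparse one.
import torch
import torch.nn.functional as F


def super_token_attention(x, grid=8, iters=1):
    """x: visual tokens of shape (B, N, C) laid out on an H x W grid (N = H * W)."""
    B, N, C = x.shape
    H = W = int(N ** 0.5)

    # Initialize super tokens by average-pooling the token grid into grid x grid regions.
    s = F.adaptive_avg_pool2d(
        x.transpose(1, 2).reshape(B, C, H, W), (grid, grid)
    ).reshape(B, C, -1).transpose(1, 2)                               # (B, M, C), M = grid * grid

    # Step 1: sample super tokens from visual tokens via a (here dense) association map.
    for _ in range(iters):
        assoc = F.softmax(x @ s.transpose(1, 2) / C ** 0.5, dim=-1)   # (B, N, M)
        s = assoc.transpose(1, 2) @ x                                  # aggregate tokens
        s = s / (assoc.sum(dim=1).unsqueeze(-1) + 1e-6)                # normalize per super token

    # Step 2: self-attention among the few super tokens -- cost O(M^2) instead of O(N^2).
    attn = F.softmax(s @ s.transpose(1, 2) / C ** 0.5, dim=-1)         # (B, M, M)
    s = attn @ s

    # Step 3: map the updated super tokens back to the original token space.
    return assoc @ s                                                   # (B, N, C)


tokens = torch.randn(2, 56 * 56, 96)        # e.g., early-stage tokens of a hierarchical ViT
out = super_token_attention(tokens)
print(out.shape)                            # torch.Size([2, 3136, 96])
```

Because self-attention is computed only among the M = grid x grid super tokens, the quadratic term scales with M^2 rather than with the full token count N^2, which is where the efficiency of the decomposition comes from.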