For computer vision, Vision Transformers (ViTs) have become one of the go-to
deep net architectures. Despite drawing inspiration from Convolutional Neural
Networks (CNNs), whose convolutions are translation-equivariant by construction,
the output of ViTs remains sensitive to small spatial shifts in the input,
i.e., ViTs are not shift-invariant. To address this shortcoming, we introduce novel
data-adaptive designs for each of the modules in ViTs, such as tokenization,
self-attention, patch merging, and positional encoding. With our proposed
modules, we achieve true shift-equivariance on four well-established ViTs,
namely, Swin, SwinV2, CvT, and MViTv2. Empirically, we evaluate the proposed
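To make the term precise (the notation here is ours, not quoted from the paper): writing $T_s$ for a circular shift of the input by $s$ pixels, a network stage $f$ is shift-equivariant when shifting the input commutes with applying the network,
$$ f(T_s x) = T_s f(x) \quad \text{for all shifts } s, $$
while a classifier $g$ is shift-invariant when $g(T_s x) = g(x)$; for stages that downsample, the output shift is rescaled by the downsampling factor accordingly.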
adaptive models on image classification and semantic segmentation tasks. These
models achieve competitive performance across three different datasets while
maintaining 100% shift consistency.
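As a rough illustration of the reported metric (a minimal sketch of our own, not the authors' evaluation code; the model, dataloader, and shift range are hypothetical placeholders), shift consistency can be estimated as the fraction of images whose predicted class is unchanged under a random circular shift:

import torch

@torch.no_grad()
def shift_consistency(model, dataloader, max_shift=8, device="cpu"):
    # Fraction of images whose predicted class is unchanged when the
    # input is circularly shifted by a random offset of at most
    # `max_shift` pixels in each spatial dimension.
    model.eval().to(device)
    consistent, total = 0, 0
    for images, _ in dataloader:
        images = images.to(device)  # shape (N, C, H, W)
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        shifted = torch.roll(images, shifts=(dy, dx), dims=(2, 3))
        pred = model(images).argmax(dim=1)
        pred_shifted = model(shifted).argmax(dim=1)
        consistent += (pred == pred_shifted).sum().item()
        total += images.size(0)
    return consistent / total  # 1.0 corresponds to 100% shift consistency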