Masked image modeling (MIM), an emerging self-supervised pre-training method,
has shown impressive success across numerous downstream vision tasks with
Vision Transformers (ViTs). Its underlying idea is simple: a portion of the
input image is randomly masked out and then reconstructed via the pretext
task. However, the working principle behind MIM is not well explained, and
previous studies maintain that MIM works primarily for the Transformer family but
is incompatible with CNNs. In this paper, we first study interactions among
patches to understand what knowledge is learned and how it is acquired via the
MIM task. We observe that MIM essentially teaches the model to learn better
middle-order interactions among patches and extract more generalizable features.
Based on this observation, we propose an Architecture-Agnostic Masked Image Modeling
framework (A2MIM), which is compatible with both Transformers and CNNs in a
unified way. Extensive experiments on popular benchmarks show that our A2MIM
learns better representations without architecture-specific designs and endows the
backbone model with a stronger capability to transfer to various downstream tasks
for both Transformers and CNNs.

Comment: Preprint under review (updated revision). The source code will be
released at https://github.com/Westlake-AI/openmixu
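To make the masking step described in the abstract concrete, the following is a minimal illustrative sketch in PyTorch of random patch masking as used in generic masked image modeling. It is not the authors' A2MIM implementation; the function name random_patch_mask, the 16-pixel patch size, and the 75% mask ratio are assumptions chosen only for illustration.

    import torch

    def random_patch_mask(images, patch_size=16, mask_ratio=0.75):
        # images: (B, C, H, W); H and W must be divisible by patch_size.
        b, c, h, w = images.shape
        ph, pw = h // patch_size, w // patch_size
        num_patches = ph * pw
        num_masked = int(num_patches * mask_ratio)

        # Per image, pick a random subset of patch indices to mask.
        ids = torch.rand(b, num_patches, device=images.device).argsort(dim=1)
        mask = torch.zeros(b, num_patches, dtype=torch.bool, device=images.device)
        rows = torch.arange(b, device=images.device).unsqueeze(1)
        mask[rows, ids[:, :num_masked]] = True

        # Zero out the masked patches in pixel space; the model is later
        # trained to reconstruct exactly these regions.
        mask_up = (mask.view(b, 1, ph, pw)
                       .repeat_interleave(patch_size, dim=2)
                       .repeat_interleave(patch_size, dim=3))
        masked_images = images * (~mask_up)
        return masked_images, mask

In a typical MIM pre-training loop, the model receives masked_images and is optimized to reconstruct the pixel content of the patches where mask is True, for example with an L1 or L2 loss restricted to those positions.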