Vision Transformer (ViT) based Vision-Language Pre-training (VLP) models have
demonstrated impressive performance on various vision-language tasks. However, the lengthy
visual token sequences fed into the ViT can make training both inefficient and
ineffective. Existing efforts address this challenge through either bottom-level
patch extraction within the ViT backbone or top-level patch abstraction outside it,
but neither balances training efficiency and effectiveness well. Inspired by text
summarization in natural language processing, we propose a Bottom-Up Patch
Summarization approach named BUS, coordinating bottom-level extraction and
top-level abstraction to learn a concise summary of lengthy visual token
sequences efficiently. Specifically, we incorporate a Text-Semantics-Aware
Patch Selector (TSPS) into the ViT backbone to perform coarse-grained visual
token extraction, and then attach a flexible Transformer-based Patch Abstraction
Decoder (PAD) on top of the backbone for top-level visual abstraction. This
bottom-up collaboration enables BUS to achieve high training efficiency while
maintaining or even improving effectiveness. We evaluate our approach on
various vision-language understanding and generation tasks and show competitive
downstream performance while improving training efficiency by 50%.
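To make the bottom-up design concrete, the following PyTorch sketch shows one plausible reading of the two modules: a TSPS that ranks patch tokens against a pooled text representation and keeps the top fraction inside the backbone, and a PAD whose learned queries cross-attend to the kept patches to produce a short summary. The module names come from the abstract, but the scoring rule, keep ratio, query count, and dimensions are our illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the bottom-up patch summarization idea (not the authors' code).
# The scoring rule, keep_ratio, and num_queries below are illustrative assumptions.
import torch
import torch.nn as nn

class TSPS(nn.Module):
    """Text-Semantics-Aware Patch Selector: keep patches most relevant to the text."""
    def __init__(self, dim: int, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.score = nn.Linear(2 * dim, 1)  # scores a patch given the text summary

    def forward(self, patches: torch.Tensor, text_cls: torch.Tensor) -> torch.Tensor:
        # patches: (B, N, D); text_cls: (B, D) pooled text representation
        B, N, D = patches.shape
        text = text_cls.unsqueeze(1).expand(B, N, D)
        scores = self.score(torch.cat([patches, text], dim=-1)).squeeze(-1)  # (B, N)
        k = max(1, int(N * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices  # coarse-grained extraction
        return patches.gather(1, idx.unsqueeze(-1).expand(B, k, D))

class PAD(nn.Module):
    """Patch Abstraction Decoder: learned queries cross-attend to kept patches."""
    def __init__(self, dim: int, num_queries: int = 16, depth: int = 2, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, depth)

    def forward(self, kept_patches: torch.Tensor) -> torch.Tensor:
        # kept_patches: (B, K, D) -> summary: (B, num_queries, D)
        q = self.queries.unsqueeze(0).expand(kept_patches.size(0), -1, -1)
        return self.decoder(q, kept_patches)  # top-level abstraction

# Usage: select inside the ViT backbone, then abstract on top of it.
patches, text_cls = torch.randn(4, 196, 768), torch.randn(4, 768)
summary = PAD(768)(TSPS(768)(patches, text_cls))
print(summary.shape)  # torch.Size([4, 16, 768])
```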
Additionally, our model achieves state-of-the-art performance on many
downstream tasks by increasing input image resolution without increasing
computational costs over baselines.
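The resolution claim follows from simple patch arithmetic: the number of ViT tokens grows quadratically with image side length, so an unsummarized sequence quickly dominates compute, whereas a fixed-length summary keeps the cost of the cross-modal stages constant. A quick illustration (patch size 16 is the common ViT default; the summary length of 16 is our assumption):

```python
# ViT token count grows quadratically with resolution,
# while a fixed-length summary (assumed 16 tokens here) does not grow at all.
PATCH = 16
for side in (224, 384, 576):
    n_patches = (side // PATCH) ** 2
    print(f"{side}x{side}: {n_patches} patches -> summary of 16 tokens")
# 224x224: 196 patches -> summary of 16 tokens
# 384x384: 576 patches -> summary of 16 tokens
# 576x576: 1296 patches -> summary of 16 tokens
```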