Contrastive Language-Image Pre-training (CLIP) has drawn increasing attention
recently for its transferable visual representation learning. However, due to
the semantic gap between the pre-training and downstream datasets, CLIP's
pre-trained image-text alignment becomes sub-optimal on downstream tasks, which
severely harms its transfer performance. To better adapt the cross-modal
embedding space, we propose to
enhance CLIP via Visual-guided Texts, named VT-CLIP. Specifically, we guide
the textual features of different categories to adaptively explore informative
regions of the image and aggregate visual features via an attention mechanism. In
this way, the texts become visual-guided, namely, more semantically correlated
with downstream images, which greatly benefits the category-wise matching
process. In few-shot settings, we evaluate our VT-CLIP on 11 well-known
classification datasets to demonstrate its effectiveness.
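
As a rough sketch of this idea (not the released implementation; the module,
argument, and tensor names below are illustrative assumptions), the visual-guided
texts can be viewed as a cross-attention step in which the category text
embeddings query the image patch embeddings, followed by category-wise matching:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualGuidedText(nn.Module):
    # Hypothetical sketch: category text features act as queries and attend
    # over image patch features, so each text embedding absorbs visual
    # context from informative regions before matching.
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, patch_feats):
        # text_feats:  (B, C, D) one CLIP text embedding per category
        # patch_feats: (B, N, D) CLIP image patch (region) embeddings
        attended, _ = self.cross_attn(text_feats, patch_feats, patch_feats)
        return self.norm(text_feats + attended)  # residual fusion

# Category-wise matching with the guided texts via cosine similarity.
B, C, N, D = 2, 11, 49, 512
guided = VisualGuidedText(D)(torch.randn(B, C, D), torch.randn(B, N, D))
image_feat = F.normalize(torch.randn(B, D), dim=-1)
logits = torch.einsum("bd,bcd->bc", image_feat, F.normalize(guided, dim=-1))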