Vision-language pre-training models (VLPs) have exhibited revolutionary
improvements in various vision-language tasks. However, VLPs are vulnerable to adversarial attacks, including typographic attacks, which fool a model into false or absurd classifications. Previous studies addressed these attacks by fine-tuning the model or changing its architecture, but such methods risk degrading the original model's performance and are difficult to apply to downstream tasks; indeed, their applicability to other tasks has scarcely been considered. In this study, we aim to reduce the impact of typographic attacks on CLIP without changing the model parameters.
To achieve this, we extend the idea of ``prefix learning'' and introduce a simple yet effective method, Defense-Prefix (DP), which inserts a learned DP token before a class name to make the class name ``robust'' against typographic attacks. Because it is independent of the model parameters, our method can be easily applied to downstream tasks such as object detection.
Our method significantly improves classification accuracy on typographic attack datasets while maintaining the model's zero-shot capabilities. In addition, we apply our method to object detection, demonstrating its high applicability and effectiveness. The code and datasets will be publicly available.
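Below is a minimal sketch of the core idea, assuming the open-source OpenAI CLIP package (https://github.com/openai/CLIP): a single learnable prefix embedding is spliced into the token sequence directly before the class name while CLIP itself stays frozen. The names dp_token and encode_with_prefix, the splice position, and the bare class-name prompt are illustrative assumptions, not the authors' released implementation.

import torch
import clip  # https://github.com/openai/CLIP

device = "cpu"  # fp32 on CPU sidesteps CLIP's fp16 casting details
model, _ = clip.load("ViT-B/32", device=device)
for p in model.parameters():
    p.requires_grad_(False)  # CLIP stays frozen

# One learnable prefix embedding: a hypothetical stand-in for the DP token.
dp_token = torch.nn.Parameter(
    0.01 * torch.randn(1, 1, model.token_embedding.embedding_dim)
)

def encode_with_prefix(class_name: str) -> torch.Tensor:
    """Encode "<sot> [DP] {class_name} <eot>", splicing the DP embedding
    in right after the start-of-text token."""
    tokens = clip.tokenize(class_name).to(device)        # (1, 77)
    x = model.token_embedding(tokens)                    # (1, 77, d)
    # Insert the prefix; drop the last pad slot to keep the length at 77.
    x = torch.cat([x[:, :1], dp_token.to(x.dtype), x[:, 1:-1]], dim=1)
    x = x + model.positional_embedding.to(x.dtype)
    x = model.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
    x = model.ln_final(x)
    eot = tokens.argmax(dim=-1) + 1  # <eot> index, shifted right by one
    feats = x[torch.arange(x.shape[0]), eot] @ model.text_projection
    return feats / feats.norm(dim=-1, keepdim=True)

At inference, zero-shot classification proceeds as usual: image features are compared against encode_with_prefix(c) for each class name c, so the rest of the frozen model's behavior is untouched; during training (not shown), only dp_token would receive gradients.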