Modern image-to-text systems typically adopt the encoder-decoder framework,
which comprises two main components: an image encoder that extracts image
features, and a transformer-based decoder that generates captions.
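To ground the setup, the following sketch shows how such an encoder-decoder captioner is typically driven through Hugging Face's `transformers` library; the `nlpconnect/vit-gpt2-image-captioning` checkpoint, input image, and generation parameters are illustrative assumptions rather than our experimental configuration.

```python
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

# Illustrative checkpoint: a popular ViT-GPT2 captioner on Hugging Face.
ckpt = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(ckpt)
processor = ViTImageProcessor.from_pretrained(ckpt)
tokenizer = AutoTokenizer.from_pretrained(ckpt)

image = Image.open("example.jpg").convert("RGB")  # any input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

# The ViT encoder extracts patch features; the GPT-2 decoder
# autoregressively generates the caption from them.
with torch.no_grad():
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```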
Taking inspiration from analyses of neural networks' robustness to adversarial
perturbations, we propose a novel gray-box algorithm for crafting adversarial
examples against image-to-text models. Unlike image classification tasks,
which have a finite set of class labels, finding visually similar adversarial
examples in an image-to-text task poses a greater challenge because the
captioning system admits a virtually infinite space of possible captions. In
this paper, we present a gray-box adversarial attack on image-to-text models,
both untargeted and targeted. We formulate the discovery of adversarial
perturbations as an optimization problem that uses only the image-encoder
component, making the proposed attack language-model agnostic.
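To make the formulation concrete, the sketch below shows one way such an encoder-only optimization could be set up, as projected gradient descent on the encoder's output features; the embedding loss, step size `alpha`, and perturbation budget `eps` are illustrative assumptions, not the exact procedure evaluated in the paper.

```python
import torch

def encoder_attack(encoder, image, target_emb=None, eps=8/255, alpha=1/255, steps=200):
    """Gray-box attack sketch: the loss touches only the image encoder.

    Targeted: pull the adversarial embedding toward target_emb (e.g., the
    encoder features of an image whose caption we want to reproduce).
    Untargeted: push the embedding away from the clean image's features.
    """
    clean_emb = encoder(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(steps):
        adv_emb = encoder(image + delta)
        if target_emb is not None:
            loss = torch.norm(adv_emb - target_emb)   # targeted: match target features
        else:
            loss = -torch.norm(adv_emb - clean_emb)   # untargeted: flee clean features
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                # descend on the loss
            delta.clamp_(-eps, eps)                           # L_inf budget: visually similar
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
        delta.grad.zero_()

    return (image + delta).detach()
```

With a ViT-GPT2 captioner, `encoder` could be a thin wrapper returning, say, the flattened `last_hidden_state` of the vision encoder; because the decoder never appears in the loss, any language model stacked on the same encoder inherits the perturbation.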
Through experiments conducted on the ViT-GPT2 model, the most-used
image-to-text model on Hugging Face, and the Flickr30k dataset, we demonstrate
that our proposed attack successfully generates visually similar adversarial
examples, with both untargeted and targeted captions. Notably, our attack
operates in a gray-box manner, requiring no knowledge of the decoder module.
We also show that our attack fools the popular open-source platform Hugging
Face.