Visual text evokes an image in a person's mind, while non-visual text fails
to do so. A method to automatically detect visualness in text will enable
text-to-image retrieval and generation models to augment text with relevant
images. This is particularly challenging for long-form text, as text-to-image
generation and retrieval models are typically triggered on text that is
explicitly visual in nature, whereas long-form text may contain many
non-visual sentences. To this end, we curate a dataset of 3,620 English
sentences and their visualness scores provided by multiple human annotators. We
also propose a fine-tuning strategy that adapts large vision-language models
like CLIP by modifying the model's contrastive learning objective to map text
identified as non-visual to a common NULL image while matching visual
sentences to their corresponding images in the document. We evaluate the proposed approach
on its ability to (i) classify visual and non-visual text accurately, and (ii)
attend over words that are identified as visual in psycholinguistic studies.
Empirical evaluation indicates that our approach performs better than several
heuristics and baseline models for the proposed task. Furthermore, to highlight
the importance of modeling the visualness of text, we conduct qualitative
analyses of text-to-image generation systems like DALL-E. Project webpage:
https://gaurav22verma.github.io/text-visualness/

Comment: Accepted at EMNLP 2023 (Main, long); 9 pages, 5 figures
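
The modified contrastive objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, tensor shapes, and the shared learnable NULL embedding are assumptions, and the loss is a standard CLIP-style in-batch cross-entropy in which non-visual sentences have their image targets swapped for the NULL embedding.

```python
import torch
import torch.nn.functional as F

def visualness_contrastive_loss(text_emb, image_emb, null_emb, is_visual,
                                temperature=0.07):
    """Hypothetical sketch of the adapted contrastive objective.

    text_emb:  (B, D) sentence embeddings from the text encoder
    image_emb: (B, D) embeddings of each sentence's paired document image
    null_emb:  (D,)   a single shared NULL image embedding
    is_visual: (B,)   boolean mask; True if the sentence is labeled visual
    """
    # Non-visual sentences are retargeted to the shared NULL embedding,
    # so they all collapse toward one point instead of arbitrary images.
    targets = torch.where(is_visual.unsqueeze(1),
                          image_emb,
                          null_emb.expand_as(image_emb))

    # Standard CLIP-style normalized in-batch contrastive loss.
    text_emb = F.normalize(text_emb, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = text_emb @ targets.t() / temperature
    labels = torch.arange(text_emb.size(0))
    return F.cross_entropy(logits, labels)
```

Note that when a batch contains several non-visual sentences, their target columns are identical (all NULL), so the in-batch softmax naturally encourages non-visual text toward the common NULL point while keeping visual sentences aligned with their distinct images.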