Multimodal embeddings aim to enrich the semantic information in neural
representations of language compared to text-only models. While different
embeddings vary in their applicability and performance on downstream tasks,
little is known about the systematic representation differences attributable to
the visual modality. Our paper compares word embeddings from three
vision-and-language models (CLIP, OpenCLIP and Multilingual CLIP) and three
text-only models, with static (FastText) as well as contextual representations
(multilingual BERT; XLM-RoBERTa). This is the first large-scale study of the
effect of visual grounding on language representations, including 46 semantic
parameters. We identify meaning properties and relations that characterize
words whose embeddings are most affected by the inclusion of the visual
modality in the training data, that is, the points where visual grounding
proves most important. We find that the effect of the visual modality
correlates most strongly with
denotational semantic properties related to concreteness, but is also detected
for several specific semantic classes, as well as for valence, a
sentiment-related connotational property of linguistic expressions.
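As a rough illustration of the kind of comparison involved (not the paper's
actual pipeline), the sketch below extracts word embeddings from CLIP's text
encoder and from a text-only encoder via Hugging Face transformers and measures
their second-order (RSA-style) agreement. The checkpoints, probe words, pooling
choices, and the RSA-style comparison are illustrative assumptions.

    import torch
    from transformers import AutoModel, AutoTokenizer, CLIPTextModel, CLIPTokenizer

    words = ["dog", "justice", "red", "idea"]  # hypothetical probe words, not the paper's list

    # Visually grounded text encoder: CLIP's text tower.
    clip_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
    clip_model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
    with torch.no_grad():
        clip_emb = clip_model(
            **clip_tok(words, padding=True, return_tensors="pt")
        ).pooler_output  # EOS-token embedding per word

    # Text-only contextual encoder: XLM-RoBERTa with masked mean pooling.
    xlmr_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
    xlmr_model = AutoModel.from_pretrained("xlm-roberta-base")
    with torch.no_grad():
        batch = xlmr_tok(words, padding=True, return_tensors="pt")
        hidden = xlmr_model(**batch).last_hidden_state
        mask = batch["attention_mask"].unsqueeze(-1).float()
        xlmr_emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

    def sim_matrix(x):
        # Pairwise cosine similarities within one embedding space.
        x = torch.nn.functional.normalize(x, dim=-1)
        return x @ x.T

    # Second-order agreement: correlate the upper-triangular entries of the two
    # similarity matrices. A per-word version of this comparison could be used
    # to flag the words whose neighborhoods shift most under visual grounding.
    iu = torch.triu_indices(len(words), len(words), offset=1)
    a = sim_matrix(clip_emb)[iu[0], iu[1]]
    b = sim_matrix(xlmr_emb)[iu[0], iu[1]]
    agreement = torch.corrcoef(torch.stack([a, b]))[0, 1].item()
    print(f"RSA-style agreement over {len(words)} words: {agreement:.3f}")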