This paper explores the grounding problem in multimodal semantic
representation from a computational cognitive-linguistic perspective. We annotate
images from the Flickr30k dataset with five perceptual properties: Affordance,
Perceptual Salience, Object Number, Gaze Cueing, and Ecological Niche
Association (ENA), and examine their association with textual elements in the
image captions. Our findings reveal that images exhibiting Gibsonian affordance
elicit captions containing 'holding-verbs' and 'container-nouns' more frequently
than images exhibiting telic affordance. Perceptual Salience, Object
Number, and ENA are also associated with the choice of linguistic expressions.
Our study demonstrates that a comprehensive understanding of objects or events
requires cognitive attention, sensitivity to semantic nuances in language, and
integration across multiple modalities. We highlight the vital importance of
situated meaning and affordance grounding for natural language understanding,
which have the potential to advance human-like interpretation across diverse scenarios.