    Improving the Detection of Relations Between Objects in an Image Using Textual Semantics

    In this article, we describe a system that classifies relations between entities extracted from an image. We started from the idea that we could utilize lexical and semantic information from text associated with the image, such as captions or surrounding text, rather than just the geometric and visual characteristics of the entities found in the image. We collected a corpus of images from Wikipedia together with their corresponding articles. In our experimental setup, we extracted two kinds of entities from the images, human beings and horses, and we defined three relations that could exist between them: Ride, Lead, or None. We used geometric features as a baseline to identify the relations between the entities, and we describe the improvements brought by the addition of bag-of-words features and predicate–argument structures that we extracted from the text. The best semantic model resulted in a relative error reduction of more than 18% over the baseline.
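
    To make the feature-combination idea concrete, here is a minimal sketch (not the authors' code) of classifying a person–horse pair as Ride, Lead, or None by concatenating pairwise geometric features from the two bounding boxes with bag-of-words counts from the image's associated text. The specific geometric features, the toy data, and the choice of logistic regression are illustrative assumptions, not details from the paper.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def geometric_features(person_box, horse_box):
        """Simple pairwise geometry: center offsets, overlap area, and
        relative size. Boxes are (x, y, width, height). These particular
        features are hypothetical stand-ins for the paper's baseline."""
        px, py, pw, ph = person_box
        hx, hy, hw, hh = horse_box
        dx = (px + pw / 2) - (hx + hw / 2)   # horizontal center offset
        dy = (py + ph / 2) - (hy + hh / 2)   # vertical center offset
        ox = max(0, min(px + pw, hx + hw) - max(px, hx))
        oy = max(0, min(py + ph, hy + hh) - max(py, hy))
        return [dx, dy, ox * oy, (pw * ph) / (hw * hh)]

    # Toy examples: (person_box, horse_box, associated text, relation label)
    samples = [
        ((40, 10, 30, 60), (30, 40, 60, 50), "a rider on a galloping horse", "Ride"),
        ((10, 30, 25, 70), (45, 35, 70, 60), "a groom leading a horse by the reins", "Lead"),
        ((5, 5, 20, 50), (80, 60, 60, 50), "a horse grazing while a man watches", "None"),
    ]

    # Bag-of-words features from the text, concatenated with geometry
    vectorizer = CountVectorizer()
    bow = vectorizer.fit_transform(s[2] for s in samples).toarray()
    geo = np.array([geometric_features(s[0], s[1]) for s in samples])
    X = np.hstack([geo, bow])
    y = [s[3] for s in samples]

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict(X))
    ```

    In this setup, dropping the bag-of-words columns from X recovers a geometry-only baseline, so the contribution of the textual features can be measured directly, which mirrors the comparison the abstract describes.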