Query-by-Example Image Retrieval using Visual Dependency Representations

Abstract

Image retrieval models typically represent images as bags-of-terms, a representation that is well-suited to matching images based on the presence or absence of terms. For some information needs, such as searching for images of people performing actions, it may be useful to retain information about how parts of an image relate to each other. If the underlying representation of an image can distinguish images in which objects merely co-occur from images in which people are interacting with objects, then it should be possible to improve retrieval performance. In this paper we model the spatial relationships between image regions using Visual Dependency Representations, a structured image representation that makes it possible to distinguish between object co-occurrence and interaction. In a query-by-example image retrieval experiment on a data set of people performing actions, we find an 8.8% relative increase in MAP and an 8.6% relative increase in Precision@10 when images are represented using the Visual Dependency Representation compared to a bag-of-terms baseline.
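To make the contrast concrete, the sketch below (an illustration, not the paper's implementation) assumes a Visual Dependency Representation can be flattened into hypothetical (region, spatial relation, region) triples. With a plain bag-of-terms, a query showing a person riding a bicycle cannot be separated from an image where a person merely stands beside one, whereas triple overlap can tell them apart.

```python
# Minimal sketch (assumed encoding, not the paper's implementation):
# bag-of-terms overlap vs. overlap of spatial-relation triples.

def bag_of_terms_score(query_terms, image_terms):
    """Overlap of region labels, ignoring how regions relate."""
    return len(set(query_terms) & set(image_terms))

def vdr_score(query_triples, image_triples):
    """Overlap of (region, spatial relation, region) triples."""
    return len(set(query_triples) & set(image_triples))

# Query: a person riding a bicycle.
query_terms = {"person", "bicycle"}
query_triples = {("person", "above", "bicycle")}

# Image A: a person riding a bicycle (interaction).
# Image B: a person standing beside a parked bicycle (co-occurrence only).
image_a = {"terms": {"person", "bicycle"},
           "triples": {("person", "above", "bicycle")}}
image_b = {"terms": {"person", "bicycle"},
           "triples": {("person", "beside", "bicycle")}}

# Bag-of-terms scores both images identically; the triple overlap does not.
print(bag_of_terms_score(query_terms, image_a["terms"]),
      bag_of_terms_score(query_terms, image_b["terms"]))   # 2 2
print(vdr_score(query_triples, image_a["triples"]),
      vdr_score(query_triples, image_b["triples"]))        # 1 0
```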
