Image retrieval, using either content- or text-based techniques, does not match up to the current quality of standard text retrieval. One possible reason for this mismatch is the semantic gap: the terms by which images are indexed do not accord with those imagined by users querying image databases. In this paper we set out to describe how geography might help to index the where facet of the Panofsky-Shatford matrix, which has previously been shown to accord well with the types of queries users make. We illustrate these ideas with existing (e.g. identifying place names associated with a set of coordinates) and novel (e.g. describing images using land cover data) techniques to describe images, and contend that such methods will become central as increasing numbers of images become georeferenced.