Image retrieval, using either content or text-based techniques, does
not match up to the current quality of standard text retrieval. One possible
reason for this mismatch is the semantic gap – the terms by which images are
indexed do not accord with those imagined by users querying image databases.
In this paper we set out to describe how geography might help to index the
where facet of the Panofsky-Shatford matrix, which has previously been
shown to accord well with the types of queries users make. We illustrate these
ideas with existing (e.g. identifying place names associated with a set of
coordinates) and novel (e.g. describing images using land cover data)
techniques to describe images and contend that such methods will become
central as increasing numbers of images become georeferenced.
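The first technique mentioned above, associating place names with a set of coordinates, can be sketched as a nearest-neighbour lookup against a gazetteer. The following is a minimal illustration only: the gazetteer entries and coordinates are hypothetical examples, not data from the paper.

```python
import math

# Hypothetical mini-gazetteer: place name -> (latitude, longitude).
GAZETTEER = {
    "Edinburgh": (55.9533, -3.1883),
    "Glasgow": (55.8642, -4.2518),
    "London": (51.5074, -0.1278),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_place(lat, lon):
    """Index the 'where' facet: return the closest gazetteer place name."""
    return min(GAZETTEER,
               key=lambda name: haversine_km(lat, lon, *GAZETTEER[name]))

# A photograph taken near Edinburgh Castle is indexed under "Edinburgh".
print(nearest_place(55.9486, -3.1999))
```

A production system would replace the toy dictionary with a full gazetteer and a spatial index, but the principle of deriving textual index terms from coordinates is the same.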