
UNIFIED HYPERGRAPH FOR IMAGE RANKING IN A MULTIMODAL CONTEXT

By Jiejun Xu, Vishwakarma Singh, Ziyu Guan and B. S. Manjunath

Abstract

Image ranking has long been studied, yet it remains a very challenging problem. Increasingly, online images come with additional metadata such as user annotations and geographic coordinates, which provide rich complementary information. We propose to combine such multimodal information through a unified hypergraph to improve image retrieval performance. Hypergraphs allow the simultaneous capture of higher-order relationships among images across different modalities, e.g. visual content, user tags, and geolocation. Each image is represented as a vertex in the hypergraph, and each hyperedge is formed by a vertex and its k-nearest neighbors. Three types of hyperedges exist in our unified hypergraph, corresponding to the three modalities. Image ranking is then formulated as a ranking problem on the unified hypergraph. The proposed method can easily be extended to incorporate additional modalities as long as a similarity function exists to compare the features. Experimental results on large datasets are promising.

Index Terms — Multimodal, Hypergraph, Image Retrieval
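The hyperedge construction described in the abstract — each image a vertex, one hyperedge per vertex per modality formed from the vertex and its k nearest neighbors — can be sketched as below. This is a minimal illustration, not the authors' implementation; the function names, the Euclidean distance choice, and the toy feature vectors are assumptions for demonstration only.

```python
def knn_hyperedges(features, k):
    """For one modality, form a hyperedge per vertex consisting of the
    vertex and its k nearest neighbors (Euclidean distance assumed)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    edges = []
    for i, fi in enumerate(features):
        neighbors = sorted(
            (j for j in range(len(features)) if j != i),
            key=lambda j: dist(fi, features[j]),
        )[:k]
        edges.append(frozenset([i] + neighbors))
    return edges


def unified_hypergraph(modalities, k):
    """Pool the k-NN hyperedges from every modality into a single
    hypergraph over the shared vertex set (one vertex per image).
    `modalities` maps a modality name (e.g. "visual", "tags", "geo")
    to a list of per-image feature vectors."""
    hyperedges = []
    for name, features in modalities.items():
        for edge in knn_hyperedges(features, k):
            hyperedges.append((name, edge))
    return hyperedges
```

A modality with a non-Euclidean similarity (e.g. tag overlap) would simply swap in its own distance function, which is how the "additional modalities" extension mentioned above would work in this sketch.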

Year: 2014
OAI identifier: oai:CiteSeerX.psu:10.1.1.418.6569
Provided by: CiteSeerX
Full text available at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://vision.ece.ucsb.edu/pub... (external link)

