
Learning to Rank with Graph Consistency

By Bo Geng, Linjun Yang and Xian-sheng Hua


The ranking models of existing image search engines are generally based on associated text, while the visual content of the images is largely neglected. Imperfect search results frequently appear due to the mismatch between the textual features and the actual image content. Visual reranking, in which visual information is applied to refine text-based search results, has been proven to be effective. However, the improvement brought by visual reranking is limited, mainly because errors in the text-based results propagate to the refinement stage. In this paper, we propose a Content-Aware Ranking model based on the "learning to rank" framework, in which textual and visual information are simultaneously leveraged in the ranking learning process. We formulate Content-Aware Ranking learning as large-margin structured output learning, modeling the visual information as a regularization term. Direct optimization of the learning problem is nearly infeasible, since the number of constraints is huge. We therefore adopt an efficient cutting-plane algorithm that learns the model by iteratively adding the most violated constraints. Extensive experimental results on a large-scale dataset collected from a commercial Web image search engine demonstrate that the proposed ranking model significantly outperforms state-of-the-art ranking and reranking methods.
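The cutting-plane idea described in the abstract — iteratively finding the most violated constraint, adding it to a working set, and re-optimizing — can be sketched in highly simplified form. The sketch below is an illustrative toy, not the paper's actual formulation: it uses pairwise ranking constraints, a hypothetical visual-similarity matrix `S` for the smoothness regularizer, and plain subgradient steps in place of the paper's structured-output QP.

```python
import numpy as np

def most_violated_pair(w, X, y):
    """Toy separation oracle: among document pairs (i, j) with
    relevance y[i] > y[j], return the pair whose score margin
    w.x_i - w.x_j is smallest (i.e. most violated)."""
    worst, best_gap = None, np.inf
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                gap = w @ X[i] - w @ X[j]  # want gap >= 1
                if gap < best_gap:
                    best_gap, worst = gap, (i, j)
    return worst, best_gap

def cutting_plane_rank(X, y, S, lam=0.1, gamma=0.1, lr=0.1, iters=50):
    """Illustrative cutting-plane learner: add the most violated
    pairwise constraint to a working set, then take subgradient steps
    on the hinge loss plus a visual-consistency regularizer
    gamma * sum_ij S_ij (s_i - s_j)^2, which pulls the scores of
    visually similar images together (Laplacian-style smoothness)."""
    w = np.zeros(X.shape[1])
    working = set()
    for _ in range(iters):
        pair, gap = most_violated_pair(w, X, y)
        if gap >= 1.0:          # no constraint violated by margin 1
            break
        working.add(pair)
        for i, j in working:    # hinge subgradient on working set
            if 1.0 - (w @ X[i] - w @ X[j]) > 0:
                w += lr * (X[i] - X[j])
        s = X @ w               # current scores
        # subgradient of the visual-smoothness term
        for i in range(len(s)):
            for j in range(len(s)):
                w -= lr * gamma * S[i, j] * 2 * (s[i] - s[j]) * (X[i] - X[j])
        w -= lr * lam * w       # L2 shrinkage
    return w
```

On a tiny example, relevance should end up ordered by the learned scores; in the real model the oracle searches over full rankings rather than pairs, which is what makes the constraint set huge and the cutting-plane strategy necessary.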

Categories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Retrieval models
General Terms: Algorithms, Theory, Experimentation, Performance
Keywords: Learning to Rank, Image Search Reranking, Content-Aware Ranking Model
Year: 2016
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
