3 research outputs found
Semantic Image Retrieval With Feature Space Rankings
Learning to hash is receiving increasing research attention due to its effectiveness in addressing the large-scale similarity search problem. Most existing hashing algorithms focus on learning hash functions that numerically quantize some projected feature space. In this work, we propose a novel hash learning method that encodes features' relative ordering, instead of quantizing their numeric values, in a set of low-dimensional ranking subspaces. We formulate the ranking-based hash learning problem as the optimization of a continuous probabilistic error function using a softmax approximation and present an efficient learning algorithm to solve it. As a generalization of Winner-Take-All (WTA) hashing, the proposed algorithm naturally enjoys the numeric stability benefits of rank correlation measures while being optimized to achieve high precision with very compact codes. Additionally, the proposed method can be easily extended to nonlinear kernel spaces to discover ranking structures that cannot be revealed in linear subspaces. We demonstrate through extensive experiments that the proposed method achieves competitive performance compared with a number of state-of-the-art hashing methods.
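The classical Winner-Take-All hashing that this work generalizes can be sketched in a few lines: each code element records which of a few randomly permuted features is largest, so the code depends only on the features' relative ordering. A minimal illustration (parameter names and the `wta_hash` helper are our own, not from the paper):

```python
import numpy as np

def wta_hash(x, perms, k):
    """Classical WTA hash: for each random permutation, keep the index
    of the largest value among the first k permuted features. The code
    reflects only the relative ordering of features, not their values."""
    return np.array([int(np.argmax(x[p[:k]])) for p in perms])

rng = np.random.default_rng(0)
d, n_perms, k = 8, 4, 3                       # feature dim, code length, window size
perms = [rng.permutation(d) for _ in range(n_perms)]
x = rng.normal(size=d)

# Rank-order codes are invariant to monotonic transforms of the input,
# which is the source of the numeric stability the abstract mentions:
assert np.array_equal(wta_hash(x, perms, k), wta_hash(3 * x + 1, perms, k))
```

The proposed method differs from this sketch in that the (generalized) subspaces are learned by optimizing a softmax-smoothed error rather than drawn at random.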
Supervised Ranking Hash For Semantic Similarity Search
The era of big data has spawned unprecedented interest in developing hashing algorithms for their storage efficiency and effectiveness in fast nearest neighbor search over large-scale databases. Most existing hash learning algorithms focus on learning hash functions that generate binary codes by numerically quantizing some projected feature space. In this work, we propose a novel hash learning framework that encodes features' ranking orders, instead of quantizing their numeric values, in a number of optimal low-dimensional ranking subspaces. We formulate the ranking-based hash learning problem as the optimization of a continuous probabilistic error function using a softmax approximation and present an efficient learning algorithm to solve it. Our work is a generalization of the Winner-Take-All (WTA) hashing algorithm and naturally enjoys the numeric stability benefits of rank correlation measures while being optimized to achieve high precision at extremely short code lengths. We extensively evaluate the proposed algorithm on several datasets and demonstrate superior performance against several state-of-the-art methods.