Asymmetric Deep Supervised Hashing
Hashing has been widely used for large-scale approximate nearest neighbor
search because of its storage and search efficiency. Recent work has found that
deep supervised hashing can significantly outperform non-deep supervised
hashing in many applications. However, most existing deep supervised hashing
methods adopt a symmetric strategy to learn one deep hash function for both
query points and database (retrieval) points. The training of these symmetric
deep supervised hashing methods is typically time-consuming, which makes it
hard for them to effectively exploit the supervised information when the
database is large. In this paper, we propose a novel deep supervised hashing
method, called asymmetric deep supervised hashing (ADSH), for large-scale
nearest neighbor search. ADSH treats the query points and database points in an
asymmetric way. More specifically, ADSH learns a deep hash function only for
query points, while the hash codes for database points are directly learned.
The training of ADSH is much more efficient than that of traditional symmetric
deep supervised hashing methods. Experiments show that ADSH can achieve
state-of-the-art performance in real applications.
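The asymmetric objective described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: a linear map stands in for the deep network, the labels are random, and the database-code update is a simplified one-shot sign step rather than ADSH's bit-wise discrete solver.

```python
import numpy as np

# Toy sketch of ADSH-style asymmetric learning (assumed simplification).
# Query codes come from a learned function (here a linear map W); database
# codes B are free binary variables learned directly, not via the network.
rng = np.random.default_rng(0)
n_q, n_db, d, c = 20, 100, 16, 8               # queries, database size, dim, bits

X_q = rng.standard_normal((n_q, d))
S = np.where(rng.standard_normal((n_q, n_db)) > 0, 1.0, -1.0)  # toy +1/-1 labels

W = rng.standard_normal((d, c)) * 0.1
B = np.where(rng.standard_normal((n_db, c)) > 0, 1.0, -1.0)    # database codes

lr = 1e-3
for _ in range(50):
    U = np.tanh(X_q @ W)                       # relaxed query codes
    R = U @ B.T - c * S                        # asymmetric approximation residual
    # Gradient step on the query-side parameters only.
    W -= lr * (X_q.T @ ((R @ B) * (1 - U ** 2)))
    # Simplified database-code update: sign of the correlation with S
    # (ADSH proper solves this bit by bit in closed form).
    B = np.where(S.T @ U >= 0, 1.0, -1.0)

loss = np.mean((np.tanh(X_q @ W) @ B.T - c * S) ** 2)
```

Only the query side requires backpropagation, which is why training scales to large databases: the expensive network updates touch the (small) query set, while the database codes get cheap discrete updates.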
SADIH: Semantic-Aware DIscrete Hashing
Due to its low storage cost and fast query speed, hashing has been widely
adopted for similarity search in large-scale multimedia retrieval
applications. In particular, supervised hashing has recently received
considerable research attention by leveraging the label information to preserve
the pairwise similarities of data points in the Hamming space. However, there
still remain two crucial bottlenecks: 1) the learning process of the full
pairwise similarity preservation is computationally unaffordable and unscalable
to deal with big data; 2) the available category information of the data is
not well explored to learn discriminative hash functions. To overcome these
challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH)
framework, which aims to directly embed the transformed semantic information
into the asymmetric similarity approximation and discriminative hashing
function learning. Specifically, a semantic-aware latent embedding is
introduced to asymmetrically preserve the full pairwise similarities while
skillfully handling the cumbersome n × n pairwise similarity matrix.
Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the
data structures in the discriminative latent semantic space and perform data
reconstruction. Moreover, an efficient alternating optimization algorithm is
proposed to solve the resulting discrete optimization problem. Extensive
experimental results on multiple large-scale datasets demonstrate that our
SADIH can clearly outperform the state-of-the-art baselines with the additional
benefit of lower computational costs.
Comment: Accepted by The Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI-19).
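The key scalability point — preserving the full pairwise similarities without materializing the n × n matrix — can be illustrated with a short sketch. The factored form S = 2YY^T − 11^T for one-hot labels Y, and all variable names below, are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Sketch: compute the n x c product (c*S) @ V without ever forming the
# n x n similarity matrix S, by exploiting S = 2*Y @ Y.T - 1 for labels Y.
rng = np.random.default_rng(1)
n, k, c = 500, 10, 16                          # points, classes, bits
Y = np.eye(k)[rng.integers(0, k, n)]           # one-hot label matrix (n x k)

V = rng.standard_normal((n, c)) * 0.1          # latent semantic embedding

# c*S @ V = 2c * Y (Y^T V) - c * 1 (1^T V): only n x k and n x c products.
ones = np.ones((n, 1))
SV = 2 * c * Y @ (Y.T @ V) - c * ones @ (ones.T @ V)

# One discrete update of the binary codes B given V (sign of the driving term).
B = np.where(SV >= 0, 1.0, -1.0)
```

The associativity trick reduces the cost of the similarity term from O(n²c) to O(nkc), which is what makes the full pairwise preservation affordable at scale.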
Towards Optimal Discrete Online Hashing with Balanced Similarity
When facing large-scale image datasets, online hashing serves as a promising
solution for online retrieval and prediction tasks. It encodes the online
streaming data into compact binary codes, and simultaneously updates the hash
functions to renew codes of the existing dataset. However, existing methods
update hash functions solely based on the new data batch, without
investigating the correlation between the new data and the existing dataset.
In addition, existing works update the hash functions through a relaxation
process in the corresponding approximated continuous space, and it remains an
open problem to directly apply discrete optimization in online hashing. In
this paper, we propose a novel supervised online hashing method, termed
Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above
problems in a unified framework. BSODH employs a well-designed hashing
algorithm to preserve the similarity between the streaming data and the
existing dataset via an asymmetric graph regularization. We further identify
the "data-imbalance" problem brought by the constructed asymmetric graph, which
restricts the application of discrete optimization in our problem. Therefore, a
novel balanced similarity is further proposed, which uses two equilibrium
factors to balance the weights of similar and dissimilar pairs and eventually
enables the use of discrete optimization. Extensive experiments conducted on three
widely-used benchmarks demonstrate the advantages of the proposed method over
the state-of-the-art methods.
Comment: 8 pages, 11 figures, conference.
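The balanced-similarity idea can be sketched as follows. The choice of equilibrium factors below — scaling so that similar and dissimilar pairs contribute equal total weight — is an illustrative assumption; in BSODH the two factors are tunable hyperparameters.

```python
import numpy as np

# Sketch: streaming-vs-existing similarity matrices are typically dominated
# by dissimilar (-1) pairs. Two equilibrium factors rescale the +1 and -1
# entries so both classes of pairs carry comparable weight in the objective.
rng = np.random.default_rng(2)
S = np.sign(rng.standard_normal((8, 200)) - 1.2)   # imbalanced: mostly -1

n_sim = (S > 0).sum()
n_dis = (S < 0).sum()
# Hypothetical factor choice: equalize the total mass of the two classes.
eta_s = S.size / (2 * n_sim)
eta_d = S.size / (2 * n_dis)
S_bal = np.where(S > 0, eta_s, -eta_d)             # balanced similarity
```

Without this rebalancing, a sign-based discrete update is pulled almost entirely by the dissimilar pairs, which is the "data-imbalance" obstacle the paper identifies for discrete optimization.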
Compact Hash Codes for Efficient Visual Descriptors Retrieval in Large Scale Databases
In this paper we present an efficient method for visual descriptors retrieval
based on compact hash codes computed using a multiple k-means assignment. The
method has been applied to the problem of approximate nearest neighbor (ANN)
search of local and global visual content descriptors, and it has been tested
on different datasets: three large scale public datasets of up to one billion
descriptors (BIGANN) and, supported by recent progress in convolutional neural
networks (CNNs), also on the CIFAR-10 and MNIST datasets. Experimental results
show that, despite its simplicity, the proposed method achieves very high
performance, making it superior to more complex state-of-the-art methods.
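One way to read the multiple k-means assignment is sketched below: each descriptor is assigned to its m nearest centroids, and the indicator vector over the k centroids serves as the compact code. The function name, the random stand-in centroids, and the specific code layout are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Sketch of hashing by multiple k-means assignment (assumed simplification):
# the code is a k-bit indicator with exactly m ones, marking the m centroids
# nearest to the descriptor.
rng = np.random.default_rng(3)
k, m, d = 32, 3, 64
centroids = rng.standard_normal((k, d))        # stand-in for k-means centroids

def hash_code(x, centroids, m):
    """Return a k-bit code with 1s at the m nearest centroids of x."""
    dists = np.linalg.norm(centroids - x, axis=1)
    code = np.zeros(len(centroids), dtype=np.uint8)
    code[np.argsort(dists)[:m]] = 1
    return code

x = rng.standard_normal(d)
code = hash_code(x, centroids, m)
```

Assigning to several centroids instead of one softens quantization error at the cost of a few extra bits, which is consistent with the paper's emphasis on simplicity over complex learned hash functions.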