SADIH: Semantic-Aware DIscrete Hashing
Due to its low storage cost and fast query speed, hashing has been widely
adopted for similarity search in large-scale multimedia retrieval
applications. Supervised hashing, in particular, has recently received
considerable research attention because it leverages label information to
preserve the pairwise similarities of data points in the Hamming space.
However, two crucial bottlenecks remain: 1) learning to preserve the full set
of pairwise similarities is computationally unaffordable and does not scale
to big data; 2) the available category information is not well explored when
learning discriminative hash functions. To overcome these
challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH)
framework, which aims to directly embed the transformed semantic information
into the asymmetric similarity approximation and discriminative hashing
function learning. Specifically, a semantic-aware latent embedding is
introduced to asymmetrically preserve the full pairwise similarities while
avoiding explicit construction of the cumbersome n × n pairwise similarity
matrix.
Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the
data structures in the discriminative latent semantic space and perform data
reconstruction. Moreover, an efficient alternating optimization algorithm is
proposed to solve the resulting discrete optimization problem. Extensive
experimental results on multiple large-scale datasets demonstrate that our
SADIH can clearly outperform the state-of-the-art baselines with the additional
benefit of lower computational costs.
Comment: Accepted by the Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI-19).
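The asymmetric trick described above can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration of the general idea, not the authors' formulation: rather than fitting binary codes B to the full n × n similarity matrix S, the codes are compared against a small real-valued semantic embedding P derived from the c class labels, so only an n × c product is ever formed. All variable names and shapes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, r = 1000, 10, 32             # samples, classes, code length (assumed)
labels = rng.integers(0, c, n)

# One-hot label matrix Y (n x c). The implicit similarity
# S = 2 Y Y^T - 1 (same class -> +1, different -> -1) is never
# materialized as an n x n array.
Y = np.eye(c)[labels]

# Binary codes B (n x r) and a real-valued latent class embedding
# P (c x r); codes are matched asymmetrically against class
# embeddings, costing O(n c r) instead of O(n^2 r).
B = np.sign(rng.standard_normal((n, r)))
P = rng.standard_normal((c, r))

# Per-sample agreement between each code and each class embedding,
# compared with the desired scaled agreement (+r for the true class,
# -r otherwise).
approx = B @ P.T                   # n x c
target = r * (2 * Y - 1)           # n x c
loss = float(np.mean((approx - target) ** 2))
print(approx.shape, loss > 0)
```

In a real learner, B and P would be updated alternately to shrink this loss, mirroring the alternating discrete optimization the abstract mentions; here they are random, so the loss is simply positive.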
SUBIC: A supervised, structured binary code for image search
For large-scale visual search, highly compressed yet meaningful
representations of images are essential. Structured vector quantizers based on
product quantization and its variants are usually employed to achieve such
compression while minimizing the loss of accuracy. Yet, unlike binary hashing
schemes, these unsupervised methods have not yet benefited from the
supervision, end-to-end learning and novel architectures ushered in by the deep
learning revolution. We hence propose herein a novel method to make deep
convolutional neural networks produce supervised, compact, structured binary
codes for visual search. Our method makes use of a novel block-softmax
non-linearity and of batch-based entropy losses that together induce structure
in the learned encodings. We show that our method outperforms state-of-the-art
compact representations based on deep hashing or structured quantization in
single and cross-domain category retrieval, instance retrieval and
classification. We make our code and models publicly available online.
Comment: Accepted at ICCV 2017 (Spotlight).
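The block-softmax idea can be sketched in a few lines of NumPy. This is an assumed illustration of the general mechanism, not the authors' implementation: a feature vector is split into M blocks of K units, a softmax is applied within each block, and at test time each block is binarized to a one-hot indicator, yielding an M*K-bit structured code with exactly M ones. The shapes M=8, K=16 are arbitrary choices for the example.

```python
import numpy as np

def block_softmax(z, M, K):
    """Apply softmax independently within each of M blocks of size K."""
    z = z.reshape(-1, M, K)
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def binarize(p):
    """Test-time code: one-hot argmax per block, flattened to M*K bits."""
    idx = p.argmax(axis=-1)
    code = np.zeros_like(p)
    np.put_along_axis(code, idx[..., None], 1.0, axis=-1)
    return code.reshape(p.shape[0], -1)

rng = np.random.default_rng(1)
M, K = 8, 16                                  # 8 blocks of 16 units -> 128-bit code
feats = rng.standard_normal((4, M * K))       # a batch of 4 feature vectors

probs = block_softmax(feats, M, K)            # (4, 8, 16), each block sums to 1
codes = binarize(probs)                       # (4, 128), exactly 8 ones per row
print(codes.shape, int(codes[0].sum()))
```

The batch-based entropy losses the abstract mentions would act on `probs`: pushing each block's distribution toward a peak (low per-sample entropy) while keeping the average activation across the batch spread out (high mean entropy), so that all code words get used.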