Coupled CycleGAN: Unsupervised Hashing Network for Cross-Modal Retrieval
In recent years, hashing has attracted increasing attention owing to its low
storage cost and high query efficiency in large-scale cross-modal retrieval.
Benefiting from deep learning, increasingly compelling results have been
achieved in the cross-modal retrieval community. However, existing deep
cross-modal hashing methods either rely on large amounts of labeled
information or are unable to learn an accurate correlation between different
modalities. In this paper, we propose Unsupervised coupled Cycle generative
adversarial Hashing networks (UCH) for cross-modal retrieval, where an
outer-cycle network is used to learn powerful common representations and an
inner-cycle network is exploited to generate reliable hash codes.
Specifically, the proposed UCH seamlessly couples these two networks with a
generative adversarial mechanism, so that representations and hash codes can
be optimized simultaneously. Extensive experiments on three popular benchmark
datasets show that the proposed UCH outperforms state-of-the-art unsupervised
cross-modal hashing methods.
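The retrieval side of such hashing methods can be sketched independently of the
networks that learn the representations: once each modality is encoded into a
common real-valued space, the codes are binarized and candidates are ranked by
Hamming distance, which is where the low storage cost and fast queries come
from. A minimal illustration in pure Python (all names, feature values, and
dimensions below are hypothetical, not taken from UCH):

```python
# Sketch of cross-modal hash retrieval with hypothetical data:
# real-valued representations are binarized with a sign threshold, and a
# query from one modality (text) is matched against stored codes from the
# other modality (images) by Hamming distance.

def binarize(vec):
    """Map a real-valued representation to a binary hash code."""
    return tuple(1 if x >= 0 else 0 for x in vec)

def hamming(a, b):
    """Number of differing bits between two hash codes."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical learned common-space representations for two images.
image_reps = {
    "img_cat": [0.9, -0.2, 0.4, -0.7],
    "img_car": [-0.5, 0.8, -0.1, 0.3],
}
text_query_rep = [0.7, -0.4, 0.2, -0.9]  # e.g. the sentence "a cat"

image_codes = {name: binarize(v) for name, v in image_reps.items()}
query_code = binarize(text_query_rep)

# Rank stored images by Hamming distance to the text query's code.
ranked = sorted(image_codes, key=lambda n: hamming(image_codes[n], query_code))
print(ranked[0])  # -> img_cat
```

Because Hamming distance is a bitwise popcount in practice, ranking millions of
stored codes is far cheaper than comparing real-valued feature vectors.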
Deep Lifelong Cross-modal Hashing
Hashing methods have made significant progress in cross-modal retrieval tasks
thanks to their fast query speed and low storage cost. Among them, deep
learning-based hashing achieves better performance on large-scale data due to
its excellent ability to extract and represent nonlinear heterogeneous
features. However, two main challenges remain: catastrophic forgetting when
data with new categories arrive continuously, and the time-consuming
retraining that non-continuous hashing retrieval requires to stay up to date.
To this end, we propose in this paper a novel deep lifelong cross-modal
hashing method that achieves lifelong hashing retrieval instead of repeatedly
re-training the hash function when new data arrive. Specifically, we design a
lifelong learning strategy that updates hash functions by training directly on
the incremental data instead of retraining new hash functions on all the
accumulated data, which significantly reduces training time. We then propose a
lifelong hashing loss that lets the original hash codes participate in
lifelong learning while remaining invariant, and further preserves the
similarity and dissimilarity between original and incremental hash codes to
maintain performance. Additionally, considering the distribution heterogeneity
that arises when new data arrive continuously, we introduce a multi-label
semantic similarity to supervise hash learning, and we show through detailed
analysis that this similarity improves performance. Experimental results on
benchmark datasets show that the proposed method achieves performance
comparable to recent state-of-the-art cross-modal hashing methods, yields
substantial average gains of over 20% in retrieval accuracy, and reduces
training time by over 80% when new data arrive continuously.
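The core idea of a similarity-preserving lifelong objective can be sketched in
a few lines: the original hash codes are frozen, and only the incremental
sample's code is adjusted so that its agreement with each frozen code matches a
multi-label semantic similarity target. The following toy sketch uses a
Jaccard label overlap as the similarity and a squared-error pairwise term;
both choices, and all code values and labels, are hypothetical illustrations
rather than the paper's exact loss:

```python
# Toy sketch of a similarity-preserving lifelong hashing objective.
# Frozen (original) hash codes never change; the incremental sample's
# code is scored against them via a multi-label similarity target.

def label_similarity(labels_a, labels_b):
    """Multi-label semantic similarity: Jaccard overlap in [0, 1]."""
    a, b = set(labels_a), set(labels_b)
    return len(a & b) / len(a | b)

def pairwise_loss(new_code, frozen_code, target, bits):
    """Squared error between normalized code agreement and the target."""
    inner = sum(x * y for x, y in zip(new_code, frozen_code)) / bits
    # Map the inner product from [-1, 1] to [0, 1] before comparing.
    return ((inner + 1) / 2 - target) ** 2

bits = 4
# Frozen original codes with their (hypothetical) label sets.
frozen = {
    "old_cat": ([1, -1, 1, -1], ["cat", "animal"]),
    "old_car": ([-1, 1, -1, 1], ["car", "vehicle"]),
}
# An incremental sample: its code would be trained, the frozen ones not.
new_code, new_labels = [1, -1, 1, -1], ["cat", "pet"]

loss = sum(
    pairwise_loss(new_code, code, label_similarity(new_labels, labels), bits)
    for code, labels in frozen.values()
)
print(round(loss, 4))
```

Because the frozen codes appear only as constants in the loss, gradient
updates touch the new sample's code alone, which is the mechanism that avoids
retraining on all accumulated data while still anchoring new codes to old ones.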