3,603 research outputs found

    Deep Discrete Hashing with Self-supervised Pairwise Labels

    Hashing methods have been widely used for large-scale image retrieval and classification. Non-deep hashing methods built on handcrafted features have been significantly outperformed by deep hashing methods, thanks to their better feature representations and end-to-end learning framework. However, the most striking successes in deep hashing have mostly involved discriminative models, which require labels. In this paper, we propose a novel unsupervised deep hashing method, named Deep Discrete Hashing (DDH), for large-scale image retrieval and classification. The proposed framework addresses two main problems: 1) how to directly learn discrete binary codes, and 2) how to equip the binary representation with the ability to perform accurate image retrieval and classification in an unsupervised way. We resolve these problems by introducing an intermediate variable and a loss function that steers the learning process based on the neighborhood structure of the original space. Experimental results on standard datasets (CIFAR-10, NUS-WIDE, and Oxford-17) demonstrate that DDH significantly outperforms existing hashing methods by a large margin in terms of mAP for image retrieval and object recognition. Code is available at https://github.com/htconquer/ddh
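    As a rough illustration of this recipe, the PyTorch sketch below derives pairwise labels from the neighborhood structure of the input features and couples a similarity-preserving loss with a quantization penalty that drives the relaxed codes toward discrete values. The layer sizes, similarity threshold, and loss weight are our own assumptions, not the paper's configuration.

```python
# A minimal sketch of unsupervised hashing with self-supervised pairwise
# labels; an assumption-laden illustration, not DDH's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashNet(nn.Module):
    """Maps features to n_bits relaxed hash bits in (-1, 1)."""
    def __init__(self, feat_dim: int = 512, n_bits: int = 48):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_bits),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(x))  # relaxed binary codes

def pairwise_labels(features: torch.Tensor, threshold: float = 0.8) -> torch.Tensor:
    """Self-supervised pairwise labels S in {0, 1}: two samples count as
    neighbors if their cosine similarity exceeds the threshold."""
    f = F.normalize(features, dim=1)
    return (f @ f.t() > threshold).float()

def hashing_loss(codes: torch.Tensor, S: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Neighborhood-preserving loss plus a quantization penalty."""
    n_bits = codes.size(1)
    sim = (codes @ codes.t() / n_bits + 1) / 2   # rescale to [0, 1]
    quant = (codes.abs() - 1).pow(2).mean()      # push bits toward +/-1
    return F.mse_loss(sim, S) + lam * quant

# Toy usage: random vectors stand in for deep image features.
feats = torch.randn(32, 512)
net = HashNet()
loss = hashing_loss(net(feats), pairwise_labels(feats))
loss.backward()
binary_codes = torch.sign(net(feats).detach())   # final discrete codes
```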

    A Probabilistic Code Balance Constraint with Compactness and Informativeness Enhancement for Deep Supervised Hashing

    Building on deep representation learning, deep supervised hashing has achieved promising performance in tasks such as similarity retrieval. However, the conventional code balance constraints (i.e., bit balance and bit uncorrelation) imposed to avoid overfitting and improve hash code quality are unsuitable for deep supervised hashing, because they make it inefficient and impractical to learn deep data representations and hash functions simultaneously. To address this issue, we propose probabilistic code balance constraints for deep supervised hashing that force each hash code to conform to a discrete uniform distribution. Accordingly, a Wasserstein regularizer aligns the distribution of generated hash codes with a uniform distribution. Theoretical analyses reveal that the proposed constraints form a general deep hashing framework that enforces both bit balance and bit uncorrelation while maximizing the mutual information between the input data and their hash codes. Extensive empirical analyses on two benchmark datasets further demonstrate that the enhanced compactness and informativeness of the hash codes improve retrieval performance (code available at: https://github.com/mumuxi/dshwr).
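    A useful fact for sketching such a regularizer: in one dimension, the Wasserstein-1 distance between two equal-size empirical samples is the mean absolute difference of their sorted values, so each relaxed bit can be aligned with a perfectly balanced half -1 / half +1 target over a batch. The PyTorch snippet below is a generic reconstruction built on that fact; it is not the paper's exact regularizer (see the linked repository for the reference implementation).

```python
# A hedged sketch of a Wasserstein-style bit-balance regularizer.
import torch

def wasserstein_balance_reg(codes: torch.Tensor) -> torch.Tensor:
    """codes: (batch, n_bits) relaxed bits in (-1, 1).
    Returns the average per-bit W1 distance to a balanced binary target."""
    batch, _ = codes.shape
    # Target sample: a perfectly balanced batch of -1s and +1s,
    # already in ascending order to match the sorted bit columns.
    target = torch.cat([-torch.ones(batch // 2), torch.ones(batch - batch // 2)])
    sorted_bits, _ = torch.sort(codes, dim=0)          # sort each bit column
    return (sorted_bits - target.unsqueeze(1)).abs().mean()

# Usage: add the regularizer to the supervised hashing loss.
relaxed = torch.tanh(torch.randn(64, 32, requires_grad=True))
reg = wasserstein_balance_reg(relaxed)
reg.backward()
```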

    Joint Image-Text Hashing for Fast Large-Scale Cross-Media Retrieval Using Self-Supervised Deep Learning

    Recent years have witnessed the promise of hashing in industrial applications that require fast similarity retrieval. In this paper, we propose a novel supervised hashing method for large-scale cross-media search, termed Self-Supervised Deep Multimodal Hashing (SSDMH), which learns unified hash codes as well as deep hash functions for different modalities in a self-supervised manner. With the proposed regularized binary latent model, unified binary codes can be solved directly without a relaxation strategy, while the neighborhood structure is retained through a graph regularization term. Moreover, we propose a new discrete optimization solution, termed Binary Gradient Descent, which improves optimization efficiency toward real-time operation. Extensive experiments on three benchmark datasets demonstrate the superiority of SSDMH over state-of-the-art cross-media hashing approaches.
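    The abstract does not detail Binary Gradient Descent, but the general idea of gradient-guided bit flipping can be sketched as follows: take the gradient of an objective at the current binary codes and greedily flip the bits whose gradient sign says they are wrong. The quadratic objective, the one-flip-per-step rule, and all sizes below are illustrative assumptions, not SSDMH's regularized binary latent model.

```python
# A hedged numpy illustration of gradient-guided bit flipping over
# binary codes, avoiding any continuous relaxation of B.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 16
A = rng.random((n, n))
S = np.where((A + A.T) / 2 > 0.5, 1.0, -1.0)        # symmetric target similarities
B = np.where(rng.standard_normal((n, k)) >= 0, 1.0, -1.0)  # codes in {-1, +1}

def objective(B):
    """|| B B^T / k - S ||_F^2: codes should reproduce pairwise similarity."""
    return np.linalg.norm(B @ B.T / k - S) ** 2

for _ in range(200):
    E = B @ B.T / k - S
    grad = (4.0 / k) * (E @ B)                 # gradient of the objective at B
    desired = np.where(grad < 0, 1.0, -1.0)    # sign each bit "wants" to take
    scores = np.abs(grad) * (desired != B)     # consider only disagreeing bits
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    before = objective(B)
    B[i, j] = -B[i, j]                         # flip the most promising bit
    if objective(B) >= before:                 # no improvement: revert, stop
        B[i, j] = -B[i, j]
        break
```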