
    Ranking-based Deep Cross-modal Hashing

    Cross-modal hashing has been receiving increasing interest for its low storage cost and fast query speed in multi-modal data retrieval. However, most existing hashing methods are based on hand-crafted or raw-level features of objects, which may not be optimally compatible with the coding process. Moreover, these hashing methods are mainly designed to handle simple pairwise similarity; the complex multilevel ranking semantic structure of instances associated with multiple labels has not yet been well explored. In this paper, we propose a ranking-based deep cross-modal hashing approach (RDCMH). RDCMH first uses the feature and label information of the data to derive a semi-supervised semantic ranking list. Then, to expand the semantic representation power beyond hand-crafted features, RDCMH integrates the semantic ranking information into deep cross-modal hashing and jointly optimizes the compatible parameters of the deep feature representations and the hashing functions. Experiments on real multi-modal datasets show that RDCMH outperforms competitive baselines and achieves state-of-the-art performance in cross-modal retrieval applications.
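    The abstract does not spell out how the semi-supervised semantic ranking list is built. The sketch below shows one plausible construction under stated assumptions: semantic similarity from multi-label Jaccard overlap where both items are labeled, a fall-back to feature-space cosine similarity otherwise, and a per-item ranking list obtained by sorting the blended similarities. The blend weight alpha, the Jaccard/cosine choices, and the function name are illustrative, not the paper's formulation.

```python
import numpy as np

def semantic_ranking_lists(features, labels, alpha=0.5):
    """Hypothetical sketch of a semi-supervised semantic ranking list
    in the spirit of RDCMH; not the paper's exact construction."""
    # Cosine similarity over feature vectors (rows of `features`).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    feat_sim = f @ f.T
    # Jaccard similarity over multi-label annotations (0/1 rows of `labels`).
    inter = labels @ labels.T
    union = labels.sum(1)[:, None] + labels.sum(1)[None, :] - inter
    label_sim = inter / np.maximum(union, 1)
    # Semi-supervised blend: use label semantics only where both items
    # carry at least one label; otherwise fall back to features alone.
    labeled = labels.sum(1) > 0
    both = labeled[:, None] & labeled[None, :]
    sim = np.where(both, alpha * label_sim + (1 - alpha) * feat_sim, feat_sim)
    # Each item's ranking list: all other items sorted by similarity
    # (column 0 is the item itself, so it is dropped).
    return np.argsort(-sim, axis=1)[:, 1:]
```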

    Cross-modal Hashing with Semantic Deep Embedding

    Cross-modal hashing has demonstrated advantages in fast retrieval tasks. It improves the quality of hash coding by exploiting semantic correlation across different modalities. In supervised cross-modal hashing, the learning of the hash function relies on the quality of the extracted features, for which deep learning models have been adopted to replace traditional models based on handcrafted features. These deep methods, however, have not sufficiently explored the semantic correlation between modalities in the hashing process. In this paper, we introduce a novel end-to-end deep cross-modal hashing framework which integrates feature and hash-code learning into the same network. We take both inter-modality and intra-modality data correlation into consideration, and propose a novel network structure and a loss function with dual semantic supervision for hash learning. This method ensures that the generated binary codes preserve the semantic relationships of the original data points. Cross-modal retrieval experiments on commonly used benchmark datasets show that our method yields substantial performance improvements over several state-of-the-art hashing methods.
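    The abstract leaves the loss unspecified; the sketch below shows one standard way to realize dual semantic supervision over both inter-modality and intra-modality correlation: a pairwise negative log-likelihood on inner products of relaxed (real-valued) codes, applied to image-text, image-image, and text-text pairs, plus a quantization penalty pulling codes toward binary values. The likelihood form, the weight eta, and all names are assumptions for illustration, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def dual_semantic_loss(img_codes, txt_codes, labels, eta=0.1):
    """Hypothetical dual-supervision loss sketch; not the paper's exact loss."""
    # Ground-truth semantic similarity: 1 if two items share any label.
    sim = (labels.float() @ labels.float().t() > 0).float()

    def pairwise_nll(a, b):
        theta = 0.5 * (a @ b.t())  # scaled inner products of relaxed codes
        # Pairwise negative log-likelihood: log(1 + e^theta) - s * theta.
        return (F.softplus(theta) - sim * theta).mean()

    inter = pairwise_nll(img_codes, txt_codes)    # between modalities
    intra = (pairwise_nll(img_codes, img_codes)
             + pairwise_nll(txt_codes, txt_codes))  # within modalities
    # Quantization penalty pushing relaxed codes toward +/-1.
    quant = (((img_codes.abs() - 1) ** 2).mean()
             + ((txt_codes.abs() - 1) ** 2).mean())
    return inter + intra + eta * quant
```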

    MESH: A Flexible Manifold-Embedded Semantic Hashing for Cross-Modal Retrieval

    Hashing-based methods for cross-modal retrieval have been widely explored in recent years. However, most of them focus mainly on preserving neighborhood relationships and label consistency, while ignoring the proximity of neighbors and the proximity of classes, which degrades the discrimination of the hash codes. Moreover, most of them learn hash codes and hashing functions simultaneously, which limits the flexibility of the algorithms. To address these issues, in this article we propose a two-step cross-modal retrieval method named Manifold-Embedded Semantic Hashing (MESH). It exploits Locally Linear Embedding to model neighborhood proximity and uses class semantic embeddings to account for the proximity of classes. In doing so, MESH can not only extract the manifold structure in different modalities but also embed class semantic information into the hash codes to further improve their discrimination. Moreover, the two-step scheme makes MESH flexible with respect to the choice of hashing function. Extensive experimental results on three datasets show that MESH is superior to 10 state-of-the-art cross-modal hashing methods. MESH also demonstrates superiority on deep features compared with a deep cross-modal hashing method.
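    A minimal sketch of the two-step idea, assuming scikit-learn's LocallyLinearEmbedding for the manifold step and a ridge regressor as a plug-in hashing function; MESH's class semantic embeddings and actual optimization are omitted. The point it illustrates is only why learning codes first leaves the hashing function interchangeable.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.linear_model import Ridge

def two_step_hashing(features, n_bits=16, n_neighbors=30):
    """Hypothetical two-step hashing sketch; not MESH's actual algorithm."""
    # Step 1: capture manifold structure with LLE, then binarize each
    # embedding dimension around its median to get balanced bits.
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_bits)
    emb = lle.fit_transform(features)
    codes = np.where(emb > np.median(emb, axis=0), 1, -1)
    # Step 2: fit an out-of-sample hashing function on the fixed codes.
    # Any regressor or classifier could be swapped in here, which is
    # the flexibility the two-step scheme provides.
    hasher = Ridge(alpha=1.0).fit(features, codes)
    def hash_fn(x):
        return np.where(hasher.predict(x) > 0, 1, -1)
    return codes, hash_fn
```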