
    Semantics-Reconstructing Hashing for Cross-Modal Retrieval

    Cross-modal retrieval has attracted extensive attention because it enables fast searching across heterogeneous data sources such as texts, images and videos. As one of the typical techniques for cross-modal search, hashing projects high-dimensional features into short binary hash codes, markedly improving storage and retrieval efficiency. Supervised hashing methods have recently been widely studied and achieve promising performance, but several problems remain. Conventionally, hash codes and projection functions are learned by preserving pairwise similarities between data items, which neglects the discriminative class information associated with each item. Most existing methods that do utilise class labels learn the binary codes under a classification framework, so the relations between binary codes and labels are not well modelled. To tackle these problems, we propose a shallow supervised hash learning method, Semantics-reconstructing Cross-modal Hashing (SCH), which jointly reconstructs a semantic representation and learns the hash codes for the entire dataset. For the semantic reconstruction, the learned semantic representation is projected back into the label space, extracting more semantic information. By leveraging the reconstructed semantic representation, the hash codes are learned with the underlying correlations among labels, hash codes and original features taken into account, yielding a further performance improvement. Moreover, SCH learns the hash codes and hash functions simultaneously without relaxing the binary constraints, thereby reducing quantization error. In addition, the linear computational complexity of its training makes it practical for large-scale data. Extensive experiments show that the proposed SCH outperforms state-of-the-art baselines.
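    Since the abstract only outlines the method, the following is a minimal NumPy sketch of the general idea it describes, not the authors' actual formulation: a semantic representation is learned from the labels and simultaneously required to reconstruct them, binary codes are taken from it without continuous relaxation, and per-modality hash functions map features to the shared codes. The variable names, the least-squares objective and the ridge-regression step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, r = 500, 10, 32                          # samples, classes, code length

L = (rng.random((n, c)) < 0.2).astype(float)   # toy multi-label matrix
X_img = rng.standard_normal((n, 128))          # toy image features
X_txt = rng.standard_normal((n, 64))           # toy text features

# Semantic representation S lives in an r-dimensional space.
S = L @ (rng.standard_normal((c, r)) * 0.1)

# Alternating least squares for
#     min_{S,U,P}  ||S - L U||^2 + ||L - S P||^2
# i.e. S is driven by the labels AND must be projectable back onto them
# (a hypothetical stand-in for the paper's semantic-reconstruction term).
for _ in range(20):
    U = np.linalg.lstsq(L, S, rcond=None)[0]   # embed:       L U ≈ S
    P = np.linalg.lstsq(S, L, rcond=None)[0]   # reconstruct: S P ≈ L
    # Closed-form update of S given U and P:  S (I + P Pᵀ) = L U + L Pᵀ.
    S = (L @ U + L @ P.T) @ np.linalg.inv(np.eye(r) + P @ P.T)

# Discrete codes taken directly, with no continuous relaxation step.
B = np.sign(S)
B[B == 0] = 1

# Per-modality hash functions: ridge regressions onto the shared codes.
def fit_hash_fn(X, B, lam=1e-2):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ B)

W_img, W_txt = fit_hash_fn(X_img, B), fit_hash_fn(X_txt, B)
query_codes = np.sign(X_txt[:5] @ W_txt)       # codes for 5 text queries
```

    Every step above costs time linear in the number of samples n (the matrices inverted are only r x r or d x d), which is consistent with the linear training complexity the abstract claims, though the paper's actual optimization may differ.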