Supervised deep semantics-preserving hashing for real-time pulmonary nodule image retrieval
Hashing-based medical image retrieval has drawn extensive attention recently, as it aims to provide effective diagnostic assistance to medical personnel. In this paper, a novel deep hashing framework is proposed for medical image retrieval, in which deep feature extraction, binary code learning, and deep hash function learning are carried out jointly in a supervised fashion. In particular, the discretely constrained objective function for hash code learning is optimized iteratively, so that the binary codes can be solved directly with no need for relaxation. Meanwhile, semantic similarity is maintained by fully exploiting supervision information during the discrete optimization, and the neighborhood structure of the training data is preserved by a graph regularization term. Additionally, to obtain a fine-grained ranking of returned medical images that share the same Hamming distance, a novel image re-ranking scheme is proposed that refines the similarity measurement by jointly considering the Euclidean distance between real-valued feature descriptors and the category information of those images. Extensive experiments on a pulmonary nodule image dataset demonstrate that the proposed method achieves better retrieval performance than state-of-the-art methods.
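The re-ranking idea in this abstract can be sketched in a few lines: rank candidates by Hamming distance on binary codes first, then break ties among images sharing the same Hamming distance using Euclidean distance on their real-valued features. This is a minimal illustrative sketch, not the paper's actual method; all names and data are hypothetical, and the category-information term is omitted for brevity.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary codes (lists of 0/1)."""
    return sum(x != y for x, y in zip(a, b))

def euclidean(u, v):
    """Euclidean distance between two real-valued feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def retrieve(query_code, query_feat, db):
    """db: list of (binary_code, real_valued_feature, label).
    Primary sort key is Hamming distance on codes; ties are refined by
    Euclidean distance on the real-valued feature descriptors."""
    return sorted(
        db,
        key=lambda item: (hamming(query_code, item[0]),
                          euclidean(query_feat, item[1])),
    )

# Toy database: two items share the query's Hamming distance of 0,
# so the Euclidean term decides their relative order.
db = [
    ([1, 0, 1, 1], [0.9, 0.1], "nodule"),
    ([1, 0, 0, 1], [0.2, 0.8], "normal"),
    ([1, 0, 1, 1], [0.6, 0.4], "nodule"),
]
ranked = retrieve([1, 0, 1, 1], [0.85, 0.2], db)
```

Because Python's `sorted` compares tuples lexicographically, the Euclidean term only matters when Hamming distances are equal, which is exactly the tie-breaking behavior the abstract describes.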
Ranking-based Deep Cross-modal Hashing
Cross-modal hashing has been receiving increasing interest for its low
storage cost and fast query speed in multi-modal data retrieval. However, most
existing hashing methods are based on hand-crafted or raw level features of
objects, which may not be optimally compatible with the coding process.
Besides, these hashing methods are mainly designed to handle simple pairwise
similarity. The complex multilevel ranking semantic structure of instances
associated with multiple labels has not been well explored yet. In this paper,
we propose a ranking-based deep cross-modal hashing approach (RDCMH). RDCMH
firstly uses the feature and label information of data to derive a
semi-supervised semantic ranking list. Next, to expand the semantic
representation power of hand-crafted features, RDCMH integrates the semantic
ranking information into deep cross-modal hashing and jointly optimizes the
compatible parameters of deep feature representations and of hashing functions.
Experiments on real multi-modal datasets show that RDCMH outperforms other
competitive baselines and achieves the state-of-the-art performance in
cross-modal retrieval applications.
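The "multilevel ranking semantic structure" for multi-label data can be illustrated simply: instead of a binary similar/dissimilar pairwise label, items are ranked by how many labels they share with the query, giving a graded ranking list. The sketch below shows only this label-overlap idea, not RDCMH's semi-supervised derivation; all names and data are made up for illustration.

```python
def semantic_relevance(query_labels, item_labels):
    """Relevance level = number of shared labels; more overlap ranks higher."""
    return len(set(query_labels) & set(item_labels))

def ranking_list(query_labels, database):
    """database: list of (item_id, labels). Returns item ids ordered by
    descending label overlap with the query, i.e. a multilevel ranking
    rather than a flat similar/dissimilar split."""
    return [item_id for item_id, labels in
            sorted(database,
                   key=lambda it: -semantic_relevance(query_labels, it[1]))]

db = [("img1", ["cat", "grass"]),
      ("img2", ["cat", "grass", "sky"]),
      ("img3", ["car"])]
order = ranking_list(["cat", "grass", "sky"], db)
```

Here `img2` (three shared labels) ranks above `img1` (two) and `img3` (none), a structure that simple pairwise similarity cannot express.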
Deep Discrete Hashing with Self-supervised Pairwise Labels
Hashing methods have been widely used for applications of large-scale image
retrieval and classification. Non-deep hashing methods using handcrafted
features have been significantly outperformed by deep hashing methods due to
their better feature representation and end-to-end learning framework. However,
the most striking successes in deep hashing have mostly involved discriminative
models, which require labels. In this paper, we propose a novel unsupervised
deep hashing method, named Deep Discrete Hashing (DDH), for large-scale image
retrieval and classification. In the proposed framework, we address two main
problems: 1) how to directly learn discrete binary codes? 2) how to equip the
binary representation with the ability of accurate image retrieval and
classification in an unsupervised way? We resolve these problems by introducing
an intermediate variable and a loss function steering the learning process,
which is based on the neighborhood structure in the original space.
Experimental results on standard datasets (CIFAR-10, NUS-WIDE, and Oxford-17)
demonstrate that our DDH significantly outperforms existing hashing methods by
a large margin in terms of mAP for image retrieval and object recognition. Code
is available at https://github.com/htconquer/ddh.
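The self-supervised pairwise-label idea above can be sketched in miniature: positive pairs are derived from the neighborhood structure of the original feature space (no human labels needed), and discrete codes come from thresholding each feature dimension into a bit. This is a toy stand-in for DDH's learned intermediate variable and loss, under assumed names and data, not the actual implementation.

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def pairwise_labels(feats, k=1):
    """Self-supervised labels: pair (i, j) is positive (1) if j is among
    the k nearest neighbors of i in the original space, else negative (0)."""
    labels = {}
    for i, fi in enumerate(feats):
        dists = sorted((euclidean(fi, fj), j)
                       for j, fj in enumerate(feats) if j != i)
        neighbors = {j for _, j in dists[:k]}
        for j in range(len(feats)):
            if j != i:
                labels[(i, j)] = 1 if j in neighbors else 0
    return labels

def binarize(feat, thresholds):
    """Threshold each dimension into a bit (a crude discrete code)."""
    return [1 if x > t else 0 for x, t in zip(feat, thresholds)]

feats = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8]]
labels = pairwise_labels(feats, k=1)
codes = [binarize(f, [0.5, 0.5]) for f in feats]
```

In this toy example the two nearby points get the same code and a positive pairwise label, while the distant point gets a different code, mirroring the goal of codes that preserve the original neighborhood structure.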