3 research outputs found

    Large scale near-duplicate image retrieval using Triples of Adjacent Ranked Features (TARF) with embedded geometric information

    Most approaches to large-scale image retrieval are based on the construction of an inverted index of local image descriptors or visual words. A search in such an index usually results in a large number of candidates. This list of candidates is then re-ranked with the help of geometric verification, for example using a RANSAC algorithm. In this paper we propose a feature representation built as a combination of three local descriptors. It allows one to significantly decrease the number of false matches and to shorten the list of candidates after the initial search in the inverted index. This combination of local descriptors is both reproducible and highly discriminative, and thus can be efficiently used for large-scale near-duplicate image retrieval.
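    The sketch below illustrates only the generic candidate re-ranking step the abstract refers to (descriptor matching followed by RANSAC geometric verification), using OpenCV; it is not the TARF triple construction itself, and the function name and parameters are illustrative assumptions.

```python
# Minimal sketch: re-rank retrieval candidates by RANSAC inlier count.
# Illustrates the generic geometric-verification step, not TARF itself.
import cv2
import numpy as np

def ransac_inliers(img_query, img_candidate):
    """Return the number of geometrically consistent matches between two images."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img_query, None)
    kp2, des2 = orb.detectAndCompute(img_candidate, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:                      # a homography needs at least 4 point pairs
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(mask.sum()) if mask is not None else 0

# Candidates returned by the inverted-index search would then be re-ranked
# by inlier count, e.g.:
# reranked = sorted(candidates, key=lambda c: ransac_inliers(query_img, c), reverse=True)
```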

    Cluster-wise Unsupervised Hashing for Cross-Modal Similarity Search

    Large-scale cross-modal hashing for similarity retrieval has attracted increasing attention in modern search applications such as search engines and autonomous driving, showing great advantages in computation and storage. However, current unsupervised cross-modal hashing methods still have some limitations: (1) many methods relax the discrete constraints to solve the optimization objective, which may significantly degrade retrieval performance; (2) most existing hashing models project heterogeneous data into a common latent space, which can lose sight of the diversity in heterogeneous data; (3) transforming real-valued data points into binary codes often results in a substantial loss of information, producing a suboptimal continuous latent space. To overcome the above problems, this paper proposes a novel Cluster-wise Unsupervised Hashing (CUH) method. Specifically, CUH jointly performs multi-view clustering that projects the original data points from different modalities into their own low-dimensional latent semantic spaces and finds the cluster centroids and common clustering indicators in those spaces, and learns compact hash codes together with the corresponding linear hash functions. A discrete optimization framework is developed to learn the unified binary codes across modalities under the guidance of cluster-wise code prototypes. The reasonableness and effectiveness of CUH are well demonstrated by comprehensive experiments on diverse benchmark datasets. Comment: 13 pages, 26 figures
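    The following toy sketch shows only the general pattern the abstract describes: cluster each modality in its own low-dimensional space, then fit a linear hash function whose sign gives binary codes. It is not the CUH objective; the PCA step, the random cluster prototypes, and the least-squares fit are all illustrative assumptions.

```python
# Toy sketch of per-modality clustering followed by a linear hash function.
# NOT the CUH optimization; prototypes and the fit below are stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def toy_modality_codes(X, n_clusters=8, n_bits=32, seed=0):
    """X: (n_samples, dim) features of one modality -> (n_samples, n_bits) codes in {-1, +1}."""
    # Project this modality into its own low-dimensional latent space.
    Z = PCA(n_components=min(n_bits, X.shape[1])).fit_transform(X)
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(Z)
    # Cluster-wise "code prototypes": here simply random target codes per cluster.
    rng = np.random.default_rng(seed)
    prototypes = np.sign(rng.standard_normal((n_clusters, n_bits)))
    targets = prototypes[km.labels_]          # each point inherits its cluster's code
    # Linear hash function W fitted by least squares; codes = sign(Z @ W).
    W, *_ = np.linalg.lstsq(Z, targets, rcond=None)
    return np.sign(Z @ W), W
```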

    Asymmetric Correlation Quantization Hashing for Cross-modal Retrieval

    Due to their advantages in similarity computation and storage for large-scale multi-modal data, cross-modal hashing (CMH) methods have attracted extensive attention for similarity retrieval across heterogeneous modalities. However, some limitations remain to be addressed: (1) most current CMH methods transform real-valued data points into discrete compact binary codes under binary constraints, which limits the representation capability for the original data because of the substantial loss of information and produces suboptimal hash codes; (2) the discrete binary-constrained learning model is hard to solve, and relaxing the binary constraints may greatly reduce retrieval performance through large quantization error; (3) handling the CMH learning problem in a symmetric framework leads to a difficult and complex optimization objective. To address the above challenges, this paper proposes a novel Asymmetric Correlation Quantization Hashing (ACQH) method. Specifically, ACQH learns projection matrices for the heterogeneous modalities that transform a query into a low-dimensional real-valued vector in a latent semantic space, and constructs a stacked compositional quantization embedding in a coarse-to-fine manner that represents database points by a series of learnt real-valued codewords from the codebooks, with the help of pointwise label-information regression. Moreover, the unified hash codes across modalities can be obtained directly by the discrete iterative optimization framework devised in the paper. Comprehensive experiments on three diverse benchmark datasets demonstrate the effectiveness and rationality of ACQH. Comment: 12 pages
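    The sketch below illustrates only the asymmetric query/database scheme the abstract outlines: queries stay real-valued, database points are approximated by a sum of codewords chosen greedily from small codebooks, and similarity is the inner product between the two. Codebook learning and the label-regression term of ACQH are omitted; all names here are illustrative.

```python
# Minimal sketch of asymmetric scoring with compositional quantization.
# Codebook learning is omitted; this is not the ACQH training procedure.
import numpy as np

def encode_database_point(x, codebooks):
    """Greedy coarse-to-fine assignment: pick one codeword index per codebook."""
    residual, codes = x.copy(), []
    for C in codebooks:                       # C: (n_codewords, dim)
        # argmin ||residual - c||^2  ==  argmax (residual . c - 0.5 * ||c||^2)
        idx = int(np.argmax(residual @ C.T - 0.5 * (C * C).sum(axis=1)))
        codes.append(idx)
        residual -= C[idx]                    # each stage explains more of the point
    return codes

def asymmetric_score(query_vec, codes, codebooks):
    """Inner product between a real-valued query and the quantized database point."""
    approx = sum(C[i] for C, i in zip(codebooks, codes))
    return float(query_vec @ approx)
```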