
    Latent Structure Preserving Hashing

    For efficient similarity search, hash functions embed high-dimensional feature descriptors into low-dimensional binary codes such that similar descriptors map to binary codes with small Hamming distance. It is critical for a hashing algorithm to maintain the intrinsic structure and preserve the original information of the data. In this paper, we propose a novel hashing algorithm, Latent Structure Preserving Hashing (LSPH), which finds a well-structured low-dimensional representation of the original high-dimensional data through an objective function based on Nonnegative Matrix Factorization (NMF), with the Kullback-Leibler divergence of the data distribution as the regularization term. By exploiting the joint probability distribution of the data, LSPH automatically learns the latent information and preserves the structure of the high-dimensional data. To achieve robust performance on complex, nonlinear data, we further contribute a more general multi-layer LSPH (ML-LSPH) framework, in which hierarchical representations are learned by a multiplicative up-propagation algorithm. Once the latent representations are obtained, the hash functions are acquired through multi-variable logistic regression. Experimental results on three large-scale retrieval datasets, i.e., SIFT 1M, GIST 1M and 500K TinyImage, show that ML-LSPH outperforms the single-layer LSPH and that both outperform existing hashing techniques on large-scale data.
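    The pipeline above has two stages: learn a nonnegative latent representation, then turn it into out-of-sample hash functions with per-bit logistic regression. The sketch below imitates that flow under simplifying assumptions: scikit-learn's plain NMF stands in for the paper's KL-regularized objective (the joint-distribution regularizer is omitted), and median thresholding of the latent factors supplies the binary targets for the regressors; the names lsph_sketch, encode and n_bits are illustrative, not from the paper.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.linear_model import LogisticRegression

        def lsph_sketch(X, n_bits=32, seed=0):
            # X: nonnegative descriptors (e.g. SIFT/GIST), shape (n_samples, n_dims).
            # Stage 1: nonnegative latent factors, one column per hash bit.
            # NOTE: plain NMF here; LSPH additionally regularizes with the
            # KL divergence of the data distribution, which sklearn does not do.
            nmf = NMF(n_components=n_bits, init="nndsvda",
                      random_state=seed, max_iter=500)
            V = nmf.fit_transform(X)                    # (n_samples, n_bits)

            # Stage 2: binarize each latent dimension at its median, then fit
            # one logistic regressor per bit so unseen queries can be hashed.
            B = (V > np.median(V, axis=0)).astype(int)  # (n_samples, n_bits)
            return [LogisticRegression(max_iter=1000).fit(X, B[:, k])
                    for k in range(n_bits)]

        def encode(hashers, Q):
            # Stack per-bit predictions into binary codes for query matrix Q.
            return np.column_stack([h.predict(Q) for h in hashers]).astype(np.uint8)

    Codes produced this way can then be compared by Hamming distance (e.g. np.count_nonzero(a != b)), which is the retrieval setting the abstract targets.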

    Graph Regularized Non-negative Matrix Factorization By Maximizing Correntropy

    Non-negative matrix factorization (NMF) has proved effective in many clustering and classification tasks. The classic ways to measure the error between the original and the reconstructed matrix are the l_2 distance and the Kullback-Leibler (KL) divergence. However, these error measures do not handle nonlinear cases properly, so alternative measures based on nonlinear kernels, such as correntropy, have been proposed. The current correntropy-based NMF, however, targets only low-level features without considering the intrinsic geometric distribution of the data. In this paper, we propose a new NMF algorithm that preserves local invariance by adding graph regularization to max-correntropy matrix factorization. Meanwhile, each feature can learn a corresponding kernel from the data. Experimental results on Caltech101 and Caltech256 show the benefit of this combination over other NMF algorithms for unsupervised image clustering.
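    For reference, the two ingredients named above are commonly combined in an objective of roughly the following form; this is a hedged reconstruction from the abstract, and the paper's exact formulation, weights and constraints may differ. With data columns x_n, nonnegative factors U and V (latent codes v_n as columns of V), kernel width \sigma, and a k-NN affinity matrix W on the data:

        \max_{U \ge 0,\; V \ge 0} \;
            \sum_{n=1}^{N} \exp\!\left( -\frac{\lVert x_n - U v_n \rVert^2}{2\sigma^2} \right)
            \;-\; \frac{\lambda}{2} \sum_{n,m} W_{nm} \, \lVert v_n - v_m \rVert^2

    The first term is the correntropy of the reconstruction residuals, which automatically downweights badly reconstructed (outlier) samples; the second equals \lambda \operatorname{Tr}(V L V^\top) with graph Laplacian L = D - W, and enforces the local invariance the abstract refers to.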

    Robust Nonnegative Matrix Factorization with Discriminability for Image Representation

    Due to its psychologically and physiologically plausible interpretation of naturally occurring data, Nonnegative Matrix Factorization (NMF) has attracted considerable attention for learning effective image representations, and its graph-regularized extensions have shown promising results by exploiting the low-dimensional manifold structure of data. Their performance can nevertheless be improved further, since they still suffer from several important problems: sensitivity to noise in the data, the trivial-solution problem, and ignoring discriminative information. In this paper, we propose a novel method, Robust Nonnegative Matrix Factorization with Discriminability (RNMFD), for image representation, which copes with all of these problems simultaneously by imposing a sparse noise matrix for data reconstruction and approximately orthogonal constraints. Extensive experiments on five benchmark image datasets demonstrate the superiority of RNMFD over several state-of-the-art methods.
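    The two devices the abstract names, a sparse noise matrix and approximately orthogonal constraints, can be written as an objective of roughly the following form; this is an illustrative reconstruction, not the paper's exact formulation. With data X \approx UV, sparse noise S, and trade-off weights \alpha, \beta:

        \min_{U \ge 0,\; V \ge 0,\; S} \;
            \lVert X - UV - S \rVert_F^2
            \;+\; \alpha \lVert S \rVert_1
            \;+\; \beta \lVert V V^\top - I \rVert_F^2

    The \ell_1 penalty lets S absorb sparse corruptions so that UV fits the clean part of X (robustness to noise), while near-orthogonality of the factor rows discourages the trivial solution and pushes the representation toward discriminative, non-redundant components.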