24 research outputs found

    A relaxation method for binary orthogonal optimization problems with its applications

    This paper focuses on a class of binary orthogonal optimization problems that frequently arise in semantic hashing. Since this class of problems may have an empty feasible set, and hence may not be well-defined, we introduce an equivalent model involving a restricted Stiefel manifold and a matrix box set, and then investigate its penalty problems induced by the $\ell_1$-distance from the box set and by its Moreau envelope. The two penalty problems are always well-defined; moreover, they serve as global exact penalties provided that the original model is well-defined. Notably, the penalty problem induced by the Moreau envelope is a smooth optimization over an embedded submanifold with a favorable structure. We develop a retraction-based nonmonotone line-search Riemannian gradient method to solve this penalty problem and obtain a desirable solution to the original binary orthogonal problems. Finally, the proposed method is applied to supervised and unsupervised hashing tasks and compared with several popular methods on the MNIST and CIFAR-10 datasets. The numerical comparisons show that our algorithm is significantly superior to other solvers in terms of feasibility violation, and comparable or even superior in terms of evaluation metrics related to the Hamming distance.
    Comment: Binary orthogonal optimization problems, global exact penalty, relaxation methods, semantic hashing
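    As a rough illustration of the retraction-based Riemannian gradient iteration described above, here is a minimal sketch (not the authors' method): plain fixed-step gradient descent with a QR retraction on the Stiefel manifold, applied to a generic smooth objective standing in for the Moreau-envelope penalty. The nonmonotone line search, the exact penalty construction, and the restricted manifold structure are omitted, and all names (grad_f, B, codes) are illustrative assumptions.

```python
import numpy as np

def project_tangent(X, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold {X : X^T X = I} at X."""
    XtG = X.T @ G
    return G - X @ (0.5 * (XtG + XtG.T))

def qr_retraction(X, V):
    """Map the tangent step X + V back onto the Stiefel manifold via QR."""
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.diag(R))  # sign fix keeps the retraction well defined

def riemannian_gradient_descent(grad_f, X0, step=1e-2, iters=500):
    """Fixed-step, retraction-based gradient descent on the Stiefel manifold.
    grad_f is the Euclidean gradient of a smooth objective (here a stand-in
    for the Moreau-envelope penalty of the paper, which is not reproduced)."""
    X = X0
    for _ in range(iters):
        rgrad = project_tangent(X, grad_f(X))   # Riemannian gradient
        X = qr_retraction(X, -step * rgrad)     # retraction step
    return X

# Toy usage: fit an n x k orthonormal matrix to a +/-1 target B, then binarize.
rng = np.random.default_rng(0)
n, k = 64, 16
B = np.sign(rng.standard_normal((n, k)))
grad_f = lambda X: 2.0 * (X - B)                # gradient of ||X - B||_F^2
X0, _ = np.linalg.qr(rng.standard_normal((n, k)))
codes = np.sign(riemannian_gradient_descent(grad_f, X0))
```

    The sketch deliberately uses a constant step size; the paper's nonmonotone line search would replace the fixed `step` with an adaptive acceptance rule.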

    SADIH: Semantic-Aware DIscrete Hashing

    Due to its low storage cost and fast query speed, hashing has been widely adopted for similarity search in large-scale multimedia retrieval applications. Supervised hashing in particular has recently received considerable research attention because it leverages label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into the asymmetric similarity approximation and discriminative hashing function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n × n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines while incurring lower computational costs.
    Comment: Accepted by The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
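    To make the "asymmetric similarity approximation without materializing the n × n matrix" idea concrete, below is a minimal sketch under stated assumptions, not SADIH's released code: it assumes a label-induced similarity S = 2·Y·Yᵀ − 1 with one-hot labels Y, evaluates S·M implicitly, and updates binary codes B with a common sign-step relaxation of min_B ||c·S − B·Uᵀ||². The semantic-aware autoencoder and the full alternating optimization are omitted, and the names Y, U, B, c are illustrative assumptions.

```python
import numpy as np

def similarity_times(Y, M):
    """Compute S @ M for the label similarity S = 2*Y@Y.T - 1 (all-ones
    matrix subtracted) without ever materializing the n x n matrix S."""
    ones_col = np.ones((Y.shape[0], 1))
    return 2.0 * (Y @ (Y.T @ M)) - ones_col @ M.sum(axis=0, keepdims=True)

def sign_code_update(Y, U, c=1.0):
    """Relaxed update of binary codes B in min_B ||c*S - B @ U.T||_F^2 with
    B in {-1, +1}^{n x r}: keeping only the cross term -2c*tr(B.T @ S @ U)
    yields the sign step B = sign(S @ U), used here purely as a sketch."""
    return np.sign(similarity_times(Y, c * U))

# Toy usage: n samples, 10 classes, 16-bit codes.
rng = np.random.default_rng(0)
n, num_classes, r = 1000, 10, 16
Y = np.eye(num_classes)[rng.integers(0, num_classes, size=n)]  # one-hot labels
U = rng.standard_normal((n, r))    # real-valued latent embedding (placeholder)
B = sign_code_update(Y, U)         # binary codes in {-1, +1}
```

    The key cost saving is in similarity_times: both terms are computed with n × c and n × r products, so the n × n similarity matrix never appears, which is the scalability point the abstract emphasizes.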