Graph Regularized Non-negative Matrix Factorization By Maximizing Correntropy
Non-negative matrix factorization (NMF) has proved effective in many
clustering and classification tasks. The classic ways to measure the error
between the original and the reconstructed matrix are Euclidean distance and
Kullback-Leibler (KL) divergence. However, these error measures do not handle
nonlinear cases properly. As a consequence, alternative measures based on
nonlinear kernels, such as correntropy, have been proposed. However, current
correntropy-based NMF targets only low-level features, without considering the
intrinsic geometric distribution of the data. In this paper, we propose a new
NMF algorithm that preserves local invariance by adding graph regularization
to max-correntropy-based matrix factorization. Meanwhile, each feature can
learn a corresponding kernel from the data. Experimental results on Caltech101
and Caltech256 show the benefits of this combination over other NMF algorithms
for unsupervised image clustering.
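The combination described above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not the authors' code: it alternates half-quadratic correntropy weighting of samples with graph-regularized multiplicative updates, where the adjacency matrix `A`, regularization weight `lam`, and kernel width `sigma` are all assumed hyperparameters.

```python
import numpy as np

def correntropy_gnmf(X, A, k=5, lam=0.1, sigma=1.0, n_iter=200, seed=0):
    """Sketch of graph-regularized NMF with correntropy-induced sample weights.

    Half-quadratic optimization alternates between
    (1) recomputing per-sample weights u_j = exp(-||x_j - W h_j||^2 / (2 sigma^2))
    and (2) weighted multiplicative updates with a graph penalty
    lam * tr(H L H^T), where L = D - A is the graph Laplacian.

    X : (m, n) nonnegative data, one sample per column
    A : (n, n) symmetric nonnegative affinity (adjacency) matrix
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    D = np.diag(A.sum(axis=1))  # degree matrix of the graph
    eps = 1e-10
    for _ in range(n_iter):
        # correntropy weights: poorly reconstructed samples are down-weighted,
        # which makes the factorization robust to non-Gaussian noise/outliers
        R = X - W @ H
        u = np.exp(-np.sum(R * R, axis=0) / (2.0 * sigma**2))
        U = np.diag(u)
        # weighted multiplicative updates (Lee-Seung style, graph-regularized)
        W *= (X @ U @ H.T) / (W @ H @ U @ H.T + eps)
        H *= (W.T @ X @ U + lam * H @ A) / (W.T @ W @ H @ U + lam * H @ D + eps)
    return W, H
```

In practice `A` would be a k-nearest-neighbor affinity graph over the samples, so the penalty term pulls the coefficient vectors of neighboring samples together, preserving the local geometry the abstract refers to.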
Deep Metric Learning Assisted by Intra-variance in A Semi-supervised View of Learning
Deep metric learning aims to construct an embedding space in which samples of
the same class are close to each other while samples of different classes are
far apart. Most existing deep metric learning methods attempt to maximize
inter-class feature differences, obtaining semantically related information by
increasing the distance between samples of different classes in the embedding
space. However, compressing all positive samples together while creating large
margins between classes inadvertently destroys the local structure among
similar samples. By ignoring the intra-class variance contained in this local
structure, the learned embedding space generalizes poorly to unseen classes,
leading the network to overfit the training set and perform poorly on the test
set. To address these issues, this paper designs a self-supervised generative
assisted ranking framework that provides a semi-supervised view of intra-class
variance learning for typical supervised deep metric learning. Specifically,
the paper synthesizes samples with varying intensity and diversity for samples
satisfying certain conditions, simulating the complex transformations of
intra-class samples. An intra-class ranking loss, designed with ideas from
self-supervised learning, constrains the network to maintain the intra-class
distribution during training and thus capture subtle intra-class variance.
With this approach, a more realistic embedding space is obtained in which both
the global and local structures of samples are well preserved, enhancing the
effectiveness of downstream tasks. Extensive experiments on four benchmarks
show that this approach surpasses state-of-the-art methods.
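The core idea of an intra-class ranking constraint on synthesized samples can be sketched as follows. This is a hypothetical simplification, not the paper's exact loss: positives are synthesized by perturbing an anchor embedding with noise of increasing intensity, and a hinge-style ranking penalty asks that distance to the anchor grow monotonically with perturbation intensity.

```python
import numpy as np

def intra_class_ranking_loss(anchor, intensities, margin=0.1, seed=0):
    """Sketch of an intra-class ranking constraint over synthetic positives.

    anchor      : 1-D embedding vector of a real sample
    intensities : perturbation strengths in ascending order; each level yields
                  one synthetic positive anchor + s * noise
    Returns the ranking loss and the list of anchor-positive distances.
    """
    rng = np.random.default_rng(seed)
    # synthesize one positive per intensity level
    positives = [anchor + s * rng.standard_normal(anchor.shape)
                 for s in intensities]
    dists = [np.linalg.norm(anchor - p) for p in positives]
    # hinge penalty whenever a weaker perturbation lands farther from the
    # anchor than a stronger one, i.e. the intra-class ordering is violated
    loss = 0.0
    for i in range(len(dists) - 1):
        loss += max(0.0, dists[i] - dists[i + 1] + margin)
    return loss, dists
```

Because the constraint only orders distances rather than collapsing all positives to a point, the embedding retains a graded intra-class structure instead of compressing every positive onto the anchor.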