Robust Subspace Clustering via Smoothed Rank Approximation
Matrix rank minimization subject to affine constraints arises in many
application areas, ranging from signal processing to machine learning. The
nuclear norm is a convex relaxation of this problem that can recover the rank
exactly under some restrictive, though theoretically interesting, conditions.
For many real-world applications, however, the nuclear norm approximation to
the rank function can only produce a result far from the optimum. To obtain a
more accurate solution than the nuclear norm provides, in this paper we propose
a rank approximation based on the Logarithm-Determinant. We apply this rank
approximation to the subspace clustering problem. Our framework can model
different kinds of errors and noise. An effective optimization strategy is
developed with a theoretical guarantee of convergence to a stationary point.
The proposed method gives promising results on face clustering and motion
segmentation tasks compared with state-of-the-art subspace clustering
algorithms.
Comment: Journal, code is available
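To make the gap between the nuclear norm and a Logarithm-Determinant style surrogate concrete, here is a minimal numerical sketch comparing the two on a noisy low-rank matrix. The specific surrogate form log det(I + X^T X / delta) and the smoothing parameter delta are illustrative assumptions, not necessarily the exact formulation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy rank-5 matrix: the true rank is 5, but noise makes every
# singular value nonzero.
U = rng.standard_normal((100, 5))
V = rng.standard_normal((5, 80))
X = U @ V + 0.1 * rng.standard_normal((100, 80))

sigma = np.linalg.svd(X, compute_uv=False)

# Nuclear norm: the sum of singular values (convex surrogate of rank).
nuclear = sigma.sum()

# A log-determinant style surrogate: log det(I + X^T X / delta)
# = sum_i log(1 + sigma_i^2 / delta). delta is an illustrative
# smoothing parameter, not a value taken from the paper.
delta = 1.0
logdet = np.log1p(sigma**2 / delta).sum()

print("true rank   :", 5)
print("nuclear norm:", round(float(nuclear), 2))
print("logdet value:", round(float(logdet), 2))
```

Because the logarithm grows slowly once a singular value is large, the dominant (signal) singular values each contribute roughly the same amount while the small noise values contribute little, so this kind of surrogate tracks the true rank more closely than the nuclear norm, which grows linearly with every singular value.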
Top-N Recommender System via Matrix Completion
Top-N recommender systems have been investigated widely in both industry and
academia. However, the recommendation quality is far from satisfactory. In this
paper, we propose a simple yet promising algorithm: we fill the user-item
matrix based on a low-rank assumption while simultaneously keeping the original
information. To do so, a nonconvex rank relaxation, rather than the nuclear
norm, is adopted to provide a better rank approximation, and an efficient
optimization strategy is designed. A comprehensive set of experiments on real
datasets demonstrates that our method pushes the accuracy of Top-N
recommendation to a new level.
Comment: AAAI 201
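As a rough sketch of the fill-in-under-a-low-rank-assumption idea, the snippet below completes a toy user-item matrix with a plain truncated-SVD (hard-impute) loop, keeping observed entries fixed, and then ranks the unobserved items per user. The SVD-based relaxation, the rank, and the iteration count are stand-ins for illustration; the paper uses a nonconvex rank relaxation and its own optimization strategy.

```python
import numpy as np

def complete_and_rank(R, rank=2, n_iters=50, top_n=3):
    """Fill missing entries of a user-item matrix R (NaN = unobserved)
    with a rank-`rank` approximation, keeping observed entries fixed,
    then return the top-N unobserved items per user.

    A simple hard-impute style baseline, not the paper's method."""
    observed = ~np.isnan(R)
    X = np.where(observed, R, 0.0)                    # start by filling with zeros
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank projection
        X = np.where(observed, R, X_low)              # keep the original information
    recs = []
    for u in range(R.shape[0]):
        scores = np.where(observed[u], -np.inf, X_low[u])  # hide already-seen items
        recs.append(np.argsort(scores)[::-1][:top_n].tolist())
    return recs

R = np.array([[5, 4, np.nan, 1],
              [4, np.nan, np.nan, 1],
              [1, 1, np.nan, 5],
              [np.nan, 1, 5, 4]], dtype=float)
print(complete_and_rank(R, rank=2, top_n=2))
```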
Twin Learning for Similarity and Clustering: A Unified Kernel Approach
Many similarity-based clustering methods work in two separate steps: similarity
matrix computation followed by spectral clustering. However, similarity
measurement is challenging because it is usually affected by many factors,
e.g., the choice of similarity metric, neighborhood size, scale of data, noise,
and outliers. Thus the learned similarity matrix is often not suitable, let
alone optimal, for the subsequent clustering. In addition, nonlinear similarity
exists in many real-world data sets but has not been effectively considered by
most existing methods. To tackle these two challenges, we propose a model that
simultaneously learns the cluster indicator matrix and the similarity
information in kernel spaces in a principled way. We show theoretical
relationships to kernel k-means, k-means, and spectral clustering methods.
Then, to address the practical issue of selecting the most suitable kernel for
a particular clustering task, we further extend our model with a multiple
kernel learning ability. With this joint model, we can automatically accomplish
three subtasks: finding the best cluster indicator matrix, the most accurate
similarity relations, and the optimal combination of multiple kernels. By
leveraging the interactions between these three subtasks in a joint framework,
each subtask can be iteratively boosted, using the results of the others,
towards an overall optimal solution. Extensive experiments are performed to
demonstrate the effectiveness of our method.
Comment: Published in AAAI 201
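To illustrate just the multiple-kernel ingredient in isolation, the sketch below combines several candidate kernels into a single affinity and feeds it to spectral clustering. The candidate kernels, their parameters, and the uniform weighting are assumptions for illustration; the paper learns the kernel weights jointly with the similarity and the cluster indicator matrix rather than fixing them.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

# Candidate kernels; these particular choices and parameters are
# illustrative, not those used in the paper.
kernels = [
    rbf_kernel(X, gamma=2.0),
    rbf_kernel(X, gamma=0.5),
    polynomial_kernel(X, degree=2),
    linear_kernel(X),
]

# A fixed uniform combination stands in for the learned kernel weights.
w = np.ones(len(kernels)) / len(kernels)
K = sum(wi * Ki for wi, Ki in zip(w, kernels))
K = np.clip(K, 0, None)          # keep the combined affinity nonnegative

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(K)
print("cluster sizes:", np.bincount(labels))
```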
LogDet Rank Minimization with Application to Subspace Clustering
Low-rank matrices are desired in many machine learning and computer vision
problems. Most recent studies use the nuclear norm as a convex surrogate of the
rank operator. However, the nuclear norm simply adds all singular values
together, and thus the rank may not be well approximated in practical problems.
In this paper, we propose to use a log-determinant (LogDet) function as a
smooth and closer, though non-convex, approximation to the rank for obtaining a
low-rank representation in subspace clustering. An augmented Lagrange
multiplier strategy is applied to iteratively optimize the LogDet-based
non-convex objective function on potentially large-scale data. By making use of
the angular information of the principal directions of the resulting low-rank
representation, an affinity graph matrix is constructed for spectral
clustering. Experimental results on motion segmentation and face clustering
data demonstrate that the proposed method often outperforms state-of-the-art
subspace clustering algorithms.
Comment: 10 pages, 4 figures
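A simplified picture of the angle-based graph construction: the sketch below builds an affinity from the cosines between columns of a representation matrix and hands it to spectral clustering. Using the raw data as the representation and squaring the cosine are simplifications made here for illustration; the paper first computes a LogDet-regularized low-rank representation and derives the affinity from its principal directions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def angular_affinity(Z, power=2):
    """Affinity from the angles between columns of a representation
    matrix Z: W_ij = |cos(z_i, z_j)| ** power. A simplified stand-in
    for the paper's angle-based graph construction."""
    Zn = Z / (np.linalg.norm(Z, axis=0, keepdims=True) + 1e-12)
    W = np.abs(Zn.T @ Zn) ** power
    np.fill_diagonal(W, 0.0)
    return W

# Toy data drawn from two low-dimensional subspaces; the "representation"
# here is just the raw data matrix, whereas the paper would first solve
# the LogDet-regularized low-rank representation problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))
B = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))
X = np.hstack([A, B])                     # columns are data points

W = angular_affinity(X)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print("cluster sizes:", np.bincount(labels))
```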
