Fast Low-Rank Matrix Learning with Nonconvex Regularization
Low-rank modeling has many important applications in machine learning,
computer vision and social network analysis. While the matrix rank is often
approximated by the convex nuclear norm, the use of nonconvex low-rank
regularizers has demonstrated better recovery performance. However, the
resultant optimization problem is much more challenging. A recent
state-of-the-art method is based on the proximal gradient algorithm, but it
requires an expensive full SVD in each proximal step. In this paper, we show
that for many commonly used nonconvex low-rank regularizers, a cutoff can be
derived to automatically threshold the singular values obtained from the
proximal operator. This allows the SVD to be approximated efficiently by the
power method. Moreover, the proximal operator can be reduced to that of a much
smaller matrix projected onto this leading subspace. Convergence, with a rate
of O(1/T) where T is the number of iterations, can be guaranteed. Extensive
experiments are performed on matrix completion and robust principal component
analysis. The proposed method achieves significant speedup over the
state-of-the-art. Moreover, the matrix solution obtained is more accurate and
of lower rank than that of the traditional nuclear norm regularizer.
Comment: Long version of conference paper that appeared at ICDM 201
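To make the thresholding idea concrete, here is a minimal NumPy sketch of the proximal operator for the convex nuclear norm, where the cutoff is simply the regularization weight and all singular values below it are discarded. This is an illustration only, not the paper's method: the paper derives analogous cutoffs for nonconvex regularizers and approximates the leading subspace with the power method instead of computing a full SVD.

```python
import numpy as np

def soft_threshold_svd(X, lam):
    """Proximal operator of lam * ||X||_* (nuclear norm).

    Soft-thresholds the singular values; those below the cutoff lam are
    set to zero, so only the leading-subspace components survive.
    Illustrative sketch -- the paper avoids the full SVD used here.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)      # cutoff: singular values <= lam vanish
    k = int(np.count_nonzero(s_thr))      # rank of the result
    return (U[:, :k] * s_thr[:k]) @ Vt[:k]
```

Because the thresholded singular values are sparse, the output is typically of much lower rank than the input, which is what makes a truncated (power-method) SVD sufficient.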
Successive Convex Approximation Algorithms for Sparse Signal Estimation with Nonconvex Regularizations
In this paper, we propose a successive convex approximation framework for
sparse optimization in which the nonsmooth regularization function in the
objective is nonconvex and can be written as the difference of two
convex functions. The proposed framework is based on a nontrivial combination
of the majorization-minimization framework and the successive convex
approximation framework proposed in the literature for a convex regularization
function. The proposed framework has several attractive features, namely: i)
flexibility, as different choices of the approximate function lead to different
types of algorithms; ii) fast convergence, as the problem structure can be
better exploited by a proper choice of the approximate function and the
stepsize is calculated by line search; iii) low complexity, as the
approximate function is convex and the line search is carried out over a
differentiable function; and iv) guaranteed convergence to a stationary point. We
demonstrate these features by two example applications in subspace learning,
namely, the network anomaly detection problem and the sparse subspace
clustering problem. Customizing the proposed framework by adopting the
best-response type approximation, we obtain soft-thresholding with exact line
search algorithms for which all elements of the unknown parameter are updated
in parallel according to closed-form expressions. The attractive features of
the proposed algorithms are illustrated numerically.
Comment: Submitted to the IEEE Journal of Selected Topics in Signal Processing,
special issue on Robust Subspace Learning
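The closed-form, fully parallel update mentioned above can be sketched as follows. This is a minimal NumPy illustration of the elementwise soft-thresholding best response for an l1-regularized least-squares term; the function names and the fixed curvature parameter `mu` are assumptions for the sketch, and the paper additionally selects the stepsize by an exact line search rather than taking the best response directly.

```python
import numpy as np

def soft_threshold(v, lam):
    """Elementwise soft-thresholding: prox of lam * ||x||_1.

    Applied to all elements in parallel, in closed form.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def best_response_step(x, A, y, lam, mu):
    """One best-response update for 0.5*||A x - y||^2 + lam*||x||_1.

    A gradient step on the smooth term with curvature mu, followed by
    soft-thresholding. Illustrative: the paper's algorithm then picks a
    stepsize toward this point via exact line search.
    """
    grad = A.T @ (A @ x - y)              # gradient of the smooth term
    return soft_threshold(x - grad / mu, lam / mu)
```

All coordinates are updated simultaneously from closed-form expressions, which is what gives the algorithm its low per-iteration complexity.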