A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning
In the past decade, sparse and low-rank recovery have drawn much attention in
many areas such as signal/image processing, statistics, bioinformatics and
machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and
the nuclear norm are among the most popular regularization penalties
due to their convexity. While the $\ell_1$ and nuclear norms are convenient, as
the related convex optimization problems are usually tractable, it has been
shown in many applications that a nonconvex penalty can yield significantly
better performance. Recently, nonconvex regularization based sparse and
low-rank recovery has attracted considerable interest and has in fact been a main driver
of the recent progress in nonconvex and nonsmooth optimization. This paper
gives an overview of this topic in various fields in signal processing,
statistics and machine learning, including compressive sensing (CS), sparse
regression and variable selection, sparse signal separation, sparse principal
component analysis (PCA), estimation of large covariance and inverse covariance
matrices, matrix completion, and robust PCA. We present recent developments
of nonconvex regularization based sparse and low-rank recovery in these fields,
addressing the issues of penalty selection, applications and the convergence of
nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
Comment: 22 pages
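To make the bias issue concrete: the $\ell_1$ proximal map shrinks every coefficient by the same amount, while a nonconvex proximal map (here the $\ell_0$/hard-thresholding one) leaves large coefficients untouched. A minimal numpy sketch, illustrative only and not taken from the linked repository:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the convex l1 penalty lam*||x||_1:
    # shrinks every surviving entry by lam, which biases large coefficients.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # Proximal operator of the nonconvex l0 penalty lam*||x||_0:
    # entries above sqrt(2*lam) are kept exactly, so large coefficients
    # incur no bias.
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

x = np.array([-3.0, -0.5, 0.2, 2.0])
print(soft_threshold(x, 1.0))   # large entries survive but are shrunk by 1
print(hard_threshold(x, 1.0))   # large entries survive unchanged
```

Both maps zero out the small entries; only the nonconvex one keeps the large entries unbiased, which is the effect driving the performance gains surveyed above.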
Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
Substantial progress has been made recently on developing provably accurate
and efficient algorithms for low-rank matrix factorization via nonconvex
optimization. While conventional wisdom often takes a dim view of nonconvex
optimization algorithms due to their susceptibility to spurious local minima,
simple iterative methods such as gradient descent have been remarkably
successful in practice. The theoretical footings, however, had been largely
lacking until recently.
In this tutorial-style overview, we highlight the important role of
statistical models in enabling efficient nonconvex optimization with
performance guarantees. We review two contrasting approaches: (1) two-stage
algorithms, which consist of a tailored initialization step followed by
successive refinement; and (2) global landscape analysis and
initialization-free algorithms. Several canonical matrix factorization problems
are discussed, including but not limited to matrix sensing, phase retrieval,
matrix completion, blind deconvolution, robust principal component analysis,
phase synchronization, and joint alignment. Special care is taken to illustrate
the key technical insights underlying their analyses. This article serves as a
testament that the integrated consideration of optimization and statistics
leads to fruitful research findings.
Comment: Invited overview article
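The first stage of the two-stage approach can be sketched in a few lines. For Gaussian matrix sensing, the surrogate $\frac{1}{m}\sum_k y_k A_k$ has expectation equal to the unknown matrix, so its top-$r$ eigenspace gives a good initialization (a toy with hypothetical dimensions, not any paper's reference code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 10, 2, 20000

# Ground-truth rank-r PSD matrix M and m random Gaussian sensing matrices A_k.
U = rng.standard_normal((n, r))
M = U @ U.T
A = rng.standard_normal((m, n, n))
y = np.einsum('kij,ij->k', A, M)          # measurements y_k = <A_k, M>

# Spectral initialization: Y = (1/m) * sum_k y_k A_k concentrates around M
# for i.i.d. Gaussian A_k, so its top-r eigenspace approximates M's.
Y = np.einsum('k,kij->ij', y, A) / m
Y = (Y + Y.T) / 2
w, V = np.linalg.eigh(Y)
idx = np.argsort(w)[-r:]
M0 = V[:, idx] @ np.diag(w[idx]) @ V[:, idx].T   # rank-r initial estimate

err = np.linalg.norm(M0 - M) / np.linalg.norm(M)
print(err)   # modest relative error; stage two would refine this by gradient descent
```

The point of the statistical model is visible here: concentration of the random surrogate around $M$ is what places the initialization inside the basin where simple refinement succeeds.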
Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation
Low-rank modeling plays a pivotal role in signal processing and machine
learning, with applications ranging from collaborative filtering, video
surveillance, medical imaging, to dimensionality reduction and adaptive
filtering. Many modern high-dimensional data and interactions thereof can be
modeled as lying approximately in a low-dimensional subspace or manifold,
possibly with additional structures, and proper exploitation of these structures
leads to a significant reduction of costs in sensing, computation and storage. In recent
years, there has been a plethora of progress in understanding how to exploit low-rank
structures using computationally efficient procedures in a provable manner,
including both convex and nonconvex approaches. On one side, convex relaxations
such as nuclear norm minimization often lead to statistically optimal
procedures for estimating low-rank matrices, where first-order methods are
developed to address the computational challenges; on the other side, there is
emerging evidence that properly designed nonconvex procedures, such as
projected gradient descent, often provide globally optimal solutions with a
much lower computational cost in many problems. This survey article will
provide a unified overview of these recent advances on low-rank matrix
estimation from incomplete measurements. Attention is paid to rigorous
characterization of the performance of these algorithms, and to problems where
the low-rank matrix has additional structural properties that require new
algorithmic designs and theoretical analysis.
Comment: To appear in IEEE Signal Processing Magazine
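The workhorse of the convex route mentioned above is the proximal operator of the nuclear norm: singular value thresholding (SVT), which soft-thresholds the spectrum. A short numpy sketch (illustrative; note that the surviving singular values are also shrunk, the bias that motivates the nonconvex alternatives):

```python
import numpy as np

def svt(X, tau):
    # Proximal operator of tau * ||X||_* : soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
Q1, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q2, _ = np.linalg.qr(rng.standard_normal((8, 8)))
L = Q1 @ np.diag([5.0, 3.0, 2.0, 0, 0, 0, 0, 0]) @ Q2.T   # exactly rank 3
X = L + 0.01 * rng.standard_normal((8, 8))                 # small dense noise
Xhat = svt(X, tau=0.5)
print(np.linalg.matrix_rank(Xhat))   # noise directions are pruned; rank is 3
```

The recovered singular values sit near 4.5, 2.5, 1.5 rather than 5, 3, 2: the same shrinkage that removes the noise also biases the signal, which is exactly the trade-off between the convex and nonconvex procedures surveyed here.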
Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm
Matrix completion has attracted much interest in the past decade in machine
learning and computer vision. For low-rank promotion in matrix completion, the
nuclear norm penalty is convenient due to its convexity but has a bias problem.
Recently, various algorithms using nonconvex penalties have been proposed,
among which the proximal gradient descent (PGD) algorithm is one of the most
efficient and effective. For the nonconvex PGD algorithm, whether it converges
to a local minimizer and its convergence rate are still unclear. This work
provides a nontrivial analysis on the PGD algorithm in the nonconvex case.
Besides the convergence to a stationary point for a generalized nonconvex
penalty, we provide a deeper analysis of a popular and important class of
nonconvex penalties which have discontinuous thresholding functions. For such
penalties, we establish finite-rank convergence, convergence to a restricted
strictly local minimizer, and an eventually linear convergence rate of the PGD
algorithm. Meanwhile, convergence to a local minimizer has been proved for the
hard-thresholding penalty. Our result is the first to show that nonconvex
regularized matrix completion has only restricted strictly local minimizers,
and that the PGD algorithm can converge to such minimizers at an eventually
linear rate under certain conditions. We also illustrate the PGD algorithm via
experiments. Code is available at https://github.com/FWen/nmc.
Comment: 14 pages, 7 figures
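The PGD iteration with a discontinuous thresholding function can be sketched as follows (an illustrative toy with the hard-thresholding penalty, not the authors' released code; dimensions and parameters are hypothetical):

```python
import numpy as np

def pgd_hard_mc(X_obs, mask, lam, step=1.0, iters=200):
    # Matrix-completion PGD with the nonconvex hard-thresholding penalty:
    # a gradient step on the observed entries, then a discontinuous hard
    # threshold on the singular values -- large ones pass with NO shrinkage.
    X = np.zeros_like(X_obs)
    thr = np.sqrt(2.0 * lam * step)
    for _ in range(iters):
        G = mask * (X - X_obs)                 # gradient of the observed-entry loss
        U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
        X = U @ np.diag(np.where(s > thr, s, 0.0)) @ Vt
    return X

rng = np.random.default_rng(2)
Q1, _ = np.linalg.qr(rng.standard_normal((30, 30)))
Q2, _ = np.linalg.qr(rng.standard_normal((30, 30)))
L = Q1[:, :2] @ np.diag([30.0, 20.0]) @ Q2[:, :2].T    # rank-2 ground truth
mask = (rng.random((30, 30)) < 0.8).astype(float)      # ~80% of entries observed
X_hat = pgd_hard_mc(mask * L, mask, lam=32.0)          # threshold sqrt(2*lam) = 8
print(np.linalg.norm(X_hat - L) / np.linalg.norm(L))   # small: unobserved entries recovered too
```

The finite-rank behavior analyzed in the paper is visible even in this toy: once the iterates' small singular values fall below the threshold, the rank locks in and the remaining convergence is fast.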
A New Nonconvex Strategy to Affine Matrix Rank Minimization Problem
The affine matrix rank minimization (AMRM) problem is to find a matrix of
minimum rank that satisfies a given linear system constraint. It has many
applications in some important areas such as control, recommender systems,
matrix completion and network localization. However, the problem (AMRM) is
NP-hard in general due to the combinatorial nature of the matrix rank function.
Many alternative functions have been proposed to substitute for the matrix rank
function, leading to corresponding alternative minimization
problems solved efficiently by some popular convex or nonconvex optimization
algorithms. In this paper, we propose a new nonconvex function, namely the
transformed $\ell_1$ function, to approximate the rank function, and translate
the NP-hard problem (AMRM) into the transformed $\ell_1$ affine matrix rank
minimization (TLAMRM) problem. Firstly, we study the equivalence of problems
(AMRM) and (TLAMRM), and prove that the unique global minimizer of problem
(TLAMRM) also solves the NP-hard problem (AMRM) if the linear map satisfies a
restricted isometry property (RIP). Secondly, an iterative thresholding
algorithm is proposed to solve the regularization problem (RTLAMRM). Finally,
numerical results on low-rank matrix completion problems illustrate that our
algorithm is able to recover a low-rank matrix, and extensive numerical
experiments on image inpainting problems show that our algorithm performs best
in finding a low-rank image compared with some state-of-the-art methods.
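Assuming the transformed $\ell_1$ form standard in this literature, $\rho_a(x) = (a+1)|x|/(a+|x|)$ with $a > 0$ (suggested by the acronym TLAMRM; the paper's exact definition may differ), a tiny sketch shows why it is attractive: the parameter $a$ interpolates between an $\ell_0$-like count and an $\ell_1$-like magnitude.

```python
import numpy as np

def tl1(x, a):
    # Transformed-l1 penalty (assumed form): rho_a(x) = (a+1)|x| / (a + |x|).
    # Applied to singular values, it approximates the rank function.
    return (a + 1.0) * np.abs(x) / (a + np.abs(x))

x = np.array([0.0, 0.5, 1.0, 10.0])
print(tl1(x, 100.0))  # large a: close to |x| (l1-like)
print(tl1(x, 0.01))   # small a: close to the 0/1 count (l0-like)
```

For any $a$, $\rho_a(0)=0$ and $\rho_a(\pm 1)=1$; shrinking $a$ flattens the penalty on large entries, which is the source of the reduced bias relative to the nuclear norm.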
Exploiting the structure effectively and efficiently in low rank matrix recovery
Low rank models arise in a wide range of applications, including machine
learning, signal processing, computer algebra, computer vision, and imaging
science. Low rank matrix recovery is about reconstructing a low rank matrix
from incomplete measurements. In this survey we review recent developments on
low rank matrix recovery, focusing on three typical scenarios: matrix sensing,
matrix completion and phase retrieval. An overview of effective and efficient
approaches for the problem is given, including nuclear norm minimization,
projected gradient descent based on matrix factorization, and Riemannian
optimization based on the embedded manifold of low rank matrices. Numerical
recipes of the different approaches are emphasized, accompanied by the
corresponding theoretical recovery guarantees.
Nonconvex and Nonsmooth Sparse Optimization via Adaptively Iterative Reweighted Methods
We present a general formulation of nonconvex and nonsmooth sparse
optimization problems with a convex set constraint, which takes into account
most existing types of nonconvex sparsity-inducing terms. It thus brings strong
applicability to a wide range of applications. We further design a general
algorithmic framework of adaptively iterative reweighted algorithms for solving
the nonconvex and nonsmooth sparse optimization problems. This is achieved by
solving a sequence of weighted convex penalty subproblems with adaptively
updated weights. The first-order optimality condition is then derived and the
global convergence results are provided under loose assumptions. This makes our
theoretical results a practical tool for analyzing a family of various
iteratively reweighted algorithms. In particular, for the iteratively reweighted
$\ell_1$-algorithm, a global convergence analysis is provided for cases with a
diminishing relaxation parameter. For the iteratively reweighted
$\ell_2$-algorithm, an adaptively decreasing relaxation parameter is applicable,
and the existence of a cluster point of the algorithm is established. The
effectiveness and efficiency of our proposed formulation and the algorithms are
demonstrated in numerical experiments on various sparse optimization problems.
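The framework's main loop can be sketched compactly (illustrative only: the weight rule $w_i = 1/(|x_i| + \epsilon)$ corresponds to a log-sum nonconvex penalty, and the weighted convex subproblem is solved here by plain ISTA rather than by anything from the paper):

```python
import numpy as np

def reweighted_l1(A, b, lam=0.1, eps=0.1, iters=10):
    # Adaptively reweighted l1 sketch: solve a sequence of weighted convex
    # l1 subproblems, with weights updated from the previous iterate.
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)          # small entries -> large weights
        for _ in range(200):                 # weighted l1 subproblem via ISTA
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]       # 3-sparse ground truth
b = A @ x_true
x_hat = reweighted_l1(A, b)
print(np.linalg.norm(x_hat - x_true))        # near-zero: support found, bias reduced
```

The adaptivity is the key design choice: large coefficients receive small weights and are barely penalized in later subproblems, mimicking a nonconvex penalty through a sequence of convex solves.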
Low-Rank Modeling and Its Applications in Image Analysis
Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have brought more and more
attention to this topic. In this paper, we review the recent advances in
low-rank modeling, the state-of-the-art algorithms, and related applications in
image analysis. We first give an overview of the concept of low-rank modeling
and challenging problems in this area. Then, we summarize the models and
algorithms for low-rank matrix recovery and illustrate their advantages and
limitations with numerical experiments. Next, we introduce a few applications
of low-rank modeling in the context of image analysis. Finally, we conclude
this paper with some discussions.
Comment: To appear in ACM Computing Surveys
Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset
Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices shows a suitable framework to separate moving objects from
the background. The most representative problem formulation is the Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these similar problem formulations is to obtain explicitly or implicitly a
decomposition into low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of the similar
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices for testing and ranking
existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations which allows us to define a unified view that we called
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we examine
carefully each method in each robust subspace learning/tracking framework with
their decomposition, their loss functions, their optimization problem and their
solvers. Furthermore, we investigate if incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods.
Comment: 121 pages, 5 figures, submitted to Computer Science Review. arXiv
admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297,
arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805,
arXiv:1403.8067 by other authors; Computer Science Review, November 201
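The low-rank-plus-sparse decomposition behind background/foreground separation can be illustrated with a tiny alternating loop in the spirit of nonconvex RPCA solvers (a toy with a fixed rank and fixed threshold; none of the 32 benchmarked methods is reproduced here):

```python
import numpy as np

def rpca_altproj(M, r, thresh, iters=20):
    # Toy alternating scheme for M = L + S: truncate M - S to rank r for the
    # low-rank part, hard-threshold the residual M - L for the sparse part.
    # (Real solvers adapt the threshold across iterations.)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
        S = np.where(np.abs(M - L) > thresh, M - L, 0.0)
    return L, S

rng = np.random.default_rng(4)
Q1, _ = np.linalg.qr(rng.standard_normal((30, 30)))
Q2, _ = np.linalg.qr(rng.standard_normal((30, 30)))
L0 = Q1[:, :2] @ np.diag([50.0, 40.0]) @ Q2[:, :2].T   # low-rank "background"
S0 = np.where(rng.random((30, 30)) < 0.05, 10.0, 0.0)  # sparse "foreground" outliers
L_hat, S_hat = rpca_altproj(L0 + S0, r=2, thresh=5.0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # close to zero
```

In the video setting, the columns of $M$ are vectorized frames: $L$ recovers the static background and the support of $S$ marks the moving foreground, which is exactly the decomposition the DLAM view unifies.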
Binary matrix completion with nonconvex regularizers
Many practical problems involve the recovery of a binary matrix from partial
information, and so the binary matrix completion (BMC) technique has received
increasing attention in machine learning. In particular, we consider a special
case of the BMC problem, in which only a subset of the positive elements can be
observed. In recent years, convex regularization based methods have been the
mainstream approaches for this task. However, applications of nonconvex
surrogates in standard matrix completion have demonstrated better empirical
performance. Accordingly, we propose a novel BMC model with nonconvex
regularizers and provide the recovery guarantee for the model. Furthermore, for
solving the resultant nonconvex optimization problem, we improve the popular
proximal algorithm with acceleration strategies. It can be guaranteed that the
convergence rate of the algorithm is in the order of $O(1/T)$, where $T$ is the
number of iterations. Extensive experiments conducted on both synthetic and
real-world data sets demonstrate the superiority of the proposed approach over
other competing methods.
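The acceleration strategy mentioned here can be illustrated with a generic FISTA-style momentum step on a simple $\ell_1$-regularized problem (a sketch of the acceleration idea only, not the paper's BMC algorithm or its nonconvex regularizers):

```python
import numpy as np

def fista(A, b, lam=0.1, iters=300):
    # Accelerated proximal gradient (FISTA-style) for
    # min 0.5*||Ax - b||^2 + lam*||x||_1: a prox step at an extrapolated
    # point y, followed by a momentum update of y.
    n = A.shape[1]
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    for _ in range(iters):
        z = y - A.T @ (A @ y - b) / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # prox step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)                 # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[1, 30, 70]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = fista(A, b)
print(np.linalg.norm(x_hat - x_true))   # small residual error
```

The momentum sequence $t_k$ is what lifts plain proximal gradient descent to a faster rate in the convex case; adapting such steps to nonconvex regularizers, as this paper does, requires additional safeguards to retain convergence guarantees.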