
    An accelerated proximal iterative hard thresholding method for $\ell_0$ minimization

    In this paper, we consider a non-convex problem that is the sum of the $\ell_0$-norm and a convex smooth function under a box constraint. We propose a proximal iterative hard thresholding type method with an extrapolation step used for acceleration and establish its global convergence results. In detail, the sequence generated by the proposed method converges globally to a local minimizer of the objective function. Finally, we conduct numerical experiments to show the proposed method's effectiveness in comparison with some other efficient methods.
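    A minimal sketch of such an iteration is given below, assuming the box contains zero componentwise so that the zero candidate is always feasible; the prox derivation, the fixed extrapolation weight beta and the step size 1/L are illustrative choices, not the authors' exact scheme.

import numpy as np

def prox_l0_box(v, t, lam, lo, hi):
    """Componentwise prox of t*(lam*||x||_0 + indicator of [lo, hi]),
    assuming 0 lies inside the box so the zero candidate is feasible."""
    a = np.clip(v, lo, hi)                      # best box-feasible (possibly nonzero) candidate
    keep = 0.5 * (a - v) ** 2 + t * lam * (a != 0)
    zero = 0.5 * v ** 2
    return np.where(keep <= zero, a, 0.0)

def accelerated_prox_iht(grad_f, L, lam, lo, hi, x0, beta=0.5, iters=500):
    """Hypothetical proximal IHT iteration with an extrapolation (momentum) step."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)             # extrapolation step used for acceleration
        x_prev, x = x, prox_l0_box(y - grad_f(y) / L, 1.0 / L, lam, lo, hi)
    return x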

    Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset

    Recent research on problem formulations based on decomposition into low-rank plus sparse matrices shows a suitable framework to separate moving objects from the background. The most representative problem formulation is Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit (PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix. However, similar robust implicit or explicit decompositions can be made in the following problem formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal of these similar problem formulations is to obtain, explicitly or implicitly, a decomposition into a low-rank matrix plus additive matrices. In this context, this work aims to initiate a rigorous and comprehensive review of the similar problem formulations in robust subspace learning and tracking based on decomposition into low-rank plus additive matrices, for testing and ranking existing algorithms for background/foreground separation. For this, we first provide a preliminary review of the recent developments in the different problem formulations, which allows us to define a unified view that we call Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we carefully examine each method in each robust subspace learning/tracking framework with its decomposition, loss function, optimization problem and solvers. Furthermore, we investigate whether incremental algorithms and real-time implementations can be achieved for background/foreground separation. Finally, experimental results on a large-scale dataset called Background Models Challenge (BMC 2012) show the comparative performance of 32 different robust subspace learning/tracking methods.
    Comment: 121 pages, 5 figures, submitted to Computer Science Review. arXiv admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297, arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805, arXiv:1403.8067 by other authors. Computer Science Review, November 201
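    As a concrete reference point for the decompositions surveyed here, a minimal ADMM-style sketch of PCP (min ||L||_* + lambda*||S||_1 subject to L + S = M) is given below; the lambda and mu defaults are common heuristics from the RPCA literature, not choices made in this review.

import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau*||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def pcp_admm(M, lam=None, mu=None, iters=200):
    """Minimal PCP sketch: alternate nuclear-norm and l1 prox steps, then a dual update."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))       # standard heuristic weight
    mu = mu or 0.25 * m * n / np.sum(np.abs(M)) # common penalty-parameter heuristic
    L, S, Y = np.zeros(M.shape), np.zeros(M.shape), np.zeros(M.shape)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)       # low-rank update
        S = soft(M - L + Y / mu, lam / mu)      # sparse update
        Y = Y + mu * (M - L - S)                # dual (multiplier) update
    return L, S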

    Efficient Algorithms for Robust and Stable Principal Component Pursuit Problems

    The problem of recovering a low-rank matrix from a set of observations corrupted with gross sparse errors is known as robust principal component analysis (RPCA) and has many applications in computer vision, image processing and web data ranking. It has been shown that, under certain conditions, the solution to the NP-hard RPCA problem can be obtained by solving a convex optimization problem, namely the robust principal component pursuit (RPCP). Moreover, if the observed data matrix has also been corrupted by a dense noise matrix in addition to the gross sparse errors, then the stable principal component pursuit (SPCP) problem is solved to recover the low-rank matrix. In this paper, we develop efficient algorithms with provable iteration complexity bounds for solving RPCP and SPCP. Numerical results on problems with millions of variables and constraints, such as foreground extraction from surveillance video, shadow and specularity removal from face images and video denoising from heavily corrupted data, show that our algorithms are competitive with current state-of-the-art solvers for RPCP and SPCP in terms of accuracy and speed.
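    For reference, one standard convex formulation of SPCP from the literature (a sketch of the problem class, not necessarily the exact variant solved in this paper) bounds the dense noise in Frobenius norm:

        \min_{L,\,S} \ \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad \|M - L - S\|_F \le \delta,

    where $M$ is the observed matrix, $L$ the low-rank component, $S$ the sparse component and $\delta$ the dense-noise level.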

    Projected Wirtinger Gradient Descent for Low-Rank Hankel Matrix Completion in Spectral Compressed Sensing

    This paper considers reconstructing a spectrally sparse signal from a small number of randomly observed time-domain samples. The signal of interest is a linear combination of complex sinusoids at $R$ distinct frequencies. The frequencies can assume any continuous values in the normalized frequency domain $[0,1)$. After converting the spectrally sparse signal recovery into a low-rank structured matrix completion problem, we propose an efficient feasible point approach, named the projected Wirtinger gradient descent (PWGD) algorithm, to solve this structured matrix completion problem. We further accelerate our proposed algorithm by a scheme inspired by FISTA. We give the convergence analysis of our proposed algorithms. Extensive numerical experiments are provided to illustrate the efficiency of our proposed algorithm. Different from earlier approaches, our algorithm can solve problems of very large dimensions very efficiently.
    Comment: 12 pages
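    The sketch below gives a generic projected-gradient iteration in this spirit for low-rank Hankel completion: a gradient step on the observed samples, a rank-r SVD projection of the Hankel lift, then re-Hankelization by anti-diagonal averaging. It is an illustrative stand-in, not the authors' exact PWGD update or their FISTA-accelerated variant.

import numpy as np

def hankel(x, p):
    """p x (len(x)-p+1) Hankel matrix built from the vector x."""
    n = len(x)
    return np.array([x[i:i + n - p + 1] for i in range(p)])

def dehankel(H):
    """Map a matrix back to a vector by averaging its anti-diagonals."""
    p, q = H.shape
    x = np.zeros(p + q - 1, dtype=H.dtype)
    cnt = np.zeros(p + q - 1)
    for i in range(p):
        for j in range(q):
            x[i + j] += H[i, j]; cnt[i + j] += 1
    return x / cnt

def pwgd_sketch(y, mask, r, p, step=0.5, iters=300):
    """Hypothetical projected-gradient sketch for low-rank Hankel completion."""
    x = y * mask
    for _ in range(iters):
        x = x - step * mask * (x - y)                 # gradient step on the data-fit term
        U, s, Vt = np.linalg.svd(hankel(x, p), full_matrices=False)
        x = dehankel((U[:, :r] * s[:r]) @ Vt[:r])     # rank-r projection + structure averaging
    return x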

    A survey of sparse representation: algorithms and applications

    Sparse representation has attracted much attention from researchers in the fields of signal processing, image processing, computer vision and pattern recognition. Sparse representation also has a good reputation in both theoretical research and practical applications. Many different algorithms have been proposed for sparse representation. The main purpose of this article is to provide a comprehensive study and an updated review of sparse representation and to supply guidance for researchers. The taxonomy of sparse representation methods can be studied from various viewpoints. For example, in terms of the different norm minimizations used in sparsity constraints, the methods can be roughly categorized into five groups: sparse representation with $l_0$-norm minimization, sparse representation with $l_p$-norm ($0<p<1$) minimization, sparse representation with $l_1$-norm minimization and sparse representation with $l_{2,1}$-norm minimization. In this paper, a comprehensive overview of sparse representation is provided. The available sparse representation algorithms can also be empirically categorized into four groups: greedy strategy approximation, constrained optimization, proximity algorithm-based optimization, and homotopy algorithm-based sparse representation. The rationales of the different algorithms in each category are analyzed and a wide range of sparse representation applications are summarized, which could sufficiently reveal the potential nature of sparse representation theory. Specifically, an experimental comparative study of these sparse representation algorithms is presented. The Matlab code used in this paper is available at: http://www.yongxu.org/lunwen.html.
    Comment: Published in IEEE Access, Vol. 3, pp. 490-530, 201
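    As one concrete representative of the proximity algorithm-based category the survey discusses, a minimal ISTA sketch for $l_1$-regularized least squares is shown below (illustrative only, not code from the paper's Matlab package).

import numpy as np

def ista(A, b, lam, iters=500):
    """Minimal ISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                # gradient of 0.5*||Ax - b||^2
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)   # soft-thresholding prox step
    return x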

    A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning

    In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics and machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell_1$ and nuclear norms are convenient, as the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization based sparse and low-rank recovery has attracted considerable interest, and it is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields of signal processing, statistics and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), estimation of large covariance and inverse covariance matrices, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
    Comment: 22 pages
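    To make the bias-reduction point concrete, the sketch below evaluates the firm-thresholding proximal operator of the minimax concave penalty (MCP), one popular nonconvex penalty in this line of work; the function name and default gamma are illustrative assumptions.

import numpy as np

def prox_mcp(v, lam, gamma=2.0):
    """Firm-thresholding prox of the MCP penalty (requires gamma > 1).
    Unlike soft thresholding, large entries are left unshrunk, reducing bias."""
    a = np.abs(v)
    return np.where(a <= lam, 0.0,
           np.where(a <= gamma * lam,
                    np.sign(v) * (a - lam) * gamma / (gamma - 1.0),
                    v))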

    Exploiting the structure effectively and efficiently in low rank matrix recovery

    Low rank models arise in a wide range of applications, including machine learning, signal processing, computer algebra, computer vision, and imaging science. Low rank matrix recovery is about reconstructing a low rank matrix from incomplete measurements. In this survey we review recent developments in low rank matrix recovery, focusing on three typical scenarios: matrix sensing, matrix completion and phase retrieval. An overview of effective and efficient approaches to the problem is given, including nuclear norm minimization, projected gradient descent based on matrix factorization, and Riemannian optimization based on the embedded manifold of low rank matrices. Numerical recipes of the different approaches are emphasized, accompanied by the corresponding theoretical recovery guarantees.
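    A minimal sketch of the matrix-factorization route mentioned above (gradient descent on a rank-r factorization for matrix completion) is given below; the initialization scale, step size and iteration count are illustrative, not the tuned choices that any particular recovery guarantee requires.

import numpy as np

def factored_gd(M_obs, mask, r, step=1e-3, iters=2000, seed=0):
    """Gradient descent on X = U V^T for min 0.5*||mask*(U V^T - M_obs)||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = M_obs.shape
    U = 0.1 * rng.standard_normal((m, r))
    V = 0.1 * rng.standard_normal((n, r))
    for _ in range(iters):
        R = mask * (U @ V.T - M_obs)                     # residual on observed entries only
        U, V = U - step * (R @ V), V - step * (R.T @ U)  # simultaneous factor updates
    return U @ V.T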

    Modified $l_p$-norm regularization minimization for sparse signal recovery

    Among the numerous substitution models for the $\ell_0$-norm minimization problem $(P_0)$, the $\ell_p$-norm minimization $(P_p)$ with $0<p<1$ has been considered the most natural choice. However, the non-convex optimization problem $(P_p)$ is much more computationally challenging, and is also NP-hard. Meanwhile, algorithms based on the proximal mapping of the regularized $\ell_p$-norm minimization $(P_p^{\lambda})$ are limited to a few specific values of the parameter $p$. In this paper, we replace the $\ell_p$-norm $\|x\|_p^p$ with the modified function $\sum_{i=1}^{n}\frac{|x_{i}|}{(|x_{i}|+\epsilon_{i})^{1-p}}$. By varying the parameter $\epsilon>0$, this modified function interpolates the $\ell_p$-norm $\|x\|_p^p$. By this transformation, we translate the $\ell_p$-norm regularized minimization $(P_p^{\lambda})$ into a modified $\ell_p$-norm regularized minimization $(P_p^{\lambda,\epsilon})$. We then develop the thresholding representation theory of the problem $(P_p^{\lambda,\epsilon})$ and, based on it, propose an iterative thresholding (IT) algorithm to solve $(P_p^{\lambda,\epsilon})$ for all $0<p<1$. Indeed, we can obtain much better results by choosing a proper $p$, which is one of the advantages of our algorithm compared with other methods. Numerical results also show that, for a proper $p$, our algorithm performs best in some sparse signal recovery problems compared with some state-of-the-art methods.
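    The interpolation claim can be made precise componentwise: for every fixed $x_i$ (including $x_i = 0$) and $0<p<1$,

        \lim_{\epsilon_i \to 0^{+}} \frac{|x_i|}{(|x_i|+\epsilon_i)^{1-p}} = |x_i|^{p},

    so the modified regularizer recovers $\|x\|_p^p$ as the parameters $\epsilon_i$ shrink, while keeping a finite slope at the origin whenever $\epsilon_i > 0$.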

    Convergence of a Relaxed Variable Splitting Method for Learning Sparse Neural Networks via $\ell_1$, $\ell_0$, and transformed-$\ell_1$ Penalties

    Sparsification of neural networks is one of the effective complexity reduction methods to improve efficiency and generalizability. We consider the problem of learning a one-hidden-layer convolutional neural network with ReLU activation function via gradient descent under sparsity-promoting penalties. It is known that when the input data is Gaussian distributed, no-overlap networks (without penalties) in regression problems with ground truth can be learned in polynomial time with high probability. We propose a relaxed variable splitting method integrating thresholding and gradient descent to overcome the non-smoothness of the loss function. The sparsity of the network weights is realized during the optimization (training) process. We prove that under the $\ell_1$, $\ell_0$, and transformed-$\ell_1$ penalties, no-overlap networks can be learned with high probability, and the iterative weights converge to a global limit which is a transformation of the true weight under a novel thresholding operation. Numerical experiments confirm the theoretical findings and compare the accuracy and sparsity trade-off among the penalties.
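    The sketch below illustrates the general shape of a relaxed variable splitting iteration for min_w f(w) + lam*P(u) + (beta/2)*||w - u||^2: alternate a thresholding update of the auxiliary variable u and a gradient step on the relaxed loss in w. An $\ell_1$ penalty (soft thresholding) stands in here for the paper's $\ell_1$ / $\ell_0$ / transformed-$\ell_1$ choices, and all parameters are illustrative.

import numpy as np

def rvsm_sketch(grad_f, w0, lam=1e-3, beta=1.0, step=1e-2, iters=1000):
    """Alternating thresholding / gradient-descent sketch of a relaxed variable splitting scheme."""
    w = w0.copy()
    u = np.zeros_like(w)
    for _ in range(iters):
        u = np.sign(w) * np.maximum(np.abs(w) - lam / beta, 0)   # prox of (lam/beta)*||.||_1 at w
        w = w - step * (grad_f(w) + beta * (w - u))              # gradient step on f(w) + coupling term
    return w, u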

    Positive Definite Estimation of Large Covariance Matrix Using Generalized Nonconvex Penalties

    This work addresses the issue of large covariance matrix estimation in high-dimensional statistical analysis. Recently, improved iterative algorithms with a positive-definiteness guarantee have been developed. However, these algorithms cannot be directly extended to use a nonconvex penalty for inducing sparsity. Generally, a nonconvex penalty has the capability of ameliorating the bias problem of the popular convex lasso penalty, and thus is more advantageous. In this work, we propose a class of positive-definite covariance estimators using generalized nonconvex penalties. We develop a first-order algorithm based on the alternating direction method framework to solve the nonconvex optimization problem efficiently. The convergence of this algorithm has been proved. Further, the statistical properties of the new estimators have been analyzed for generalized nonconvex penalties. Moreover, an extension of this algorithm to covariance estimation from sketched measurements has been considered. The performance of the new estimators has been demonstrated by both a simulation study and a gene clustering example for tumor tissues. Code for the proposed estimators is available at https://github.com/FWen/Nonconvex-PDLCE.git.
    Comment: 15 pages, 8 figures
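    The sketch below shows an alternating-direction iteration of the kind described, for min 0.5*||Sigma - S||_F^2 + lam*||off-diag(Sigma)||_1 subject to Sigma >= eps*I, where S is the sample covariance. A plain l1 off-diagonal penalty stands in for the paper's generalized nonconvex penalties, and the splitting and parameters are illustrative rather than the authors' exact algorithm.

import numpy as np

def pd_sparse_cov_admm(S, lam, eps=1e-4, rho=1.0, iters=300):
    """ADMM-style sketch: sparse thresholding step + eigenvalue projection onto {Theta >= eps*I}."""
    p = S.shape[0]
    Sigma, Theta, U = S.copy(), S.copy(), np.zeros_like(S)
    off = ~np.eye(p, dtype=bool)                        # penalize off-diagonal entries only
    for _ in range(iters):
        V = (S + rho * (Theta - U)) / (1.0 + rho)       # combine data-fit and consensus terms
        Sigma = V.copy()
        Sigma[off] = np.sign(V[off]) * np.maximum(np.abs(V[off]) - lam / (1.0 + rho), 0)
        w, Q = np.linalg.eigh(Sigma + U)                # project onto the positive-definite set
        Theta = (Q * np.maximum(w, eps)) @ Q.T
        U = U + Sigma - Theta                           # dual update
    return Theta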