246 research outputs found

    A Nonconvex Nonsmooth Regularization Method for Compressed Sensing and Low-Rank Matrix Completion

    In this paper, nonconvex and nonsmooth models for compressed sensing (CS) and low-rank matrix completion (MC) are studied. The problem is formulated as a nonconvex regularized least squares optimization problem, in which the l0-norm and the rank function are replaced by the l1-norm and the nuclear norm, respectively, with a nonconvex penalty function added. An alternating minimization scheme is developed, and it is proved that the alternating algorithm generates a subsequence converging to a critical point. NSP- and RIP-based conditions for stable recovery guarantees are also analysed for the nonconvex regularized CS and MC problems, respectively. Finally, the performance of the proposed method is demonstrated through experimental results. Comment: 19 pages, 4 figures
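
    A minimal sketch of the kind of alternating scheme the abstract describes, here for the compressed-sensing side with a plain l1 splitting surrogate; the paper's specific nonconvex penalty is not reproduced, and the function names, parameters, and test data below are illustrative assumptions.

    import numpy as np

    def soft_threshold(v, t):
        # proximal operator of t*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def alternating_cs(A, b, lam=0.1, mu=1.0, iters=200):
        # alternately minimize 0.5||Ax-b||^2 + lam*||z||_1 + 0.5*mu*||x-z||^2 over x and z
        n = A.shape[1]
        x, z = np.zeros(n), np.zeros(n)
        G = A.T @ A + mu * np.eye(n)               # x-subproblem: (A^T A + mu I) x = A^T b + mu z
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(G, Atb + mu * z)   # quadratic (least-squares) step
            z = soft_threshold(x, lam / mu)        # proximal (thresholding) step
        return x

    # usage on a small synthetic instance
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    x_hat = alternating_cs(A, b)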

    A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning

    In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics and machine learning. To induce sparsity and/or low-rankness, the ℓ1 norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the ℓ1 and nuclear norms are convenient, since the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization based sparse and low-rank recovery has attracted considerable interest, and it is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields of signal processing, statistics and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), large covariance and inverse covariance matrix estimation, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications, and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git. Comment: 22 pages
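
    To make the bias point concrete, the following is a small illustration of one commonly studied nonconvex penalty, the minimax concave penalty (MCP), whose proximal (firm-thresholding) map leaves large coefficients untouched while the l1 prox shrinks everything; the parameter values are illustrative assumptions, not taken from the survey.

    import numpy as np

    def prox_l1(v, lam):
        # soft thresholding: prox of lam*|.|, which biases large entries by lam
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def prox_mcp(v, lam, gamma=3.0):
        # firm thresholding: prox of the MCP with parameters (lam, gamma), gamma > 1;
        # entries with |v| > gamma*lam are left untouched, removing the l1 bias
        v = np.asarray(v, dtype=float)
        return np.where(np.abs(v) <= gamma * lam,
                        gamma / (gamma - 1.0) * prox_l1(v, lam),
                        v)

    v = np.array([-4.0, -0.5, 0.2, 1.5, 6.0])
    print(prox_l1(v, 1.0))   # shrinks every surviving entry by 1
    print(prox_mcp(v, 1.0))  # keeps -4 and 6 exactly, zeroes the small entries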

    From Group Sparse Coding to Rank Minimization: A Novel Denoising Model for Low-level Image Restoration

    Recently, low-rank matrix recovery theory has emerged as a significant advance for various image processing problems. Meanwhile, group sparse coding (GSC) theory has led to great successes in image restoration (IR), where each group possesses a low-rank property. In this paper, we propose a novel low-rank minimization based denoising model for IR tasks from the perspective of GSC, and put forward an important connection between our denoising model and the rank minimization problem. To overcome the bias problem caused by convex nuclear norm minimization (NNM) for rank approximation, a more generalized and flexible rank relaxation function is employed, namely a weighted nonconvex relaxation. Accordingly, an efficient iteratively reweighted algorithm is proposed to handle the resulting minimization problem, combined with the popular L_(1/2) and L_(2/3) thresholding operators. Finally, our proposed denoising model is applied to IR problems via an alternating direction method of multipliers (ADMM) strategy. Typical IR experiments on image compressive sensing (CS), inpainting, deblurring and impulsive noise removal demonstrate that our proposed method achieves significantly higher PSNR/FSIM values than many relevant state-of-the-art methods. Comment: Accepted by Signal Processing
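
    A rough sketch of an iteratively reweighted singular-value shrinkage step in the spirit of the weighted nonconvex rank relaxation mentioned above; it is not the paper's relaxation function or its L_(1/2)/L_(2/3) operators, and all names and parameters are illustrative.

    import numpy as np

    def reweighted_svt(Y, lam=1.0, inner_iters=3, eps=1e-6):
        # denoise a (grouped-patch) matrix Y by reweighted singular value thresholding;
        # small singular values receive large weights and are shrunk more aggressively
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_hat = s.copy()
        for _ in range(inner_iters):
            w = 1.0 / (s_hat + eps)               # reweighting from the current estimate
            s_hat = np.maximum(s - lam * w, 0.0)  # weighted soft thresholding
        return (U * s_hat) @ Vt

    # usage on a synthetic low-rank-plus-noise group matrix
    rng = np.random.default_rng(1)
    L = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))  # rank-3 signal
    Y = L + 0.1 * rng.standard_normal((30, 20))                      # noisy observation
    X_hat = reweighted_svt(Y, lam=0.5)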

    Basis Pursuit Denoise with Nonsmooth Constraints

    Level-set optimization formulations with data-driven constraints minimize a regularization functional subject to matching observations to a given error level. These formulations are widely used, particularly for matrix completion and sparsity promotion in data interpolation and denoising. The misfit level is typically measured in the l2 norm or other smooth metrics. In this paper, we present a new flexible algorithmic framework that targets nonsmooth level-set constraints, including L1, Linf, and even L0 norms. These constraints give greater flexibility for modeling deviations in observations and denoising, and have a significant impact on the solution. Measuring error in the L1 and L0 norms makes the result more robust to large outliers, while matching many observations exactly. We demonstrate the approach for basis pursuit denoise (BPDN) problems as well as for extensions of BPDN to matrix factorization, with applications to interpolation and denoising of 5D seismic data. The new methods are particularly promising for seismic applications, where the amplitude in the data varies significantly and measurement noise in low-amplitude regions can wreak havoc with standard Gaussian error models. Comment: 11 pages, 10 figures
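
    One ingredient such nonsmooth level-set constraints require is a Euclidean projection onto the corresponding ball; the sketch below shows the standard sorting-based projection onto an L1 ball, which zeroes many residual entries (i.e., matches many observations exactly). It is an illustrative building block, not the authors' solver, and the variable names and sigma value are assumptions.

    import numpy as np

    def project_l1_ball(v, sigma):
        # Euclidean projection of v onto {r : ||r||_1 <= sigma}
        if np.sum(np.abs(v)) <= sigma:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]              # sorted magnitudes, descending
        css = np.cumsum(u)
        ks = np.arange(1, v.size + 1)
        rho = np.max(ks[u - (css - sigma) / ks > 0])
        theta = (css[rho - 1] - sigma) / rho      # shrinkage level
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    r = np.array([3.0, -0.2, 0.1, -2.5, 0.05])
    print(project_l1_ball(r, sigma=1.0))  # most small entries become exactly zero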

    A three-operator splitting algorithm for nonconvex sparsity regularization

    Sparsity regularization has been widely applied in many fields, such as signal and image processing and machine learning. In this paper, we consider nonconvex minimization problems involving three terms, arising in applications such as sparse signal recovery and low-rank matrix recovery. We employ the three-operator splitting proposed by Davis and Yin (DYS) to solve the resulting, possibly nonconvex, problems and develop the convergence theory for this three-operator splitting algorithm in the nonconvex case. We show that if the step size is chosen below a computable threshold, then the whole sequence converges to a stationary point. By defining a new decreasing energy function associated with the DYS method, we establish global convergence of the whole sequence and a local convergence rate under the additional assumption that this energy function is a Kurdyka-Łojasiewicz function. We also provide sufficient conditions for the boundedness of the generated sequence. Finally, numerical experiments are conducted to compare the DYS algorithm with classical efficient algorithms for sparse signal recovery and low-rank matrix completion. The numerical results indicate that the DYS method outperforms the existing methods for these specific applications. Comment: 26 pages. Submitted
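
    The Davis-Yin splitting (DYS) iteration itself is short; the sketch below states it for min f(x) + g(x) + h(x) with h smooth, instantiated on a simple convex sparse-recovery example. The step-size rule, test data, and choice of f, g, h are illustrative assumptions, not the paper's settings.

    import numpy as np

    def dys(prox_f, prox_g, grad_h, z0, gamma=0.5, iters=300):
        # Davis-Yin three-operator splitting for min f + g + h, h smooth
        z = z0.copy()
        for _ in range(iters):
            xb = prox_f(z, gamma)
            xa = prox_g(2.0 * xb - z - gamma * grad_h(xb), gamma)
            z = z + (xa - xb)
        return xb

    # example: f = lam*||.||_1, g = nonnegativity indicator, h = 0.5*||Ax - b||^2
    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 80))
    x_true = np.zeros(80); x_true[:4] = 1.0
    b = A @ x_true
    lam = 0.1
    prox_f = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g * lam, 0.0)
    prox_g = lambda v, g: np.maximum(v, 0.0)
    grad_h = lambda x: A.T @ (A @ x - b)
    x_hat = dys(prox_f, prox_g, grad_h, np.zeros(80),
                gamma=1.0 / np.linalg.norm(A, 2) ** 2)  # step below 2/L, L = ||A||^2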

    A Unified Framework for Sparse Relaxed Regularized Regression: SR3

    Regularized regression problems are ubiquitous in statistical modeling, signal processing, and machine learning. Sparse regression in particular has been instrumental in scientific model discovery, including compressed sensing applications, variable selection, and high-dimensional analysis. We propose a broad framework for sparse relaxed regularized regression, called SR3. The key idea is to solve a relaxation of the regularized problem, which has three advantages over the state-of-the-art: (1) solutions of the relaxed problem are superior with respect to errors, false positives, and conditioning; (2) the relaxation allows extremely fast algorithms for both convex and nonconvex formulations; and (3) the methods apply to composite regularizers such as total variation (TV) and its nonconvex variants. We demonstrate the advantages of SR3 (computational efficiency, higher accuracy, faster convergence rates, greater flexibility) across a range of regularized regression problems with synthetic and real data, including applications in compressed sensing, LASSO, matrix completion, TV regularization, and group sparsity. To promote reproducible research, we also provide a companion MATLAB package that implements these examples. Comment: 19 pages, 14 figures
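
    A minimal sketch of the SR3-style relaxation, min over (x, w) of 0.5*||Ax - b||^2 + lam*R(w) + 0.5*kappa*||x - w||^2, alternating a relaxed least-squares step in x with a prox step in w. Here R is taken to be the l0 penalty as one nonconvex instance; the parameters and names are illustrative rather than the companion package's implementation.

    import numpy as np

    def sr3_l0(A, b, lam=0.05, kappa=1.0, iters=200):
        n = A.shape[1]
        w = np.zeros(n)
        G = A.T @ A + kappa * np.eye(n)         # x-update system is fixed across iterations
        Atb = A.T @ b
        thresh = np.sqrt(2.0 * lam / kappa)     # hard-threshold level for the l0 prox
        for _ in range(iters):
            x = np.linalg.solve(G, Atb + kappa * w)     # relaxed least-squares step
            w = np.where(np.abs(x) > thresh, x, 0.0)    # prox of (lam/kappa)*||.||_0
        return x, w

    rng = np.random.default_rng(3)
    A = rng.standard_normal((50, 120))
    x_true = np.zeros(120); x_true[:6] = rng.standard_normal(6)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat, w_hat = sr3_l0(A, b)   # w_hat is the sparse surrogate variable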

    Proximal linearized iteratively reweighted least squares for a class of nonconvex and nonsmooth problems

    For solving a wide class of nonconvex and nonsmooth problems, we propose a proximal linearized iteratively reweighted least squares (PL-IRLS) algorithm. We first approximate the original problem by smoothing methods, and then rewrite the approximated problem as an auxiliary problem by introducing new variables. PL-IRLS is built on solving the auxiliary problem via the proximal linearization technique and the iteratively reweighted least squares (IRLS) method, and has remarkable computational advantages. We show that PL-IRLS can be extended to solve more general nonconvex and nonsmooth problems by adjusting generalized parameters, and also to solve nonconvex and nonsmooth problems with two or more blocks of variables. Theoretically, with the help of the Kurdyka-Łojasiewicz property, we prove that each bounded sequence generated by PL-IRLS globally converges to a critical point of the approximated problem. To the best of our knowledge, this is the first global convergence result for applying the IRLS idea to nonconvex and nonsmooth problems. Finally, we apply PL-IRLS to three representative nonconvex and nonsmooth problems in sparse signal recovery and low-rank matrix recovery and obtain new globally convergent algorithms. Comment: 23 pages
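
    For orientation, the sketch below shows a basic IRLS loop for a smoothed l_p regularizer, min over x of 0.5*||Ax - b||^2 + lam*sum_i (x_i^2 + eps)^(p/2); the proximal linearization and block structure of PL-IRLS are not reproduced, and the parameters are illustrative assumptions.

    import numpy as np

    def irls_lp(A, b, lam=0.1, p=0.5, eps=1e-4, iters=50):
        n = A.shape[1]
        x = np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        for _ in range(iters):
            w = (x ** 2 + eps) ** (p / 2.0 - 1.0)   # weights from the current iterate
            # weighted least-squares step: (A^T A + lam*p*diag(w)) x = A^T b
            x = np.linalg.solve(AtA + lam * p * np.diag(w), Atb)
        return x

    rng = np.random.default_rng(4)
    A = rng.standard_normal((60, 150))
    x_true = np.zeros(150); x_true[:7] = rng.standard_normal(7)
    b = A @ x_true
    x_hat = irls_lp(A, b)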

    Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview

    Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings. Comment: Invited overview article
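
    As one concrete instance of the simple iterative methods such overviews analyze, the sketch below runs vanilla gradient descent on the factorized matrix-completion objective min over (U, V) of 0.5*||P_Omega(U V^T - M)||_F^2; the random initialization, step size, and data are illustrative assumptions (the literature typically pairs this with a spectral initialization).

    import numpy as np

    def factored_gd(M_obs, mask, rank=4, step=0.2, iters=1000):
        m, n = M_obs.shape
        rng = np.random.default_rng(0)
        U = 0.1 * rng.standard_normal((m, rank))   # small random init (spectral init is common)
        V = 0.1 * rng.standard_normal((n, rank))
        for _ in range(iters):
            R = mask * (U @ V.T - M_obs)           # residual on observed entries
            U, V = U - step * (R @ V), V - step * (R.T @ U)   # simultaneous gradient steps
        return U @ V.T

    rng = np.random.default_rng(5)
    M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 40))
    M /= np.linalg.norm(M, 2)                      # rescale so the fixed step size is stable
    mask = (rng.random(M.shape) < 0.6).astype(float)
    X_hat = factored_gd(mask * M, mask, rank=3)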

    Linearized ADMM for Non-convex Non-smooth Optimization with Convergence Analysis

    Linearized alternating direction method of multipliers (ADMM), as an extension of ADMM, has been widely used to solve linearly constrained problems in signal processing, machine learning, communications, and many other fields. Despite its broad applications in nonconvex optimization, for a great number of nonconvex and nonsmooth objective functions its theoretical convergence guarantee is still an open problem. In this paper, we propose a two-block linearized ADMM and a multi-block parallel linearized ADMM for problems with nonconvex and nonsmooth objectives. Mathematically, we show that the algorithms converge for a broader class of objective functions under less strict assumptions than in previous works. Furthermore, our proposed algorithm can update coupled variables in parallel and works for less restrictive nonconvex problems, where the traditional ADMM may have difficulties in solving the subproblems. Comment: 29 pages, 2 tables, 2 figures
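
    A small sketch of a two-block linearized ADMM update on a convex stand-in problem, min over x of 0.5*||x - b||^2 + lam*||Dx||_1 with the split z = Dx: the x-step linearizes the augmented-Lagrangian coupling term around the current iterate. The problem instance and parameters are illustrative assumptions; the paper's nonconvex/nonsmooth setting and multi-block parallel variant are not reproduced.

    import numpy as np

    def linearized_admm(D, b, lam=0.5, rho=1.0, iters=300):
        mu = rho * np.linalg.norm(D, 2) ** 2 + 1e-6   # proximal weight >= rho*||D||^2
        x = b.copy()
        z = D @ x
        u = np.zeros_like(z)
        for _ in range(iters):
            r = D @ x - z + u
            x = (b + mu * x - rho * (D.T @ r)) / (1.0 + mu)   # linearized x-update
            Dx = D @ x
            z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # l1 prox
            u = u + Dx - z                                     # dual update
        return x

    # usage: 1-D total-variation-style denoising with D = first-difference operator
    n = 100
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    signal = np.concatenate([np.zeros(50), np.ones(50)])
    b = signal + 0.1 * np.random.default_rng(6).standard_normal(n)
    x_hat = linearized_admm(D, b, lam=0.3)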

    Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm

    Matrix completion has attracted much interest in the past decade in machine learning and computer vision. For low-rank promotion in matrix completion, the nuclear norm penalty is convenient due to its convexity but suffers from a bias problem. Recently, various algorithms using nonconvex penalties have been proposed, among which the proximal gradient descent (PGD) algorithm is one of the most efficient and effective. For the nonconvex PGD algorithm, whether it converges to a local minimizer, and at what rate, have been unclear. This work provides a nontrivial analysis of the PGD algorithm in the nonconvex case. Besides convergence to a stationary point for a generalized nonconvex penalty, we provide a deeper analysis of a popular and important class of nonconvex penalties that have discontinuous thresholding functions. For such penalties, we establish finite rank convergence, convergence to restricted strictly local minimizers, and an eventually linear convergence rate of the PGD algorithm. Meanwhile, convergence to a local minimizer is proved for the hard-thresholding penalty. Our result is the first to show that nonconvex regularized matrix completion has only restricted strictly local minimizers, and that the PGD algorithm can converge to such minimizers with an eventually linear rate under certain conditions. Illustrations of the PGD algorithm via experiments are also provided. Code is available at https://github.com/FWen/nmc. Comment: 14 pages, 7 figures
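
    A minimal sketch of the PGD iteration for matrix completion with a discontinuous (hard) singular-value thresholding step, one instance of the penalty class analyzed above; the threshold level, step size, and data are illustrative assumptions, not the code in the linked repository.

    import numpy as np

    def svd_hard_threshold(Y, tau):
        # prox-like step for an l0-type penalty on singular values: keep those above tau
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.where(s > tau, s, 0.0)) @ Vt

    def pgd_mc(M_obs, mask, tau=1.0, iters=200):
        X = np.zeros_like(M_obs)
        for _ in range(iters):
            grad = mask * (X - M_obs)                # gradient of 0.5*||P_Omega(X - M)||_F^2
            X = svd_hard_threshold(X - grad, tau)    # thresholding step (unit step size, L = 1)
        return X

    rng = np.random.default_rng(7)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
    mask = (rng.random(M.shape) < 0.5).astype(float)
    X_hat = pgd_mc(mask * M, mask, tau=1.0)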