
    From Group Sparse Coding to Rank Minimization: A Novel Denoising Model for Low-level Image Restoration

    Recently, low-rank matrix recovery theory has emerged as a significant advance for various image processing problems. Meanwhile, group sparse coding (GSC) theory has led to great successes in image restoration (IR), where each group of similar patches exhibits a low-rank property. In this paper, we propose a novel low-rank minimization based denoising model for IR tasks from the perspective of GSC, and we establish an important connection between our denoising model and the rank minimization problem. To overcome the bias caused by convex nuclear norm minimization (NNM) for rank approximation, a more generalized and flexible rank relaxation function is employed, namely a weighted nonconvex relaxation. Accordingly, an efficient iteratively reweighted algorithm is proposed to handle the resulting minimization problem, combined with the popular L_(1/2) and L_(2/3) thresholding operators. Finally, our proposed denoising model is applied to IR problems via an alternating direction method of multipliers (ADMM) strategy. Typical IR experiments on image compressive sensing (CS), inpainting, deblurring and impulsive noise removal demonstrate that our proposed method achieves significantly higher PSNR/FSIM values than many relevant state-of-the-art methods. Comment: Accepted by Signal Processing
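
    As a rough illustration of the weighted rank-relaxation idea, the sketch below shrinks the singular values of a noisy group matrix with per-value weights and refreshes the weights each pass so that small singular values are penalized more. This is a minimal sketch, not the authors' exact algorithm: the reweighting rule w_i = lam/(sigma_i + eps) and all function names are our own assumptions.

        import numpy as np

        def weighted_svt(Y, w):
            """Shrink each singular value of Y by its own nonnegative weight."""
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return (U * np.maximum(s - w, 0.0)) @ Vt

        def reweighted_denoise(Y, lam=1.0, eps=1e-8, iters=5):
            """Iteratively reweighted low-rank approximation of a noisy group
            matrix Y; weights lam/(sigma + eps) mimic a nonconvex rank surrogate
            by shrinking small singular values more than large ones."""
            X = Y.copy()
            for _ in range(iters):
                s = np.linalg.svd(X, compute_uv=False)
                w = lam / (s + eps)
                X = weighted_svt(Y, w)
            return X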

    A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning

    In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics and machine learning. To induce sparsity and/or low-rankness, the ℓ_1 norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the ℓ_1 norm and nuclear norm are convenient, as the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization based sparse and low-rank recovery has attracted considerable interest, and it is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields in signal processing, statistics and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), estimation of large covariance and inverse covariance matrices, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git. Comment: 22 pages
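
    The shrinkage bias that motivates nonconvex penalties can be seen by comparing the proximal operators of the ℓ_1 and ℓ_0 penalties; a minimal sketch (function names are ours, not from the survey):

        import numpy as np

        def soft_threshold(x, lam):
            # Proximal operator of lam * ||x||_1: shrinks every surviving entry
            # by lam, biasing large coefficients toward zero.
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def hard_threshold(x, lam):
            # Proximal operator of (lam**2 / 2) * ||x||_0: keeps large entries
            # intact, avoiding the shrinkage bias at the price of nonconvexity.
            return np.where(np.abs(x) > lam, x, 0.0)

        x = np.array([0.3, 1.5, -4.0])
        print(soft_threshold(x, 1.0))  # [ 0.   0.5 -3. ]  large entries shrunk
        print(hard_threshold(x, 1.0))  # [ 0.   1.5 -4. ]  large entries intact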

    A Benchmark for Sparse Coding: When Group Sparsity Meets Rank Minimization

    Sparse coding has achieved great success in various image processing tasks. However, a benchmark to measure the sparsity of an image patch/group is missing, since sparse coding is essentially an NP-hard problem. This work attempts to fill that gap from the perspective of rank minimization. For more details, please see the manuscript. Comment: arXiv admin note: text overlap with arXiv:1611.0898

    Nonconvex Nonsmooth Low-Rank Minimization for Generalized Image Compressed Sensing via Group Sparse Representation

    Group sparse representation (GSR) based methods have led to great successes in various image recovery tasks, which can be converted into a low-rank matrix minimization problem. As a widely used convex surrogate of the rank function, the nuclear norm usually leads to an over-shrinking problem, since the standard soft-thresholding operator shrinks all singular values equally. To improve the performance of traditional sparse representation based image compressive sensing (CS), we propose a generalized CS framework based on the GSR model, which leads to a nonconvex nonsmooth low-rank minimization problem. The popular L_2 norm and an M-estimator are employed to fit the data in the standard and robust image CS problems, respectively. To better approximate the rank of the group matrix, a family of nuclear norms is employed to address the over-shrinking problem. Moreover, we propose a flexible and effective iterative weighting strategy to control the weight and contribution of each singular value. We then develop an iteratively reweighted nuclear norm algorithm for our generalized framework via an alternating direction method of multipliers framework, namely GSR-AIR. Experimental results demonstrate that our proposed CS framework achieves favorable reconstruction performance compared with current state-of-the-art methods and that the robust CS framework suppresses outliers effectively. Comment: This paper has been submitted to the Journal of the Franklin Institute. arXiv admin note: substantial text overlap with arXiv:1903.0978
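
    An M-estimator data-fitting step of this kind is commonly handled by iteratively reweighted least squares. The sketch below uses the Huber estimator; the paper does not specify this exact update, and huber_weights/robust_cs_step are hypothetical names:

        import numpy as np

        def huber_weights(r, delta):
            # IRLS weights for the Huber M-estimator: weight 1 for small
            # residuals, down-weighted (delta/|r|) for outliers beyond delta.
            a = np.abs(r)
            return np.where(a <= delta, 1.0, delta / a)

        def robust_cs_step(A, y, x, delta=1.0):
            # One reweighted least-squares update of min sum_i huber(y_i - (Ax)_i):
            # solve the weighted normal equations at the current residual weights.
            r = y - A @ x
            W = huber_weights(r, delta)
            return np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * y))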

    Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration

    Since the matrix formed by nonlocal similar patches in a natural image is of low rank, nuclear norm minimization (NNM) has been widely used in various image processing studies. Nonetheless, the nuclear norm based convex surrogate of the rank function usually over-shrinks the rank components and treats different components equally, and thus may produce a result far from the optimum. To alleviate these limitations of the nuclear norm, in this paper we propose a new method for image restoration via non-convex weighted Lp nuclear norm minimization (NCW-NNM), which is able to more accurately enforce image structural sparsity and self-similarity simultaneously. To make the proposed model tractable and robust, the alternating direction method of multipliers (ADMM) is adopted to solve the associated non-convex minimization problem. Experimental results on various types of image restoration problems, including image deblurring, image inpainting and image compressive sensing (CS) recovery, demonstrate that the proposed method outperforms many current state-of-the-art methods in both objective and perceptual quality. Comment: arXiv admin note: text overlap with arXiv:1611.0898
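
    The kind of ADMM splitting referred to here can be sketched generically for min_x f(x) + g(z) subject to x = z, with the proximal maps as plug-ins. This is the textbook scaled-dual form, not necessarily the paper's exact scheme:

        import numpy as np

        def admm(prox_f, prox_g, x0, rho=1.0, iters=100):
            """Generic scaled-form ADMM for min_x f(x) + g(z) s.t. x = z.
            For image restoration, f would be the data-fidelity term and g the
            (weighted) low-rank penalty on nonlocal patch groups."""
            x = x0.copy()
            z = x0.copy()
            u = np.zeros_like(x0)              # scaled dual variable
            for _ in range(iters):
                x = prox_f(z - u, 1.0 / rho)   # x-update: data fidelity
                z = prox_g(x + u, 1.0 / rho)   # z-update: low-rank prior
                u = u + x - z                  # dual ascent on x = z
            return x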

    Global hard thresholding algorithms for joint sparse image representation and denoising

    Sparse coding of images is traditionally done by cutting them into small patches and representing each patch individually over some dictionary, given a pre-determined number of nonzero coefficients to use for each patch. Lacking a way to effectively distribute a total number (or global budget) of nonzero coefficients across all patches, current sparse recovery algorithms distribute the global budget equally across all patches despite the wide range of differences in structural complexity among them. In this work we propose a new framework for joint sparse representation and recovery of all image patches simultaneously. We also present two novel global hard thresholding algorithms, based on the notion of variable splitting, for solving the joint sparse model. Experiments using both synthetic and real data show the effectiveness of the proposed framework for sparse image representation and denoising tasks. Additionally, a time complexity analysis of the proposed algorithms indicates high scalability of both algorithms, making them favorable for use on large megapixel images.
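
    The global-budget idea can be illustrated by a joint hard threshold that keeps the B largest-magnitude coefficients across all patches at once, rather than a fixed k per patch. This is a sketch under our own naming, not the paper's two variable-splitting algorithms:

        import numpy as np

        def global_hard_threshold(C, budget):
            """Keep only the `budget` largest-magnitude coefficients across the
            whole coefficient matrix C (rows = patches, columns = dictionary
            atoms), so complex patches can claim more nonzeros than flat ones.
            Ties at the threshold may slightly exceed the budget."""
            flat = np.abs(C).ravel()
            if budget >= flat.size:
                return C
            thresh = np.partition(flat, -budget)[-budget]
            return np.where(np.abs(C) >= thresh, C, 0.0)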

    Sparse Optimization Problem with s-difference Regularization

    In this paper, an s-difference type regularization for the sparse recovery problem is proposed, defined as the difference between a standard penalty function R(x) and its corresponding truncated function R(x_s). First, we show the conditions under which the L_0-constrained problem and the unconstrained s-difference penalty regularized problem are equivalent. Next, we choose the forward-backward splitting (FBS) method to solve the nonconvex regularized problem, and further derive closed-form solutions for the proximal mapping of the s-difference regularization with some commonly used choices of R(x), which makes FBS easy and fast. We also show that any cluster point of the sequence generated by the proposed algorithm is a stationary point. Numerical experiments demonstrate the efficiency of the proposed s-difference regularization in comparison with some other existing penalty functions. Comment: 20 pages, 5 figures
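
    For the common choice R(x) = ||x||_1, the s-difference penalty reduces to the sum of the n - s smallest magnitudes of x, and a natural proximal step keeps the s largest-magnitude entries and soft-thresholds the rest. This is a hedged sketch of that idea, not necessarily the paper's exact closed-form operator:

        import numpy as np

        def s_difference_penalty(x, s):
            # R(x) - R(x_s) with R = l1 and x_s the best s-term approximation
            # of x: the sum of the n - s smallest magnitudes of x.
            a = np.sort(np.abs(x))
            return a[:-s].sum() if s > 0 else a.sum()

        def prox_s_difference(x, s, lam):
            # Sketch of the proximal map: leave the s largest-magnitude entries
            # untouched and soft-threshold the remaining ones by lam.
            idx = np.argsort(np.abs(x))[::-1]
            out = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
            out[idx[:s]] = x[idx[:s]]
            return out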

    Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset

    Recent research on problem formulations based on decomposition into low-rank plus sparse matrices shows a suitable framework to separate moving objects from the background. The most representative problem formulation is Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit (PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix. However, similar robust implicit or explicit decompositions can be made in the following problem formulations: Robust Non-negative Matrix Factorization (RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal of these related problem formulations is to obtain, explicitly or implicitly, a decomposition into a low-rank matrix plus additive matrices. In this context, this work aims to initiate a rigorous and comprehensive review of similar problem formulations in robust subspace learning and tracking based on decomposition into low-rank plus additive matrices, for testing and ranking existing algorithms for background/foreground separation. For this, we first provide a preliminary review of recent developments in the different problem formulations, which allows us to define a unified view that we call Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we carefully examine each method in each robust subspace learning/tracking framework, covering its decomposition, loss function, optimization problem and solver. Furthermore, we investigate whether incremental algorithms and real-time implementations can be achieved for background/foreground separation. Finally, experimental results on a large-scale dataset called Background Models Challenge (BMC 2012) show the comparative performance of 32 different robust subspace learning/tracking methods. Comment: 121 pages, 5 figures, submitted to Computer Science Review. arXiv admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297, arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805, arXiv:1403.8067 by other authors. Computer Science Review, November 201
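
    The PCP decomposition at the core of this family can be sketched with the standard inexact augmented Lagrangian iteration. This is a textbook version with a common heuristic step size, not any specific method from the review:

        import numpy as np

        def soft(X, tau):
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return (U * np.maximum(s - tau, 0.0)) @ Vt

        def rpca_pcp(D, lam=None, mu=None, iters=200):
            """Basic inexact-ALM iteration for Principal Component Pursuit:
            decompose D into low-rank L (background) + sparse S (foreground)."""
            m, n = D.shape
            if lam is None:
                lam = 1.0 / np.sqrt(max(m, n))
            if mu is None:
                mu = 0.25 * m * n / np.abs(D).sum()  # common heuristic step size
            L = np.zeros_like(D)
            S = np.zeros_like(D)
            Y = np.zeros_like(D)                     # Lagrange multiplier
            for _ in range(iters):
                L = svt(D - S + Y / mu, 1.0 / mu)    # low-rank update
                S = soft(D - L + Y / mu, lam / mu)   # sparse update
                Y = Y + mu * (D - L - S)             # dual update
            return L, S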

    A Memristor-Based Optimization Framework for AI Applications

    Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at the two terminals of memristors makes them promising candidates for performing matrix-vector multiplications and solving systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used to efficiently solve optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merit of memristor crossbars with the advantages of an operator splitting method, the alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The capability of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, the implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based PI method can further be applied to principal component analysis (PCA). The use of memristor crossbars yields a significant speed-up in computation and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence (AI).
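
    The PI method maps naturally onto the crossbar's analog matrix-vector product. Below is a plain software simulation of power iteration for the dominant eigenpair; the hardware mapping itself is not modeled here:

        import numpy as np

        def power_iteration(A, iters=100, tol=1e-10):
            """Power iteration for the dominant eigenpair of A. On a memristor
            crossbar, the A @ v product would be computed in analog by the
            array; here it is simulated with numpy."""
            v = np.random.default_rng(0).standard_normal(A.shape[0])
            v /= np.linalg.norm(v)
            lam = 0.0
            for _ in range(iters):
                w = A @ v                  # crossbar matrix-vector product
                lam_new = v @ w            # Rayleigh quotient estimate
                v = w / np.linalg.norm(w)
                if abs(lam_new - lam) < tol:
                    break
                lam = lam_new
            return lam, v

    Applied to a sample covariance matrix, the same loop yields the leading principal component, which is how the PI method extends to PCA.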

    Speeding Up Latent Variable Gaussian Graphical Model Estimation via Nonconvex Optimizations

    We study the estimation of the latent variable Gaussian graphical model (LVGGM), where the precision matrix is the superposition of a sparse matrix and a low-rank matrix. In order to speed up the estimation of the sparse plus low-rank components, we propose a sparsity-constrained maximum likelihood estimator based on matrix factorization, and an efficient alternating gradient descent algorithm with hard thresholding to solve it. Our algorithm is orders of magnitude faster than convex relaxation based methods for LVGGM. In addition, we prove that our algorithm is guaranteed to converge linearly to the unknown sparse and low-rank components up to the optimal statistical precision. Experiments on both synthetic and genomic data demonstrate the superiority of our algorithm over state-of-the-art algorithms and corroborate our theory. Comment: 29 pages, 5 figures, 3 tables
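
    One alternating step of the factorization idea can be sketched as a gradient step on the sparse part followed by hard thresholding, plus a gradient step on the low-rank factor. This is a sketch under the convention Omega = S + Z Z^T; the paper's exact updates, step sizes and sign conventions may differ:

        import numpy as np

        def lvggm_step(Sig_hat, S, Z, k, eta=0.01):
            """One alternating update for the Gaussian negative log-likelihood
            tr(Sig_hat @ Omega) - logdet(Omega) with Omega = S + Z Z^T
            (a sketch of the factorization idea, not the paper's algorithm)."""
            Omega = S + Z @ Z.T
            G = Sig_hat - np.linalg.inv(Omega)   # gradient w.r.t. Omega
            S_new = S - eta * G                  # gradient step on sparse part
            # hard thresholding: keep only the k largest-magnitude entries
            thresh = np.partition(np.abs(S_new).ravel(), -k)[-k]
            S_new = np.where(np.abs(S_new) >= thresh, S_new, 0.0)
            Z_new = Z - eta * 2.0 * (G @ Z)      # gradient step on low-rank factor
            return S_new, Z_new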