
    The perturbation analysis of nonconvex low-rank matrix robust recovery

    In this paper, we bring forward a completely perturbed nonconvex Schatten $p$-minimization to address a model of completely perturbed low-rank matrix recovery. Based on the restricted isometry property (RIP), the paper generalizes the investigation to a complete perturbation model that considers not only noise but also perturbation, gives an RIP condition that guarantees recovery of the low-rank matrix, and derives the corresponding reconstruction error bound. In particular, the analysis reveals that, as $p$ decreases to $0$ and for $a>1$, in the complete perturbation and low-rank matrix setting, the condition approaches the optimal sufficient condition $\delta_{2r}<1$ (Recht et al., 2010). Numerical experiments are conducted to show the outperformance of the nonconvex Schatten $p$-minimization method compared with the convex nuclear norm minimization approach in the completely perturbed scenario.
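As an illustration only (not code from the paper), a minimal numpy sketch of the Schatten-$p$ quasi-norm shows why a small $p$ approximates the rank more tightly than the nuclear norm ($p=1$):

```python
import numpy as np

def schatten_p(X, p, tol=1e-10):
    """Schatten-p quasi-norm to the p-th power: sum of sigma_i^p over the
    singular values above a small numerical tolerance."""
    s = np.linalg.svd(X, compute_uv=False)
    s = s[s > tol]
    return np.sum(s ** p)

u = np.array([1.0, 2.0, 3.0])
X = np.outer(u, u)                  # rank-1, single singular value ||u||^2 = 14
print(schatten_p(X, 1.0))           # nuclear norm: 14.0
print(schatten_p(X, 0.1))           # 14**0.1 ~= 1.30, much closer to rank(X) = 1
```

For this rank-1 matrix, the value tends to $14^p \to 1 = \mathrm{rank}(X)$ as $p \to 0$, which is the intuition behind preferring small $p$.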

    The high-order block RIP for non-convex block-sparse compressed sensing

    This paper concentrates on the recovery, from linear measurements, of block-sparse signals, which are not only sparse but whose nonzero elements are also arranged in blocks (clusters) rather than arbitrarily distributed over the vector. We establish high-order sufficient conditions based on the block RIP that ensure the exact recovery of every block $s$-sparse signal in the noiseless case via the mixed $l_2/l_p$ minimization method, as well as stable and robust recovery when signals are not exactly block-sparse and measurements are corrupted by noise. Additionally, a lower bound on the necessary number of random Gaussian measurements is obtained for the condition to hold with overwhelming probability. Furthermore, numerical experiments demonstrate the performance of the proposed algorithm.
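To make the block-sparsity objective concrete, here is a minimal sketch (an illustration, not the paper's code) of the mixed $l_2/l_p$ quasi-norm, which sums the $p$-th powers of per-block Euclidean norms:

```python
import numpy as np

def mixed_l2_lp(x, block_size, p):
    """Mixed l2/lp quasi-norm (to the p-th power): sum over blocks of the
    p-th power of each block's Euclidean norm. Assumes len(x) is divisible
    by block_size."""
    blocks = x.reshape(-1, block_size)
    return np.sum(np.linalg.norm(blocks, axis=1) ** p)

x = np.zeros(12)
x[:3] = [3.0, 0.0, 4.0]             # one active block with l2 norm 5
print(mixed_l2_lp(x, 3, 1.0))       # 5.0
print(mixed_l2_lp(x, 3, 0.5))       # 5**0.5 ~= 2.24
```

Counting whole blocks rather than individual entries is what distinguishes this penalty from the plain $l_p$ quasi-norm.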

    Efficient and Robust Recovery of Signal and Image in Impulsive Noise via $\ell_1-\alpha\ell_2$ Minimization

    In this paper, we consider the efficient and robust reconstruction of signals and images via $\ell_1-\alpha\ell_2$ $(0<\alpha\leq 1)$ minimization in the impulsive noise case. To achieve this goal, we introduce two new models: the $\ell_1-\alpha\ell_2$ minimization with an $\ell_1$ constraint, called $\ell_1-\alpha\ell_2$-LAD, and the $\ell_1-\alpha\ell_2$ minimization with a Dantzig selector constraint, called $\ell_1-\alpha\ell_2$-DS. We first show that sparse or nearly sparse signals can be exactly or stably recovered via $\ell_1-\alpha\ell_2$ minimization under some conditions based on the restricted 1-isometry property ($\ell_1$-RIP). Second, for the $\ell_1-\alpha\ell_2$-LAD model, we introduce an unconstrained $\ell_1-\alpha\ell_2$ minimization model, denoted $\ell_1-\alpha\ell_2$-PLAD, and propose the $\ell_1-\alpha\ell_2$LA algorithm to solve it. Last, numerical experiments demonstrate that when the sensing matrix is ill-conditioned (i.e., the coherence of the matrix is larger than 0.99), the $\ell_1-\alpha\ell_2$LA method outperforms existing convex and nonconvex compressed sensing solvers for the recovery of sparse signals. For magnetic resonance imaging (MRI) reconstruction with impulsive noise, numerical experiments show that the $\ell_1-\alpha\ell_2$LA method performs better than state-of-the-art methods.
    Comment: arXiv admin note: text overlap with arXiv:1703.07952 by other authors
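As a quick illustration (not from the paper), the $\ell_1-\alpha\ell_2$ penalty itself is easy to compute; for $\alpha=1$ it vanishes exactly on 1-sparse vectors, which is why it promotes sparsity so strongly:

```python
import numpy as np

def l1_minus_alpha_l2(x, alpha=1.0):
    """The l1 - alpha*l2 penalty (0 < alpha <= 1). For alpha = 1 it is zero
    if and only if x has at most one nonzero entry."""
    return np.sum(np.abs(x)) - alpha * np.linalg.norm(x)

print(l1_minus_alpha_l2(np.array([0.0, 5.0, 0.0])))   # 0.0 for a 1-sparse vector
print(l1_minus_alpha_l2(np.array([3.0, 4.0])))        # 7 - 5 = 2.0
```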

    Matrix Completion via Nonconvex Regularization: Convergence of the Proximal Gradient Algorithm

    Matrix completion has attracted much interest in machine learning and computer vision over the past decade. For low-rank promotion in matrix completion, the nuclear norm penalty is convenient due to its convexity but suffers from a bias problem. Recently, various algorithms using nonconvex penalties have been proposed, among which the proximal gradient descent (PGD) algorithm is one of the most efficient and effective. For the nonconvex PGD algorithm, whether it converges to a local minimizer, and at what rate, has been unclear. This work provides a nontrivial analysis of the PGD algorithm in the nonconvex case. Besides convergence to a stationary point for a generalized nonconvex penalty, we provide a deeper analysis of a popular and important class of nonconvex penalties whose thresholding functions are discontinuous. For such penalties, we establish finite-rank convergence, convergence to a restricted strictly local minimizer, and an eventually linear convergence rate of the PGD algorithm. Meanwhile, convergence to a local minimizer is proved for the hard-thresholding penalty. Our result is the first to show that nonconvex regularized matrix completion has only restricted strictly local minimizers, and that the PGD algorithm can converge to such minimizers at an eventually linear rate under certain conditions. Experiments illustrating the PGD algorithm are also provided. Code is available at https://github.com/FWen/nmc.
    Comment: 14 pages, 7 figures
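A generic sketch of the scheme analyzed here (an illustration under simplifying assumptions, not the authors' released code): a gradient step on the observed entries followed by a discontinuous thresholding of singular values, the hard-thresholding case of the penalty class discussed above:

```python
import numpy as np

def svd_hard_threshold(X, tau):
    """Discontinuous thresholding: keep singular values above tau unchanged,
    zero out the rest (no shrinkage on the survivors)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.where(s > tau, s, 0.0)
    return (U * s) @ Vt

def pgd_complete(M, mask, tau, step=1.0, iters=100):
    """Generic PGD iteration for matrix completion: gradient step on the
    data-fit term over observed entries (mask), then the thresholding step."""
    X = np.zeros_like(M)
    for _ in range(iters):
        X = svd_hard_threshold(X - step * mask * (X - M), tau)
    return X

print(svd_hard_threshold(np.diag([10.0, 5.0, 0.1]), 1.0))  # ~ diag(10, 5, 0)
```

Note that the surviving singular values are untouched, which is exactly the unbiasedness that the nuclear norm's uniform shrinkage lacks.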

    A Simple Local Minimal Intensity Prior and An Improved Algorithm for Blind Image Deblurring

    Blind image deblurring is a long-standing challenging problem in image processing and low-level vision. Recently, sophisticated priors such as the dark channel prior, the extreme channel prior, and the local maximum gradient prior have shown promising effectiveness. However, these methods are computationally expensive. Meanwhile, since the subproblems involving these priors cannot be solved explicitly, approximate solutions are commonly used, which limits the best exploitation of their capability. To address these problems, this work first proposes a simplified sparsity prior of local minimal pixels, namely patch-wise minimal pixels (PMP). The PMP of clear images is much sparser than that of blurred ones, and hence is very effective in discriminating between clear and blurred images. Then, a novel algorithm is designed to efficiently exploit the sparsity of the PMP in deblurring. The new algorithm flexibly imposes sparsity-inducing regularization on the PMP under the MAP framework rather than directly using the half-quadratic splitting algorithm. It thereby avoids the non-rigorous approximate solutions of existing algorithms while being much more computationally efficient. Extensive experiments demonstrate that the proposed algorithm achieves better practical stability than state-of-the-art methods. In terms of deblurring quality, robustness, and computational efficiency, the new algorithm is superior to the state of the art. Code for reproducing the results of the new method is available at https://github.com/FWen/deblur-pmp.git.
    Comment: 14 pages, 16 figures
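Reading the PMP as the minimum intensity within each non-overlapping patch, a minimal numpy sketch (an interpretation for illustration, not the released code; the exact patch geometry in the paper may differ) is:

```python
import numpy as np

def patch_wise_minimal_pixels(img, patch=2):
    """Minimum intensity inside each non-overlapping patch x patch block.
    Assumes the image dimensions are divisible by the patch size."""
    h, w = img.shape
    blocks = img.reshape(h // patch, patch, w // patch, patch)
    return blocks.min(axis=(1, 3))

img = np.array([[1, 2, 9, 9],
                [3, 4, 9, 8],
                [0, 9, 9, 9],
                [9, 9, 7, 9]], dtype=float)
print(patch_wise_minimal_pixels(img))   # [[1. 8.], [0. 7.]]
```

Blur averages neighboring pixels, raising these local minima, so the PMP map of a blurred image is markedly less sparse than that of a sharp one.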

    Efficient Nonlinear Precoding for Massive MU-MIMO Downlink Systems with 1-Bit DACs

    The power consumption of digital-to-analog converters (DACs) constitutes a significant proportion of the total power consumption in a massive multiuser multiple-input multiple-output (MU-MIMO) base station (BS). Using 1-bit DACs can significantly reduce this power consumption. This paper addresses the precoding problem for the massive narrow-band MU-MIMO downlink system equipped with 1-bit DACs at the BS. In such a system, the precoding problem plays a central role, as the precoded symbols are affected by the extra distortion introduced by the 1-bit DACs. In this paper, we develop a highly efficient nonlinear precoding algorithm based on the alternating direction method framework. Unlike classic algorithms such as semidefinite relaxation (SDR) and squared-infinity norm Douglas-Rachford splitting (SQUID), which solve convex relaxed versions of the original precoding problem, the new algorithm solves the original nonconvex problem directly. The new algorithm is guaranteed to converge globally under some mild conditions, and a sufficient condition for its convergence is derived. Experimental results in various conditions demonstrate that the new algorithm achieves state-of-the-art accuracy comparable to the SDR algorithm while being much more efficient (more than 300 times faster than the SDR algorithm).
    Comment: 12 pages, 7 figures
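The hardware constraint behind the problem is easy to state in code. This sketch (an illustration of the 1-bit output alphabet only, not the paper's precoder) shows what 1-bit DACs on the I and Q rails do to a precoded symbol:

```python
import numpy as np

def one_bit_dac(z, amplitude=1.0):
    """1-bit DACs on each I/Q rail: only the signs of the real and imaginary
    parts survive; amplitude fixes the per-antenna transmit power."""
    return amplitude * (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

z = np.array([0.3 - 2.0j, -1.5 + 0.2j])
print(one_bit_dac(z))   # [(1-1j)/sqrt(2), (-1+1j)/sqrt(2)]
```

Every transmit antenna is thus restricted to one of four QPSK-like points, which is the nonconvex constraint the precoding algorithm must handle directly.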

    Nonconvex Nonsmooth Low-Rank Minimization for Generalized Image Compressed Sensing via Group Sparse Representation

    Group sparse representation (GSR) based methods have led to great successes in various image recovery tasks, which can be converted into low-rank matrix minimization problems. As a widely used surrogate of the rank function, the convex nuclear norm usually leads to an over-shrinking problem, since the standard soft-thresholding operator shrinks all singular values equally. To improve the performance of traditional sparse representation based image compressive sensing (CS), we propose a generalized CS framework based on the GSR model, which leads to a nonconvex nonsmooth low-rank minimization problem. The popular L_2-norm and an M-estimator are employed to fit the data in the standard image CS and robust CS problems, respectively. For a better approximation of the rank of the group matrix, a family of nuclear norms is employed to address the over-shrinking problem. Moreover, we propose a flexible and effective iterative weighting strategy to control the weighting and contribution of each singular value. We then develop an iteratively reweighted nuclear norm algorithm for our generalized framework via an alternating direction method of multipliers framework, namely GSR-AIR. Experimental results demonstrate that the proposed CS framework achieves favorable reconstruction performance compared with current state-of-the-art methods and that the robust CS framework suppresses outliers effectively.
    Comment: This paper has been submitted to the Journal of the Franklin Institute. arXiv admin note: substantial text overlap with arXiv:1903.0978
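A minimal sketch of the weighted singular value thresholding idea (an illustration, not the GSR-AIR implementation): giving small weights to large singular values counters the uniform shrinkage that causes over-shrinking.

```python
import numpy as np

def weighted_svt(X, weights):
    """Weighted soft-thresholding of singular values: each sigma_i is shrunk
    by its own weight, so dominant components can be shrunk less than small
    (often noise-dominated) ones."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - np.asarray(weights), 0.0)
    return (U * s) @ Vt

print(weighted_svt(np.diag([10.0, 5.0, 1.0]), [0.1, 0.5, 2.0]))
# ~ diag(9.9, 4.5, 0.0): the largest singular value barely moves
```

An iteratively reweighted scheme would recompute the weights from the current singular values at each outer iteration.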

    The Dantzig selector: Recovery of Signal via $\ell_1-\alpha\ell_2$ Minimization

    In this paper, we propose a Dantzig selector based on $\ell_1-\alpha\ell_2$ $(0<\alpha\leq 1)$ minimization for signal recovery. In the Dantzig selector, the constraint $\|{\bf A}^{\top}({\bf b}-{\bf A}{\bf x})\|_\infty \leq \eta$ for some small constant $\eta>0$ means that the columns of ${\bf A}$ are only weakly correlated with the error vector ${\bf e}={\bf A}{\bf x}-{\bf b}$. First, recovery guarantees based on the restricted isometry property (RIP) are established for signals. Next, we propose an effective algorithm to solve the proposed Dantzig selector. Last, we illustrate the proposed model and algorithm by extensive numerical experiments on the recovery of signals in the cases of Gaussian, impulsive, and uniform noise, where the performance of the proposed Dantzig selector is better than that of existing methods.
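The Dantzig selector constraint described above is a one-liner to check; the sketch below (an illustration, not the paper's solver) tests feasibility of a candidate $\bf x$:

```python
import numpy as np

def dantzig_feasible(A, b, x, eta):
    """Check the Dantzig selector constraint ||A^T (b - A x)||_inf <= eta,
    i.e. every column of A is nearly uncorrelated with the residual."""
    return float(np.max(np.abs(A.T @ (b - A @ x)))) <= eta

A = np.eye(3)
b = np.array([1.0, 2.0, 3.0])
print(dantzig_feasible(A, b, b, 0.1))            # True: zero residual
print(dantzig_feasible(A, b, np.zeros(3), 0.1))  # False: max correlation is 3
```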

    Inertial Proximal ADMM for Separable Multi-Block Convex Optimizations and Compressive Affine Phase Retrieval

    Separable multi-block convex optimization problems appear in many mathematical and engineering fields. In the first part of this paper, we propose an inertial proximal ADMM to solve a linearly constrained separable multi-block convex optimization problem, and we show that the proposed inertial proximal ADMM converges globally under mild assumptions on the regularization matrices. Affine phase retrieval arises in holography, data separation, and phaseless sampling; it can also be considered a nonhomogeneous version of phase retrieval, which has received considerable attention in recent years. Inspired by the convex relaxation of vector sparsity and matrix rank in compressive sensing and by phase lifting in phase retrieval, in the second part of this paper we introduce compressive affine phase retrieval via a lifting approach, connecting affine phase retrieval with multi-block convex optimization. Based on the proposed inertial proximal ADMM for multi-block convex optimization, we then propose an algorithm to recover sparse real signals from their (noisy) affine quadratic measurements. Our numerical simulations show that the proposed algorithm performs satisfactorily for affine phase retrieval of sparse real signals.
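The inertial device itself is simple: extrapolate along the last displacement before the proximal step. The sketch below shows it on a single-block proximal gradient for the lasso (an illustration of the inertial step only, far simpler than the paper's multi-block ADMM):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_prox_grad(A, b, lam, theta=0.5, iters=200):
    """Inertial extrapolation y = x_k + theta*(x_k - x_{k-1}) followed by
    a proximal-gradient step on min 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(iters):
        y = x + theta * (x - x_prev)
        x_prev, x = x, soft(y - step * A.T @ (A @ y - b), step * lam)
    return x

print(inertial_prox_grad(np.eye(3), np.array([3.0, 0.05, -2.0]), 0.1))
# [ 2.9   0.   -1.9 ]  (soft-thresholding of b when A is the identity)
```

In the paper's setting the same extrapolation is applied within each block update of the ADMM iteration.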

    A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning

    In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics, and machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell_1$ and nuclear norms are convenient because the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization based sparse and low-rank recovery has attracted considerable interest and is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields of signal processing, statistics, and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), estimation of large covariance and inverse covariance matrices, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications, and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
    Comment: 22 pages
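The bias argument that motivates the whole survey fits in a few lines (an illustration, not code from the survey): the convex $\ell_1$ prox shrinks every surviving entry, while a nonconvex alternative such as hard thresholding leaves large entries intact.

```python
import numpy as np

def soft_thresh(x, t):
    """Prox of the convex l1 penalty: every surviving entry is shrunk by t (bias)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_thresh(x, t):
    """A nonconvex alternative: entries above the threshold are kept unchanged."""
    return np.where(np.abs(x) > t, x, 0.0)

x = np.array([5.0, 0.3, -2.0])
print(soft_thresh(x, 1.0))   # [ 4.  0. -1.]  large entries biased toward zero
print(hard_thresh(x, 1.0))   # [ 5.  0. -2.]  large entries unbiased
```

Penalties such as SCAD and MCP, covered in the survey, interpolate between these two behaviors.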