157 research outputs found
Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration
Since the matrix formed by nonlocal similar patches in a natural image is of
low rank, nuclear norm minimization (NNM) has been widely used in various
image processing studies. Nonetheless, the nuclear norm, as a convex surrogate
of the rank function, usually over-shrinks the rank components and treats all
components equally, and thus may produce a result far from the optimum. To
alleviate the above-mentioned limitations of the nuclear norm, in this paper we
propose a new method for image restoration via the non-convex weighted Lp
nuclear norm minimization (NCW-NNM), which is able to more accurately enforce
the image structural sparsity and self-similarity simultaneously. To make the
proposed model tractable and robust, the alternating direction method of
multipliers (ADMM) is adopted to solve the associated non-convex minimization
problem. Experimental results on various types of image restoration problems,
including image deblurring, image inpainting and image compressive sensing (CS)
recovery, demonstrate that the proposed method outperforms many current
state-of-the-art methods in both objective and perceptual quality.
Comment: arXiv admin note: text overlap with arXiv:1611.0898
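As a rough illustration of the over-shrinking issue described above, the sketch below contrasts standard singular value soft-thresholding (the nuclear-norm proximal operator, which shrinks every singular value by the same amount) with a weighted variant in the spirit of weighted nuclear norm methods. The weights `1/(s_i + eps)` are one common choice and an assumption of this illustration, not the paper's exact NCW-NNM scheme.

```python
import numpy as np

def svt(Y, tau):
    """Standard singular value soft-thresholding: shrinks every
    singular value by the same amount tau (the nuclear-norm prox)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def weighted_svt(Y, tau, eps=1e-6):
    """Weighted variant: larger singular values (more informative
    components) receive smaller shrinkage, via weights w_i = 1/(s_i + eps)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = 1.0 / (s + eps)
    return U @ np.diag(np.maximum(s - tau * w, 0)) @ Vt
```

On a matrix with singular values 10 and 1, uniform thresholding with tau = 1 shrinks both by 1, while the weighted version barely touches the dominant component and suppresses the small one.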
From Group Sparse Coding to Rank Minimization: A Novel Denoising Model for Low-level Image Restoration
Recently, low-rank matrix recovery theory has emerged as a significant
advance for various image processing problems. Meanwhile, group sparse
coding (GSC) theory has led to great successes in image restoration (IR)
problems, where each group of similar patches exhibits a low-rank property.
In this paper, we propose a novel low-rank minimization based denoising model
for IR tasks from the perspective of GSC, and establish an important
connection between our denoising model and the rank minimization problem. To
overcome the bias problem
caused by convex nuclear norm minimization (NNM) for rank approximation, a more
generalized and flexible rank relaxation function is employed, namely weighted
nonconvex relaxation. Accordingly, an efficient iteratively-reweighted
algorithm is proposed to handle the resulting minimization problem, combined
with the popular L_(1/2) and L_(2/3) thresholding operators. Finally, our proposed
denoising model is applied to IR problems via an alternating direction method
of multipliers (ADMM) strategy. Typical IR experiments on image compressive
sensing (CS), inpainting, deblurring and impulsive noise removal demonstrate
that our proposed method can achieve significantly higher PSNR/FSIM values than
many relevant state-of-the-art methods.
Comment: Accepted by Signal Processin
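For a concrete sense of what the L_(1/2) and L_(2/3) thresholding operators compute, the sketch below evaluates the scalar non-convex proximal problem by brute-force grid search. Closed-form half- and two-thirds-thresholding formulas exist; the grid search is an assumption of this illustration, used only for clarity.

```python
import numpy as np

def prox_lp(y, lam, p):
    """Numerically evaluate the scalar proximal operator
        argmin_x 0.5*(x - y)**2 + lam*|x|**p,  0 < p < 1,
    by dense grid search (closed forms exist for p = 1/2 and p = 2/3)."""
    grid = np.linspace(-abs(y) - 1, abs(y) + 1, 200001)
    cost = 0.5 * (grid - y) ** 2 + lam * np.abs(grid) ** p
    return grid[np.argmin(cost)]
```

Unlike the convex L1 prox, which shifts every input by lam, these non-convex operators barely shrink large inputs while still zeroing small ones, e.g. prox_lp(5, 1, 0.5) stays near 4.77 while prox_lp(0.5, 1, 0.5) is exactly 0.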
Edge-adaptive l2 regularization image reconstruction from non-uniform Fourier data
Total variation regularization based on the l1 norm is ubiquitous in image
reconstruction. However, the resulting reconstructions are not always as sparse
in the edge domain as desired. Iteratively reweighted methods provide some
improvement in accuracy, but at the cost of extended runtime. In this paper we
examine these methods for the case of data acquired as non-uniform Fourier
samples. We then develop a non-iterative weighted regularization method that
uses a pre-processing edge detection step to find exactly where the sparsity should
be in the edge domain. We show that its performance in terms of both accuracy
and speed has the potential to outperform reweighted TV regularization methods.
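A minimal 1D sketch of the idea, assuming a simple denoising setup with a known edge location rather than the paper's non-uniform Fourier data: the first-difference penalty is weighted to zero where an edge detector fired, so the l2 regularization smooths flat regions without blurring the jump.

```python
import numpy as np

# 1D sketch of edge-adaptive l2 regularization (assumed toy setup):
# a piecewise-constant signal is recovered from noisy samples; the
# difference-operator penalty is switched off at the location an edge
# detector flagged, so the jump is not smoothed away.
n = 100
x_true = np.zeros(n); x_true[40:] = 1.0            # one jump at index 40
y = x_true + 0.05 * np.random.default_rng(0).standard_normal(n)

D = np.diff(np.eye(n), axis=0)                     # first-difference operator
edges = np.zeros(n - 1); edges[39] = 1.0           # edge map from a detector
w = 10.0 * (1.0 - edges)                           # zero weight on the edge

# solve  min_x ||x - y||^2 + ||diag(w) D x||^2  via the normal equations
x_hat = np.linalg.solve(np.eye(n) + D.T @ (np.diag(w ** 2) @ D), y)
```

Off the edge, the heavy quadratic penalty flattens the noise toward the segment means; at the edge the weight is zero, so the discontinuity survives intact.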
Group-based Sparse Representation for Image Compressive Sensing Reconstruction with Non-Convex Regularization
Patch-based sparse representation modeling has shown great potential in image
compressive sensing (CS) reconstruction. However, this model usually suffers
from some limitations, such as the high computational complexity of dictionary
learning and the neglect of relationships among similar patches. In this paper, a
group-based sparse representation method with non-convex regularization
(GSR-NCR) for image CS reconstruction is proposed. In GSR-NCR, the local
sparsity and nonlocal self-similarity of images are simultaneously considered in
a unified framework. Different from the previous methods based on
sparsity-promoting convex regularization, we apply the non-convex weighted Lp
(0 < p < 1) penalty function to the group sparse coefficients of the data
matrix, rather than conventional L1-based regularization. To reduce the
computational complexity, instead of learning the dictionary from natural
images, which is expensive, we learn a principal component analysis (PCA)
based dictionary for each group. Moreover, to make the proposed scheme
tractable and robust, we have developed an efficient iterative
shrinkage/thresholding algorithm to solve the non-convex optimization problem.
Experimental results demonstrate that the proposed method outperforms many
state-of-the-art techniques for image CS reconstruction.
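The iterative shrinkage/thresholding scheme the abstract mentions can be sketched as follows. For simplicity, the shrinkage shown is the convex L1 soft-threshold (the p = 1 boundary case), whereas GSR-NCR applies a weighted non-convex Lp shrinkage to group coefficients; the structure of the iteration is the same.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage/thresholding sketch for
        min_x 0.5*||A x - y||^2 + lam*||x||_1.
    A gradient step is followed by an elementwise shrinkage; a non-convex
    Lp scheme would swap in a different shrinkage operator here."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # shrinkage step
    return x
```

With A = I the iteration reduces to plain soft-thresholding of y, which makes the shrinkage behavior easy to inspect.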
Nonconvex Nonsmooth Low-Rank Minimization for Generalized Image Compressed Sensing via Group Sparse Representation
Group sparse representation (GSR) based methods have led to great successes in
various image recovery tasks, which can be converted into low-rank matrix
minimization problems. As a widely used convex surrogate of the rank function,
the nuclear norm usually leads to an over-shrinking problem,
since the standard soft-thresholding operator shrinks all singular values
equally. To improve traditional sparse representation based image compressive
sensing (CS) performance, we propose a generalized CS framework based on GSR
model, which leads to a nonconvex nonsmooth low-rank minimization problem. The
popular L_2-norm and an M-estimator are employed to fit the data in the
standard image CS and robust CS problems, respectively. For a better
approximation of the rank of the group matrix, a family of nuclear norms is
employed to address the over-shrinking problem. Moreover, we also propose a
flexible and effective iterative weighting strategy to control the weight and
contribution of
each singular value. Then we develop an iteratively reweighted nuclear norm
algorithm for our generalized framework via an alternating direction method of
multipliers framework, namely, GSR-AIR. Experimental results demonstrate that
our proposed CS framework can achieve favorable reconstruction performance
compared with current state-of-the-art methods and the robust CS framework can
suppress the outliers effectively.
Comment: This paper has been submitted to the Journal of the Franklin
Institute. arXiv admin note: substantial text overlap with arXiv:1903.0978
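The iteratively reweighted idea can be sketched as below, assuming the common weight choice w_i = 1/(sigma_i + eps), which is an assumption of this illustration rather than the exact GSR-AIR weighting: singular values the current estimate deems signal receive small weights and are barely shrunk, while small ones are suppressed.

```python
import numpy as np

def irnn_denoise(Y, lam, n_iter=10, eps=1e-3):
    """Iteratively reweighted nuclear norm sketch: at each iteration,
    weights are derived from the current estimate's singular values
    (w_i = 1/(sigma_i + eps)) and used to shrink the singular values
    of the noisy input Y."""
    X = Y.copy()
    for _ in range(n_iter):
        _, s_x, _ = np.linalg.svd(X, full_matrices=False)
        w = 1.0 / (s_x + eps)                  # weights from current estimate
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - lam * w, 0)) @ Vt
    return X
```

On a matrix with singular values 10 and 1, the reweighting quickly drives the small component to zero while leaving the large one almost untouched.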
Decomposition into Low-rank plus Additive Matrices for Background/Foreground Separation: A Review for a Comparative Evaluation with a Large-Scale Dataset
Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices shows a suitable framework to separate moving objects from
the background. The most representative problem formulation is the Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these similar problem formulations is to obtain explicitly or implicitly a
decomposition into low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of the similar
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices for testing and ranking
existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations, which allows us to define a unified view that we call
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we examine
carefully each method in each robust subspace learning/tracking framework,
covering its decomposition, loss function, optimization problem and
solvers. Furthermore, we investigate if incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods.
Comment: 121 pages, 5 figures, submitted to Computer Science Review. arXiv
admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297,
arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805,
arXiv:1403.8067 by other authors. Computer Science Review, November 201
Non-Convex Weighted Lp Minimization based Group Sparse Representation Framework for Image Denoising
Nonlocal image representation or group sparsity has attracted considerable
interest in various low-level vision tasks and has led to several
state-of-the-art image denoising techniques, such as BM3D and LSSC. In the past,
convex optimization with sparsity-promoting convex regularization was usually
regarded as a standard scheme for estimating sparse signals in noise. However,
convex regularization still cannot obtain the correct sparse solution in some
practical problems, including image inverse problems. In this paper
we propose a non-convex weighted Lp minimization based group sparse
representation (GSR) framework for image denoising. To make the proposed scheme
tractable and robust, the generalized soft-thresholding (GST) algorithm is
adopted to solve the non-convex minimization problem. In addition, to
improve the accuracy of the nonlocal similar patches selection, an adaptive
patch search (APS) scheme is proposed. Experimental results have demonstrated
that the proposed approach not only outperforms many state-of-the-art denoising
methods such as BM3D and WNNM, but also achieves competitive speed.
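The generalized soft-thresholding (GST) step the abstract mentions can be sketched as the following scalar fixed-point iteration (in the style of Zuo et al.'s GST; a sketch of the shrinkage operator only, not the paper's full denoising pipeline):

```python
import numpy as np

def gst(y, lam, p, n_iter=10):
    """Generalized soft-thresholding (GST) for the scalar problem
        argmin_x 0.5*(x - y)**2 + lam*|x|**p,  0 < p < 1.
    Below a p-dependent threshold tau the solution is 0; above it, a
    fixed-point iteration solves the stationarity condition."""
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(y) <= tau:
        return 0.0
    x = abs(y)
    for _ in range(n_iter):                    # contraction to the nonzero root
        x = abs(y) - lam * p * x ** (p - 1)
    return np.sign(y) * x
```

For p = 1/2 and lam = 1 the threshold is 1.5, so gst(1.0, 1.0, 0.5) returns 0, while gst(3.0, 1.0, 0.5) converges to the root of x + 0.5*x**(-0.5) = 3.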
A Survey on Nonconvex Regularization Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning
In the past decade, sparse and low-rank recovery have drawn much attention in
many areas such as signal/image processing, statistics, bioinformatics and
machine learning. To induce sparsity and/or low-rankness, the L1 norm and the
nuclear norm are the most popular regularization penalties
due to their convexity. While the L1 norm and the nuclear norm are convenient, as
the related convex optimization problems are usually tractable, it has been
shown in many applications that a nonconvex penalty can yield significantly
better performance. Recently, nonconvex regularization based sparse and
low-rank recovery has attracted considerable interest, and it is in fact a main
driver of the recent progress in nonconvex and nonsmooth optimization. This paper
gives an overview of this topic in various fields in signal processing,
statistics and machine learning, including compressive sensing (CS), sparse
regression and variable selection, sparse signal separation, sparse principal
component analysis (PCA), estimation of large covariance and inverse covariance
matrices, matrix completion, and robust PCA. We present recent developments
of nonconvex regularization based sparse and low-rank recovery in these fields,
addressing the issues of penalty selection, applications and the convergence of
nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
Comment: 22 page
Compressive Sensing of Color Images Using Nonlocal Higher Order Dictionary
This paper addresses an ill-posed problem of recovering a color image from
its compressively sensed measurement data. Unlike the typical 1D
vector-based approach of the state-of-the-art methods, we exploit the nonlocal
similarities inherently existing in images by treating each patch of a color
image as a 3D tensor consisting of not only horizontal and vertical but also
spectral dimensions. A group of nonlocal similar patches forms a 4D tensor for
which a nonlocal higher order dictionary is learned via higher order singular
value decomposition. The multiple sub-dictionaries contained in the higher
order dictionary decorrelate the group in each corresponding dimension, thus
helping the details of color images to be reconstructed better. Furthermore, we
promote sparsity of the final solution using a sparsity regularization based on
a weight tensor, which distinguishes, during the optimization, those
coefficients of the sparse representation generated by the higher order
dictionary that are expected to have large magnitude from the others.
Accordingly, in the iterative solution it acts as a weighting process, designed
by approximating the minimum mean squared error filter, for more faithful recovery.
Experimental results confirm improvement by the proposed method over the
state-of-the-art ones.
Comment: 13 pages, 10 figure
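A rough sketch of how per-dimension sub-dictionaries can be obtained from a patch-group tensor via higher order SVD, assuming the standard mode-n unfolding construction (the paper's actual dictionary learning procedure may differ in its details):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor: mode-n fibers become the columns
    of a matrix of shape (T.shape[mode], -1)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_dictionaries(T):
    """Higher order SVD sketch: one orthogonal factor per tensor mode,
    obtained as the left singular vectors of each unfolding. Each factor
    decorrelates the group along its corresponding dimension."""
    return [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
            for m in range(T.ndim)]
```

For a group tensor of shape (4, 5, 3) this yields three orthogonal sub-dictionaries of shapes (4, 4), (5, 5) and (3, 3), one per dimension.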
Robust and Low-Rank Representation for Fast Face Identification with Occlusions
In this paper we propose an iterative method to address the face
identification problem with block occlusions. Our approach utilizes a robust
representation based on two characteristics in order to model contiguous errors
(e.g., block occlusion) effectively. The first fits a distribution, described
by a tailored loss function, to the errors. The second describes the
error image as having a specific structure (low-rank relative to the image
size). We show that this joint characterization is effective for
describing errors with spatial continuity. Our approach is computationally
efficient due to the utilization of the Alternating Direction Method of
Multipliers (ADMM). A special case of our fast iterative algorithm leads to the
robust representation method which is normally used to handle non-contiguous
errors (e.g., pixel corruption). Extensive results on representative face
databases (in constrained and unconstrained environments) document the
effectiveness of our method over existing robust representation methods with
respect to both identification rates and computational time.
Code is available on GitHub, with implementations of F-LR-IRNNLS and F-IRNNLS
(a fast version of the RRC):
https://github.com/miliadis/FIRC
Comment: IEEE Transactions on Image Processing (TIP), 201
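The robust-representation idea above can be sketched with a simplified alternating scheme, assuming a plain sparse-error model with soft-thresholding (a stand-in for the paper's tailored loss and low-rank error structure): solve for the coding vector by least squares, then re-estimate the error image.

```python
import numpy as np

def robust_rep(D, y, lam, n_iter=50):
    """Alternating sketch of a robust representation y = D x + e with a
    sparse error e, minimizing 0.5*||y - D x - e||^2 + lam*||e||_1:
    least-squares in x, entrywise soft-thresholding in e."""
    x = np.zeros(D.shape[1])
    e = np.zeros_like(y)
    for _ in range(n_iter):
        x, *_ = np.linalg.lstsq(D, y - e, rcond=None)      # coding step
        r = y - D @ x
        e = np.sign(r) * np.maximum(np.abs(r) - lam, 0)    # error step
    return x, e
```

With one heavily corrupted measurement, the error term absorbs most of the corruption, so the recovered coding vector stays close to the truth; this is the mechanism that makes such representations robust to occlusions.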