From Rank Estimation to Rank Approximation: Rank Residual Constraint for Image Restoration
In this paper, we propose a novel approach to the rank minimization problem,
termed the rank residual constraint (RRC) model. Different from existing low-rank
based approaches, such as the well-known nuclear norm minimization (NNM) and
the weighted nuclear norm minimization (WNNM), which estimate the underlying
low-rank matrix directly from the corrupted observations, we progressively
approximate the underlying low-rank matrix via minimizing the rank residual.
By integrating the image nonlocal self-similarity (NSS) prior with the
proposed RRC model, we apply it to image restoration tasks, including image
denoising and image compression artifact reduction. To this end, we first
obtain a good reference of the original image groups by using the image NSS
prior, and then the rank residual of the image groups between this reference
and the degraded image is minimized to achieve a better estimate of the desired
image. In this manner, both the reference and the estimated image are updated
gradually and jointly in each iteration. Based on the group-based sparse
representation model, we further provide a theoretical analysis on the
feasibility of the proposed RRC model. Experimental results demonstrate that
the proposed RRC model outperforms many state-of-the-art schemes in both
objective and perceptual quality.
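The NNM baseline that the RRC model is contrasted with is built on singular value thresholding, the proximal operator of the nuclear norm. A minimal NumPy sketch (the matrix sizes, threshold, and test data below are illustrative, not from the paper):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, i.e. the minimizer of 0.5*||X - Y||_F^2 + tau*||X||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Noisy observation of a rank-2 matrix (illustrative data)
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
Y = L + 0.01 * rng.standard_normal((20, 15))
X = svt(Y, 0.5)          # noise-level singular values are zeroed out
print(np.linalg.matrix_rank(X, tol=1e-6))   # prints 2
```

Soft-thresholding the singular values removes the noise-level components but also shrinks the large ones uniformly, which is the bias that weighted variants such as WNNM (and, per the abstract, the rank-residual view) aim to reduce.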
From Sparse Signals to Sparse Residuals for Robust Sensing
One of the key challenges in sensor networks is the extraction of information
by fusing data from a multitude of distinct, but possibly unreliable sensors.
Recovering information from the maximum number of dependable sensors while
specifying the unreliable ones is critical for robust sensing. This sensing
task is formulated here as that of finding the maximum number of feasible
subsystems of linear equations, and proved to be NP-hard. Useful links are
established with compressive sampling, which aims at recovering vectors that
are sparse. In contrast, the signals here are not sparse, but give rise to
sparse residuals. Capitalizing on this form of sparsity, four sensing schemes
with complementary strengths are developed. The first scheme is a convex
relaxation of the original problem expressed as a second-order cone program
(SOCP). It is shown that when the involved sensing matrices are Gaussian and
the reliable measurements are sufficiently many, the SOCP can recover the
optimal solution with overwhelming probability. The second scheme is obtained
by replacing the initial objective function with a concave one. The third and
fourth schemes are tailored for noisy sensor data. The noisy case is cast as a
combinatorial problem that is subsequently surrogated by a (weighted) SOCP.
Interestingly, the derived cost functions fall into the framework of robust
multivariate linear regression, while an efficient block-coordinate descent
algorithm is developed for their minimization. The robust sensing capabilities
of all schemes are verified by simulated tests.
Comment: Under review for publication in the IEEE Transactions on Signal
Processing (revised version).
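The SOCP relaxation itself requires a conic solver, but the underlying idea, that minimizing an l1-type residual cost leaves the unreliable measurements as the few large residuals, can be sketched with plain iteratively reweighted least squares. This is a stand-in, not the paper's algorithm; all dimensions and data below are illustrative:

```python
import numpy as np

def irls_lad(A, y, iters=50, eps=1e-8):
    """Least-absolute-deviations fit by iteratively reweighted least
    squares. Minimizing sum_i |y_i - a_i^T x| promotes a sparse residual
    vector, so unreliable sensors surface as the few large residuals."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - A @ x), eps)  # downweight outliers
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)       # weighted normal eqs
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 5))          # 60 sensors, 5 unknowns
x_true = rng.standard_normal(5)
y = A @ x_true
y[:6] += 10.0                             # six corrupted (unreliable) sensors
x_hat = irls_lad(A, y)
flagged = np.argsort(np.abs(y - A @ x_hat))[-6:]  # bad sensors stand out
```

With enough reliable equations, the fit passes through the dependable measurements exactly, and the corrupted sensors are identified by their large residuals, mirroring the recovery guarantee the abstract states for the Gaussian case.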
Sparse Multivariate Factor Regression
We consider the problem of multivariate regression in a setting where the
relevant predictors could be shared among different responses. We propose an
algorithm which decomposes the coefficient matrix into the product of a long
matrix and a wide matrix, with an elastic net penalty on the former and a
sparsity-inducing penalty on the latter. The first matrix linearly transforms the
predictors to a set of latent factors, and the second one regresses the
responses on these factors. Our algorithm simultaneously performs dimension
reduction and coefficient estimation and automatically estimates the number of
latent factors from the data. Our formulation results in a non-convex
optimization problem which, despite its flexibility in imposing effective
low-dimensional structure, is difficult or even impossible to solve exactly
in a reasonable time. We specify an optimization algorithm based on alternating
minimization with three different sets of updates to solve this non-convex
problem and provide theoretical results on its convergence and optimality.
Finally, we demonstrate the effectiveness of our algorithm via experiments on
simulated and real data.
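The alternating-minimization structure of the decomposition can be sketched with plain ridge-penalized least-squares updates, which keep both steps in closed form. This is a simplification, not the authors' algorithm: the paper's elastic-net/sparsity penalties and automatic factor selection are omitted, and all dimensions and data below are illustrative:

```python
import numpy as np

def factor_regression(X, Y, k, lam=0.1, iters=100):
    """Fit Y ~ X @ A @ W by alternating ridge least squares: A (p x k)
    maps predictors to k latent factors, W (k x q) regresses the
    responses on those factors."""
    p, q = X.shape[1], Y.shape[1]
    rng = np.random.default_rng(0)
    A = rng.standard_normal((p, k))
    for _ in range(iters):
        F = X @ A                                        # latent factors
        W = np.linalg.solve(F.T @ F + lam * np.eye(k), F.T @ Y)
        # Ridge update of A, using vec(X A W) = (W^T kron X) vec(A):
        G = np.kron(W @ W.T, X.T @ X) + lam * np.eye(p * k)
        b = (X.T @ Y @ W.T).reshape(-1, order="F")
        A = np.linalg.solve(G, b).reshape(p, k, order="F")
    return A, W

# Illustrative data with a true rank-2 coefficient matrix
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 8))
B_true = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 4))
Y = X @ B_true
A, W = factor_regression(X, Y, k=2)
rel_err = np.linalg.norm(Y - X @ A @ W) / np.linalg.norm(Y)
```

Each update solves a convex subproblem, so the objective decreases monotonically, which is the standard argument behind the convergence guarantees for alternating schemes of this kind.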