Graded quantization for multiple description coding of compressive measurements
Compressed sensing (CS) is an emerging paradigm for acquisition of compressed
representations of a sparse signal. Its low complexity is appealing for
resource-constrained scenarios like sensor networks. However, such scenarios
are often coupled with unreliable communication channels and providing robust
transmission of the acquired data to a receiver is an issue. Multiple
description coding (MDC) effectively combats channel losses for systems without
feedback, thus raising the interest in developing MDC methods explicitly
designed for the CS framework, and exploiting its properties. We propose a
method called Graded Quantization (CS-GQ) that leverages the democratic
property of compressive measurements to effectively implement MDC, and we
provide methods to optimize its performance. A novel decoding algorithm based
on the alternating direction method of multipliers is derived to reconstruct
signals from a limited number of received descriptions. Simulations are
performed to assess the performance of CS-GQ against other methods in the
presence of packet losses. The proposed method provides robust coding of CS
measurements and outperforms the other schemes on the considered test metrics.
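As an illustration of the graded-quantization idea, the following is a minimal sketch (not the authors' reference implementation): it assumes uniform scalar quantizers, two balanced descriptions, and illustrative bit depths, and every function name and parameter below is hypothetical.

```python
# Minimal sketch of graded quantization for two descriptions of a CS
# measurement vector. Assumptions (not from the paper): uniform mid-rise
# quantizers, illustrative bit depths, measurements normalized to [-1, 1].
import numpy as np

def uniform_quantize(x, bits, x_max=1.0):
    """Uniform mid-rise quantizer on [-x_max, x_max] with the given bit depth."""
    levels = 2 ** bits
    step = 2 * x_max / levels
    idx = np.clip(np.floor((x + x_max) / step), 0, levels - 1)
    return (idx + 0.5) * step - x_max

def cs_gq_encode(y, fine_bits=6, coarse_bits=2):
    """Form two balanced descriptions: each carries all measurements, with one
    half quantized finely and the other half coarsely, roles swapped."""
    half = len(y) // 2
    d1 = np.concatenate([uniform_quantize(y[:half], fine_bits),
                         uniform_quantize(y[half:], coarse_bits)])
    d2 = np.concatenate([uniform_quantize(y[:half], coarse_bits),
                         uniform_quantize(y[half:], fine_bits)])
    return d1, d2

def cs_gq_combine(d1, d2):
    """Central decoder: keep the finely quantized half from each description."""
    half = len(d1) // 2
    return np.concatenate([d1[:half], d2[half:]])

# Both descriptions received: every measurement is available at the fine rate.
rng = np.random.default_rng(0)
y = rng.standard_normal(128)
y /= np.max(np.abs(y))
d1, d2 = cs_gq_encode(y)
y_central = cs_gq_combine(d1, d2)
y_side = d1   # one description still covers all measurements, half coarsely
```

With a single description the decoder still sees every measurement, half of them coarsely quantized, which is what the democratic property of CS measurements makes useful; the ADMM-based decoder from the paper (not reproduced here) handles that mixed-quality case.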
A fast patch-dictionary method for whole image recovery
Various algorithms have been proposed for dictionary learning. Among those
for image processing, many use image patches to form dictionaries. This paper
focuses on whole-image recovery from corrupted linear measurements. We address
the open issue of representing an image by overlapping patches: the overlapping
leads to an excessive number of dictionary coefficients to determine. With very
few exceptions, this issue has limited the application of image-patch methods
to local tasks such as denoising, inpainting, cartoon-texture
decomposition, super-resolution, and image deblurring, for which one can
process a few patches at a time. Our focus is global imaging tasks such as
compressive sensing and medical image recovery, where the whole image is
encoded together, making it either impossible or very ineffective to update a
few patches at a time.
Our strategy is to divide the sparse recovery into multiple subproblems, each
of which handles a subset of non-overlapping patches, and then to average the
results of the subproblems to yield the final recovery. This simple strategy
is surprisingly effective in terms of both quality and speed. In addition, we
accelerate computation of the learned dictionary by applying a recent block
proximal-gradient method, which not only has a lower per-iteration complexity
but also takes fewer iterations to converge, compared to the current
state-of-the-art. We also establish that our algorithm globally converges to a
stationary point. Numerical results on synthetic data demonstrate that our
algorithm can recover a more faithful dictionary than two state-of-the-art
methods.
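For the divide-and-average strategy described above, the following is a minimal sketch (not the paper's implementation): it assumes square patches, image dimensions divisible by the patch size, simple periodic shifts to define the grids, and a hypothetical code_patch callback standing in for the dictionary-based sparse-coding step.

```python
# Minimal sketch of the divide-and-average strategy: each periodic shift
# defines one non-overlapping patch grid (one subproblem); the per-grid
# results are averaged into the final estimate. `code_patch` is a placeholder
# for the actual dictionary-based per-patch update.
import numpy as np

def recover_on_grid(img, p, shift, code_patch):
    """One subproblem: shift the image, process each non-overlapping p x p
    patch with code_patch, then shift the reassembled result back."""
    shifted = np.roll(img, (-shift, -shift), axis=(0, 1))
    out = np.empty_like(shifted)
    h, w = shifted.shape
    for i in range(0, h, p):
        for j in range(0, w, p):
            out[i:i+p, j:j+p] = code_patch(shifted[i:i+p, j:j+p])
    return np.roll(out, (shift, shift), axis=(0, 1))

def average_over_grids(img, code_patch, p=8, shifts=(0, 2, 4, 6)):
    """Average the per-grid subproblem results to yield the final recovery."""
    return np.mean([recover_on_grid(img, p, s, code_patch) for s in shifts],
                   axis=0)

# Usage: plug a real per-patch sparse-coding step into code_patch.
img = np.random.default_rng(0).standard_normal((64, 64))
estimate = average_over_grids(img, code_patch=lambda patch: patch)
```

Each shift selects a different non-overlapping grid, so the subproblems together play the role of an overlapping-patch representation while each individual subproblem remains cheap to handle.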
Combining our whole-image recovery and dictionary-learning methods, we
numerically simulate image inpainting, compressive sensing recovery, and
deblurring. Our recovery is more faithful than those of a total variation
method and a method based on overlapping patches.
Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm
The nuclear norm is widely used as a convex surrogate of the rank function in
compressive sensing for low-rank matrix recovery, with applications in image
recovery and signal processing. However, solving the nuclear norm based relaxed
convex problem usually leads to a suboptimal solution of the original rank
minimization problem. In this paper, we propose to apply a family of
nonconvex surrogates of the $\ell_0$-norm to the singular values of a matrix to
approximate the rank function. This leads to a nonconvex nonsmooth minimization
problem. We then propose to solve this problem with the Iteratively Reweighted
Nuclear Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value
Thresholding (WSVT) problem, which has a closed form solution due to the
special properties of the nonconvex surrogate functions. We also extend IRNN to
solve the nonconvex problem with two or more blocks of variables. In theory, we
prove that IRNN decreases the objective function value monotonically, and any
limit point is a stationary point. Extensive experiments on both synthetic
data and real images demonstrate that IRNN improves low-rank matrix
recovery compared with state-of-the-art convex algorithms.
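As an illustration of the iteration described above, the following is a minimal sketch (not the authors' code) of an IRNN-style update for matrix completion, assuming the log surrogate lam*log(sigma + gamma) as one member of the nonconvex family, a simple observed-entry least-squares data term, and illustrative parameter values.

```python
# Minimal sketch of an IRNN-style iteration for matrix completion.
# Assumptions (not from the paper): surrogate g(sigma) = lam*log(sigma + gamma),
# data term f(X) = 0.5*||mask*(X - Y)||_F^2, illustrative lam, gamma, mu.
import numpy as np

def weighted_svt(M, weights):
    """Weighted singular value thresholding: shrink each singular value by its
    own weight (closed form because the weights are non-decreasing when the
    singular values are sorted in decreasing order)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

def irnn_complete(Y, mask, lam=1.0, gamma=1.0, mu=1.1, iters=200):
    """Recover a low-rank matrix from the entries of Y observed in mask."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        # Reweighting: supergradient of the surrogate at the current singular
        # values; larger singular values get smaller weights (less shrinkage).
        s = np.linalg.svd(X, compute_uv=False)
        w = lam / (s + gamma)
        # Gradient step on the data-fit term, then the weighted SVT step.
        grad = mask * (X - Y)
        X = weighted_svt(X - grad / mu, w / mu)
    return X

# Usage: attempt to complete a rank-3 matrix observed on 40% of its entries.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
mask = (rng.random((50, 50)) < 0.4).astype(float)
X_hat = irnn_complete(L * mask, mask)
```

Because the weights lam/(sigma_i + gamma) increase as the singular values decrease, the WSVT subproblem keeps the closed-form solution that the abstract refers to.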