"Plug-and-Play" Edge-Preserving Regularization
In many inverse problems it is essential to use regularization methods that
preserve edges in the reconstructions, and many reconstruction models have been
developed for this task, such as the Total Variation (TV) approach. The
associated algorithms are complex and require a good knowledge of large-scale
optimization algorithms, and they involve certain tolerances that the user must
choose. We present a simpler approach that relies only on standard
computational building blocks in matrix computations, such as orthogonal
transformations, preconditioned iterative solvers, Kronecker products, and the
discrete cosine transform -- hence the term "plug-and-play." We do not attempt
to improve on TV reconstructions, but rather provide an easy-to-use approach to
computing reconstructions with similar properties.
Comment: 14 pages, 7 figures, 3 tables
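The building blocks the abstract names can be illustrated with a small sketch: the 2D discrete cosine transform factors as a Kronecker product of orthogonal 1D transforms, so it is applied and inverted with plain matrix products. This is the textbook construction of the orthonormal DCT-II matrix, not code from the paper.

```python
import numpy as np

# Orthonormal 1D DCT-II matrix, one of the standard "building blocks"
# (textbook construction; illustrative, not the paper's code).
def dct_matrix(n):
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

n = 8
rng = np.random.default_rng(0)
X = rng.random((n, n))
C = dct_matrix(n)

# C is orthogonal, so the transform is inverted by a transpose.
assert np.allclose(C @ C.T, np.eye(n))

# The 2D DCT of X is C @ X @ C.T; as a Kronecker product it acts on the
# row-major vectorization: kron(C, C) @ vec(X) == vec(C X C^T).
Y = C @ X @ C.T
assert np.allclose(np.kron(C, C) @ X.ravel(), Y.ravel())
assert np.allclose(C.T @ Y @ C, X)   # exact inversion
```

Because the transform is orthogonal and Kronecker-structured, a regularized solver built from it never needs a general-purpose large-scale optimizer, which is the sense in which these pieces "plug and play."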
Nonconvex Nonsmooth Low-Rank Minimization via Iteratively Reweighted Nuclear Norm
The nuclear norm is widely used as a convex surrogate of the rank function in
compressive sensing for low rank matrix recovery with its applications in image
recovery and signal processing. However, solving the nuclear norm based relaxed
convex problem usually leads to a suboptimal solution of the original rank
minimization problem. In this paper, we propose to perform a family of
nonconvex surrogates of the $\ell_0$-norm on the singular values of a matrix to
approximate the rank function. This leads to a nonconvex nonsmooth minimization
problem. Then we propose to solve the problem by Iteratively Reweighted Nuclear
Norm (IRNN) algorithm. IRNN iteratively solves a Weighted Singular Value
Thresholding (WSVT) problem, which has a closed form solution due to the
special properties of the nonconvex surrogate functions. We also extend IRNN to
solve the nonconvex problem with two or more blocks of variables. In theory, we
prove that IRNN decreases the objective function value monotonically, and any
limit point is a stationary point. Extensive experiments on both synthesized
data and real images demonstrate that IRNN enhances the low-rank matrix
recovery compared with state-of-the-art convex algorithms.
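A minimal sketch of the WSVT subproblem at the heart of this scheme: shrink each singular value by its own weight, where the weights come from the gradient of a concave surrogate, so larger singular values receive smaller weights. The surrogate choice and the values of `lam` and `eps` below are hypothetical illustrations, not the paper's settings.

```python
import numpy as np

def wsvt(Y, w):
    """Weighted singular value thresholding: closed-form solution of
    min_X 0.5*||X - Y||_F^2 + sum_i w_i * sigma_i(X), valid when the
    weights are nondecreasing while the singular values are sorted in
    nonincreasing order (which concave surrogates ensure)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt

rng = np.random.default_rng(0)
Y = rng.random((6, 4))

# Zero weights leave the matrix unchanged.
assert np.allclose(wsvt(Y, np.zeros(4)), Y)

# One IRNN-style step: weights are the gradient of a concave surrogate
# (here a log penalty; lam and eps are hypothetical choices) evaluated
# at the current singular values, so larger singular values are shrunk less.
lam, eps = 0.5, 0.1
s = np.linalg.svd(Y, compute_uv=False)
w = lam / (eps + s)          # nondecreasing as s decreases
X = wsvt(Y, w)
assert np.all(np.linalg.svd(X, compute_uv=False) <= s + 1e-12)
```

Each IRNN iteration would recompute the weights at the new singular values and call `wsvt` again; the monotone-descent and stationarity guarantees stated in the abstract apply to that full loop, not to this single step.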
Distributed estimation from relative measurements of heterogeneous and uncertain quality
This paper studies the problem of estimation from relative measurements in a
graph, in which a vector indexed over the nodes has to be reconstructed from
pairwise measurements of differences between its components associated to nodes
connected by an edge. In order to model heterogeneity and uncertainty of the
measurements, we assume them to be affected by additive noise distributed
according to a Gaussian mixture. In this original setup, we formulate the
problem of computing the Maximum-Likelihood (ML) estimates and we design two
novel algorithms, based on Least Squares regression and
Expectation-Maximization (EM). The first algorithm (LS-EM) is centralized and
performs the estimation from relative measurements, the soft classification of
the measurements, and the estimation of the noise parameters. The second
algorithm (Distributed LS-EM) is distributed and performs estimation and soft
classification of the measurements, but requires the knowledge of the noise
parameters. We provide rigorous proofs of convergence of both algorithms and we
present numerical experiments to evaluate and compare their performance with
classical solutions. The experiments show the robustness of the proposed
methods against different kinds of noise and, for the Distributed LS-EM,
against errors in the knowledge of noise parameters.
Comment: Submitted to IEEE Transactions
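The alternation between soft classification (E-step) and weighted least squares (M-step) can be sketched on a toy graph. Everything below is hypothetical for illustration, the graph, noise values, and mixture parameters included; it mimics the known-noise-parameters setting of the Distributed LS-EM but runs centrally.

```python
import numpy as np

n_nodes = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
x_true = np.array([0.0, 1.0, -0.5, 0.3, 0.8])

# Each row of A measures the difference x_j - x_i along edge (i, j).
A = np.zeros((len(edges), n_nodes))
for e, (i, j) in enumerate(edges):
    A[e, i], A[e, j] = -1.0, 1.0

# Heterogeneous quality: six accurate edges, one grossly noisy (index 5).
noise = np.array([0.05, -0.03, 0.02, 0.04, -0.05, 1.5, 0.01])
b = A @ x_true + noise

p_good, s_good, s_bad = 0.8, 0.1, 1.0   # known mixture parameters
x = np.zeros(n_nodes)
for _ in range(20):
    r = b - A @ x
    # E-step: soft classification of each measurement (posterior
    # probability that the edge noise came from the "good" component).
    lik_good = p_good / s_good * np.exp(-r**2 / (2 * s_good**2))
    lik_bad = (1 - p_good) / s_bad * np.exp(-r**2 / (2 * s_bad**2))
    gamma = lik_good / (lik_good + lik_bad)
    # M-step: weighted least squares; node 0 is pinned to zero because
    # relative measurements determine x only up to an additive constant.
    w = np.sqrt(gamma / s_good**2 + (1 - gamma) / s_bad**2)
    sol, *_ = np.linalg.lstsq(w[:, None] * A[:, 1:], w * b, rcond=None)
    x = np.concatenate(([0.0], sol))
```

After a few iterations the grossly noisy edge is softly classified into the bad component and effectively downweighted, so the estimate stays close to `x_true` despite the outlier.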