A new steplength selection for scaled gradient methods with application to image deblurring
Gradient methods are frequently used in large scale image deblurring problems
since they avoid the onerous computation of the Hessian matrix of the objective
function. Second order information is typically sought by a clever choice of
the steplength parameter defining the descent direction, as in the case of the
well-known Barzilai and Borwein rules. In a recent paper, a steplength selection
strategy that approximates the inverses of some eigenvalues of the Hessian
matrix was proposed for gradient methods applied to unconstrained minimization
problems. In the quadratic case, this approach is based on a Lanczos process
applied every m iterations to the matrix of the most recent m back gradients,
but the idea can be extended to a general objective function. In
this paper we extend this rule to the case of scaled gradient projection
methods applied to non-negatively constrained minimization problems, and we
test the effectiveness of the proposed strategy on image deblurring problems,
both with and without an explicit edge-preserving regularization term.
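To illustrate the kind of steplength rules involved, the following is a minimal sketch of a plain gradient method with the classical Barzilai and Borwein steplengths. The function names and parameter choices are illustrative assumptions; this is not the scaled gradient projection method or the Lanczos-based rule proposed in the paper.

```python
import numpy as np

def bb_steplengths(s, y):
    """Classical Barzilai-Borwein steplengths from the last step.

    s = x_k - x_{k-1}, y = grad_k - grad_{k-1}. Both values can be read as
    approximations of inverse eigenvalues of an average Hessian, which is how
    second-order information enters without ever forming the Hessian.
    """
    sy = float(s @ y)
    alpha_bb1 = float(s @ s) / sy          # "long" BB step
    alpha_bb2 = sy / float(y @ y)          # "short" BB step
    return alpha_bb1, alpha_bb2

def gradient_descent_bb(grad, x0, n_iter=100, alpha0=1.0):
    """Unconstrained gradient descent using the BB1 steplength."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if float(s @ y) > 0:               # only update when the step is well defined
            alpha, _ = bb_steplengths(s, y)
        x, g = x_new, g_new
    return x
```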
BM3D Frames and Variational Image Deblurring
A family of Block Matching 3-D (BM3D) algorithms for various imaging problems
has recently been proposed within the framework of nonlocal patch-wise image
modeling [1], [2]. In this paper we construct analysis and synthesis frames,
formalizing the BM3D image modeling, and use these frames to develop
novel iterative deblurring algorithms. We consider two different formulations
of the deblurring problem: one given by minimization of a single objective
function and another based on the Nash equilibrium balance of two objective
functions. The latter results in an algorithm where the denoising and
deblurring operations are decoupled. The convergence of the developed
algorithms is proved. Simulation experiments show that the decoupled algorithm
derived from the Nash equilibrium formulation yields the best numerical and
visual results and outperforms the state of the art in the field, confirming
the potential of BM3D frames as an advanced image modeling tool.
Comment: Submitted to IEEE Transactions on Image Processing on May 18, 2011. An
implementation of the proposed algorithm is available as part of the BM3D
package at http://www.cs.tut.fi/~foi/GCF-BM3
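For intuition about the decoupled formulation, the sketch below alternates a Tikhonov-regularized Fourier-domain deblurring step with a separate denoising step. It is only a structural sketch under assumptions: the Gaussian smoother stands in for the BM3D-frame denoiser, and the function name and parameter values are illustrative, not the algorithm of the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import gaussian_filter   # stand-in for a BM3D-frame denoiser

def decoupled_deblur(y, psf, n_iter=30, reg=1e-2, denoise_sigma=1.0):
    """Alternate a regularized deblurring step with a separate denoising step.

    y   : blurred, noisy image
    psf : blur kernel, same shape as y, centered at pixel (0, 0)
    """
    H = fft2(psf)
    Ht = np.conj(H)
    x = y.copy()
    for _ in range(n_iter):
        # deblurring step: Tikhonov-regularized inversion anchored at the current x
        num = Ht * fft2(y) + reg * fft2(x)
        den = Ht * H + reg
        v = np.real(ifft2(num / den))
        # denoising step: a generic denoiser replaces the BM3D-frame operation
        x = gaussian_filter(v, denoise_sigma)
    return x
```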
Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization
Multiplicative noise (also known as speckle noise) models are central to the
study of coherent imaging systems, such as synthetic aperture radar and sonar,
and ultrasound and laser imaging. These models introduce two additional layers
of difficulties with respect to the standard Gaussian additive noise scenario:
(1) the noise is multiplied by (rather than added to) the original image; (2)
the noise is not Gaussian, with Rayleigh and Gamma being commonly used
densities. These two features of multiplicative noise models preclude the
direct application of most state-of-the-art algorithms, which are designed for
solving unconstrained optimization problems where the objective has two terms:
a quadratic data term (log-likelihood), reflecting the additive and Gaussian
nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a
total variation or wavelet-based regularizer/prior). In this paper, we address
these difficulties by: (1) converting the multiplicative model into an additive
one by taking logarithms, as proposed by some other authors; (2) using variable
splitting to obtain an equivalent constrained problem; and (3) dealing with
this optimization problem using the augmented Lagrangian framework. A set of
experiments shows that the proposed method, which we name MIDAL (multiplicative
image denoising by augmented Lagrangian), yields state-of-the-art results both
in terms of speed and denoising performance.
Comment: 11 pages, 7 figures, 2 tables. To appear in the IEEE Transactions on
Image Processing.
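The three ingredients listed above (log transform, variable splitting, augmented Lagrangian) can be sketched as follows. This is a simplified skeleton under assumed forms: `gamma_data_prox`, the Gamma-likelihood data term it minimizes, the Gaussian smoother used as the regularization step, and all parameter values are illustrative stand-ins, not the MIDAL algorithm itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter   # stand-in for a TV/wavelet prox

def gamma_data_prox(v, y_log, L, rho, newton_iters=5):
    """Per-pixel Newton steps for an assumed Gamma data term in the log domain.

    Minimizes  L*(z + exp(y_log - z)) + (rho/2)*(z - v)**2  over z.
    """
    z = v.copy()
    for _ in range(newton_iters):
        g = L * (1.0 - np.exp(y_log - z)) + rho * (z - v)   # gradient
        h = L * np.exp(y_log - z) + rho                      # second derivative
        z -= g / h
    return z

def midal_like(y, L=4.0, rho=1.0, n_iter=50):
    """Skeleton of the variable-splitting / augmented-Lagrangian scheme.

    z: log-image data variable, v: auxiliary (regularized) variable, d: scaled dual.
    """
    y_log = np.log(np.maximum(y, 1e-8))      # (1) multiplicative -> additive model
    z = y_log.copy()
    v = z.copy()
    d = np.zeros_like(z)
    for _ in range(n_iter):
        z = gamma_data_prox(v - d, y_log, L, rho)   # (3) data-fit subproblem
        v = gaussian_filter(z + d, sigma=1.0)       # regularization step (stand-in)
        d += z - v                                  # dual (Lagrange multiplier) update
    return np.exp(v)                                # back to the intensity domain
```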
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a broad spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
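As a concrete instance of the forward-backward proximal splitting scheme mentioned in item (iii), the sketch below solves an $\ell_1$-regularized least-squares problem, one of the low-complexity priors discussed; swapping the proximal map would give group-sparsity, total-variation, or nuclear-norm variants. The function names and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1 (the sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Forward-backward splitting for  min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                 # forward (explicit gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
    return x
```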
Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection
A recursive algorithm named Zero-point Attracting Projection (ZAP) has recently
been proposed for sparse signal reconstruction. Compared with reference
algorithms, ZAP demonstrates good performance in recovery precision and
robustness. However, no theoretical analysis of the algorithm, not even a proof
of its convergence, has been available. In this work, a rigorous proof of the
convergence of ZAP is provided and a condition for convergence is put forward.
Based on the theoretical analysis, it is further proved that ZAP is unbiased
and can approach the sparse solution arbitrarily closely with a proper choice
of step size. Furthermore, the case of inaccurate measurements in a noisy
scenario is also discussed. It is proved that the disturbance power linearly
reduces the recovery precision, which is predictable but not preventable. The
reconstruction deviation for -compressible signals is also provided. Finally,
numerical simulations are performed to verify the theoretical analysis.
Comment: 29 pages, 6 figures
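For reference, the following is a minimal sketch of a zero-point attracting projection style iteration, written in the commonly cited form where the iterate stays on the affine set {x : Ax = y} while an $\ell_1$ zero-attracting subgradient is projected onto the null space of A. The exact update, step-size condition, and convergence analysis should be taken from the paper; this code only assumes the general structure, and its names and parameters are illustrative.

```python
import numpy as np

def zap_sketch(A, y, kappa=1e-3, n_iter=2000):
    """Sketch of a zero-point attracting projection iteration (assumed form)."""
    pinv = A.T @ np.linalg.inv(A @ A.T)      # assumes A has full row rank
    P = np.eye(A.shape[1]) - pinv @ A        # projector onto the null space of A
    x = pinv @ y                             # minimum-norm starting point, Ax = y
    for _ in range(n_iter):
        x = x - kappa * (P @ np.sign(x))     # zero-point attraction, then projection
    return x
```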
Truncated Nuclear Norm Minimization for Image Restoration Based On Iterative Support Detection
Recovering a large matrix from limited measurements is a challenging task
arising in many real applications, such as image inpainting, compressive
sensing and medical imaging, and such problems are mostly formulated as
low-rank matrix approximation problems. Because the rank operator is non-convex
and discontinuous, most recent theoretical studies use the nuclear norm as a
convex relaxation, and the low-rank matrix recovery problem is solved by
minimizing the nuclear-norm-regularized problem. However, a major limitation of
nuclear norm minimization is that all singular values are minimized
simultaneously, so the rank may not be well approximated \cite{hu2012fast}.
Accordingly, in this paper, we propose a new multi-stage
algorithm, which makes use of the concept of Truncated Nuclear Norm
Regularization (TNNR) proposed in \citep{hu2012fast} and Iterative Support
Detection (ISD) proposed in \citep{wang2010sparse} to overcome the above
limitation. Besides the matrix completion problems considered in
\citep{hu2012fast}, the proposed method can also be extended to general
low-rank matrix recovery problems. Extensive experiments validate the
superiority of our new algorithms over other state-of-the-art methods.
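To make the key building block concrete, the sketch below shows a truncated singular value thresholding operator in which the r largest singular values are left untouched and only the remaining ones are shrunk, so the truncated nuclear norm (the sum of the smaller singular values) is penalized while the dominant rank-r part is not. The multi-stage TNNR-ISD algorithm of the paper wraps such a step inside outer iterations that re-detect the support; the function name and parameters here are illustrative assumptions.

```python
import numpy as np

def truncated_svt(X, r, tau):
    """Truncated singular value thresholding (illustrative form)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = s.copy()
    s_shrunk[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the trailing singular values
    return (U * s_shrunk) @ Vt                    # rebuild the matrix from shrunk spectrum
```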