A proximal iteration for deconvolving Poisson noisy images using sparse representations
We propose an image deconvolution algorithm when the data is contaminated by
Poisson noise. The image to restore is assumed to be sparsely represented in a
dictionary of waveforms such as the wavelet or curvelet transforms. Our key
contributions are: First, we handle the Poisson noise properly by using the
Anscombe variance-stabilizing transform, leading to a non-linear
degradation equation with additive Gaussian noise. Second, the deconvolution
problem is formulated as the minimization of a convex functional with a
data-fidelity term reflecting the noise properties, and a non-smooth
sparsity-promoting penalty over the image representation coefficients (e.g.
the $\ell_1$-norm). Third, a fast iterative forward-backward splitting algorithm is
proposed to solve the minimization problem. We derive existence and uniqueness
conditions of the solution, and establish convergence of the iterative
algorithm. Finally, a GCV-based model selection procedure is proposed to
objectively select the regularization parameter. Experimental results are
carried out to show the striking benefits gained from taking into account the
Poisson statistics of the noise. These results also suggest that using
sparse-domain regularization may be tractable in many deconvolution
applications with Poisson noise, such as astronomy and microscopy.
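The variance stabilization that the abstract relies on can be illustrated with a minimal sketch: the Anscombe transform maps Poisson counts to values whose variance is approximately one, regardless of the underlying intensity. The intensity levels below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def anscombe(x):
    """Anscombe variance-stabilizing transform: 2*sqrt(x + 3/8).
    For Poisson(lam) data with lam not too small, the output has
    variance close to 1, independent of lam."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# Simulate Poisson counts at several intensities and check that the
# transformed data has near-unit variance in every case.
for lam in (5.0, 20.0, 100.0):
    counts = rng.poisson(lam, size=200_000)
    print(f"lambda={lam:6.1f}  variance after Anscombe ~ {anscombe(counts).var():.3f}")
```

After this step, the stabilized data can be treated as contaminated by additive Gaussian noise, which is what makes the convex formulation in the abstract applicable.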
A new steplength selection for scaled gradient methods with application to image deblurring
Gradient methods are frequently used in large scale image deblurring problems
since they avoid the onerous computation of the Hessian matrix of the objective
function. Second order information is typically sought by a clever choice of
the steplength parameter defining the descent direction, as in the case of the
well-known Barzilai and Borwein rules. In a recent paper, a strategy for the
steplength selection approximating the inverse of some eigenvalues of the
Hessian matrix has been proposed for gradient methods applied to unconstrained
minimization problems. In the quadratic case, this approach is based on a
Lanczos process applied every m iterations to the matrix of the m most recent
gradients, but the idea can be extended to a general objective function. In
this paper we extend this rule to the case of scaled gradient projection
methods applied to non-negatively constrained minimization problems, and we
test the effectiveness of the proposed strategy in image deblurring problems in
both the presence and the absence of an explicit edge-preserving regularization
term.
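The classical Barzilai-Borwein idea mentioned above can be sketched on a toy quadratic: the BB1 steplength (s·s)/(s·y) is a Rayleigh-like quotient that approximates the inverse of an eigenvalue of the Hessian, which is the second-order information the abstract refers to. The problem data below are illustrative, not taken from the paper.

```python
import numpy as np

# Gradient method with the Barzilai-Borwein (BB1) steplength on a
# strictly convex quadratic f(x) = 0.5*x@A@x - b@x.
A = np.diag([1.0, 10.0, 100.0])      # ill-conditioned diagonal Hessian
b = np.array([1.0, 1.0, 1.0])
x = np.zeros(3)
g = A @ x - b
alpha = 1.0 / 100.0                  # conservative first step: 1/lambda_max
for _ in range(200):
    x_new = x - alpha * g
    g_new = A @ x_new - b
    if np.linalg.norm(g_new) < 1e-12:
        x, g = x_new, g_new
        break
    s, y = x_new - x, g_new - g      # displacement and gradient change
    alpha = (s @ s) / (s @ y)        # BB1 rule; lies in [1/lam_max, 1/lam_min]
    x, g = x_new, g_new
print(np.linalg.norm(A @ x - b))     # residual after the BB iterations
```

Despite using only gradients, the BB steplength typically converges far faster than steepest descent on ill-conditioned problems, which is why such rules are attractive in large-scale deblurring.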
New convergence results for the scaled gradient projection method
The aim of this paper is to deepen the convergence analysis of the scaled
gradient projection (SGP) method, proposed by Bonettini et al. in a recent
paper for constrained smooth optimization. The main feature of SGP is the
presence of a variable scaling matrix multiplying the gradient, which may
change at each iteration. In the last few years, an extensive numerical
experimentation showed that SGP equipped with a suitable choice of the scaling
matrix is a very effective tool for solving large scale variational problems
arising in image and signal processing. Despite the very reliable numerical
results observed, only a weak, though very general, convergence theorem has
been provided so far, establishing that any limit point of the sequence
generated by SGP is stationary. Here, under the sole assumptions that the objective function is
convex and that a solution exists, we prove that the sequence generated by SGP
converges to a minimum point, if the scaling matrices sequence satisfies a
simple and implementable condition. Moreover, assuming that the gradient of the
objective function is Lipschitz continuous, we are also able to prove the
O(1/k) convergence rate with respect to the objective function values. Finally,
we present the results of numerical experiments on some relevant image
restoration problems, showing that the proposed scaling matrix selection rule
also performs well from the computational point of view.
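A minimal sketch of the SGP iteration described above, on a toy nonnegatively constrained quadratic: a scaled gradient step is projected onto the feasible set and then relaxed. The diagonal scaling (the current iterate, bounded away from zero and infinity) and the fixed steplength are illustrative choices, not the rules analyzed in the paper.

```python
import numpy as np

# Scaled gradient projection sketch for min f(x) s.t. x >= 0,
# with f(x) = 0.5*x@A@x - b@x.  The constraint is active at the solution.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([-1.0, 2.0])

def grad(x):
    return A @ x - b                 # gradient of the quadratic

x = np.array([1.0, 1.0])
for _ in range(200):
    D = np.clip(x, 1e-10, 1e10)      # diagonal scaling, bounded as required
    y = np.maximum(x - 0.2 * D * grad(x), 0.0)  # scaled step + projection
    x = x + 0.5 * (y - x)            # relaxation along the feasible direction
print(x)                             # first coordinate driven to the boundary
```

The projection keeps every iterate feasible, while the variable scaling adapts the step to the geometry of the problem, which is the key mechanism the convergence analysis must accommodate.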
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
We consider a variable metric linesearch based proximal gradient method for
the minimization of the sum of a smooth, possibly nonconvex function plus a
convex, possibly nonsmooth term. We prove convergence of this iterative
algorithm to a critical point if the objective function satisfies the
Kurdyka-Lojasiewicz property at each point of its domain, under the assumption
that a limit point exists. The proposed method is applied to a wide collection
of image processing problems, and our numerical tests show that the algorithm
is flexible, robust, and competitive when compared with recently proposed
approaches for the optimization problems arising in the considered
applications.
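The smooth-plus-nonsmooth structure treated above can be sketched with a minimal proximal-gradient loop on a lasso-type problem, where the prox of the l1 term is soft thresholding. The Armijo-style backtracking below is a generic sufficient-decrease linesearch, not the specific variable-metric rule analyzed in the paper, and the problem data are illustrative.

```python
import numpy as np

# Proximal gradient for min_x 0.5*||A@x - b||^2 + mu*||x||_1:
# a smooth (least-squares) term plus a convex nonsmooth (l1) term.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
mu = 0.5

def f(x):                            # smooth part of the objective
    r = A @ x - b
    return 0.5 * r @ r

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(5)
step = 1.0
for _ in range(300):
    g = grad(x)
    while True:                      # backtracking: quadratic upper bound test
        z = soft_threshold(x - step * g, step * mu)
        d = z - x
        if f(z) <= f(x) + g @ d + (0.5 / step) * (d @ d):
            break
        step *= 0.5
    x = z
print(x)                             # sparse minimizer of the composite objective
```

At convergence the iterate is a fixed point of the prox-gradient map, which is exactly the stationarity notion the convergence analysis targets.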
Restoration of Poissonian Images Using Alternating Direction Optimization
Much research has been devoted to the problem of restoring Poissonian images,
notably for medical and astronomical applications. However, the restoration of
these images using state-of-the-art regularizers (such as those based on
multiscale representations or total variation) is still an active research
area, since the associated optimization problems are quite challenging. In this
paper, we propose an approach to deconvolving Poissonian images, which is based
on an alternating direction optimization method. The standard regularization
(or maximum a posteriori) restoration criterion, which combines the Poisson
log-likelihood with a (non-smooth) convex regularizer (log-prior), leads to
hard optimization problems: the log-likelihood is non-quadratic and
non-separable, the regularizer is non-smooth, and there is a non-negativity
constraint. Using standard convex analysis tools, we present sufficient
conditions for existence and uniqueness of solutions of these optimization
problems, for several types of regularizers: total-variation, frame-based
analysis, and frame-based synthesis. We attack these problems with an instance
of the alternating direction method of multipliers (ADMM), which belongs to the
family of augmented Lagrangian algorithms. We study sufficient conditions for
convergence and show that these are satisfied, either under total-variation or
frame-based (analysis and synthesis) regularization. The resulting algorithms
are shown to outperform alternative state-of-the-art methods, both in terms of
speed and restoration accuracy.
Comment: 12 pages, 12 figures, 2 tables. Submitted to the IEEE Transactions on
Image Processing.
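ADMM handles the hard structure described above by splitting the objective into subproblems solved in alternation; the subproblem attached to the Poisson log-likelihood is a proximity operator that separates per pixel and admits a closed form. The sketch below shows one such prox, with illustrative inputs; the function name and data are assumptions for the example, not the paper's notation.

```python
import numpy as np

def prox_poisson_nll(v, y, gamma):
    """argmin_x  gamma*(x - y*log(x)) + 0.5*(x - v)**2  over x > 0.

    Setting the derivative gamma*(1 - y/x) + x - v to zero gives the
    quadratic x**2 + (gamma - v)*x - gamma*y = 0; the nonnegative root
    is returned (elementwise for array inputs)."""
    return 0.5 * ((v - gamma) + np.sqrt((v - gamma) ** 2 + 4.0 * gamma * y))

y = np.array([3.0, 0.0, 10.0])   # observed Poisson counts
v = np.array([2.0, 1.0, 8.0])    # current estimate from the other ADMM block
x = prox_poisson_nll(v, y, gamma=0.5)
print(x)                          # strictly positive wherever y > 0
```

Because this prox is separable and cheap, each ADMM iteration reduces to simple elementwise updates plus the subproblems for the regularizer and the blur operator, which is what makes the alternating-direction approach competitive in speed.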