On starting and stopping criteria for nested primal-dual iterations
The importance of an adequate inner loop starting point (as opposed to a
sufficient inner loop stopping rule) is discussed in the context of a numerical
optimization algorithm consisting of nested primal-dual proximal-gradient
iterations. While the number of inner iterations is fixed in advance,
convergence of the whole algorithm is still guaranteed by virtue of a
warm-start strategy for the inner loop, showing that inner loop "starting
rules" can be just as effective as "stopping rules" for guaranteeing
convergence. The algorithm itself is applicable to the numerical solution of
convex optimization problems defined by the sum of a differentiable term and
two possibly non-differentiable terms. One of the non-differentiable terms must
be the composition of a linear map with a proximable function, and the
differentiable term must have a computable gradient. The algorithm reduces to the
classical proximal gradient algorithm in certain special cases and it also
generalizes other existing algorithms. In addition, under some conditions of
strong convexity, we show a linear rate of convergence.
Comment: 18 pages, no figures
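The classical proximal gradient iteration that the algorithm reduces to in special cases can be sketched as follows. This is a generic ISTA-style example; the operator `A`, the l1 penalty, and all parameter values are illustrative and not taken from the paper. The `x0` argument shows where a warm start would enter:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, x0, n_iter=200):
    # Classical proximal gradient (ISTA) for min 0.5||Ax - b||^2 + lam*||x||_1.
    # Passing x0 allows warm-starting, the inner-loop "starting rule" idea.
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x

# Illustrative sparse recovery problem (synthetic data, hypothetical sizes).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.1, x0=np.zeros(20))
```

Warm-starting here means passing the previous outer iterate as `x0` instead of zeros, which is the kind of inner-loop starting rule the abstract contrasts with stopping rules.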
Linear inverse problems with noise: primal and primal-dual splitting
In this paper, we propose two algorithms for solving linear inverse problems
when the observations are corrupted by noise. A proper data fidelity term
(log-likelihood) is introduced to reflect the statistics of the noise (e.g.
Gaussian, Poisson). On the other hand, as a prior, the images to restore are
assumed to be positive and sparsely represented in a dictionary of waveforms.
Piecing together the data fidelity and the prior terms, the solution to the
inverse problem is cast as the minimization of a non-smooth convex functional.
We establish the well-posedness of the optimization problem, characterize the
corresponding minimizers, and solve it by means of primal and primal-dual
proximal splitting algorithms originating from the field of non-smooth convex
optimization theory. Experimental results on deconvolution, inpainting and
denoising, with comparisons to prior methods, are also reported.
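As a generic illustration of a primal-dual proximal splitting for problems of the form min f(x) + g(Lx), the sketch below runs a Chambolle-Pock-style iteration on a toy 1D total-variation denoising instance; the operator, data, and parameters are all illustrative, not the paper's algorithm or data model:

```python
import numpy as np

def primal_dual(b, D, lam, n_iter=500):
    # Chambolle-Pock-style primal-dual splitting for
    #   min_x 0.5*||x - b||^2 + lam*||D x||_1   (toy denoising instance).
    L = np.linalg.norm(D, 2)
    tau = sigma = 0.9 / L          # step sizes satisfying tau*sigma*||D||^2 < 1
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(D.shape[0])
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of lam*||.||_1 is a clip to [-lam, lam].
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # Primal step: prox of 0.5*||. - b||^2.
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2 * x_new - x      # extrapolation
        x = x_new
    return x

# Noisy piecewise-constant signal (synthetic, hypothetical sizes).
n = 50
rng = np.random.default_rng(1)
b = np.concatenate([np.zeros(25), np.ones(25)]) + 0.05 * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)     # first-difference (discrete-gradient) operator
x_den = primal_dual(b, D, lam=0.2)
```

The dual variable `y` lives in the range of `D`; the clip step is the projection onto the l-infinity ball that defines the conjugate of the l1 norm.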
Inverse Problems with Poisson noise: Primal and Primal-Dual Splitting
In this paper, we propose two algorithms for solving linear inverse problems
when the observations are corrupted by Poisson noise. A proper data fidelity
term (log-likelihood) is introduced to reflect the Poisson statistics of the
noise. On the other hand, as a prior, the images to restore are assumed to be
positive and sparsely represented in a dictionary of waveforms. Piecing
together the data fidelity and the prior terms, the solution to the inverse
problem is cast as the minimization of a non-smooth convex functional. We
establish the well-posedness of the optimization problem, characterize the
corresponding minimizers, and solve it by means of primal and primal-dual
proximal splitting algorithms originating from the field of non-smooth convex
optimization theory. Experimental results on deconvolution and comparisons to
prior methods are also reported.
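A Poisson log-likelihood data term enters proximal splitting algorithms through its proximity operator, which is available in closed form per component. The sketch below shows that formula; the variable names and step size are illustrative, not taken from the paper:

```python
import numpy as np

def prox_poisson_nll(v, counts, tau):
    # Proximal operator of tau * f, where f(x) = sum(x - counts*log(x)) is the
    # negative Poisson log-likelihood (up to constants) for observed counts >= 0.
    # Setting the gradient of tau*f(x) + 0.5*||x - v||^2 to zero gives the
    # quadratic x^2 + (tau - v)*x - tau*counts = 0; take the positive root.
    return 0.5 * (v - tau + np.sqrt((v - tau) ** 2 + 4.0 * tau * counts))

# Illustrative per-pixel evaluation (hypothetical values).
v = np.array([0.2, 1.0, 5.0])
counts = np.array([1.0, 2.0, 3.0])
x = prox_poisson_nll(v, counts, tau=0.5)
```

The positive root guarantees the output stays in the domain of the log, which is how the positivity constraint on the restored image interacts with the Poisson data term.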
Undersampled Phase Retrieval with Outliers
We propose a general framework for reconstructing transform-sparse images
from undersampled (squared)-magnitude data corrupted with outliers. This
framework is implemented using a multi-layered approach, combining multiple
initializations (to address the nonconvexity of the phase retrieval problem),
repeated minimization of a convex majorizer (surrogate for a nonconvex
objective function), and iterative optimization using the alternating
directions method of multipliers. Exploiting the generality of this framework,
we investigate using a Laplace measurement noise model better adapted to
outliers present in the data than the conventional Gaussian noise model. Using
simulations, we explore the sensitivity of the method to both the
regularization and penalty parameters. We include 1D Monte Carlo and 2D image
reconstruction comparisons with alternative phase retrieval algorithms. The
results suggest the proposed method, with the Laplace noise model, both
increases the likelihood of correct support recovery and reduces the mean
squared error from measurements containing outliers. We also describe exciting
extensions made possible by the generality of the proposed framework, including
regularization using analysis-form sparsity priors that are incompatible with
many existing approaches.
Comment: 11 pages, 9 figures
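To illustrate why a Laplace noise model is more outlier-robust than a Gaussian one, here is a minimal ADMM sketch for a least-absolute-deviation fit. It is not the paper's phase-retrieval algorithm (which handles nonconvex magnitude data with majorizers and multiple initializations); it only shows the l1 data-fidelity idea in a linear setting, with all names and parameters hypothetical:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_fit(A, b, rho=1.0, n_iter=200):
    # ADMM for the Laplace-model fit  min_x ||A x - b||_1,
    # using the splitting z = A x - b.
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    lsq_op = np.linalg.solve(A.T @ A, A.T)          # cached least-squares operator
    for _ in range(n_iter):
        x = lsq_op @ (b + z - u)                    # x-update: least squares
        z = soft_threshold(A @ x - b + u, 1.0 / rho)  # z-update: l1 prox
        u = u + A @ x - b - z                       # dual (scaled multiplier) update
    return x

# Synthetic data with gross outliers (hypothetical sizes and values).
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true
b[:5] += 20.0                                       # outliers in 5 of 60 measurements
x_l1 = admm_l1_fit(A, b)                            # Laplace (l1) fit
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]         # Gaussian (least-squares) fit
```

The l1 data term charges each residual linearly, so a few gross outliers are absorbed into large residuals instead of dragging the whole solution, whereas the squared Gaussian term is dominated by them.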