Continuous Primal-Dual Methods for Image Processing
In this article we study a continuous primal-dual method proposed by Appleton and Talbot and generalize it to other problems in image processing. We interpret it as an Arrow-Hurwicz method, which leads to a better description of the resulting system of PDEs. We show existence and uniqueness of solutions and obtain a convergence result for the denoising problem. Our analysis also yields new a posteriori estimates.
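For orientation, here is a minimal sketch of the Arrow-Hurwicz structure behind such continuous primal-dual flows, written for the standard ROF total-variation denoising model; the model choice and the notation u, p, f, λ are assumptions for illustration, not taken from the abstract:

```latex
% Saddle-point (Arrow-Hurwicz) sketch for ROF denoising -- illustrative,
% not necessarily the exact system studied in the paper.
\min_{u}\ \max_{|p|\le 1}\ \mathcal{L}(u,p)
  = \int_\Omega p\cdot\nabla u\,\mathrm{d}x
  + \frac{\lambda}{2}\int_\Omega (u-f)^2\,\mathrm{d}x
\qquad\Longrightarrow\qquad
\begin{cases}
\partial_t u = -\nabla_u\mathcal{L} = \operatorname{div} p - \lambda(u-f),\\[2pt]
\partial_t p = \phantom{-}\nabla_p\mathcal{L} = \nabla u,
  \quad |p|\le 1 \text{ enforced by projection.}
\end{cases}
```

The flow descends in the primal variable u and ascends in the dual variable p, which is the defining feature of Arrow-Hurwicz dynamics.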
Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring the primal and the dual problems into play is, however, a more recent idea, one that has generated many important contributions in recent years. These developments are grounded in advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization with an emphasis on sparsity. In this paper, we present the principles of primal-dual approaches and give an overview of the numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
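For concreteness, the saddle-point form underlying most algorithms surveyed in such overviews can be sketched as follows; the splitting f + g∘L and the operators below are generic placeholders, not specific to this paper:

```latex
% Generic Fenchel--Rockafellar primal-dual pair (illustrative notation).
\min_{x}\; f(x) + g(Lx)
\;\;\longleftrightarrow\;\;
\min_{x}\max_{y}\; f(x) + \langle Lx,\,y\rangle - g^{*}(y)
\;\;\longleftrightarrow\;\;
\max_{y}\; -f^{*}(-L^{*}y) - g^{*}(y).
```

Primal-dual schemes of this kind alternate proximal (or gradient) steps on x and y in the middle saddle-point problem, so that f, g, and L are only ever handled separately, never jointly inverted.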
On starting and stopping criteria for nested primal-dual iterations
The importance of an adequate inner-loop starting point (as opposed to a sufficient inner-loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. Although the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner-loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence. The algorithm itself applies to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms should take the form of the composition of a linear map and a proximable function, while the differentiable term needs an accessible gradient. The algorithm reduces to the classical proximal-gradient algorithm in certain special cases, and it also generalizes other existing algorithms. In addition, under some strong-convexity conditions, we show a linear rate of convergence.
Comment: 18 pages, no figures
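A minimal sketch of such a nested scheme, with a fixed inner-iteration budget and a warm-started dual variable, might look as follows. All names, step sizes, and the toy problem are assumptions for illustration, not the authors' exact algorithm; the step sizes must satisfy the usual primal-dual conditions (e.g. tau * sigma * ||L||^2 < 1).

```python
import numpy as np


def nested_primal_dual(grad_f, prox_g, prox_h_conj, L, Lt, x0,
                       tau=0.1, sigma=0.1, n_outer=300, n_inner=5):
    """Illustrative sketch: minimize f(x) + g(x) + h(Lx) with f smooth and
    g, h proximable. A fixed number of inner primal-dual iterations is run
    per outer step, warm-started from the previous outer iteration."""
    x = x0.copy()
    y = np.zeros(L.shape[0])  # dual variable, carried over between outer loops (warm start)
    for _ in range(n_outer):
        z = x - tau * grad_f(x)                           # forward (gradient) step on f
        for _ in range(n_inner):                          # fixed inner iteration budget
            y = prox_h_conj(y + sigma * (L @ x), sigma)   # dual ascent step on h*
            x = prox_g(z - tau * (Lt @ y), tau)           # primal descent step on g
    return x


# Toy usage (hypothetical): 1-D total-variation denoising,
# f(x) = 0.5 * ||x - b||^2, g = 0, h = lam * ||.||_1 composed with differences.
rng = np.random.default_rng(0)
b = np.repeat([0.0, 1.0, 0.0], 30) + 0.1 * rng.standard_normal(90)
D = np.diff(np.eye(b.size), axis=0)                       # forward-difference operator
lam = 0.5
x_hat = nested_primal_dual(
    grad_f=lambda x: x - b,                               # gradient of the quadratic term
    prox_g=lambda v, t: v,                                # g = 0, so its prox is the identity
    prox_h_conj=lambda y, s: np.clip(y, -lam, lam),       # prox of (lam*||.||_1)* = projection
    L=D, Lt=D.T, x0=b)
```

Because the dual variable y is carried over between outer iterations, the fixed budget n_inner never needs to grow; this is precisely the "starting rule instead of stopping rule" idea described in the abstract.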