7 research outputs found

    On starting and stopping criteria for nested primal-dual iterations

    Full text link
    The importance of an adequate inner loop starting point (as opposed to a sufficient inner loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. While the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence. The algorithm itself is applicable to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms should take the form of the composition of a linear map and a proximable function, while the differentiable term must have an accessible gradient. The algorithm reduces to the classical proximal gradient algorithm in certain special cases, and it also generalizes other existing algorithms. In addition, under some conditions of strong convexity, we show a linear rate of convergence. Comment: 18 pages, no figures.
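    The structure described in the abstract — minimizing f(x) + g(x) + h(Lx) with f differentiable, g and h proximable, a fixed number of inner primal-dual iterations, and a warm-started inner loop in place of a stopping rule — can be sketched as follows. This is an illustrative reconstruction under assumed step-size conditions, not the authors' exact algorithm; all function names and the particular inner primal-dual update are assumptions.

    ```python
    import numpy as np

    def nested_primal_dual(grad_f, prox_g, prox_h_conj, L, Lt, x0,
                           tau, sigma, n_outer=200, n_inner=5):
        """Illustrative sketch of a nested primal-dual proximal-gradient
        scheme for min_x f(x) + g(x) + h(L x), with f differentiable and
        g, h proximable.

        Outer loop: a forward (gradient) step on the smooth term f.
        Inner loop: a FIXED number of primal-dual iterations approximating
        prox_{tau*(g + h o L)}.  The dual variable u is carried over between
        outer iterations (warm start) rather than the inner loop being run
        to a stopping tolerance -- a "starting rule" instead of a
        "stopping rule".
        """
        x = x0.copy()
        u = np.zeros_like(L(x0))          # dual variable, never reset: the warm start
        for _ in range(n_outer):
            z = x - tau * grad_f(x)       # forward step on f
            for _ in range(n_inner):      # inner iteration count fixed in advance
                x = prox_g(z - tau * Lt(u), tau)           # primal update
                u = prox_h_conj(u + sigma * L(x), sigma)   # dual update
        return x
    ```

    As a sanity check, with f(x) = ½‖x − b‖², g = 0, L = I and h = λ‖·‖₁, the Moreau decomposition gives prox of h* as the projection onto the ℓ∞-ball of radius λ, and the iterates approach the soft-thresholding of b.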

    On starting and stopping criteria for nested primal-dual iterations

    No full text
    The importance of an adequate inner loop starting point (as opposed to a sufficient inner loop stopping rule) is discussed in the context of a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. While the number of inner iterations is fixed in advance, convergence of the whole algorithm is still guaranteed by virtue of a warm-start strategy for the inner loop, showing that inner loop “starting rules” can be just as effective as “stopping rules” for guaranteeing convergence. The algorithm itself is applicable to the numerical solution of convex optimization problems defined by the sum of a differentiable term and two possibly non-differentiable terms. One of the latter terms should take the form of the composition of a linear map and a proximable function, while the differentiable term needs an accessible gradient. The algorithm reduces to the classical proximal gradient algorithm in certain special cases and it also generalizes other existing algorithms. In addition, under some conditions of strong convexity, we show a linear rate of convergence.

    On starting and stopping criteria for nested primal-dual iterations encountered in imaging

    No full text
    The importance of an adequate inner loop starting point (as opposed to an inner loop stopping rule) is discussed for a numerical optimization algorithm consisting of nested primal-dual proximal-gradient iterations. While the number of inner iterations is fixed in advance, convergence of the algorithm is guaranteed by virtue of an inner loop warm-start strategy, showing that inner loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence.

    On starting and stopping criteria for nested primal-dual iterations encountered in imaging

    No full text
    Some numerical optimization algorithms consisting of nested (double loop) iterations are proposed that can be used for the minimization of a convex cost function consisting of a sum of three parts: a differentiable part, a proximable part, and the composition of a linear map with a proximable function. While the number of inner iterations is fixed in advance in these primal-dual algorithms, convergence of the algorithm is guaranteed by virtue of an inner loop warm-start strategy, showing that inner loop "starting rules" can be just as effective as "stopping rules" for guaranteeing convergence. The algorithm itself is applicable to the numerical solution of convex optimization problems encountered in inverse problems, imaging and statistics. It reduces to the classical proximal gradient algorithm in certain special cases and it also generalizes other existing algorithms.
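    When the linear-composition term is absent, the nested scheme collapses to the classical proximal gradient (forward-backward) method that the abstract says it reduces to in special cases. A minimal sketch, with illustrative names:

    ```python
    import numpy as np

    def proximal_gradient(grad_f, prox_g, x0, tau, n_iter=100):
        """Classical proximal gradient (forward-backward) for min f(x) + g(x):
        a forward (gradient) step on the differentiable f, followed by the
        proximal operator of g with step tau."""
        x = x0.copy()
        for _ in range(n_iter):
            x = prox_g(x - tau * grad_f(x), tau)
        return x
    ```

    For instance, taking f(x) = ½‖x − b‖² and g = λ‖·‖₁ with the soft-thresholding prox recovers the ISTA iteration familiar from imaging and sparse regression.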