
    An optimal subgradient algorithm for large-scale convex optimization in simple domains

    This paper shows that the optimal subgradient algorithm OSGA, proposed in [NeuO], can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective over a simple closed convex domain, where the orthogonal projection onto this feasible domain is efficiently available; (ii) a convex objective with a simple convex functional constraint. If we equip OSGA with an appropriate prox-function, the OSGA subproblem can be solved either in closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for some applications to show the efficiency of the proposed scheme. A software package implementing OSGA for the above domains is available.
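
    For case (i), the sketch below (Python) illustrates the kind of cheap orthogonal projection oracle the paper assumes, used here inside a plain projected subgradient loop. This is not OSGA itself: the prox-function machinery and the optimal accumulation scheme are omitted, and the projections, test problem, and diminishing step size are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo, hi):
    # Orthogonal projection onto the box {x : lo <= x <= hi},
    # one example of a "simple domain" with a cheap projection.
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    # Orthogonal projection onto the Euclidean ball of the given radius.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def projected_subgradient(f, subgrad, project, x0, n_iter=500):
    # Baseline projected subgradient method with a diminishing step size.
    # NOTE: this is NOT OSGA; it only shows how a projection oracle enters
    # a first-order loop. OSGA adds a prox-function and an optimal scheme.
    x, best_x, best_f = x0, x0, f(x0)
    for k in range(1, n_iter + 1):
        x = project(x - subgrad(x) / np.sqrt(k))
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Illustrative problem: minimize ||Ax - b||_1 over the unit ball.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
f = lambda x: np.abs(A @ x - b).sum()
subgrad = lambda x: A.T @ np.sign(A @ x - b)
x_best, f_best = projected_subgradient(f, subgrad, project_ball, np.zeros(10))
print(f_best)
```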

    ACQUIRE: an inexact iteratively reweighted norm approach for TV-based Poisson image restoration

    We propose a method, called ACQUIRE, for the solution of constrained optimization problems modeling the restoration of images corrupted by Poisson noise. The objective function is the sum of a generalized Kullback-Leibler divergence term and a TV regularizer, subject to nonnegativity and possibly other constraints, such as flux conservation. ACQUIRE is a line-search method that considers a smoothed version of TV, based on a Huber-like function, and computes the search directions by minimizing quadratic approximations of the problem, built by exploiting some second-order information: a classical second-order Taylor approximation is used for the Kullback-Leibler term, and an iteratively reweighted norm approach for the smoothed TV term. We prove that the sequence generated by the method has a subsequence converging to a minimizer of the smoothed problem, and that any limit point is a minimizer; furthermore, if the problem is strictly convex, the whole sequence is convergent. Convergence is achieved without requiring exact minimization of the quadratic subproblems, so low accuracy in this minimization can be used in practice, as shown by numerical results. Experiments on reference test problems show that our method is competitive with well-established methods for TV-based Poisson image restoration, in terms of both computational efficiency and image quality.
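
    A minimal sketch (Python), under assumed conventions, of the two ingredients named above: a Huber-like smoothing of the discrete TV term, and the iteratively-reweighted-norm weights that majorize it by a weighted quadratic at the current iterate. The forward-difference gradient and the parameter delta are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def grad2d(u):
    # Forward-difference discrete gradient of a 2D image
    # (zero difference on the last row/column).
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def huber_tv(u, delta=1e-2):
    # Huber-like smoothed TV: quadratic near zero, linear beyond delta.
    gx, gy = grad2d(u)
    m = np.sqrt(gx**2 + gy**2)
    return np.where(m <= delta, m**2 / (2 * delta), m - delta / 2).sum()

def irn_weights(u, delta=1e-2):
    # Iteratively reweighted norm idea: at the current iterate, the smoothed
    # TV is majorized by a weighted quadratic with weights 1/max(|grad u|, delta),
    # so each subproblem reduces to an (inexactly solvable) quadratic model.
    gx, gy = grad2d(u)
    return 1.0 / np.maximum(np.sqrt(gx**2 + gy**2), delta)
```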

    Limited-memory scaled gradient projection methods for real-time image deconvolution in microscopy

    Gradient projection methods have given rise to effective tools for image deconvolution in several relevant areas, such as microscopy, medical imaging and astronomy. Due to the large scale of the optimization problems arising in today's imaging applications, and to the growing demand for real-time reconstructions, an interesting challenge is to design new acceleration techniques for gradient schemes that preserve the simplicity and low computational cost of each iteration. In this work we propose an acceleration strategy for a state-of-the-art scaled gradient projection method for image deconvolution in microscopy. The acceleration idea is derived by adapting a step-length selection rule, recently introduced for limited-memory steepest descent methods in unconstrained optimization, to the special constrained optimization framework arising in image reconstruction. We describe how we addressed the main issues involved in generalizing the step-length rule to the imaging optimization problem, and we evaluate the improvements due to the acceleration strategy by numerical experiments on large-scale image deconvolution problems.
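
    The sketch below (Python) shows the basic structure of one scaled gradient projection iteration on the nonnegative orthant, with a diagonal scaling and an Armijo line search along the feasible direction. It is a generic SGP step, not the accelerated method of the paper; the step-length alpha, whose selection is the paper's subject, is taken as an input, and all names and parameters are assumptions.

```python
import numpy as np

def sgp_step(x, f, grad_f, scaling, alpha, beta=1e-4, sigma=0.5):
    # One scaled gradient projection iteration:
    #   d = P_{x >= 0}(x - alpha * D * grad f(x)) - x,   x+ = x + lam * d,
    # where D is a diagonal scaling (given as a vector) and lam is chosen
    # by Armijo backtracking. The acceleration studied in the paper lies
    # in how the step-length alpha is selected (a limited-memory rule
    # adapted to the constrained setting), which is outside this sketch.
    g = grad_f(x)
    d = np.maximum(x - alpha * scaling * g, 0.0) - x   # feasible direction
    if not d.any():                                    # stationary point
        return x
    lam, fx = 1.0, f(x)
    while f(x + lam * d) > fx + beta * lam * (g @ d):  # Armijo condition
        lam *= sigma
    return x + lam * d
```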

    Steplength selection in gradient projection methods for box-constrained quadratic programs

    The role of steplength selection strategies in gradient methods has been widely investigated in the last decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed, which have contributed to making gradient approaches an effective tool for the large-scale optimization problems arising in important real-world applications. Most of these steplength rules were conceived for unconstrained optimization, with the aim of exploiting some second-order information to achieve a fast annihilation of the gradient of the objective function. However, these rules are also successfully used within gradient projection methods for constrained optimization, although, to our knowledge, a detailed analysis of the effects of the constraints on the steplength selection is still not available. In this work we investigate how the presence of box constraints affects the spectral properties of the Barzilai–Borwein rules in quadratic programming problems. The proposed analysis suggests new steplength selection strategies specifically designed to take into account the active constraints at each iteration. The results of a set of numerical experiments show the effectiveness of the new rules with respect to other state-of-the-art steplength selections, and their potential usefulness also in the case of box-constrained non-quadratic optimization problems.
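
    To make the objects under discussion concrete, here is a sketch (Python) of the two classical Barzilai–Borwein rules and their use inside a gradient projection loop for a box-constrained QP. Safeguards and the constraint-aware rules proposed in the work are omitted; all names are illustrative.

```python
import numpy as np

def bb_steplengths(s, y):
    # Classical Barzilai-Borwein rules, from s = x_k - x_{k-1} and
    # y = g_k - g_{k-1}:  BB1 = s's / s'y  and  BB2 = s'y / y'y.
    sy = s @ y
    return (s @ s) / sy, sy / (y @ y)

def gp_bb_box(Q, c, lo, hi, x0, n_iter=200):
    # Gradient projection with the BB2 steplength for the box-constrained QP
    #   min 0.5 x'Qx + c'x   s.t.   lo <= x <= hi.
    # The point made in the paper: the BB rules "see" the spectrum of the
    # full Hessian Q, while near a solution only the submatrix indexed by
    # the free (inactive) variables matters, motivating rules that take
    # the active constraints into account.
    x = np.clip(x0, lo, hi)
    g = Q @ x + c
    alpha = 1.0 / max(np.linalg.norm(g), 1e-12)
    for _ in range(n_iter):
        x_new = np.clip(x - alpha * g, lo, hi)
        g_new = Q @ x_new + c
        s, y = x_new - x, g_new - g
        if s @ y > 0:                        # curvature safeguard
            _, alpha = bb_steplengths(s, y)  # BB2 rule
        x, g = x_new, g_new
    return x
```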