6 research outputs found

    ACQUIRE: an inexact iteratively reweighted norm approach for TV-based Poisson image restoration

    We propose a method, called ACQUIRE, for the solution of constrained optimization problems modeling the restoration of images corrupted by Poisson noise. The objective function is the sum of a generalized Kullback-Leibler divergence term and a TV regularizer, subject to nonnegativity and possibly other constraints, such as flux conservation. ACQUIRE is a line-search method that considers a smoothed version of TV, based on a Huber-like function, and computes the search directions by minimizing quadratic approximations of the problem, built by exploiting some second-order information. A classical second-order Taylor approximation is used for the Kullback-Leibler term and an iteratively reweighted norm approach for the smoothed TV term. We prove that the sequence generated by the method has a subsequence converging to a minimizer of the smoothed problem and that every limit point of the sequence is such a minimizer. Furthermore, if the problem is strictly convex, the whole sequence is convergent. We note that convergence is achieved without requiring the exact minimization of the quadratic subproblems; low accuracy in this minimization can be used in practice, as shown by numerical results. Experiments on reference test problems show that our method is competitive with well-established methods for TV-based Poisson image restoration, in terms of both computational efficiency and image quality.
    Comment: 37 pages, 13 figures
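The two ingredients named in the abstract, a Huber-like smoothing of TV and iteratively reweighted norm (IRN) weights, can be sketched as follows. This is a minimal illustration under standard assumptions (forward-difference discrete gradient, Huber threshold `delta`); the function names and the exact smoothing are illustrative, not taken from the paper.

```python
import numpy as np

def huber(t, delta):
    """Huber-like smoothing of |t|: quadratic near zero, linear in the tails."""
    return np.where(np.abs(t) <= delta,
                    t**2 / (2 * delta),
                    np.abs(t) - delta / 2)

def grad_mag(u):
    """Magnitude of the forward-difference discrete gradient of image u
    (last row/column replicated, i.e. reflexive boundary)."""
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sqrt(dx**2 + dy**2)

def smoothed_tv(u, delta):
    """Huber-smoothed total variation: sum of huber(|grad u|) over pixels."""
    return huber(grad_mag(u), delta).sum()

def irn_weights(u, delta):
    """Iteratively-reweighted-norm weights at the current iterate: the
    smoothed TV is majorized by a weighted quadratic in the gradient
    magnitudes with these weights (capped at 1/delta near flat regions)."""
    return 1.0 / np.maximum(grad_mag(u), delta)
```

Each outer iteration would recompute `irn_weights` at the current image and use the resulting weighted quadratic, together with a second-order Taylor model of the Kullback-Leibler term, as the subproblem to be minimized inexactly.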

    Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity

    We consider nonconvex constrained optimization problems and propose a new approach to the convergence analysis based on penalty functions. We make use of classical penalty functions in an unconventional way, in that penalty functions enter only in the theoretical analysis of convergence, while the algorithm itself is penalty-free. Based on this idea, we are able to establish several new results, including the first general analysis for diminishing stepsize methods in nonconvex, constrained optimization, showing convergence to generalized stationary points, and a complexity study for SQP-type algorithms.
    Comment: To appear in Mathematics of Operations Research
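A diminishing stepsize rule of the kind analyzed here is one that is not summable but is square-summable, e.g. gamma_k = gamma_0/(k+1). A minimal sketch, using projected gradient steps on a simple constrained problem as a stand-in (the names and the toy setup are illustrative, not the paper's algorithm):

```python
import numpy as np

def diminishing_steps(gamma0, n):
    """Stepsizes gamma_k = gamma0/(k+1): sum diverges, sum of squares
    converges -- the classical conditions behind diminishing-stepsize
    convergence analyses."""
    return gamma0 / (np.arange(n) + 1.0)

def projected_gradient(x0, grad, project, gamma0=1.0, iters=500):
    """Projected gradient method with a diminishing stepsize schedule."""
    x = np.asarray(x0, dtype=float)
    for g in diminishing_steps(gamma0, iters):
        x = project(x - g * grad(x))
    return x

# Example: minimize (x - 2)^2 subject to x in [0, 1]; the solution is x = 1.
x_star = projected_gradient(0.0,
                            grad=lambda x: 2 * (x - 2),
                            project=lambda x: np.clip(x, 0.0, 1.0))
```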

    Diminishing Stepsize Methods for Nonconvex Composite Problems via Ghost Penalties: from the General to the Convex Regular Constrained Case

    In this paper we first extend the diminishing stepsize method for nonconvex constrained problems presented in [4] to deal with equality constraints and a nonsmooth objective function of composite type. We then consider the particular case in which the constraints are convex and satisfy a standard constraint qualification, and show that in this setting the algorithm can be considerably simplified, reducing the computational burden of each iteration.
    Comment: arXiv admin note: text overlap with arXiv:1709.0338

    Feasible methods for nonconvex nonsmooth problems with applications in green communications

    We propose a general feasible method for nonsmooth, nonconvex constrained optimization problems. The algorithm is based on the (inexact) solution of a sequence of strongly convex optimization subproblems, followed by a step-size procedure. Key features of the scheme are: (i) it preserves feasibility of the iterates for nonconvex problems with nonconvex constraints, (ii) it can handle nonsmooth problems, and (iii) it naturally leads to parallel/distributed implementations. We illustrate the application of the method to an open problem in green communications whereby the energy consumption in MIMO multiuser interference networks is minimized, subject to nonconvex Quality-of-Service constraints.
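The pattern described above, solve a strongly convex subproblem, then take a convex-combination step that keeps the iterate feasible, can be sketched for the simplified case of a convex feasible set (the paper handles nonconvex constraints via convex inner approximations, which are not shown here; all names and the quadratic model are illustrative):

```python
import numpy as np

def sca_feasible(x0, grad_f, project, tau=1.0, gamma0=1.0, iters=200):
    """Feasibility-preserving successive-convex-approximation sketch.
    Each subproblem minimizes the strongly convex model
        grad_f(x)^T (y - x) + (tau/2) * ||y - x||^2
    over the feasible set, whose solution is a projected gradient point.
    The update is a convex combination of x and the subproblem solution,
    so every iterate stays feasible when the feasible set is convex."""
    x = project(np.asarray(x0, dtype=float))
    for k in range(iters):
        xhat = project(x - grad_f(x) / tau)   # strongly convex subproblem
        gamma = gamma0 / (k + 1)              # diminishing stepsize
        x = x + gamma * (xhat - x)            # convex combination: feasible
    return x

# Example: minimize (x - 3)^2 subject to x in [0, 1]; the solution is x = 1.
x_star = sca_feasible(0.5,
                      grad_f=lambda x: 2 * (x - 3),
                      project=lambda x: np.clip(x, 0.0, 1.0))
```

In the distributed setting the abstract mentions, the subproblem typically decomposes across users/blocks, which is what makes parallel implementations natural.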