
    A variable metric forward-backward method with extrapolation

    Forward-backward methods are a very useful tool for the minimization of a functional given by the sum of a differentiable term and a nondifferentiable one, and they have received considerable attention from many researchers in the last decade. In this paper we focus on the convex case and, inspired by recent approaches for accelerating first-order iterative schemes, we develop a scaled inertial forward-backward algorithm which is based on a metric changing at each iteration and on a suitable extrapolation step. Unlike standard forward-backward methods with extrapolation, our scheme is able to handle functions whose domain is not the entire space. Both an $\mathcal{O}(1/k^2)$ convergence rate estimate on the objective function values and the convergence of the sequence of iterates are proved. Numerical experiments on several test problems arising from image processing, compressed sensing and statistical inference show the effectiveness of the proposed method in comparison to well-performing state-of-the-art algorithms.
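    As a rough illustration of the forward-backward-with-extrapolation idea, the sketch below implements a plain FISTA-style accelerated proximal-gradient loop in Python. It is not the paper's variable-metric scheme: the metric is fixed to the identity, the step size is a constant 1/L, and the names `grad_f` and `prox_g` are assumptions introduced for the example.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=200):
    """Minimal forward-backward sketch with extrapolation (FISTA-style).

    Illustrative only: constant step size and identity metric are assumed,
    unlike the variable-metric scheme described in the abstract above.
    """
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        # forward (gradient) step on the smooth term, backward (proximal) step on the nonsmooth term
        x = prox_g(y - step * grad_f(y), step)
        # extrapolation step combining the last two iterates
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# example use (assumed problem): lasso  min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - lam * s, 0.0)  # soft-threshold
x_hat = fista(grad_f, prox_g, np.zeros(100), 1.0 / L)
```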

    Solution of feasibility problems via non-smooth optimization

    Ankara: The Department of Industrial Engineering and the Institute of Engineering and Sciences of Bilkent University, 1990. Thesis (Master's) -- Bilkent University, 1990. Includes bibliographical references (leaves 33-34). In this study we present a penalty function approach for linear feasibility problems. Our aim is to find an effective algorithm based on an exterior method. Any given feasibility problem (for a set of linear inequalities) is first transformed into an unconstrained minimization of a penalty function, and the problem is thereby reduced to minimizing a convex, non-smooth, quadratic function. Due to the non-differentiability of the penalty function, gradient-type methods cannot be applied directly, so a modified nonlinear programming technique is used to overcome the difficulties at the break points. In this research we present a new algorithm for minimizing this non-smooth penalty function. By dropping the nonnegativity constraints and using the conjugate gradient method, we compute a maximal set of conjugate directions and then perform line searches along these directions in order to minimize the penalty function. Whenever the optimality criterion is not satisfied and the improvements in all directions are insufficient, we compute a new set of conjugate directions by the conjugate Gram-Schmidt process, with one of the directions taken as an element of the subdifferential at the current point. Ouveysi, Iradj. M.S.
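    To make the exterior-penalty idea concrete, here is a minimal Python sketch that turns a linear feasibility problem Ax <= b into the unconstrained minimization of a squared-violation penalty. Both the specific penalty and the use of SciPy's L-BFGS-B routine are assumptions for illustration; the thesis develops its own conjugate-direction method for its non-smooth penalty, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def feasibility_by_penalty(A, b, x0=None):
    """Illustrative exterior-penalty approach to linear feasibility Ax <= b.

    Sketch under assumptions: the squared-violation penalty and the
    general-purpose L-BFGS-B minimizer stand in for the thesis' dedicated
    conjugate-direction method on its non-smooth penalty.
    """
    if x0 is None:
        x0 = np.zeros(A.shape[1])

    def penalty(x):
        v = np.maximum(A @ x - b, 0.0)   # constraint violations
        return 0.5 * float(v @ v)        # zero iff x is feasible

    def grad(x):
        v = np.maximum(A @ x - b, 0.0)
        return A.T @ v

    res = minimize(penalty, x0, jac=grad, method="L-BFGS-B")
    return res.x, res.fun                # res.fun close to 0 means res.x is (nearly) feasible
```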

    On the convergence of a linesearch based proximal-gradient method for nonconvex optimization

    We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Lojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
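    A minimal sketch of a linesearch-based proximal-gradient loop is given below, assuming Python, user-supplied `f`, `grad_f` and `prox_g`, and a standard backtracking test based on the quadratic upper bound of the smooth part; the paper's variable-metric scaling and its exact linesearch condition are not reproduced.

```python
import numpy as np

def prox_grad_linesearch(f, grad_f, prox_g, x0, step0=1.0, beta=0.5, n_iter=500):
    """Illustrative proximal-gradient loop with backtracking on the step size.

    Sketch only: identity metric and a standard sufficient-decrease test are
    assumed, not the variable-metric linesearch of the paper above.
    """
    x = x0.copy()
    for _ in range(n_iter):
        fx, gx = f(x), grad_f(x)
        step = step0
        while True:
            z = prox_g(x - step * gx, step)          # forward-backward trial point
            d = z - x
            # accept when the quadratic upper bound on the smooth part holds
            if f(z) <= fx + float(gx @ d) + 0.5 / step * float(d @ d) or step < 1e-12:
                break
            step *= beta                             # shrink the step and retry
        x = z
    return x
```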

    MM Algorithms for Minimizing Nonsmoothly Penalized Objective Functions

    In this paper, we propose a general class of algorithms for optimizing an extensive variety of nonsmoothly penalized objective functions that satisfy certain regularity conditions. The proposed framework utilizes the majorization-minimization (MM) algorithm as its core optimization engine. The resulting algorithms rely on iterated soft-thresholding, implemented componentwise, allowing for fast, stable updating that avoids the need for any high-dimensional matrix inversion. We establish a local convergence theory for this class of algorithms under weaker assumptions than previously considered in the statistical literature. We also demonstrate the exceptional effectiveness of new acceleration methods, originally proposed for the EM algorithm, in this class of problems. Simulation results and a microarray data example are provided to demonstrate the algorithm's capabilities and versatility. Comment: A revised version of this paper has been published in the Electronic Journal of Statistics.
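    The computational pattern described here, an MM update that reduces to componentwise soft-thresholding with no matrix inversion, can be sketched in Python for a lasso-type penalized least-squares problem. The separable quadratic majorizer and the l1 penalty are assumptions chosen for illustration and do not cover the paper's broader family of nonsmooth penalties.

```python
import numpy as np

def mm_soft_threshold(A, b, lam, n_iter=300):
    """Illustrative MM iteration for lasso-type penalized least squares.

    Each iteration minimizes a separable quadratic majorizer of
    0.5*||Ax - b||^2 + lam*||x||_1, which yields a componentwise
    soft-threshold update and avoids any matrix inversion.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part's gradient
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - b)) / L      # gradient step = minimizer of the quadratic majorizer
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # componentwise soft-threshold
    return x
```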