4,254 research outputs found

    Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems

    The proximal gradient algorithm has been widely used for convex optimization. Recently, it has also been extended to nonconvex problems, and the current state of the art is the nonmonotone accelerated proximal gradient algorithm. However, that algorithm typically requires two exact proximal steps in each iteration and can be inefficient when the proximal step is expensive. In this paper, we propose an efficient proximal gradient algorithm that requires only one inexact (and thus less expensive) proximal step in each iteration. Convergence to a critical point of the nonconvex problem is still guaranteed, with a O(1/k) convergence rate, which is the best rate for nonconvex problems with first-order methods. Experiments on a number of problems demonstrate that the proposed algorithm has performance comparable to the state of the art, but is much faster.
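
    A minimal sketch of the one-inexact-proximal-step idea, for a generic objective f(x) + g(x) with f smooth nonconvex and g simple (the function names, fixed step size eta, and inner-solver budget are illustrative assumptions, not the paper's exact scheme):

        import numpy as np

        def inexact_prox(g_subgrad, v, eta, inner_iters=5, lr=0.1):
            # Approximately solve min_u g(u) + ||u - v||^2 / (2*eta)
            # with a few subgradient steps; stopping early is what makes
            # the proximal step inexact, and therefore cheap.
            u = v.copy()
            for _ in range(inner_iters):
                u -= lr * (g_subgrad(u) + (u - v) / eta)
            return u

        def inexact_proximal_gradient(grad_f, g_subgrad, x0, eta=0.1, iters=200):
            # One forward (gradient) step and a single inexact backward
            # (proximal) step per iteration, as opposed to the two exact
            # proximal steps of the accelerated baseline.
            x = x0.copy()
            for _ in range(iters):
                v = x - eta * grad_f(x)
                x = inexact_prox(g_subgrad, v, eta)
            return x

        # Toy instance: nonconvex smooth f(x) = 0.5*||x||^2 + sum(cos(2x))
        # (gradient x - 2*sin(2x)) paired with g = ||.||_1.
        grad_f = lambda x: x - 2.0 * np.sin(2.0 * x)
        g_subgrad = lambda u: np.sign(u)
        x_star = inexact_proximal_gradient(grad_f, g_subgrad, np.ones(5))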

    Accelerated Inexact Composite Gradient Methods for Nonconvex Spectral Optimization Problems

    This paper presents two inexact composite gradient methods, one inner-accelerated and the other doubly accelerated, for solving a class of nonconvex spectral composite optimization problems. More specifically, the objective function for these problems is of the form f_1 + f_2 + h, where f_1 and f_2 are differentiable nonconvex matrix functions with Lipschitz continuous gradients, h is a proper closed convex matrix function, and both f_2 and h can be expressed as functions that operate on the singular values of their inputs. The methods essentially use an accelerated composite gradient method to solve a sequence of proximal subproblems involving the linear approximation of f_1 and the singular value functions underlying f_2 and h. Unlike other composite gradient-based methods, the proposed methods take advantage of both the composite and spectral structure underlying the objective function in order to efficiently generate their solutions. Numerical experiments are presented to demonstrate the practicality of these methods on a set of real-world and randomly generated spectral optimization problems.
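
    The spectral structure referred to above reduces matrix proximal steps to scalar operations on singular values. A small illustration using standard singular value soft-thresholding, i.e. the proximal map of the nuclear norm (this shows the spectral reduction only, not the paper's accelerated methods; prox_spectral, svt, and tau are assumed names):

        import numpy as np

        def prox_spectral(X, scalar_prox):
            # Proximal map of a unitarily invariant spectral function:
            # apply the corresponding scalar proximal map to the
            # singular values and rebuild the matrix.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(scalar_prox(s)) @ Vt

        # For h = tau * ||.||_* the scalar prox is soft-thresholding.
        tau = 0.5
        svt = lambda s: np.maximum(s - tau, 0.0)
        X = np.random.randn(6, 4)
        X_shrunk = prox_spectral(X, svt)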

    Gradient methods for minimizing composite objective function

    In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and another is general but simple and its structure is known. Despite the bad properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the good part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (which converge as O(1/k)), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For all methods, we suggest some efficient "line search" procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
    Keywords: local optimization, convex optimization, nonsmooth optimization, complexity theory, black-box model, optimal methods, structural optimization, l1-regularization
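
    A minimal sketch of an accelerated multistep scheme of the kind analyzed here, written in the common FISTA-style form for the l1-regularized least-squares instance (assuming the Lipschitz constant is known, whereas the paper estimates such parameters by line search; fista_l1 and lam are illustrative names):

        import numpy as np

        def fista_l1(A, b, lam, iters=500):
            # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with the O(1/k^2)
            # accelerated composite gradient iteration.
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
            x = y = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(iters):
                grad = A.T @ (A @ y - b)             # gradient of the smooth part at y
                z = y - grad / L                     # forward (gradient) step
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of the l1 term
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
                x, t = x_new, t_new
            return x

        A = np.random.randn(40, 100)
        b = np.random.randn(40)
        x_hat = fista_l1(A, b, lam=0.1)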