    Complexity-optimal and Parameter-free First-order Methods for Finding Stationary Points of Composite Optimization Problems

    This paper develops and analyzes an accelerated proximal descent method for finding stationary points of nonconvex composite optimization problems. The objective function is of the form f + h, where h is a proper closed convex function, f is a differentiable function on the domain of h, and ∇f is Lipschitz continuous on the domain of h. The main advantage of this method is that it is "parameter-free" in the sense that it does not require knowledge of the Lipschitz constant of ∇f or of any global topological properties of f. It is shown that the proposed method obtains an ε-approximate stationary point with iteration complexity bounds that are optimal, up to logarithmic factors in ε, in both the convex and nonconvex settings. It is also discussed how the proposed method can be leveraged in other existing optimization frameworks, such as min-max smoothing and penalty frameworks for constrained programming, to create more specialized parameter-free methods. Finally, numerical experiments are presented to support the practical viability of the method.
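    The abstract does not spell out the mechanism behind "parameter-free", but a common way to avoid knowing the Lipschitz constant of ∇f is a backtracking stepsize rule that enforces a quadratic upper-bound test on the fly. The sketch below is a plain (non-accelerated) proximal gradient method with such a rule for f + h with h = μ‖·‖₁; the function names and the specific sufficient-decrease test are illustrative assumptions, not the paper's accelerated method.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_prox_gradient(f, grad_f, prox_h, x0, lam0=1.0,
                           max_iter=500, tol=1e-6):
    """Proximal gradient with backtracking: the stepsize lam is
    halved until a sufficient-decrease condition holds, so no
    Lipschitz constant of grad_f is needed up front."""
    x, lam = x0.copy(), lam0
    for _ in range(max_iter):
        g = grad_f(x)
        while True:
            x_new = prox_h(x - lam * g, lam)
            d = x_new - x
            # Quadratic upper-bound test replaces knowledge of L.
            if f(x_new) <= f(x) + g @ d + (d @ d) / (2 * lam):
                break
            lam *= 0.5
        # Stationarity surrogate: the prox-gradient residual.
        if np.linalg.norm(d) / lam < tol:
            return x_new
        x, lam = x_new, lam * 2.0  # tentatively grow the step

    return x

# Hypothetical usage: LASSO-type f(x) = 0.5*||Ax - b||^2, h = mu*||x||_1
rng = np.random.default_rng(0)
A, b, mu = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
x_star = adaptive_prox_gradient(f, grad_f,
                                lambda v, t: prox_l1(v, t * mu),
                                np.zeros(100))
```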

    A Descent Method for Equality and Inequality Constrained Multiobjective Optimization Problems

    In this article we propose a descent method for equality and inequality constrained multiobjective optimization problems (MOPs), which generalizes the steepest descent method of Fliege and Svaiter for unconstrained MOPs to constrained problems by using two active set strategies. Under some regularity assumptions on the problem, we show that accumulation points of our descent method satisfy a necessary condition for local Pareto optimality. Finally, we illustrate the typical behavior of our method on a numerical example.
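    For context on the subproblem being generalized: at a point x, the Fliege–Svaiter steepest descent direction for an unconstrained MOP solves min_d max_i ∇f_i(x)ᵀd + ½‖d‖², whose dual is a quadratic program over the probability simplex. A minimal sketch of that unconstrained subproblem follows (the paper's constrained, active-set variant is not reproduced here, and the solver choice is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def steepest_descent_direction(grads):
    """Fliege-Svaiter-style common descent direction for an
    unconstrained MOP: d = -G^T lam, where lam minimizes
    0.5 * ||G^T lam||^2 over the probability simplex and G stacks
    the objective gradients row-wise."""
    G = np.asarray(grads)                      # shape (m, n)
    m = G.shape[0]
    obj = lambda lam: 0.5 * np.dot(G.T @ lam, G.T @ lam)
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons, method='SLSQP')
    return -G.T @ res.x                        # zero iff x is Pareto critical

# Hypothetical usage with two objectives at a point x:
x = np.array([1.0, -2.0])
grad_f1 = 2.0 * x                              # gradient of ||x||^2
grad_f2 = 2.0 * (x - np.array([1.0, 1.0]))     # gradient of ||x - (1,1)||^2
d = steepest_descent_direction([grad_f1, grad_f2])
```

The returned d decreases every objective to first order whenever x is not Pareto critical, which is what makes it a natural building block for the constrained generalization the abstract describes.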