Complexity-optimal and Parameter-free First-order Methods for Finding Stationary Points of Composite Optimization Problems
This paper develops and analyzes an accelerated proximal descent method for
finding stationary points of nonconvex composite optimization problems. The
objective function is of the form f + h, where h is a proper closed convex
function, f is a differentiable function on the domain of h, and ∇f
is Lipschitz continuous on the domain of h. The main advantage of this method
is that it is "parameter-free" in the sense that it does not require knowledge
of the Lipschitz constant of ∇f or of any global topological properties
of f. It is shown that the proposed method can obtain an
ε-approximate stationary point with iteration complexity bounds
that are optimal, up to logarithmic terms over ε, in both the
convex and nonconvex settings. Some discussion is also given about how the
proposed method can be leveraged in other existing optimization frameworks,
such as min-max smoothing and penalty frameworks for constrained programming,
to create more specialized parameter-free methods. Finally, numerical
experiments are presented to support the practical viability of the method.
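The abstract does not specify the method's update rule, but the "parameter-free" idea it describes can be illustrated with a standard adaptive proximal gradient sketch: a backtracking search estimates the local curvature on the fly, so no global Lipschitz constant of ∇f is needed. Everything below (the choice h = λ‖·‖₁, the function names, the stopping rule) is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

# Minimal sketch (NOT the paper's method): proximal gradient for
# min_x f(x) + h(x) with h(x) = lam * ||x||_1, whose prox is
# soft-thresholding. A backtracking search doubles the local curvature
# estimate L until a quadratic upper bound on f holds, so no global
# Lipschitz constant of grad f must be known in advance.

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_prox_gradient(f, grad_f, lam, x0, tol=1e-6, max_iter=1000):
    x, L = x0.astype(float), 1.0  # L: running local curvature estimate
    for _ in range(max_iter):
        g = grad_f(x)
        while True:  # backtracking: grow L until the upper bound holds
            x_new = soft_threshold(x - g / L, lam / L)
            d = x_new - x
            if f(x_new) <= f(x) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                break
            L *= 2.0
        if np.linalg.norm(d) * L <= tol:  # small prox-gradient mapping
            return x_new                  # ~ approximate stationary point
        x, L = x_new, max(L / 2.0, 1e-8)  # let L shrink between steps
    return x
```

A typical use is a lasso-type problem, where f(x) = ½‖Ax − b‖² and the backtracking loop adapts L to the largest eigenvalue of AᵀA without ever computing it.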
A Descent Method for Equality and Inequality Constrained Multiobjective Optimization Problems
In this article, we propose a descent method for equality and inequality
constrained multiobjective optimization problems (MOPs), which generalizes the
steepest descent method for unconstrained MOPs by Fliege and Svaiter to
constrained problems by using two active set strategies. Under some regularity
assumptions on the problem, we show that accumulation points of our descent
method satisfy a necessary condition for local Pareto optimality. Finally, we
show the typical behavior of our method in a numerical example.
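The unconstrained Fliege–Svaiter step that this abstract builds on can be sketched for two objectives: the common descent direction is d = −((1 − t*)∇f₁ + t*∇f₂), where t* ∈ [0, 1] minimizes ‖(1 − t)∇f₁ + t∇f₂‖². This sketch covers only the unconstrained case; the paper's active set strategies for equality and inequality constraints are not reproduced here, and all names below are illustrative assumptions.

```python
import numpy as np

# Sketch of the Fliege--Svaiter steepest descent step for an UNCONSTRAINED
# bi-objective problem (the constrained extension via active sets is not
# shown). The dual of the min-max direction subproblem has a closed form
# for two objectives: minimize ||(1 - t) g1 + t g2||^2 over t in [0, 1].

def common_descent_direction(g1, g2):
    diff = g1 - g2
    denom = diff @ diff
    t = 0.0 if denom == 0.0 else np.clip((g1 @ diff) / denom, 0.0, 1.0)
    return -((1.0 - t) * g1 + t * g2)

def multiobjective_descent(fs, grads, x0, tol=1e-8, max_iter=500):
    x = x0.astype(float)
    for _ in range(max_iter):
        g1, g2 = grads[0](x), grads[1](x)
        d = common_descent_direction(g1, g2)
        if d @ d < tol:          # near Pareto-critical: stop
            return x
        step = 1.0               # Armijo backtracking on BOTH objectives
        while any(f(x + step * d) > f(x) + 1e-4 * step * (g @ d)
                  for f, g in zip(fs, (g1, g2))):
            step *= 0.5
            if step < 1e-12:
                return x
        x = x + step * d
    return x
```

The key property making d a common descent direction is that ⟨∇fᵢ, d⟩ ≤ −‖d‖² for every objective, so the Armijo test can be enforced on both objectives simultaneously.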