
    Efficient approaches for escaping higher order saddle points in non-convex optimization

    Local search heuristics for non-convex optimization are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points at which the first and second order derivatives cannot distinguish them from local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.
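    The idea can be made concrete with a small sketch. The following is a minimal illustration, not the paper's algorithm: at a degenerate critical point (vanishing gradient, singular Hessian), the third order directional derivative along a Hessian null direction can still certify a descent direction. The "monkey saddle" test function, the 1e-8 tolerances, and the 0.1 step size are illustrative choices; JAX is used only to obtain exact higher order derivatives.

        import jax
        import jax.numpy as jnp

        def f(x):
            # Monkey-saddle-style function: the origin is a degenerate critical
            # point (zero gradient, zero Hessian) that only third order
            # information can distinguish from a local optimum.
            return x[0] ** 3 - 3.0 * x[0] * x[1] ** 2

        def third_directional(f, x, u):
            # D^3 f(x)[u, u, u] via three nested Jacobian-vector products.
            g1 = lambda y: jax.jvp(f, (y,), (u,))[1]
            g2 = lambda y: jax.jvp(g1, (y,), (u,))[1]
            return jax.jvp(g2, (x,), (u,))[1]

        x = jnp.zeros(2)
        assert jnp.allclose(jax.grad(f)(x), 0.0)  # first order test is inconclusive
        eigvals, eigvecs = jnp.linalg.eigh(jax.hessian(f)(x))

        # Scan Hessian null directions; a nonzero cubic term along u means a
        # small step along -sign(cubic) * u strictly decreases f.
        for i in jnp.where(jnp.abs(eigvals) < 1e-8)[0]:
            u = eigvecs[:, i]
            cubic = third_directional(f, x, u)
            if jnp.abs(cubic) > 1e-8:
                step = -jnp.sign(cubic) * 0.1 * u
                print("escape direction", step, "-> f =", f(x + step))

    Running this prints a step of roughly (-0.1, 0) with f dropping below zero: the cubic term exposes a descent direction that the gradient and Hessian alone cannot see.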

    Necessary conditions involving Lie brackets for impulsive optimal control problems

    We obtain higher order necessary conditions for a minimum of a Mayer optimal control problem connected with a nonlinear, control-affine system, where the controls range over an m-dimensional Euclidean space. Since the allowed velocities are unbounded and the absence of coercivity assumptions makes large velocities quite likely, minimizing sequences tend to converge toward "impulsive", namely discontinuous, trajectories. As is known, a distributional approach does not make sense in such a nonlinear setting; instead, a suitable embedding in the graph space is needed. We illustrate how the possibility of using impulse perturbations makes it possible to derive a Higher Order Maximum Principle which includes both the usual needle variations (in space-time) and conditions involving iterated Lie brackets. An example, where a third order necessary condition rules out the optimality of a given extremal, concludes the paper.
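    For orientation, the objects named in the abstract can be written out explicitly; the notation below is a standard choice made here for illustration, not taken from the paper. A control-affine system with controls $u \in \mathbb{R}^m$ and the Lie bracket of two of its vector fields read

        \dot{x} = f(x) + \sum_{i=1}^{m} u_i \, g_i(x),
        \qquad
        [g_i, g_j](x) = Dg_j(x)\, g_i(x) - Dg_i(x)\, g_j(x),

    and the higher order conditions involve iterated brackets such as $[g_i, [g_j, g_k]]$ evaluated along the reference trajectory.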