46 research outputs found

    Global convergence of the gradient method for functions definable in o-minimal structures

    We consider the gradient method with variable step size for minimizing functions that are definable in o-minimal structures on the real field and differentiable with locally Lipschitz gradients. We prove that global convergence holds if continuous gradient trajectories are bounded, with the minimum gradient norm vanishing at the rate o(1/k) if the step sizes are greater than a positive constant. If additionally the gradient is continuously differentiable, all saddle points are strict, and the step sizes are constant, then convergence to a local minimum holds almost surely over any bounded set of initial points.
    Comment: 33 pages, 1 figure
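A minimal sketch of a gradient method with variable step sizes bounded below by a positive constant, as in the setting above; the test function, step-size rule, tolerance, and iteration budget are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gradient_method(grad, x0, steps, max_iter=1000, tol=1e-8):
    """Gradient method x_{k+1} = x_k - alpha_k * grad(x_k) with variable step sizes."""
    x = np.asarray(x0, dtype=float)
    min_grad_norm = np.inf
    for k in range(max_iter):
        g = grad(x)
        min_grad_norm = min(min_grad_norm, np.linalg.norm(g))
        if min_grad_norm < tol:
            break
        x = x - steps(k) * g
    return x, min_grad_norm

# Illustrative polynomial objective f(x) = x^4 - x^2 (semi-algebraic, hence definable
# in an o-minimal structure), with step sizes alpha_k >= 0.05 > 0, the regime in which
# the minimum gradient norm is stated to vanish at the rate o(1/k).
grad = lambda x: 4 * x**3 - 2 * x
steps = lambda k: 0.05 + 0.05 / (k + 1)
x_final, g_min = gradient_method(grad, x0=[0.9], steps=steps)
```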

    Lyapunov stability of the subgradient method with constant step size

    We consider the subgradient method with constant step size for minimizing locally Lipschitz semi-algebraic functions. In order to analyze the behavior of its iterates in the vicinity of a local minimum, we introduce a notion of discrete Lyapunov stability and propose necessary and sufficient conditions for stability.
    Comment: 11 pages, 2 figures
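A minimal sketch of the subgradient method with constant step size on a simple semi-algebraic example; the objective f(x) = |x|, the step size, and the iteration count are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def subgradient_method(subgrad, x0, step, n_iter=500):
    """Subgradient method x_{k+1} = x_k - step * v_k with v_k in the subdifferential at x_k."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(n_iter):
        x = x - step * subgrad(x)
        trajectory.append(x.copy())
    return np.array(trajectory)

# Illustrative semi-algebraic objective f(x) = |x|; sign(x) is a subgradient (sign(0) = 0).
# With a constant step the iterates generally do not converge to the minimizer 0 but end up
# oscillating in a step-size-dependent neighborhood of it, which is the kind of local
# behavior a discrete notion of Lyapunov stability is meant to describe.
subgrad = lambda x: np.sign(x)
traj = subgradient_method(subgrad, x0=[0.75], step=0.1)
```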

    Certifying the absence of spurious local minima at infinity

    When searching for global optima of nonconvex unconstrained optimization problems, it is desirable that every local minimum be a global minimum. This property of having no spurious local minima is true in various problems of interest nowadays, including principal component analysis, matrix sensing, and linear neural networks. However, since these problems are non-coercive, they may yet have spurious local minima at infinity. The classical tools used to analyze the optimization landscape, namely the gradient and the Hessian, are incapable of detecting spurious local minima at infinity. In this paper, we identify conditions that certify the absence of spurious local minima at infinity, one of which is having bounded subgradient trajectories. We check that they hold in several applications of interest.
    Comment: 31 pages, 4 figures
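A small numerical illustration of the non-coercivity issue; the toy objective f(a, b) = (ab - 1)^2 below is our own assumed example in the spirit of the matrix-sensing and linear-network losses mentioned above, not one taken from the paper.

```python
import numpy as np

# Non-coercive two-factor objective f(a, b) = (a*b - 1)^2. Every point with a*b = 1
# is a global minimum, yet sublevel sets are unbounded: rescaling (a, b) -> (a/t, t*b)
# keeps the loss (essentially) at 0 while the norm of the point grows without bound,
# so minima "at infinity" cannot be ruled out from gradient or Hessian information
# at any finite point; bounding (sub)gradient trajectories is one way to rule them out.
f = lambda a, b: (a * b - 1.0) ** 2

a, b = 2.0, 0.5  # a global minimum, f(a, b) = 0
for t in [1.0, 10.0, 100.0]:
    print(f"t = {t:6.1f}   loss = {f(a / t, t * b):.1e}   norm = {np.hypot(a / t, t * b):.1f}")
```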

    Nonsmooth rank-one matrix factorization landscape

    To the best of our knowledge, we provide the first positive result on the nonsmooth optimization landscape of robust principal component analysis, which is the object of several conjectures and remains mostly uncharted territory. We identify a necessary and sufficient condition for the absence of spurious local minima in the rank-one case. Our proof exploits the subdifferential regularity of the objective function in order to eliminate the existence quantifier from the first-order optimality condition known as Fermat's rule.
    Comment: 23 pages, 5 figures
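A hedged sketch of the kind of rank-one nonsmooth factorization objective at play; the entrywise l1 loss, the random data, and the plain constant-step subgradient iteration below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Rank-one nonsmooth factorization objective in the spirit of robust PCA:
# f(x, y) = || x y^T - M ||_1 (entrywise l1 norm), locally Lipschitz but
# nondifferentiable wherever a residual entry x_i * y_j - M_ij vanishes.
def objective(x, y, M):
    return np.abs(np.outer(x, y) - M).sum()

def subgradient(x, y, M):
    """One element of the subdifferential, taking sign(0) = 0 at the kinks."""
    S = np.sign(np.outer(x, y) - M)
    return S @ y, S.T @ x

rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(5), rng.standard_normal(4))  # exactly rank one
x, y = rng.standard_normal(5), rng.standard_normal(4)
step = 1e-2
for _ in range(2000):
    gx, gy = subgradient(x, y, M)
    x, y = x - step * gx, y - step * gy
# Final residual; with a constant step the iterates typically hover near a
# stationary value rather than converging exactly.
print(objective(x, y, M))
```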