2 research outputs found

    Accelerated Methods for α-Weakly-Quasi-Convex Problems

    Many problems encountered in training neural networks are non-convex. However, some of them satisfy conditions weaker than convexity that are still sufficient to guarantee the convergence of certain first-order methods. In our work, we show that some previously known first-order methods retain their convergence rates under these weaker conditions.
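
    For context, the weaker-than-convexity condition referenced in the title is commonly stated as follows (this standard definition is supplied here for the reader's convenience and is not quoted from the abstract): a differentiable function $f$ is $\alpha$-weakly-quasi-convex with respect to a minimizer $x^\ast$ if, for some $\alpha \in (0, 1]$,

    \[
    \alpha \left( f(x) - f(x^\ast) \right) \le \langle \nabla f(x),\, x - x^\ast \rangle \quad \text{for all } x.
    \]

    Setting $\alpha = 1$ recovers a consequence of ordinary convexity, while $\alpha < 1$ admits certain non-convex functions for which gradient-based methods still converge to $x^\ast$.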

    An Algorithm for Degenerate Nonlinear Programming with Rapid Local Convergence
