
    Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema

    The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Lojasiewicz gradient inequality), the single limit-point convergence of the algorithm iterates is demonstrated and relatively tight bounds on the convergence rate are derived. In sharp contrast to existing asymptotic results, the new results presented here allow the objective function to have multiple and non-isolated minima. The new results also offer new insights into the asymptotic properties of several classes of recursive algorithms that are routinely used in engineering, statistics, machine learning and operations research.
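
    The setting described above (a stochastic gradient recursion with diminishing step sizes applied to an objective whose minimizers are not isolated) is easy to illustrate numerically. The sketch below is a hypothetical example, not the paper's algorithm or analysis: it runs plain SGD with Robbins-Monro step sizes on f(x) = (||x||^2 - 1)^2, whose minimizers form the whole unit circle, and observes that the iterates still settle near a single limit point.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): stochastic gradient
# descent with diminishing steps a_n = a/(n+1) on f(x) = (||x||^2 - 1)^2,
# whose minimizers form the unit circle, i.e. a non-isolated set of minima.

def grad_f(x):
    # Exact gradient of f(x) = (||x||^2 - 1)^2.
    return 4.0 * (np.dot(x, x) - 1.0) * x

rng = np.random.default_rng(0)
x = np.array([1.5, -1.0])                     # arbitrary starting point
for n in range(200_000):
    step = 0.05 / (n + 1)                     # Robbins-Monro step sizes
    noisy_grad = grad_f(x) + rng.normal(scale=0.1, size=x.shape)
    x = x - step * noisy_grad

print("final iterate:", x, " ||x|| =", np.linalg.norm(x))
# In this run the iterates settle near one point of the unit circle,
# even though every point of the circle is a global minimizer.
```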

    Hybrid Newton-type method for a class of semismooth equations

    In this paper, we present a hybrid method for the solution of a class of composite semismooth equations encountered frequently in applications. The method is obtained by combining a generalized finite-difference Newton method with an inexpensive direct search method. We prove that, under standard assumptions, the method is globally convergent with a local rate of convergence which is superlinear or quadratic. We also report several numerical results obtained by applying the method to suitable reformulations of well-known nonlinear complementarity problems.
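
    The abstract does not spell out the method, but the general pattern it names (a fast, locally convergent Newton-type step safeguarded by a cheap direct search) can be sketched generically. The code below is a rough, hypothetical illustration under that reading, not the authors' algorithm: a finite-difference Newton step for F(x) = 0 is accepted only if it reduces the residual norm, and otherwise a coordinate direct-search step is taken instead; the test system F is likewise made up for the example.

```python
import numpy as np

# Schematic sketch only (not the authors' method): finite-difference Newton
# steps for F(x) = 0, safeguarded by a crude coordinate direct search that is
# used whenever the Newton step fails or does not reduce ||F(x)||.

def fd_jacobian(F, x, h=1e-7):
    # Forward-difference approximation of the Jacobian of F at x.
    Fx = F(x)
    J = np.empty((Fx.size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size); e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def hybrid_solve(F, x0, tol=1e-10, max_iter=200, h=1e-1):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        try:
            step = np.linalg.solve(fd_jacobian(F, x), -Fx)   # Newton direction
        except np.linalg.LinAlgError:
            step = None
        if step is not None and np.linalg.norm(F(x + step)) < np.linalg.norm(Fx):
            x = x + step          # accept the finite-difference Newton step
            continue
        # Direct-search fallback: probe +/- h along each coordinate axis and
        # move to the best probe point; shrink the stencil if nothing improves.
        best, best_val = x, np.linalg.norm(Fx)
        for j in range(x.size):
            for s in (h, -h):
                y = x.copy(); y[j] += s
                val = np.linalg.norm(F(y))
                if val < best_val:
                    best, best_val = y, val
        if best is x:
            h *= 0.5
        x = best
    return x

# Made-up piecewise-smooth test system with solution (0, 1):
# F(x) = [min(x0, x0^2 + x1), x1 - 1].
F = lambda x: np.array([min(x[0], x[0] ** 2 + x[1]), x[1] - 1.0])
print(hybrid_solve(F, np.array([3.0, 3.0])))    # -> approximately [0. 1.]
```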

    Nonautonomous stochastic search in global optimization
