
    On the Global Convergence of Derivative Free Methods for Unconstrained Optimization.

    In this paper, starting from the common elements shared by several globally convergent direct search methods, a general convergence theory is established for unconstrained minimization methods that employ only function values. The introduced convergence conditions are useful for developing and analyzing new derivative-free algorithms with guaranteed global convergence. As examples, we describe three new algorithms that combine pattern search and line search approaches.
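
    The following is a minimal, illustrative sketch of the general idea of combining a coordinate pattern search with a line-search-style step expansion; it is not one of the paper's three algorithms, and the function name, sufficient-decrease constant, and tolerances are assumptions made for the example.

    # Illustrative sketch: coordinate pattern search with a line-search-style
    # step expansion (not the paper's algorithms; all constants are assumptions).
    import numpy as np

    def pattern_line_search(f, x0, step=1.0, tol=1e-8, gamma=1e-4, max_iter=10000):
        x = np.asarray(x0, dtype=float)
        n = x.size
        directions = np.vstack([np.eye(n), -np.eye(n)])  # poll directions +/- e_i
        for _ in range(max_iter):
            if step < tol:
                break
            improved = False
            for d in directions:
                # Accept a poll step only if it yields sufficient decrease.
                if f(x + step * d) <= f(x) - gamma * step ** 2:
                    alpha = step
                    # Line-search-style expansion: double while decrease holds.
                    while f(x + 2 * alpha * d) <= f(x) - gamma * (2 * alpha) ** 2:
                        alpha *= 2
                    x = x + alpha * d
                    improved = True
                    break
            if not improved:
                step *= 0.5  # shrink the pattern when no direction improves
        return x

    # Usage example on a smooth quadratic; the minimizer is (1, -2).
    f = lambda z: (z[0] - 1) ** 2 + 10 * (z[1] + 2) ** 2
    print(pattern_line_search(f, [5.0, 5.0]))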

    Convergence and stability of line search methods for unconstrained optimization.

    This paper explores the stability of general line search methods, in the sense of Lyapunov, for minimizing a smooth nonlinear function. In particular, we give sufficient conditions for a line search method to be globally asymptotically stable. Our analysis suggests that the proposed sufficient conditions for asymptotic stability are equivalent to the Zoutendijk-type conditions used in conventional global convergence analysis.
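
    For reference, the Zoutendijk-type condition mentioned above is usually stated as follows; this is the standard textbook formulation, not a quotation from the paper.

    % For iterates x_{k+1} = x_k + \alpha_k d_k, where d_k is a descent direction
    % and \alpha_k satisfies, e.g., the Wolfe conditions:
    \[
      \sum_{k \ge 0} \cos^2\theta_k \, \|\nabla f(x_k)\|^2 < \infty,
      \qquad
      \cos\theta_k = \frac{-\nabla f(x_k)^{\top} d_k}{\|\nabla f(x_k)\|\,\|d_k\|},
    \]
    % so that if \cos\theta_k stays bounded away from zero, \|\nabla f(x_k)\| \to 0.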

    Don't be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models

    Recent works have shown that line search methods can speed up Stochastic Gradient Descent (SGD) and Adam in modern over-parameterized settings. However, existing line searches may take steps that are smaller than necessary since they require a monotone decrease of the (mini-)batch objective function. We explore nonmonotone line search methods to relax this condition and possibly accept larger step sizes. Despite the lack of a monotone decrease, we prove the same fast rates of convergence as in the monotone case. Our experiments show that nonmonotone methods improve the speed of convergence and generalization properties of SGD/Adam even beyond the previous monotone line searches. We propose a POlyak NOnmonotone Stochastic (PoNoS) method, obtained by combining a nonmonotone line search with a Polyak initial step size. Furthermore, we develop a new resetting technique that, in the majority of iterations, reduces the number of backtracking steps to zero while still maintaining a large initial step size. Finally, in what is, to the best of our knowledge, the first runtime comparison of its kind, we show that the epoch-wise advantage of line-search-based methods is reflected in the overall computational time.
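
    The sketch below illustrates the general idea of a nonmonotone Armijo-type backtracking line search with a Polyak-style initial step size; it is not the authors' PoNoS implementation, and the window size, constants, and the interpolation assumption f_star = 0 are all assumptions made for the example.

    # Illustrative sketch: nonmonotone backtracking line search with a Polyak
    # initial step size (not the authors' PoNoS code; constants are assumptions).
    from collections import deque
    import numpy as np

    def nonmonotone_polyak_step(f, grad, x, history, c=0.5, beta=0.5, f_star=0.0,
                                max_backtracks=50):
        g = grad(x)
        gnorm2 = float(np.dot(g, g))
        fx = f(x)
        # Polyak initial step size; f_star = 0 assumes an interpolation setting.
        alpha = (fx - f_star) / (c * gnorm2 + 1e-12)
        # Nonmonotone reference value: the largest loss in a sliding window.
        f_ref = max(history) if history else fx
        for _ in range(max_backtracks):
            if f(x - alpha * g) <= f_ref - c * alpha * gnorm2:
                break
            alpha *= beta  # backtrack
        x_new = x - alpha * g
        history.append(f(x_new))
        return x_new, alpha

    # Usage on a consistent least-squares problem (so min f = 0 holds).
    A = np.random.randn(20, 5)
    b = A @ np.random.randn(5)
    f = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
    grad = lambda w: A.T @ (A @ w - b)
    w, hist = np.zeros(5), deque(maxlen=10)
    for _ in range(100):
        w, _ = nonmonotone_polyak_step(f, grad, w, hist)
    print(f(w))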

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, both are computationally attractive since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
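
    As background, the forward-backward envelope can be written as follows for the composite problem of minimizing f(x) + g(x), with f smooth and g possibly nonsmooth; this is the standard definition restated here, not copied from the paper.

    % For a step size \gamma > 0, the forward-backward envelope of
    % \varphi(x) = f(x) + g(x) is
    \[
      F_\gamma(x) \;=\; f(x) \;-\; \tfrac{\gamma}{2}\,\|\nabla f(x)\|^2
      \;+\; g^{\gamma}\!\bigl(x - \gamma \nabla f(x)\bigr),
      \qquad
      g^{\gamma}(y) \;=\; \min_{z}\Bigl\{ g(z) + \tfrac{1}{2\gamma}\,\|z - y\|^2 \Bigr\},
    \]
    % where g^\gamma is the Moreau envelope of g.  Under suitable smoothness
    % assumptions on f and for suitable \gamma, F_\gamma is real-valued,
    % continuously differentiable, and has the same minimizers as \varphi.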

    A New Class of Non-Quasi-Newton Methods and Their Global Convergence with Goldstein Line Search

    In this paper, a class of non-quasi-Newton methods based on the DFP method is presented. Under some conditions, the global convergence of these methods with Goldstein line search on uniformly convex objective functions is proved.
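
    For context, the Goldstein line search conditions and the DFP update referenced above are standard and can be stated as follows; these are textbook formulations rather than the paper's modified update.

    % Goldstein conditions for a step size \alpha_k along a descent direction d_k,
    % with a constant 0 < c < 1/2:
    \[
      f(x_k) + (1-c)\,\alpha_k \nabla f(x_k)^{\top} d_k
      \;\le\; f(x_k + \alpha_k d_k)
      \;\le\; f(x_k) + c\,\alpha_k \nabla f(x_k)^{\top} d_k .
    \]
    % DFP update of the inverse Hessian approximation H_k, with
    % s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
    \[
      H_{k+1} \;=\; H_k
      \;-\; \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k}
      \;+\; \frac{s_k s_k^{\top}}{s_k^{\top} y_k}.
    \]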