
    A Subsampling Line-Search Method with Second-Order Results

    In many contemporary optimization problems, such as those arising in machine learning, it can be computationally challenging or even infeasible to evaluate an entire function or its derivatives. This motivates the use of stochastic algorithms that sample problem data, which can jeopardize the guarantees obtained through classical globalization techniques in optimization such as a trust region or a line search. Using subsampled function values is particularly challenging for the latter strategy, which relies upon multiple evaluations. In addition, there has been increasing interest in nonconvex formulations of data-related problems, such as training deep learning models. For such instances, one aims at developing methods that converge to second-order stationary points quickly, i.e., escape saddle points efficiently. This is particularly delicate to ensure when one only accesses subsampled approximations of the objective and its derivatives. In this paper, we describe a stochastic algorithm based on negative curvature and Newton-type directions that are computed for a subsampling model of the objective. A line-search technique is used to enforce suitable decrease for this model, and for a sufficiently large sample, a similar amount of reduction holds for the true objective. By using probabilistic reasoning, we can then obtain worst-case complexity guarantees for our framework, leading us to discuss appropriate notions of stationarity in a subsampling context. Our analysis encompasses the deterministic regime and allows us to identify sampling requirements for second-order line-search paradigms. As we illustrate through experiments on real data, these worst-case sampling requirements need not be satisfied in practice for our method to be competitive with first-order strategies.
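
    As a rough illustration of the step computation described above (a minimal sketch, not the authors' implementation: the sampling rule, the regularization constant, the Armijo parameters, and all names such as subsampled_linesearch_step are assumptions), one iteration might look like this in Python:

```python
import numpy as np

def subsampled_linesearch_step(x, fs, grads, hessians, rng,
                               sample_frac=0.1, eps=1e-6, beta=0.5, c1=1e-4):
    """One illustrative iteration: build a subsampled model of the finite-sum
    objective, choose a negative-curvature or Newton-type direction for that
    model, then backtrack on the subsampled function values."""
    n = len(fs)
    S = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)

    # Subsampled objective, gradient and Hessian (the model).
    f_S = lambda z: np.mean([fs[i](z) for i in S])
    g = np.mean([grads[i](x) for i in S], axis=0)
    H = np.mean([hessians[i](x) for i in S], axis=0)

    # Direction from the spectrum of the subsampled Hessian:
    # negative curvature if present, otherwise a regularized Newton step.
    lam, V = np.linalg.eigh(H)
    if lam[0] < -eps:
        v = V[:, 0]
        d = v if v @ g <= 0 else -v                          # negative-curvature direction
    else:
        d = -np.linalg.solve(H + eps * np.eye(len(x)), g)    # Newton-type direction

    # Backtracking line search enforcing suitable decrease of the subsampled model.
    t, f0 = 1.0, f_S(x)
    while f_S(x + t * d) > f0 + c1 * t * (g @ d) and t > 1e-10:
        t *= beta
    return x + t * d
```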

    On the Convergence Properties of a Stochastic Trust-Region Method with Inexact Restoration

    We study the convergence properties of SIRTR, a stochastic inexact restoration trust-region method suited to the minimization of a finite sum of continuously differentiable functions. This method combines the trust-region methodology with random function and gradient estimates formed by subsampling. Unlike other existing schemes, it forces the decrease of a merit function that combines the function approximation with an infeasibility term, the latter measuring the distance of the current sample size from its maximum value. In a previous work, the expected iteration complexity needed to satisfy an approximate first-order optimality condition was given. Here, we elaborate on the convergence analysis of SIRTR and prove its convergence in probability under suitable accuracy requirements on the random function and gradient estimates. Furthermore, we report numerical results obtained on some nonconvex classification test problems, discussing the impact of the probabilistic requirements on the selection of the sample sizes.
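
    To make the structure of such a merit function concrete, a heavily hedged sketch (the exact weights and the form of the infeasibility term used by SIRTR differ; theta, N_max, and the function name are illustrative only) could read:

```python
def merit(f_sample_avg, N_k, N_max, theta=0.5):
    """Illustrative merit value: a subsampled function estimate combined with
    an infeasibility term that measures how far the current sample size N_k
    is from the full sample size N_max (zero when the full sample is used)."""
    infeasibility = (N_max - N_k) / N_max
    return theta * f_sample_avg + (1.0 - theta) * infeasibility
```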

    Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information

    We consider variants of trust-region and cubic regularization methods for non-convex optimization in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds for achieving ε-approximate second-order optimality, which have been shown to be tight. Our Hessian approximation conditions constitute a major relaxation over the existing ones in the literature. Consequently, we are able to show that such mild conditions allow for the construction of the approximate Hessian through various random sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
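
    For the finite-sum setting mentioned at the end of the abstract, the uniform sub-sampling construction of an inexact Hessian can be sketched as follows (a generic illustration rather than the paper's code; a non-uniform strategy would replace the uniform draw with data-dependent sampling probabilities):

```python
import numpy as np

def subsampled_hessian(hess_i, x, n, sample_size, rng):
    """Uniform sub-sampling estimate of the Hessian of
    f(x) = (1/n) * sum_i f_i(x): average the per-term Hessians over a random
    index set.  hess_i(i, x) is assumed to return the Hessian of the i-th term."""
    S = rng.choice(n, size=sample_size, replace=False)
    return np.mean([hess_i(i, x) for i in S], axis=0)
```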

    Stochastic Trust Region Methods with Trust Region Radius Depending on Probabilistic Models

    We present a stochastic trust-region model-based framework in which the trust-region radius is related to the probabilistic models. In particular, we propose a specific algorithm, termed STRME, in which the trust-region radius depends linearly on the latest model gradient. The complexity of the STRME method is analyzed in non-convex, convex and strongly convex settings, and matches that of existing algorithms based on probabilistic properties. In addition, several numerical experiments are carried out to demonstrate the benefits of the proposed method compared to existing stochastic trust-region methods and other relevant stochastic gradient methods.
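
    A minimal sketch of the radius rule described above, assuming the radius is taken proportional to the norm of the latest model gradient and clipped to safeguard bounds (the constant mu, the bounds, and the function name are assumptions; in the actual method the proportionality factor would itself be adapted across iterations):

```python
import numpy as np

def gradient_dependent_radius(model_grad, mu, delta_min=1e-8, delta_max=1e2):
    """Trust-region radius depending linearly on the latest model gradient:
    delta_k = mu * ||g_k||, kept within safeguard bounds."""
    return float(np.clip(mu * np.linalg.norm(model_grad), delta_min, delta_max))
```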

    Efficient hybrid algorithms to solve mixed discrete-continuous optimization problems: A comparative study

    Purpose: In real-world cases, it is common to encounter mixed discrete-continuous problems in which some or all of the variables may take only discrete values; solving these non-linear optimization problems with finite element methods is very time-consuming. The purpose of this paper is to study the efficiency of the proposed hybrid algorithms for mixed discrete-continuous optimization and to compare it with the performance of Genetic Algorithms (GA). Design/methodology/approach: The enhanced multipoint approximation method (MAM) is utilized to reduce the original nonlinear optimization problem to a sequence of approximations. The Sequential Quadratic Programming (SQP) technique is then applied to find the continuous solution. Following that, discrete capability is implemented into the MAM to solve mixed discrete-continuous optimization problems. Findings: The efficiency and rate of convergence of the developed hybrid algorithms are examined through six detailed case studies on the ten-bar planar truss problem; the hybrid algorithms outperform GA, and the Hooke-Jeeves assisted MAM algorithm proves superior to the other two hybrid algorithms and to GA. Originality/value: The authors propose three efficient hybrid algorithms, the rounding-off, the coordinate search, and the Hooke-Jeeves search assisted MAMs, to solve nonlinear mixed discrete-continuous optimization problems. Implementations include new procedures for sampling discrete points, a modification of the trust-region adaptation strategy, and strategies for solving mixed optimization problems. To improve the efficiency and effectiveness of metamodel construction, the regressors φ defined in this paper can take forms in common with the empirical formulations of problems in many engineering subjects.
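
    The simplest of the three hybrid strategies, rounding-off, can be pictured with the sketch below (illustrative only: the continuous optimum x_cont would come from the MAM/SQP stage, the discrete catalogues and objective are placeholders, and the coordinate-search and Hooke-Jeeves variants would explore the discrete neighbourhood further):

```python
import numpy as np
from itertools import product

def round_off_to_catalogue(x_cont, catalogues, objective):
    """Map a continuous optimum onto discrete catalogue values by enumerating
    the nearest-lower / nearest-upper candidates of each variable and keeping
    the combination with the best objective value."""
    candidates_per_var = []
    for xi, cat in zip(x_cont, catalogues):
        cat = np.sort(np.asarray(cat, dtype=float))
        lower = cat[cat <= xi].max() if np.any(cat <= xi) else cat[0]
        upper = cat[cat >= xi].min() if np.any(cat >= xi) else cat[-1]
        candidates_per_var.append({lower, upper})
    best_x, best_val = None, np.inf
    for combo in product(*candidates_per_var):
        val = objective(np.array(combo))
        if val < best_val:
            best_x, best_val = np.array(combo), val
    return best_x, best_val
```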