
    TREGO: a Trust-Region Framework for Efficient Global Optimization

    Efficient Global Optimization (EGO) is the canonical form of Bayesian optimization and has been successfully applied to the global optimization of expensive-to-evaluate black-box problems. However, EGO struggles to scale with dimension and offers limited theoretical guarantees. In this work, a trust-region framework for EGO (TREGO) is proposed and analyzed. TREGO alternates between regular EGO steps and local steps within a trust region. By following a classical scheme for the trust region (based on a sufficient decrease condition), the proposed algorithm enjoys global convergence properties while departing from EGO only for a subset of optimization steps. Using extensive numerical experiments on the well-known COCO bound-constrained problems, we first analyze the sensitivity of TREGO to its own parameters, then show that the resulting algorithm consistently outperforms EGO and is competitive with other state-of-the-art black-box optimization methods.
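    A minimal sketch of the alternation the abstract describes: regular (global) EGO steps interleaved with local steps restricted to a trust region around the incumbent, with a sufficient decrease condition driving the radius update and the switch between phases. The acquisition optimizers below are crude random-search placeholders, not the authors' implementation, and the function and parameter names are hypothetical.

```python
# Sketch of a TREGO-style global/local alternation, assuming simple
# random-search placeholders in place of a real EGO acquisition step.
import numpy as np

def trego_sketch(f, bounds, n_iter=50, d0=0.2, gamma=2.0, kappa=1e-4, seed=None):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x_best = rng.uniform(lo, hi)
    f_best = f(x_best)
    delta = d0 * np.max(hi - lo)           # trust-region radius
    global_phase = True
    for _ in range(n_iter):
        if global_phase:
            # Global phase: stand-in for maximizing an EGO acquisition
            # (e.g. expected improvement) over the whole domain.
            x_new = rng.uniform(lo, hi)
        else:
            # Local phase: same idea, restricted to the trust region.
            x_new = np.clip(x_best + rng.uniform(-delta, delta, dim), lo, hi)
        f_new = f(x_new)
        # Sufficient decrease test governs acceptance, the radius, and the phase.
        if f_best - f_new >= kappa * delta**2:
            x_best, f_best = x_new, f_new
            delta *= gamma                  # success: enlarge the region
            global_phase = True             # return to regular EGO steps
        else:
            delta /= gamma                  # failure: shrink the region
            global_phase = False            # switch to local steps
    return x_best, f_best

if __name__ == "__main__":
    bounds = (np.full(5, -5.0), np.full(5, 5.0))
    x, fx = trego_sketch(lambda x: np.sum(x**2), bounds, n_iter=200, seed=0)
    print(x, fx)
```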

    A second-derivative trust-region SQP method with a "trust-region-free" predictor step

    In (NAR 08/18 and 08/21, Oxford University Computing Laboratory, 2008) we introduced a second-derivative SQP method (S2QP) for solving nonlinear nonconvex optimization problems. We proved that the method is globally convergent and locally superlinearly convergent under standard assumptions. A critical component of the algorithm is the so-called predictor step, which is computed from a strictly convex quadratic program with a trust-region constraint. This step is essential for proving global convergence, but its propensity to identify the optimal active set is paramount for recovering fast local convergence. Thus the global and local efficiency of the method is intimately coupled with the quality of the predictor step.

    In this paper we study the effects of removing the trust-region constraint from the computation of the predictor step; this is reasonable since the resulting problem is still strictly convex and thus well defined. Although this is an interesting theoretical question, our motivation is practical. Our preliminary numerical experience with S2QP indicates that the trust-region constraint occasionally degrades the quality of the predictor step and diminishes its ability to correctly identify the optimal active set. Moreover, removing the trust-region constraint allows the predictor step to be reused over a sequence of failed iterations, thus reducing computation. We show that the modified algorithm remains globally convergent and preserves local superlinear convergence provided a nonmonotone strategy is incorporated.
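    A simplified illustration of the point at issue, under a strong simplifying assumption: the linearized SQP constraints of the actual S2QP subproblem are dropped, leaving only a strictly convex quadratic model. The "trust-region-free" predictor step is then just the unconstrained minimizer, which is well defined, while the trust-region-constrained step requires solving a constrained subproblem, shown here with a basic bisection on the multiplier. Function names are hypothetical.

```python
# Strictly convex quadratic model q(d) = g.T d + 0.5 d.T H d, H positive definite.
import numpy as np

def predictor_step_free(H, g):
    """Unconstrained minimizer of the strictly convex model (no trust region)."""
    return np.linalg.solve(H, -g)

def predictor_step_tr(H, g, delta, tol=1e-10):
    """Minimize the same model subject to ||d|| <= delta."""
    d = predictor_step_free(H, g)
    if np.linalg.norm(d) <= delta:
        return d                            # trust region inactive
    n = len(g)
    lam_lo, lam_hi = 0.0, 1.0
    # Grow the upper bracket until the regularized step fits in the region.
    while np.linalg.norm(np.linalg.solve(H + lam_hi * np.eye(n), -g)) > delta:
        lam_hi *= 2.0
    # Bisect on the multiplier so that ||d(lam)|| = delta.
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        if np.linalg.norm(np.linalg.solve(H + lam * np.eye(n), -g)) > delta:
            lam_lo = lam
        else:
            lam_hi = lam
    return np.linalg.solve(H + lam_hi * np.eye(n), -g)

H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([-8.0, -6.0])
print(predictor_step_free(H, g))            # trust-region-free step
print(predictor_step_tr(H, g, delta=1.0))   # step restricted by the trust region
```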

    Efficient Trust Region Methods for Nonconvex Optimization

    For decades, a great deal of nonlinear optimization research has focused on modeling and solving convex problems. This is because convex objects typically provide satisfactory estimates of real-world phenomena and have very nice mathematical properties that make their analysis relatively straightforward. However, this focus has been changing. In various important applications, such as large-scale data fitting and learning problems, researchers are starting to turn away from simple, convex models toward more challenging nonconvex models that better represent real-world behaviors and can offer more useful solutions.

    To contribute to this new focus on nonconvex optimization models, we discuss and present new techniques for solving nonconvex optimization problems that possess attractive theoretical and practical properties. First, we propose a trust region algorithm that, in the worst case, is able to drive the norm of the gradient of the objective function below a prescribed threshold of $\epsilon \in (0,\infty)$ after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for some other trust region algorithms and matches the $\mathcal{O}(\epsilon^{-3/2})$ bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, named TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region update mechanism that allow the algorithm to achieve such a worst-case global complexity bound. Importantly, we prove that our algorithm also attains global and fast local convergence guarantees under assumptions similar to those used for other trust region algorithms. We also prove a worst-case upper bound on the number of iterations the algorithm requires to obtain an approximate second-order stationary point.

    The aforementioned algorithm is based on techniques that require an exact subproblem solution in every iteration. This is a reasonable assumption for small- to medium-scale problems, but is intractable for large-scale optimization. To address this issue, the second project of this thesis proposes a general inexact framework, which contains a wide range of algorithms with optimal complexity bounds, by defining a novel primal-dual subproblem and a set of loose conditions for its inexact solution. The proposed framework enjoys the same worst-case iteration complexity bounds for locating approximate first- and second-order stationary points as TRACE. However, it does not require subproblems to be solved exactly. In addition, the framework allows one to use inexact Newton steps whenever possible, which lets the algorithm employ Hessian matrix-free approaches such as the conjugate gradient method. This improves the practical performance of the algorithm, as our numerical experiments show. A generic truncated-CG sketch of this matrix-free idea is given after the abstract.

    We close by proposing a globally convergent trust funnel algorithm for equality constrained optimization. The proposed algorithm, under some standard assumptions, is able to find a relative first-order stationary point after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations. This matches the complexity bound of the recently proposed Short-Step ARC algorithm. Our proposed algorithm uses the step decomposition and feasibility control mechanism of a trust funnel algorithm, but incorporates ideas from our TRACE framework in order to achieve good complexity bounds.
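    The sketch referenced above: a textbook Steihaug-Toint truncated conjugate gradient routine for the trust-region subproblem, which needs only Hessian-vector products and stops at the boundary or at negative curvature. It illustrates the generic matrix-free ingredient mentioned in the abstract, not the specific primal-dual subproblem or acceptance conditions proposed in the thesis; the function names are hypothetical.

```python
# Approximately minimize m(d) = g.T d + 0.5 d.T B d subject to ||d|| <= delta,
# using only Hessian-vector products (Steihaug-Toint truncated CG).
import numpy as np

def steihaug_cg(hess_vec, g, delta, tol=1e-8, max_iter=None):
    n = g.size
    max_iter = max_iter or 2 * n
    d = np.zeros(n)
    r = g.copy()                 # model gradient residual: B d + g
    p = -r
    for _ in range(max_iter):
        Bp = hess_vec(p)
        pBp = p @ Bp
        if pBp <= 0.0:
            # Negative curvature: follow p to the trust-region boundary.
            return _to_boundary(d, p, delta)
        alpha = (r @ r) / pBp
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= delta:
            # The CG step leaves the region: stop on the boundary.
            return _to_boundary(d, p, delta)
        r_next = r + alpha * Bp
        if np.linalg.norm(r_next) <= tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        p = -r_next + beta * p
        d, r = d_next, r_next
    return d

def _to_boundary(d, p, delta):
    # Solve ||d + tau * p|| = delta for the positive root tau.
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return d + tau * p

B = np.array([[10.0, 2.0], [2.0, 1.0]])
g = np.array([1.0, -3.0])
print(steihaug_cg(lambda v: B @ v, g, delta=0.5))
```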