
    Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature

    A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization is considered, which uses a model consisting of a Taylor expansion of arbitrary degree and a regularization term involving a possibly non-smooth norm. It is shown that the non-smoothness of the norm does not affect the $O(\epsilon_1^{-(p+1)/p})$ upper bound on evaluation complexity for finding first-order $\epsilon_1$-approximate minimizers using $p$ derivatives, and that this result does not hinge on the equivalence of norms in $\Re^n$. It is also shown that, if $p=2$, the bound of $O(\epsilon_2^{-3})$ evaluations for finding second-order $\epsilon_2$-approximate minimizers still holds for a variant of AR1pGN named AR2GN, despite the possibly non-smooth nature of the regularization term. Moreover, adapting the existing theory to handle the non-smoothness results in an interesting modification of the subproblem termination rules, leading to an even more compact complexity analysis. In particular, it is shown when Newton's step is acceptable for an adaptive regularization method. The approximate minimization of quadratic polynomials regularized with non-smooth norms is then discussed, and a new approximate second-order necessary optimality condition is derived for this case. A specialized algorithm is then proposed to enforce the first- and second-order conditions that are strong enough to ensure the existence of a suitable step in AR1pGN (when $p=2$) and in AR2GN, and its iteration complexity is analyzed.
    Comment: A correction will be available soon
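    The abstract does not reproduce AR1pGN or AR2GN themselves, but the adaptive regularization template they build on can be sketched for $p=2$ with the ordinary (smooth) Euclidean norm. This is only a rough illustration, not the paper's method: the cubic-regularized subproblem is minimized here by crude gradient descent, and the constants (`eta`, the factor-of-two $\sigma$ updates) are illustrative choices.

```python
import numpy as np

def ar2_step(grad, hess, x, sigma, inner_iters=200, lr=1e-2):
    """Approximately minimize the regularized model (p = 2, Euclidean norm)
    m(s) = f(x) + g's + 0.5 s'Hs + (sigma/3)||s||^3 by gradient descent on s
    (a crude subproblem solver, for illustration only)."""
    g, H = grad(x), hess(x)
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        m_grad = g + H @ s + sigma * np.linalg.norm(s) * s
        s = s - lr * m_grad
    return s

def ar2(f, grad, hess, x0, sigma=1.0, eta=0.1, tol=1e-6, max_iter=100):
    """Minimal adaptive cubic-regularization loop: accept the trial step when
    the achieved decrease matches the model decrease, and adapt sigma."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:  # first-order eps-approximate minimizer
            break
        s = ar2_step(grad, hess, x, sigma)
        model_decrease = -(g @ s + 0.5 * s @ hess(x) @ s
                           + sigma / 3 * np.linalg.norm(s) ** 3)
        rho = (f(x) - f(x + s)) / max(model_decrease, 1e-16)
        if rho >= eta:                # successful: accept step, relax sigma
            x = x + s
            sigma = max(sigma / 2, 1e-8)
        else:                         # unsuccessful: increase regularization
            sigma *= 2
    return x
```

    On a strongly convex quadratic this loop drives the gradient below `tol` in a handful of outer iterations, since successful steps approach the Newton step as $\sigma$ shrinks.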

    A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis

    The adaptive regularization framework using cubics has emerged as an alternative to line-search and trust-region algorithms for smooth nonconvex optimization, with optimal complexity amongst second-order methods. In this paper, we propose and analyze the use of an iteration-dependent scaled norm in the adaptive regularization framework using cubics. With such a scaled norm, the obtained method behaves as a line-search algorithm along the quasi-Newton direction with a special backtracking strategy. Under appropriate assumptions, the new algorithm enjoys the same convergence and complexity properties as the adaptive regularization algorithm using cubics. The complexity for finding an approximate first-order stationary point can be improved to be optimal whenever a second-order version of the proposed algorithm is considered. In a similar way, using the same scaled norm to define the trust-region neighborhood, we show that the trust-region algorithm also behaves as a line-search algorithm. The good potential of the obtained algorithms is shown on a set of large-scale optimization problems.
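    The paper's special backtracking rule and scaled norm are not given in the abstract; as a classical stand-in, the basic shape of a line search along the (quasi-)Newton direction can be sketched with ordinary Armijo backtracking. Everything here (the Armijo constant `c`, the halving factor) is a conventional choice, not the paper's strategy.

```python
import numpy as np

def newton_backtracking(f, grad, hess, x0, c=1e-4, tol=1e-8, max_iter=50):
    """Line search along the Newton direction with Armijo backtracking.
    Assumes the Hessian is symmetric positive definite along the iterates,
    so the Newton direction is a descent direction."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        alpha = 1.0
        # backtrack until the Armijo sufficient-decrease condition holds
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
    return x
```

    Far from the solution the backtracking loop may shrink the step; near the solution the full step `alpha = 1` is accepted and the fast local convergence of Newton's method is recovered.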

    On the use of the energy norm in trust-region and adaptive cubic regularization subproblems

    We consider solving unconstrained optimization problems by means of two popular globalization techniques: trust-region (TR) algorithms and the adaptive regularization framework using cubics (ARC). Both techniques require the solution of a so-called ``subproblem'' in which a trial step is computed by solving an optimization problem involving an approximation of the objective function, called ``the model''. The latter is supposed to be adequate in a neighborhood of the current iterate. In this paper, we address an important practical question related to the choice of the norm defining the neighborhood. More precisely, assuming here that the Hessian $B$ of the model is symmetric positive definite, we propose the use of the so-called ``energy norm'' -- defined by $\|x\|_B = \sqrt{x^T B x}$ for all $x \in \Re^n$ -- in both TR and ARC techniques. We show that the use of this norm induces remarkable relations between the trial steps of both methods that can be used to obtain efficient practical algorithms. We furthermore consider the use of truncated Krylov subspace methods to obtain an approximate trial step for large-scale optimization. Within the energy norm, we obtain line-search algorithms along the Newton direction, with a special backtracking strategy and an acceptability condition in the spirit of TR/ARC methods. The new line-search algorithm, derived from ARC, enjoys a worst-case iteration complexity of $\mathcal{O}(\epsilon^{-3/2})$. We show the good potential of the energy norm on a set of numerical experiments.
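    One consequence of the energy-norm choice can be made concrete: when $B$ is symmetric positive definite, the ARC subproblem $\min_s\, g^T s + \tfrac{1}{2} s^T B s + \tfrac{\sigma}{3}\|s\|_B^3$ has a stationary point along the Newton direction $s_N = -B^{-1}g$, reducing the subproblem to a scalar equation for the step length. The sketch below works out that scalar equation directly (my own elementary derivation, not the paper's algorithm): writing $s = \alpha s_N$ and using $s_N^T B s_N = g^T B^{-1} g = \|s_N\|_B^2$, stationarity in $\alpha$ gives $\alpha - 1 + \sigma \alpha^2 \|s_N\|_B = 0$.

```python
import numpy as np

def arc_energy_norm_step(g, B, sigma):
    """ARC trial step in the energy norm ||s||_B = sqrt(s'Bs), with B
    symmetric positive definite and g != 0. The model minimizer along the
    Newton direction is s = alpha * s_N, where alpha solves the scalar
    equation alpha - 1 + sigma * alpha^2 * ||s_N||_B = 0."""
    s_N = np.linalg.solve(B, -g)           # Newton direction
    nB = np.sqrt(s_N @ B @ s_N)            # ||s_N||_B
    # positive root of the quadratic in alpha
    alpha = (-1.0 + np.sqrt(1.0 + 4.0 * sigma * nB)) / (2.0 * sigma * nB)
    return alpha * s_N
```

    At the returned step the model gradient $g + Bs + \sigma \|s\|_B \, Bs$ vanishes identically, since $Bs_N = -g$ makes it a scalar multiple of $g$ with coefficient $1 - \alpha - \sigma\alpha^2\|s_N\|_B = 0$. As $\sigma \to 0$, $\alpha \to 1$ and the full Newton step is recovered.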