
    Global rates of convergence for nonconvex optimization on manifolds

    We consider the minimization of a cost function $f$ on a manifold $M$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $M$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where, in addition, the least eigenvalue of the Riemannian Hessian is larger than $-\varepsilon$ in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular for optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.
    Comment: 33 pages, IMA Journal of Numerical Analysis, 201
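
    As a rough illustration of the Riemannian gradient descent scheme analyzed above (not the authors' code), the sketch below runs the method on the unit sphere with the metric-projection retraction and a fixed step size 1/L, where L stands in for an assumed Lipschitz-type constant of the pullbacks; the cost function, step size, and tolerance in the example are illustrative choices only.

```python
import numpy as np

def riemannian_gradient_descent(f, grad_f, x0, L=1.0, tol=1e-6, max_iter=10_000):
    """Sketch of Riemannian gradient descent on the unit sphere in R^n.

    grad_f is the Euclidean gradient of the ambient cost; L is an assumed
    Lipschitz-type constant for the pullbacks. The loop stops once the
    Riemannian gradient norm drops below tol, which the paper guarantees
    within O(1/tol^2) iterations under its assumptions.
    """
    x = np.asarray(x0, float)
    x /= np.linalg.norm(x)
    for k in range(max_iter):
        g = grad_f(x)
        rgrad = g - (g @ x) * x            # project onto the tangent space at x
        if np.linalg.norm(rgrad) <= tol:
            return x, k
        y = x - (1.0 / L) * rgrad          # gradient step in the tangent space
        x = y / np.linalg.norm(y)          # retraction: project back onto the sphere
    return x, max_iter

# Illustrative use: leading eigenvector of A via f(x) = -x^T A x on the sphere
A = np.diag([3.0, 2.0, 1.0])
x, iters = riemannian_gradient_descent(lambda v: -v @ A @ v,
                                        lambda v: -2.0 * A @ v,
                                        x0=[1.0, 1.0, 1.0], L=6.0)
```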

    A New Conjugate Gradient Algorithm with Sufficient Descent Property for Unconstrained Optimization

    A new nonlinear conjugate gradient formula that satisfies the sufficient descent condition is proposed for solving unconstrained optimization problems. The global convergence of the algorithm is established under the weak Wolfe line search. Numerical experiments show that the new WWPNPRP+ algorithm is competitive with the SWPPRP+, SWPHS+, and WWPDYHS+ algorithms.
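
    The abstract does not spell out the new formula, so the sketch below is only a generic PRP+ conjugate gradient loop with a sufficient-descent safeguard and a simple Armijo backtracking search standing in for the weak Wolfe line search; the coefficient formula, constants, and test function are assumptions, not the paper's WWPNPRP+ method.

```python
import numpy as np

def prp_plus_cg(f, grad, x0, tol=1e-6, max_iter=5000, c=1e-4):
    """Generic PRP+ conjugate gradient sketch with a sufficient-descent reset."""
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            return x, k
        # Armijo backtracking (the descent part of the weak Wolfe conditions)
        t, gd = 1.0, g @ d
        while f(x + t * d) > f(x) + c * t * gd and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ truncation
        d = -g_new + beta * d
        if g_new @ d > -c * (g_new @ g_new):             # enforce sufficient descent
            d = -g_new
        x, g = x_new, g_new
    return x, max_iter

# Illustrative use: the Rosenbrock function
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                           200 * (z[1] - z[0]**2)])
x_opt, iters = prp_plus_cg(f, grad, x0=[-1.2, 1.0])
```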

    A Dai-Liao Hybrid Hestenes-Stiefel and Fletcher-Reeves Method for Unconstrained Optimization

    Some problems have no analytical solution or are too difficult for scientists, engineers, and mathematicians to solve exactly, so numerical methods for obtaining approximate solutions became necessary. Gradient methods are efficient when the function to be minimized has a continuous first derivative. This article presents a new hybrid Conjugate Gradient (CG) method for solving unconstrained optimization problems. The method requires only first-order derivatives, yet it overcomes the slow convergence of the steepest descent method and avoids storing or computing the second-order derivatives needed by Newton's method. The CG update parameter is derived from the Dai-Liao conjugacy condition as a convex combination of the Hestenes-Stiefel and Fletcher-Reeves parameters, with an optimal modulating parameter chosen to avoid matrix storage. Numerical computations use an inexact line search to obtain a step size that yields the descent property, showing that the algorithm is robust and efficient. The scheme converges globally under the Wolfe line search and is likely suitable for compressive sensing problems and M-tensor systems.
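
    One plausible way to realize the convex-combination construction described above is sketched below: the modulating parameter theta is solved from the Dai-Liao conjugacy condition and clipped to [0, 1]. The parameter t, the clipping, and the closed-form choice of theta are assumptions made for illustration; the paper's optimal modulating choice may differ.

```python
import numpy as np

def hybrid_hs_fr_beta(g_new, g_old, d_old, s_old, t=0.1):
    """Hypothetical hybrid CG parameter: beta = theta*beta_FR + (1-theta)*beta_HS.

    theta is chosen so that the direction d_new = -g_new + beta*d_old meets the
    Dai-Liao condition d_new^T y = -t * g_new^T s when possible, then clipped
    to keep a convex combination (illustrative choice, not the paper's formula).
    """
    y = g_new - g_old                                  # gradient change
    dy = d_old @ y
    beta_hs = (g_new @ y) / dy                         # Hestenes-Stiefel
    beta_fr = (g_new @ g_new) / (g_old @ g_old)        # Fletcher-Reeves
    beta_dl = beta_hs - t * (g_new @ s_old) / dy       # Dai-Liao target value
    if abs(beta_fr - beta_hs) < 1e-12:
        theta = 0.0
    else:
        theta = (beta_dl - beta_hs) / (beta_fr - beta_hs)
    theta = min(max(theta, 0.0), 1.0)                  # convex-combination clipping
    return theta * beta_fr + (1.0 - theta) * beta_hs
```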

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the sheer non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and the machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas.
    Comment: 13 page
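
    To make the formulation concrete, here is a minimal, assumed example of casting a linear inverse problem as an optimization problem: Tikhonov-regularized least squares solved by plain gradient descent. The inversions discussed in the survey are typically nonlinear, nonconvex, and far larger; the operator, noise level, and regularization weight below are purely illustrative.

```python
import numpy as np

def tikhonov_gradient_descent(A, b, lam=1e-2, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + 0.5*lam*||x||^2 by gradient descent."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam * x
        x -= step * grad
    return x

# Illustrative use: recover a signal from noisy, underdetermined measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = rng.standard_normal(100)
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_rec = tikhonov_gradient_descent(A, b)
```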

    A Globally and R-Linearly Convergent Hybrid HS and PRP Method and its Inexact Version with Applications

    We present a hybrid HS- and PRP-type conjugate gradient method for smooth optimization that converges globally and R-linearly for general functions. We also introduce its inexact version for problems in which gradients or function values are unknown or difficult to compute. Moreover, we apply the inexact method to a nonsmooth convex optimization problem by converting it into a once continuously differentiable problem via Moreau–Yosida regularization.
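
    As a small illustration of the Moreau–Yosida regularization step mentioned above (not the authors' implementation), the sketch below smooths f(u) = |u| into its Moreau envelope, which is continuously differentiable and has the same minimizer, and then drives it down with plain gradient steps; the regularization parameter and step size are arbitrary illustrative choices.

```python
import numpy as np

def moreau_envelope_abs(x, lam=1.0):
    """Moreau-Yosida regularization of f(u) = |u| (the Huber function).

    F_lam(x) = min_u |u| + (u - x)^2 / (2*lam); the minimizer is the
    soft-thresholding prox, and grad F_lam(x) = (x - prox(x)) / lam.
    """
    prox = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # prox of lam*|.|
    value = np.abs(prox) + (prox - x) ** 2 / (2.0 * lam)
    grad = (x - prox) / lam
    return value, grad

# Gradient steps on the smooth envelope drive x toward the minimizer of |x| at 0
x = 3.0
for _ in range(100):
    _, g = moreau_envelope_abs(x)
    x -= 0.5 * g
```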