7 research outputs found

    Nonlinear conjugate gradient method for vector optimization on Riemannian manifolds with retraction and vector transport

    In this paper, we propose nonlinear conjugate gradient methods for vector optimization on Riemannian manifolds. The Wolfe and Zoutendijk conditions are extended to Riemannian manifolds. Specifically, we establish the existence of intervals of step sizes that satisfy the Wolfe conditions. The convergence analysis covers the vector extensions of the Fletcher--Reeves, conjugate descent, and Dai--Yuan parameters. Under some assumptions, we prove that the sequence generated by the algorithm converges to a Pareto stationary point. Moreover, we also discuss several other choices of the parameter. Numerical experiments illustrating the practical behavior of the methods are presented.
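    The abstract names the main ingredients of such a method: a retraction, a vector transport, Wolfe-type line searches, and conjugate gradient parameters such as Fletcher--Reeves. As a rough orientation only, the following is a minimal single-objective Riemannian CG sketch on the unit sphere, with a plain Armijo backtracking step standing in for the Wolfe search; the paper's actual algorithm is vector-valued, and all names below (retract, project_tangent, riemannian_cg_fr) are hypothetical, not the authors' code.

    ```python
    import numpy as np

    # Minimal single-objective Riemannian CG sketch on the unit sphere,
    # illustrating a retraction, a vector transport, the Fletcher--Reeves
    # parameter, and a backtracking (Armijo) step. The paper's method is
    # vector-valued and uses Wolfe line searches; this is illustrative only.

    def retract(x, v):
        # Retraction on the sphere: move in the tangent direction, renormalize.
        y = x + v
        return y / np.linalg.norm(y)

    def project_tangent(x, v):
        # Vector transport by orthogonal projection onto the tangent space at x.
        return v - (x @ v) * x

    def riemannian_cg_fr(f, grad_f, x0, iters=200):
        x = x0 / np.linalg.norm(x0)
        g = project_tangent(x, grad_f(x))       # Riemannian gradient
        d = -g
        for _ in range(iters):
            t = 1.0
            while f(retract(x, t * d)) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
                t *= 0.5                        # Armijo backtracking step
            x_new = retract(x, t * d)
            g_new = project_tangent(x_new, grad_f(x_new))
            beta = (g_new @ g_new) / (g @ g)    # Fletcher--Reeves parameter
            d = -g_new + beta * project_tangent(x_new, d)  # transport old direction
            x, g = x_new, g_new
        return x

    # Example: minimize x^T A x on the sphere; the minimizer is the
    # eigenvector associated with the smallest eigenvalue of A.
    A = np.diag([1.0, 2.0, 5.0])
    x = riemannian_cg_fr(lambda x: x @ A @ x, lambda x: 2 * A @ x,
                         np.array([1.0, 1.0, 1.0]))
    print(x, x @ A @ x)
    ```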

    An accelerated proximal gradient method for multiobjective optimization

    This paper presents an accelerated proximal gradient method for multiobjective optimization, in which each objective function is the sum of a continuously differentiable, convex function and a closed, proper, convex function. Extending first-order methods to multiobjective problems without scalarization has been widely studied, but providing accelerated methods with rigorous proofs of convergence rates remains an open problem. Our proposed method is a multiobjective generalization of the accelerated proximal gradient method, also known as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), for scalar optimization. The key to this successful extension is solving a subproblem with terms exclusive to the multiobjective case. This approach allows us to demonstrate the global convergence rate O(1/k^2) of the proposed method, using a merit function to measure the complexity. Furthermore, we present an efficient way to solve the subproblem via its dual representation, and we confirm the validity of the proposed method through some numerical experiments.
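    For orientation, the scalar FISTA scheme that the paper generalizes can be sketched as follows on an l1-regularized least-squares problem. The multiobjective method replaces the simple proximal step below with a subproblem involving all objectives, which is not shown here; the names and the test problem are illustrative assumptions, not the paper's code.

    ```python
    import numpy as np

    # Scalar FISTA sketch for min f(x) + g(x), with f(x) = 0.5*||Ax - b||^2
    # smooth convex and g = lam*||x||_1. This is the classical O(1/k^2)
    # momentum scheme that the paper extends to several objectives.

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def fista(A, b, lam, iters=300):
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
        x = y = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(iters):
            grad = A.T @ (A @ y - b)            # gradient of f at the extrapolated point
            x_new = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)    # Nesterov momentum
            x, t = x_new, t_new
        return x

    # Example: noiseless sparse recovery; FISTA approximately recovers the support.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    x_hat = fista(A, A @ x_true, lam=0.1)
    print(np.nonzero(np.abs(x_hat) > 1e-3)[0])
    ```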

    Theory and methods for multiobjective optimization problems

    In this thesis we study the possibility of extending the classical Augmented Lagrangian method of scalar optimization to solve problems with multiple objectives. The Augmented Lagrangian method is a popular technique for solving constrained optimization problems. We consider two possible extensions: (i) through the use of scalarizations, where, building on earlier work, we use weakly increasing functions to analyze the global convergence of an Augmented Lagrangian method for the multiobjective problem with equality and inequality constraints; and (ii) through the use of a vector-valued Augmented Lagrangian function, in which case the subproblem of the Augmented Lagrangian method is itself vector-valued and we propose to solve it with a nonmonotone projected gradient type method. For both extensions presented in the thesis, we analyze the weakest hypotheses under which convergence to a stationary point of the multiobjective problem can be established.
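    As background for the scalar method the thesis builds on, here is a minimal sketch of the classical Augmented Lagrangian iteration for a single equality constraint. The multiplier and penalty handling below follow the textbook scheme under assumed defaults, not the thesis's multiobjective variants; all names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Classical (scalar) Augmented Lagrangian sketch for min f(x) s.t. h(x) = 0.
    # Each outer iteration minimizes L_rho(x, lam) = f + lam*h + (rho/2)*h^2
    # and then updates the multiplier estimate. Illustrative only.

    def augmented_lagrangian(f, h, x0, rho=10.0, outer=20):
        x, lam = np.asarray(x0, float), 0.0
        for _ in range(outer):
            L = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2
            x = minimize(L, x).x               # unconstrained subproblem
            lam += rho * h(x)                  # multiplier update
        return x, lam

    # Example: minimize x^2 + y^2 subject to x + y = 1; the solution is (0.5, 0.5).
    f = lambda x: x[0] ** 2 + x[1] ** 2
    h = lambda x: x[0] + x[1] - 1.0
    x, lam = augmented_lagrangian(f, h, [0.0, 0.0])
    print(x, lam)
    ```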

    Inexact projected gradient method for vector optimization

    In this work, we propose an inexact projected gradient-like method for solving smooth constrained vector optimization problems. In the unconstrained case, we retrieve the steepest descent method introduced by Graña Drummond and Svaiter. In the constrained setting, the method we present extends the exact one proposed by Graña Drummond and Iusem, since it admits relative errors on the search directions. At each iteration, a decrease of the objective value is obtained by means of an Armijo-like rule. The convergence results of this new method extend those obtained by Fukuda and Graña Drummond for the exact version. For partial orders induced by both pointed and nonpointed cones, under some reasonable hypotheses, global convergence to weakly efficient points of all sequences generated by the inexact projected gradient method is established for objective functions that are convex with respect to the ordering cone. In the convergence analysis we also establish a connection between the so-called weighting method and the one we propose.
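    To fix ideas, here is a minimal sketch of the exact scalar prototype: a projected gradient step combined with an Armijo-like backtracking rule over a box constraint. The paper's method is vector-valued and, unlike this sketch, tolerates relative errors in the search direction; all names below are illustrative.

    ```python
    import numpy as np

    # Exact projected gradient sketch with an Armijo-like rule for a single
    # smooth objective over a box. The search direction points from the
    # current iterate to the projection of a full gradient step.

    def project_box(x, lo, hi):
        return np.clip(x, lo, hi)

    def projected_gradient(f, grad_f, x0, lo, hi, iters=100, c=1e-4):
        x = project_box(np.asarray(x0, float), lo, hi)
        for _ in range(iters):
            d = project_box(x - grad_f(x), lo, hi) - x   # feasible descent direction
            if np.linalg.norm(d) < 1e-10:
                break                                    # stationary point reached
            t = 1.0
            while f(x + t * d) > f(x) + c * t * (grad_f(x) @ d):
                t *= 0.5                                 # Armijo backtracking
            x = x + t * d
        return x

    # Example: minimize ||x - (2, -2)||^2 over the box [0, 1]^2; solution (1, 0).
    target = np.array([2.0, -2.0])
    x = projected_gradient(lambda x: np.sum((x - target) ** 2),
                           lambda x: 2 * (x - target),
                           [0.5, 0.5], 0.0, 1.0)
    print(x)
    ```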