
    Convergence and Complexity Analysis of a Levenberg–Marquardt Algorithm for Inverse Problems

    The Levenberg–Marquardt algorithm is one of the most popular algorithms for solving nonlinear least-squares problems. Across different modified variants of the basic procedure, the algorithm enjoys global convergence, a competitive worst-case iteration complexity rate, and a guaranteed local convergence rate for both zero- and small-nonzero-residual problems, under suitable assumptions. We introduce a novel Levenberg–Marquardt method that simultaneously matches the state of the art in all of these convergence properties with a single seamless algorithm. Numerical experiments confirm the theoretical behavior of the proposed algorithm.
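    As a rough illustration of the classical damped iteration this abstract builds on (a textbook sketch, not the paper's specific variant), each step solves the damped normal equations and adapts the damping parameter:

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1.0, tol=1e-10, max_iter=100):
    """Basic Levenberg-Marquardt iteration for min 0.5*||r(x)||^2.

    Illustrative sketch: the damping parameter `lam` is increased when
    a trial step fails to reduce the residual and decreased when it
    succeeds. The paper's method uses a more careful update to obtain
    its complexity and local convergence guarantees.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx
        if np.linalg.norm(g) < tol:
            break
        # Damped normal equations: (J^T J + lam*I) step = -J^T r
        step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -g)
        if 0.5 * np.sum(r(x + step) ** 2) < 0.5 * np.sum(rx ** 2):
            x, lam = x + step, max(lam * 0.5, 1e-12)  # accept, relax damping
        else:
            lam *= 2.0  # reject, increase damping
    return x
```

    For a zero-residual problem such as r(x) = (x0 - 1, x1 + 2, x0*x1 + 2), the iteration converges to (1, -2) from a nearby start.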

    Optimal Trajectories of a UAV Base Station Using Hamilton-Jacobi Equations

    We consider the problem of optimizing the trajectory of an Unmanned Aerial Vehicle (UAV). Given a traffic intensity map of users to be served, the UAV must travel from a given initial location to a final position within a given duration, serving the traffic along its way. The problem consists of finding the optimal trajectory that minimizes a cost depending on the velocity and on the amount of served traffic. We formulate the problem in the framework of Lagrangian mechanics. We derive closed-form formulas for the optimal trajectory when the traffic intensity is quadratic (single-phase) using Hamilton-Jacobi equations. When the traffic intensity is bi-phase, i.e. made of two quadratics, we provide necessary conditions of optimality that allow us to propose a gradient-based algorithm and a new algorithm based on the linear control properties of the quadratic model. Both solutions have very low complexity because they rely on fast-converging numerical schemes and closed-form formulas, and both return a trajectory satisfying the necessary conditions of optimality. Finally, we propose a data processing procedure based on a modified K-means algorithm to derive a bi-phase model and an optimal trajectory simulation from real traffic data.
    Comment: 30 pages, 10 figures, 2 tables. arXiv admin note: substantial text overlap with arXiv:1812.0875
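    The velocity-vs-served-traffic trade-off can be illustrated numerically by discretizing the trajectory into waypoints and running gradient descent on a kinetic-energy term minus the traffic intensity (a generic sketch only; the paper instead derives closed-form and Hamilton-Jacobi-based solutions, and the function `traffic_grad` is an assumed user-supplied gradient of the intensity map):

```python
import numpy as np

def optimize_trajectory(x_start, x_end, traffic_grad, T=20, steps=2000, lr=0.01):
    """Gradient-descent sketch of the trajectory trade-off:
    minimize sum_t ||x_{t+1} - x_t||^2 - sum_t phi(x_t) over the
    interior waypoints, with both endpoints fixed, where phi is the
    traffic intensity and traffic_grad computes its gradient.
    """
    # Straight-line initialization between the fixed endpoints
    path = np.linspace(x_start, x_end, T)
    for _ in range(steps):
        g = np.zeros_like(path)
        # Gradient of the velocity (kinetic) term at interior points
        g[1:-1] = 2 * (2 * path[1:-1] - path[:-2] - path[2:])
        # Traffic reward: we descend on -phi, i.e. climb the intensity
        g[1:-1] -= traffic_grad(path[1:-1])
        path[1:-1] -= lr * g[1:-1]
    return path
```

    With a single-phase (concave quadratic) intensity peaked off the straight line between the endpoints, the optimized path bends toward the peak, as the closed-form solutions in the paper predict qualitatively.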

    Nonmonotone line-search methods for convexly constrained multi-objective optimization problems

    Advisor: Prof. Dr. Geovani Nunes Grapiglia. Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Matemática. Defense: Curitiba, 14/02/2022. Includes references.
    Abstract: In this work we propose a class of nonmonotone line-search methods for convexly constrained multiobjective optimization problems. Worst-case complexity bounds are obtained for the number of iterations that these methods need to generate an approximate Pareto critical point. The generality of our approach allows the development of new methods for multiobjective optimization.
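    The core idea of a nonmonotone line search can be sketched in the single-objective, unconstrained case: accept a step as long as it improves on the maximum of the last few function values rather than on the current value alone (a Grippo-Lampariello-Lucidi-style sketch; the dissertation's methods handle multiple objectives and convex constraints, which this simplification omits):

```python
import numpy as np

def nonmonotone_armijo(f, grad, x0, M=5, c=1e-4, max_iter=500, tol=1e-8):
    """Gradient descent with a max-type nonmonotone Armijo line search.

    Single-objective illustration only: the reference value for the
    sufficient-decrease test is the maximum of the last M function
    values, so occasional increases in f are tolerated.
    """
    x = np.asarray(x0, dtype=float)
    history = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        alpha = 1.0
        ref = max(history[-M:])  # nonmonotone reference value
        while f(x + alpha * d) > ref + c * alpha * (g @ d):
            alpha *= 0.5         # backtrack until sufficient decrease
        x = x + alpha * d
        history.append(f(x))
    return x
```

    On an ill-conditioned quadratic the backtracking still drives the iterates to the minimizer, while the max-type test leaves room for the nonmonotone behavior that the dissertation's complexity bounds cover.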

    Accelerated-gradient-based generalized Levenberg--Marquardt method with oracle complexity bound and local quadratic convergence

    Minimizing the sum of a convex function and a composite function appears in various fields. The generalized Levenberg–Marquardt (LM) method, also known as the prox-linear method, has been developed for such optimization problems. The method iteratively solves strongly convex subproblems with a damping term. This study proposes a new generalized LM method for solving the problem with a smooth composite function. The method enjoys three theoretical guarantees: an iteration complexity bound, an oracle complexity bound, and local convergence under a Hölderian growth condition. The local convergence results include local quadratic convergence under the quadratic growth condition; this is the first result to extend the classical least-squares theory to a general smooth composite function. In addition, this is the first LM method with both an oracle complexity bound and local quadratic convergence under standard assumptions. These results are achieved by carefully controlling the damping parameter and by solving the subproblems with the accelerated proximal gradient method equipped with a particular termination condition. Experimental results show that the proposed method performs well in practice on several instances, including classification with a neural network and nonnegative matrix factorization.
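    The prox-linear structure can be illustrated on a concrete instance, min 0.5*||c(x)||^2 + lam*||x||_1: each outer step linearizes c and solves a damped convex subproblem by proximal gradient (plain ISTA here, whereas the paper uses an accelerated solver with a tailored termination rule and adaptive damping, so this is only an illustrative simplification):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (componentwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_linear_l1(c, J, x0, lam=0.1, mu=1.0, outer=30, inner=200):
    """Prox-linear (generalized LM) sketch for
        min_x 0.5*||c(x)||^2 + lam*||x||_1.
    Each outer iteration solves the damped subproblem
        min_d 0.5*||c(x) + J(x)d||^2 + lam*||x + d||_1 + (mu/2)*||d||^2
    by proximal gradient steps on d.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        cx, Jx = c(x), J(x)
        H = Jx.T @ Jx + mu * np.eye(x.size)  # Hessian of the smooth part
        L = np.linalg.norm(H, 2)             # its Lipschitz constant
        d = np.zeros_like(x)
        for _ in range(inner):
            grad = Jx.T @ (cx + Jx @ d) + mu * d
            # Prox step on u -> lam*||x + u||_1 with step size 1/L
            d = soft_threshold(x + d - grad / L, lam / L) - x
        x = x + d
    return x
```

    For the linear case c(x) = x - b the composite problem reduces to a separable lasso whose solution is soft_threshold(b, lam), which the damped outer iteration reproduces.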