65 research outputs found

    Some Unconstrained Optimization Methods

    Although it is a very old topic, unconstrained optimization remains an active area of research for many scientists. Today, the results of unconstrained optimization are applied in many branches of science, as well as in practice generally. Here we present line search techniques; further, in this chapter we consider some unconstrained optimization methods. We try to present these methods and also to survey some contemporary results in this area.
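    For orientation, below is a minimal sketch of one of the line search techniques this kind of survey typically covers: steepest descent with Armijo backtracking. The objective, gradient, and parameter values are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    fx, gx = f(x), grad(x)
    while f(x + alpha * d) > fx + c * alpha * (gx @ d):
        alpha *= rho
    return alpha

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Unconstrained minimization: move along -grad(x) with a backtracking step."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        alpha = armijo_backtracking(f, grad, x, d)
        x = x + alpha * d
    return x

# Example: minimize a simple convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = steepest_descent(lambda x: 0.5 * x @ A @ x - b @ x,
                          lambda x: A @ x - b,
                          np.zeros(2))
```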

    Modifications of Steepest Descent Method and Conjugate Gradient Method Against Noise for Ill-posed Linear Systems

    It is well known that the numerical algorithms of the steepest descent method (SDM) and the conjugate gradient method (CGM) are effective for solving well-posed linear systems. However, they are vulnerable to noisy disturbance when solving ill-posed linear systems. We propose modifications of SDM and CGM, namely the modified steepest descent method (MSDM) and the modified conjugate gradient method (MCGM). The starting point is an invariant manifold defined in terms of a minimum functional and a fictitious time-like variable; in the final stage, however, we derive a purely iterative algorithm including an acceleration parameter. Through a Hopf bifurcation, this parameter plays a major role in switching from slow convergence to a regime in which the functional decreases rapidly at each step. Several numerical examples are examined and compared with exact solutions, revealing that the new MSDM and MCGM algorithms have good computational efficiency and accuracy, even for highly ill-conditioned linear systems with large noise imposed on the given data.
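    The modified algorithms (MSDM, MCGM) are not specified in this abstract; for reference, here is a minimal sketch of the classical baselines they modify, steepest descent and conjugate gradient for Ax = b with A symmetric positive definite. Tolerances and stopping rules are assumptions for illustration.

```python
import numpy as np

def sdm(A, b, x0, tol=1e-10, max_iter=10_000):
    """Classical steepest descent: step along the residual r = b - Ax."""
    x = x0.astype(float)
    r = b - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))   # exact minimizing step length
        x = x + alpha * r
        r = b - A @ x
    return x

def cgm(A, b, x0, tol=1e-10, max_iter=10_000):
    """Classical conjugate gradient: A-conjugate search directions."""
    x = x0.astype(float)
    r = b - A @ x
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves-type update
        p = r_new + beta * p
        r = r_new
    return x
```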

    OPTool - Documentation v1.2

    The OPTool package is an implementation of various state-of-the-art iterative optimization algorithms for differentiable cost functions, along with algorithms to solve linear equations. Users can use the toolbox to solve optimization problems, although the code was written for researchers who want to compare their proposals with state-of-the-art implementations. New algorithms can easily be added, and the software will be updated to maintain the most comprehensive list of solvers possible. It also includes functions that return optimal parameters for these algorithms based on a control-theoretic formulation of the algorithms.

    The eigenstep method: A new iterative method for unconstrained quadratic optimization.

    This thesis presents a new method for unconstrained minimization of convex quadratic problems. The method is an iterative modification of the classical steepest descent method. The two methods share the choice of the negative gradient as the search direction, but differ in the choice of step size. The steepest descent method uses the optimal step size, whereas the proposed method uses the reciprocals of the eigenvalues of the Hessian matrix as step sizes; the proposed method is therefore referred to as the eigenstep method. It is shown that the eigenstep method terminates finitely, with the number of iterations required equal to the dimension of the problem, that is, the number of variables. Numerical examples are provided to illustrate the algorithm, and a comparison is made to other standard optimization methods, including the steepest descent method. Dept. of Mathematics and Statistics. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .B38. Source: Masters Abstracts International, Volume: 44-01, page: 0364. Thesis (M.Sc.)--University of Windsor (Canada), 2005.
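    A minimal sketch of the eigenstep idea as described in the abstract: for a convex quadratic f(x) = 0.5 x^T A x - b^T x, take negative-gradient steps whose lengths are the reciprocals of the eigenvalues of the Hessian A. Each step annihilates the gradient component along the corresponding eigenvector, so the iteration terminates after n steps. The ordering of the eigenvalues and the test problem are assumptions for illustration.

```python
import numpy as np

def eigenstep_quadratic(A, b, x0):
    """Finite-termination gradient iteration for min 0.5 x^T A x - b^T x."""
    eigvals = np.linalg.eigvalsh(A)        # Hessian eigenvalues (A symmetric PD)
    x = x0.astype(float)
    for lam in eigvals:
        g = A @ x - b                      # gradient of the quadratic
        x = x - (1.0 / lam) * g            # step length = reciprocal eigenvalue
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = eigenstep_quadratic(A, b, np.zeros(2))
assert np.allclose(A @ x, b)               # exact solution after n = 2 steps
```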

    Steepest descent methods for quadratic minimization (Métodos de máximo declive para minimização quadrática)

    Master's dissertation - Universidade Federal de Santa Catarina, Centro de Ciências Físicas e Matemáticas, Programa de Pós-Graduação em Matemática Pura e Aplicada, Florianópolis, 2015. Abstract: In this thesis we give a detailed description of the steepest descent method for quadratic problems with exact line searches (the Cauchy method). Although this method is globally convergent, it is inefficient: it is slow and exhibits oscillatory behavior, converging to a search in the space spanned by the eigenvectors associated with the largest and smallest eigenvalues of the Hessian matrix of the quadratic objective. We analyze the oscillatory behavior of the gradient of the objective function in the quadratic case, as well as the sequence of steps generated by the Cauchy method. We describe the Barzilai-Borwein method, which experimentally performs better than the Cauchy method, along with some of its variants. We analyze the behavior of the gradient under different step-size choices in the steepest descent method, which allowed us to propose a new choice of step size. Based on this, we introduce several new algorithms (Cauchy-short, alternated Cauchy-short, and others) that alternate between Cauchy steps and short steps. We also describe a new strategy based on step sizes given by the roots of a Chebyshev polynomial of suitable order. Experimentally, the new algorithms perform well, even outperforming the Barzilai-Borwein method. Besides their good performance, the new methods have the advantage of generating monotonically decreasing sequences of objective function values.
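    The abstract contrasts the Cauchy (exact) step with the Barzilai-Borwein step on convex quadratics. Below is a minimal sketch of both step-size rules; the Cauchy-short and Chebyshev-step variants proposed in the thesis are not reproduced here, and the stopping tolerance and initialization are assumptions.

```python
import numpy as np

def cauchy_step(A, g):
    """Exact line-search step for f(x) = 0.5 x^T A x - b^T x along -g."""
    return (g @ g) / (g @ (A @ g))

def bb_step(s, y):
    """Barzilai-Borwein (BB1) step from s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    return (s @ s) / (s @ y)

def bb_gradient(A, b, x0, tol=1e-10, max_iter=10_000):
    """Gradient method with BB step sizes; the first step uses the Cauchy rule."""
    x = x0.astype(float)
    g = A @ x - b
    alpha = cauchy_step(A, g)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        alpha = bb_step(x_new - x, g_new - g)
        x, g = x_new, g_new
    return x
```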

    Studying the rate of convergence of gradient optimisation algorithms via the theory of optimal experimental design

    The most common class of methods for solving quadratic optimisation problems is the class of gradient algorithms, the most famous of which is the Steepest Descent algorithm. The development of a particular gradient algorithm, the Barzilai-Borwein algorithm, has sparked a lot of research in the area in recent years, and many algorithms now exist which have faster rates of convergence than that possessed by the Steepest Descent algorithm. The technology to effectively analyse and compare the asymptotic rates of convergence of gradient algorithms is, however, limited, and so it is somewhat unclear from the literature which algorithms possess the faster rates of convergence. In this thesis, methodology is developed to enable better analysis of the asymptotic rates of convergence of gradient algorithms applied to quadratic optimisation problems. This methodology stems from a link with the theory of optimal experimental design. It is established that gradient algorithms can be related to algorithms for constructing optimal experimental designs for linear regression models. Furthermore, the asymptotic rates of convergence of these gradient algorithms can be expressed through the asymptotic behaviour of multiplicative algorithms for constructing optimal experimental designs. The described connection to optimal experimental design has also been used to motivate several new gradient algorithms which would not otherwise have been intuitively apparent. The asymptotic rates of convergence of these algorithms are studied extensively, and insight is given as to how some gradient algorithms are able to converge faster than others. It is demonstrated that the worst rates are obtained when the corresponding multiplicative procedure for updating the designs converges to the optimal design. Simulations reveal that the asymptotic rates of convergence of some of these new algorithms compare favourably with those of existing gradient-type algorithms such as the Barzilai-Borwein algorithm. EThOS - Electronic Theses Online Service. GB: United Kingdom.
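    The thesis relates rates of convergence of gradient algorithms to the behaviour of multiplicative algorithms for constructing optimal experimental designs. For orientation, here is a minimal sketch of the standard multiplicative (Titterington-type) update for D-optimal design weights on a fixed set of candidate points; the thesis's specific algorithms are not reproduced, and the candidate points and iteration count are assumptions for illustration.

```python
import numpy as np

def d_optimal_weights(X, n_iter=500):
    """Multiplicative update w_i <- w_i * d_i(w) / m for D-optimal design.

    X has one candidate design point per row; m is the number of parameters,
    and d_i(w) = x_i^T M(w)^{-1} x_i is the variance function at point i.
    """
    n, m = X.shape
    w = np.full(n, 1.0 / n)                 # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)          # information matrix M(w)
        d = np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)
        w = w * d / m                       # multiplicative update keeps sum(w) = 1
    return w

# Quadratic regression on [-1, 1]: the D-optimal design puts weight 1/3 on
# each of -1, 0, 1, so the computed weights should concentrate there.
t = np.linspace(-1.0, 1.0, 21)
X = np.column_stack([np.ones_like(t), t, t**2])
w = d_optimal_weights(X)
```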