
    A transformation method for constrained-function minimization

    A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of a finite-dimensional space onto the region defined by the constraints. Functions that produce such a transformation are constructed for a variety of constraint regions, including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of the method is studied when the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated on 19 test problems, with rapid convergence to a solution from several widely separated starting points.
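
    To make the mapping idea concrete, here is a minimal sketch for a simple box constraint a <= x <= b: the function x = a + (b - a) sin^2(y) maps all of R^n onto the box, so the constrained problem becomes an unconstrained one in y. The bounds and test objective are hypothetical, and SciPy's BFGS stands in for the Davidon-Fletcher-Powell routine used in the paper.

    import numpy as np
    from scipy.optimize import minimize

    a, b = 0.0, 2.0                           # hypothetical box bounds

    def f(x):                                 # objective; constrained minimum at (1, 1)
        return (x[0] - 1.0)**2 + (x[1] - 1.0)**2

    def transform(y):                         # maps all of R^2 onto the box [a, b]^2
        return a + (b - a) * np.sin(y)**2

    res = minimize(lambda y: f(transform(y)), np.array([3.0, -2.0]), method="BFGS")
    print(transform(res.x))                   # ~[1., 1.], feasible by construction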

    The Davidon-Fletcher-Powell penalty function method: A generalized iterative technique for solving parameter optimization problems

    The Fletcher-Powell version of the Davidon variable metric unconstrained minimization technique is described. Equations that have been used successfully with the Davidon-Fletcher-Powell penalty function technique for solving constrained minimization problems are presented, and the advantages and disadvantages of using them are discussed. Practical experience with the behavior of the method during iteration is also reported.
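
    The abstract does not reproduce the equations, but the two ingredients it names can be sketched under stated assumptions: the DFP inverse-Hessian update applied to a quadratic exterior penalty function. The test problem, penalty weight, and backtracking line search below are hypothetical illustrations, not taken from the paper.

    import numpy as np

    def dfp_update(H, s, y):
        # DFP inverse-Hessian update:
        # H+ = H + (s s^T)/(s^T y) - (H y y^T H)/(y^T H y)
        Hy = H @ y
        return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

    def P(x, r=10.0):
        # quadratic exterior penalty for the constraint x0 + x1 <= 2
        f = (x[0] - 2.0)**2 + (x[1] - 1.0)**2
        g = x[0] + x[1] - 2.0
        return f + r * max(0.0, g)**2

    def grad(x, h=1e-6):
        # forward-difference gradient, adequate for a sketch
        p = P(x)
        return np.array([(P(x + h * np.eye(2)[i]) - p) / h for i in range(2)])

    x, H = np.array([0.0, 0.0]), np.eye(2)
    for _ in range(40):
        g = grad(x)
        d = -H @ g                            # quasi-Newton search direction
        t, p = 1.0, P(x)
        while P(x + t * d) > p + 1e-4 * t * (g @ d) and t > 1e-10:
            t *= 0.5                          # backtracking (Armijo) line search
        s = t * d
        y = grad(x + s) - g
        if s @ y > 1e-12:                     # keep H positive definite
            H = dfp_update(H, s, y)
        x = x + s
    print(x)                                  # near the penalized minimizer, ~(1.52, 0.52)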

    An acceleration technique for a conjugate direction algorithm for nonlinear regression

    A linear acceleration technique (LAT) is developed and applied to three conjugate direction algorithms: (1) the Fletcher-Reeves algorithm, (2) the Davidon-Fletcher-Powell algorithm, and (3) Grey's Orthonormal Optimization Procedure (GOOP). Eight problems are solved by these three algorithms and by the Levenberg-Marquardt algorithm. Adding the LAT improves the rate of convergence of the GOOP algorithm on all problems attempted, and on some problems for the Fletcher-Reeves and Davidon-Fletcher-Powell algorithms. The algorithms are compared by the number of operations required for function and derivative evaluations. Although the GOOP algorithm is relatively unknown outside the optics literature, it was found to be competitive with the other successful algorithms. A proof of convergence of the accelerated GOOP algorithm for nonquadratic problems is also developed.
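
    The specific LAT is not reproduced in the abstract; the sketch below shows one generic form of linear acceleration layered on the Fletcher-Reeves algorithm: after each cycle of n conjugate gradient steps, an extra line search is taken along the cycle's net displacement. The test function, cycle count, and line searches are hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fr_accelerated(f, grad, x0, cycles=20):
        n = len(x0)
        x = np.asarray(x0, dtype=float)
        for _ in range(cycles):
            x_start = x.copy()
            g = grad(x)
            d = -g
            for _ in range(n):                           # one conjugate gradient cycle
                t = minimize_scalar(lambda t: f(x + t * d)).x
                x = x + t * d
                g_new = grad(x)
                beta = (g_new @ g_new) / max(g @ g, 1e-20)   # Fletcher-Reeves beta
                d, g = -g_new + beta * d, g_new
            a = x - x_start                              # acceleration direction
            if a @ a > 1e-16:                            # extra line search along it
                t = minimize_scalar(lambda t: f(x + t * a)).x
                x = x + t * a
        return x

    rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    rosen_grad = lambda x: np.array(
        [-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
         200 * (x[1] - x[0]**2)])
    print(fr_accelerated(rosen, rosen_grad, [-1.2, 1.0]))    # ~[1., 1.]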

    Numerical Analysis

    Acknowledgements: This article will appear in the forthcoming Princeton Companion to Mathematics, edited by Timothy Gowers with June Barrow-Green, to be published by Princeton University Press.

    In preparing this essay I have benefitted from the advice of many colleagues who corrected a number of errors of fact and emphasis. I have not always followed their advice, however, preferring, as one friend put it, to "put my head above the parapet". So I must take full responsibility for errors and omissions here.

    With thanks to: Aurelio Arranz, Alexander Barnett, Carl de Boor, David Bindel, Jean-Marc Blanc, Mike Bochev, Folkmar Bornemann, Richard Brent, Martin Campbell-Kelly, Sam Clark, Tim Davis, Iain Duff, Stan Eisenstat, Don Estep, Janice Giudice, Gene Golub, Nick Gould, Tim Gowers, Anne Greenbaum, Leslie Greengard, Martin Gutknecht, Raphael Hauser, Des Higham, Nick Higham, Ilse Ipsen, Arieh Iserles, David Kincaid, Louis Komzsik, David Knezevic, Dirk Laurie, Randy LeVeque, Bill Morton, John C Nash, Michael Overton, Yoshio Oyanagi, Beresford Parlett, Linda Petzold, Bill Phillips, Mike Powell, Alex Prideaux, Siegfried Rump, Thomas Schmelzer, Thomas Sonar, Hans Stetter, Gil Strang, Endre Süli, Defeng Sun, Mike Sussman, Daniel Szyld, Garry Tee, Dmitry Vasilyev, Andy Wathen, Margaret Wright and Steve Wright.

    An investigation of derivative-based methods for solving nonlinear problems with bounded variables

    M.S. thesis, Mokhtar S. Bazara

    Second order gradient ascent pulse engineering

    We report some improvements to the gradient ascent pulse engineering (GRAPE) algorithm for optimal control of quantum systems. These include more accurate gradients, convergence acceleration using the BFGS quasi-Newton algorithm, and faster control derivative calculation algorithms. In all test systems, the wall-clock time and the convergence rates show a considerable improvement over approximate gradient ascent.
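
    As a rough illustration of the setting, here is a minimal GRAPE-style sketch under simplifying assumptions: a single qubit with a Pauli-Z drift and a Pauli-X control, piecewise-constant pulses, and SciPy's BFGS with finite-difference gradients standing in for the exact gradients and second-order machinery the paper develops.

    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize

    sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli X (control)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli Z (drift)
    dt, n_slices = 0.1, 20
    psi0 = np.array([1, 0], dtype=complex)            # start in |0>
    target = np.array([0, 1], dtype=complex)          # aim for |1>

    def infidelity(u):
        psi = psi0
        for uk in u:                                  # piecewise-constant propagation
            psi = expm(-1j * (sz + uk * sx) * dt) @ psi
        return 1.0 - abs(target.conj() @ psi)**2

    rng = np.random.default_rng(0)                    # random start: u = 0 is a stationary point
    res = minimize(infidelity, 0.5 * rng.standard_normal(n_slices), method="BFGS")
    print(res.fun)                                    # infidelity after optimization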

    Function-space quasi-Newton algorithms for optimal control problems with bounded controls and singular arcs

    Two existing function-space quasi-Newton algorithms, the Davidon algorithm and the projected gradient algorithm, are modified so that they may handle control-variable inequality constraints directly. A third quasi-Newton-type algorithm, developed by Broyden, is extended to optimal control problems, and is further modified so that it too may handle control-variable inequality constraints directly. From a computational viewpoint, dyadic operator implementation of quasi-Newton methods is shown to be superior to the integral kernel representation. The quasi-Newton methods, along with the steepest descent method and two conjugate gradient algorithms, are simulated on three relatively simple (yet representative) bounded control problems, two of which possess singular subarcs. Overall, the Broyden algorithm was found to be superior. The most notable result of the simulations was the clear superiority of the Broyden and Davidon algorithms in producing a sharp singular control subarc.
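
    The dyadic-operator idea can be sketched in finite dimensions: the inverse-Jacobian approximation is never formed as a dense matrix (the analogue of an integral kernel) but is kept as a running sum of rank-one dyads applied to vectors on the fly. Broyden's second update and the small test system below are illustrative assumptions, not the paper's function-space construction.

    import numpy as np

    def broyden_solve(F, x0, iters=40):
        x = np.asarray(x0, dtype=float)
        dyads = []                         # H = I + sum_k u_k v_k^T, stored as (u, v) pairs

        def apply_H(w):                    # apply the inverse-Jacobian approximation
            r = w.copy()
            for u, v in dyads:
                r += u * (v @ w)
            return r

        Fx = F(x)
        for _ in range(iters):
            s = -apply_H(Fx)               # quasi-Newton step
            x = x + s
            F_new = F(x)
            y = F_new - Fx
            if y @ y > 1e-14:
                # Broyden's second update: H+ = H + (s - H y) y^T / (y^T y)
                dyads.append(((s - apply_H(y)) / (y @ y), y.copy()))
            Fx = F_new
        return x

    # hypothetical contractive 2x2 system with root near (0.486, 0.234)
    F = lambda x: np.array([x[0] - 0.5 * np.cos(x[1]), x[1] - 0.5 * np.sin(x[0])])
    print(broyden_solve(F, [0.0, 0.0]))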