
    Parallel projected variable metric algorithms for unconstrained optimization

    The parallel variable metric optimization algorithms of Straeter (1973) and van Laarhoven (1985) are reviewed, and possible drawbacks of the algorithms are noted. By including Davidon (1975) projections in the variable metric updating, Straeter's algorithm is generalized to a family of parallel projected variable metric algorithms which do not suffer these drawbacks and which retain quadratic termination. Finally, the numerical performance of one member of the family is considered on several standard example problems, illustrating how the choice of the displacement vectors affects the performance of the algorithm.
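
    As a rough illustration of the kind of updating involved, the sketch below applies a batch of symmetric rank-one corrections to an inverse-Hessian estimate, one per displacement/gradient-change pair. This is a generic sketch, not the paper's projected algorithm; the function name and tolerance are our own, and the Davidon projections that distinguish the paper's family are omitted.

```python
import numpy as np

def batched_sr1_inverse_updates(H, pairs, tol=1e-12):
    # Apply one symmetric rank-one correction to the inverse-Hessian
    # estimate H per (displacement s, gradient-change y) pair.  Each
    # correction enforces the secant condition H @ y = s for its pair;
    # on a quadratic, independent displacement vectors drive H toward
    # the true inverse Hessian, the mechanism behind quadratic
    # termination.
    for s, y in pairs:
        r = s - H @ y                   # residual of the secant condition
        denom = r @ y
        # Skip a pair when the denominator is too small to be reliable.
        if abs(denom) > tol * np.linalg.norm(r) * np.linalg.norm(y):
            H = H + np.outer(r, r) / denom
    return H
```

    In a parallel scheme of Straeter's type, the (s, y) pairs would be produced by concurrent gradient evaluations, one per processor, before the batch of updates is applied.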

    Efficiency of unconstrained minimization techniques in nonlinear analysis

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms are categorized as zeroth, first, or second order, depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly favors analytically derived gradients over finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.
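
    To illustrate the derivative-accuracy point, the sketch below contrasts an analytic gradient with a forward-difference approximation on the Rosenbrock function; the test function and the step size h are our choices for illustration, not taken from the paper.

```python
import numpy as np

# Rosenbrock test function: f(x) = 100*(x1 - x0^2)^2 + (1 - x0)^2
f = lambda x: 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def analytic_grad(x):
    # Exact gradient, obtained by differentiating f by hand.
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def fd_grad(x, h=1e-6):
    # Forward-difference approximation: n extra function evaluations
    # and O(h) truncation error, which can mislead a line search once
    # the true gradient becomes small near the solution.
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

x = np.array([1.2, 1.2])
print(analytic_grad(x))   # exact
print(fd_grad(x))         # approximate; differs in the low-order digits
```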

    An Adaptive Method for Minimizing a Sum of Squares of Nonlinear Functions

    The Gauss-Newton and Levenberg-Marquardt algorithms for solving the nonlinear least squares problem, minimize $F(x) = \sum_{i=1}^{m} f_i(x)^2$ over $x \in \mathbb{R}^n$, are both based upon the premise that one term in the Hessian of $F(x)$ dominates its other terms, so that the Hessian may be approximated by this dominant term $J^T J$, where $J_{ij} = \partial f_i / \partial x_j$. We are motivated here by the need for an algorithm which works well when applied to problems for which this premise is substantially violated, yet is able to take advantage of situations where the premise holds. We describe and justify a method for approximating the Hessian of $F(x)$ which uses a convex combination of $J^T J$ and a matrix obtained by making quasi-Newton updates. In order to evaluate the usefulness of this idea, we construct a nonlinear least squares algorithm which uses this Hessian approximation, and report test results obtained by applying it to a set of test problems. A merit of our approach is that it demonstrates how a single adaptive algorithm can be used to efficiently solve unconstrained nonlinear optimization problems (whose Hessians have no particular structure) as well as small-residual and large-residual nonlinear least squares problems. Our paper can also be viewed as an investigation, for one problem area, of the following more general question: how can one combine two different Hessian approximations (or model functions) which are simultaneously available? The technique suggested here may thus be more widely applicable and may be of use, for example, when minimizing functions which are only partly composed of sums of squares, such as those arising in penalty function methods.
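
    A minimal sketch of the central device, the convex combination of the Gauss-Newton term $J^T J$ and a quasi-Newton matrix. The paper's adaptive rule for choosing the weight is not reproduced here, so the weight phi is simply an argument, and the helper names are ours.

```python
import numpy as np

def blended_hessian(J, B_qn, phi):
    # Convex combination of the Gauss-Newton term J^T J and a
    # quasi-Newton Hessian approximation B_qn, with 0 <= phi <= 1.
    # phi near 1 recovers Gauss-Newton (small-residual problems);
    # phi near 0 falls back on B_qn (large-residual problems).
    return phi * (J.T @ J) + (1.0 - phi) * B_qn

def search_direction(J, f_vals, B_qn, phi):
    # Newton-like direction for F(x) = sum_i f_i(x)^2, using
    # grad F = 2 J^T f (constant factors of 2 are dropped throughout
    # for simplicity of the sketch).
    B = blended_hessian(J, B_qn, phi)
    return np.linalg.solve(B, -(J.T @ f_vals))
```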

    Solving Systems of Non-Linear Equations by Broyden's Method with Projected Updates

    We introduce a modification of Broyden's method for finding a zero of n nonlinear equations in n unknowns when analytic derivatives are not available. The method retains the local Q-superlinear convergence of Broyden's method and has the additional property that if any or all of the equations are linear, it locates a zero of these equations in n+1 or fewer iterations. Limited computational experience suggests that our modification often improves upon Broyden's method.
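
    For context, a bare-bones Python rendering of the classical ("good") Broyden iteration that the paper modifies; the projected update itself is not reproduced here, and the convergence test, iteration cap, and example system are illustrative.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    # Classical 'good' Broyden method for F(x) = 0 in R^n, starting
    # from the identity as the Jacobian estimate B.
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)            # quasi-Newton step
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx                        # change in residual
        B += np.outer(y - B @ s, s) / (s @ s)  # rank-one secant update
        Fx = Fx_new
    return x

# Example: x0^2 + x1^2 = 1 and x0 = x1, starting near the root
# (1/sqrt(2), 1/sqrt(2)).
root = broyden(lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]]),
               np.array([0.8, 0.6]))
```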

    Historical development of the BFGS secant method and its characterization properties

    The BFGS secant method is the preferred secant method for finite-dimensional unconstrained optimization. The first part of this research recounts the historical development of secant methods in general and the BFGS secant method in particular. Many people believe that the secant method arose from Newton's method by using finite difference approximations to the derivative. We compile historical evidence revealing that a special case of the secant method predated Newton's method by more than 3000 years, tracing the evolution of secant methods from 18th-century B.C. Babylonian clay tablets and the Egyptian Rhind Papyrus. Modifications to Newton's method yielding secant methods are discussed, and methods we believe influenced and led to the construction of the BFGS secant method are explored. In the second part of our research, we examine the construction of several rank-two secant update classes that have not received much recognition in the literature. Our study of the underlying mathematical principles and characterizations inherent in the update classes leads to theorems, and their proofs, concerning secant updates. One class of symmetric rank-two updates that we investigate is the Dennis class. We demonstrate how it can be derived from the general rank-one update formula in a purely algebraic manner, without utilizing Powell's method of iterated projections as Dennis did. The literature abounds with update classes; we show how some are related and show containment when possible. We derive the general formula that can represent all symmetric rank-two secant updates; from it, particular parameter choices yielding well-known updates and update classes are presented. We include two derivations of the Davidon class and prove that it is a maximal class. We detail known characterization properties of the BFGS secant method and describe new characterizations of several secant update classes known to contain the BFGS update. Included is a formal proof of the conjecture made by Schnabel in his 1977 Ph.D. thesis that the BFGS update is, in some asymptotic sense, the average of the DFP update and the Greenstadt update.
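
    For reference, the BFGS and DFP updates at the center of this discussion can be written in their standard textbook forms; the notation below is ours, not drawn from the thesis.

```latex
% Rank-two secant updates of the Hessian approximation B_k, with
% s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k);
% both satisfy the secant equation B_{k+1} s_k = y_k.
\[
B_{k+1}^{\mathrm{BFGS}} = B_k
  - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
  + \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\qquad
B_{k+1}^{\mathrm{DFP}} =
  \Bigl(I - \frac{y_k s_k^{\top}}{y_k^{\top} s_k}\Bigr) B_k
  \Bigl(I - \frac{s_k y_k^{\top}}{y_k^{\top} s_k}\Bigr)
  + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}.
\]
```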

    Scaling rank-one updating formula and its application in unconstrained optimization

    This thesis deals with algorithms used to solve unconstrained optimization problems. We analyse the properties of a scaling symmetric rank-one (SSR1) update, prove the convergence of the matrices generated by SSR1 to the true Hessian matrix, and show that the SSR1 algorithm possesses the quadratic termination property with inexact line search. A new algorithm (OCSSR1) is presented, in which the scaling parameter in SSR1 is chosen automatically by satisfying Davidon's criterion for an optimally conditioned Hessian estimate. Numerical tests show that the new method compares favourably with BFGS. Using the OCSSR1 update, we propose a hybrid quasi-Newton algorithm which does not need to store any matrix. Numerical results show that it is a very promising method for solving large-scale optimization problems. In addition, some popular techniques in unconstrained optimization are also discussed, for example the trust region step, the descent direction with supermemory, and the detection of large residuals in nonlinear least squares problems. The thesis consists of two parts. The first part gives a brief survey of unconstrained optimization. It contains four chapters and introduces basic results on unconstrained optimization, some popular methods and their properties based on quadratic approximations to the objective function, some methods suitable for solving large-scale optimization problems, and some methods for solving nonlinear least squares problems. The second part presents the new research results and contains five chapters. In Chapter 5, the scaling rank-one updating formula is analysed and studied. Chapters 6, 7, and 8 discuss the applications to the trust region method, large-scale optimization problems, and nonlinear least squares. A final chapter summarizes the problems used in numerical testing.
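
    A minimal Python sketch of the symmetric rank-one update with an explicit scaling parameter, the basic shape of the SSR1 update studied here. The automatic choice of gamma via Davidon's optimal-conditioning criterion (the OCSSR1 rule) is not reproduced, and the skip tolerance is our own.

```python
import numpy as np

def scaled_sr1(B, s, y, gamma, tol=1e-8):
    # Scale the current Hessian estimate by gamma, then apply the
    # symmetric rank-one correction so that the secant condition
    # B_new @ s = y holds.  In the thesis, gamma is chosen
    # automatically by Davidon's optimal-conditioning criterion;
    # here it is simply an argument.
    Bs = gamma * B
    r = y - Bs @ s                    # secant residual
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return Bs                     # skip the correction when unsafe
    return Bs + np.outer(r, r) / denom
```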

    Computational Experiments with Systems of Nonlinear Equations


    Research in orbit determination optimization for space trajectories

    Research data covering orbit determination, optimization techniques, and trajectory design for manned space flights are summarized.

    On the convergence of a class of variable metric algorithms
