Descent directions of quasi-Newton methods for symmetric nonlinear equations
2002-2003 > Academic research: refereed > Publication in refereed journal > Version of Record, Published
Effective Modified Hybrid Conjugate Gradient Method for Large-Scale Symmetric Nonlinear Equations
In this paper, we propose a hybrid conjugate gradient method that uses a convex combination of the FR and PRP conjugate gradient methods, following Andrei's approach, with a nonmonotone line search for solving large-scale symmetric nonlinear equations. A formula for obtaining the convex parameter, based on the Newton direction and our proposed direction, is also derived. Global convergence is established under appropriate conditions. Reported numerical results show that the proposed method is very promising.
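The convex-combination direction described above can be sketched as follows. This is an illustrative sketch, assuming the standard FR and PRP formulas; the paper's specific rule for choosing the convex parameter theta is not reproduced here.

```python
import numpy as np

def hybrid_cg_direction(g_new, g_old, d_old, theta):
    """Search direction from a convex combination of the FR and PRP
    conjugate gradient parameters (sketch; theta in [0, 1] would be
    chosen by the paper's rule, which is omitted here)."""
    beta_fr = (g_new @ g_new) / (g_old @ g_old)            # Fletcher-Reeves
    beta_prp = (g_new @ (g_new - g_old)) / (g_old @ g_old)  # Polak-Ribiere-Polyak
    beta = (1.0 - theta) * beta_fr + theta * beta_prp       # convex combination
    return -g_new + beta * d_old                            # CG-type direction
```

With theta = 0 the direction reduces to pure FR, and with theta = 1 to pure PRP, so the combination interpolates between the two methods.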
A Simple and Efficient Algorithm for Nonlinear Model Predictive Control
We present PANOC, a new algorithm for solving optimal control problems
arising in nonlinear model predictive control (NMPC). A common approach to this
type of problem is sequential quadratic programming (SQP), which requires the
solution of a quadratic program at every iteration and, consequently, inner
iterative procedures. As a result, when the problem is ill-conditioned or the
prediction horizon is large, each outer iteration becomes computationally very
expensive. We propose a line-search algorithm that combines forward-backward
iterations (FB) and Newton-type steps over the recently introduced
forward-backward envelope (FBE), a continuous, real-valued, exact merit
function for the original problem. The curvature information of Newton-type
methods enables asymptotic superlinear rates under mild assumptions at the
limit point, and the proposed algorithm is based on very simple operations:
access to first-order information of the cost and dynamics and low-cost direct
linear algebra. No inner iterative procedure nor Hessian evaluation is
required, making our approach computationally simpler than SQP methods. The
low-memory requirements and simple implementation make our method particularly
suited for embedded NMPC applications.
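The basic operation PANOC builds on is the forward-backward (proximal gradient) iteration mentioned above. A minimal sketch, assuming a generic smooth gradient and proximal operator (PANOC additionally blends in quasi-Newton directions via a line search on the forward-backward envelope, which is not shown here):

```python
import numpy as np

def forward_backward_step(x, grad_f, prox_g, gamma):
    """One forward-backward iteration: a gradient (forward) step on the
    smooth part f followed by a proximal (backward) step on g."""
    return prox_g(x - gamma * grad_f(x), gamma)

def soft_threshold(v, gamma, lam=0.1):
    """Proximal operator of lam*||x||_1, used as an example prox_g."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)
```

For example, minimizing 0.5*||x - b||^2 + lam*||x||_1 uses grad_f(x) = x - b together with soft_threshold as the proximal step.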
Global convergence of a new hybrid Gauss-Newton structured BFGS method for nonlinear least squares problems
2009-2010 > Academic research: refereed > Publication in refereed journal > Version of Record, Published
Matrix-Norm Approach for Computing the Levenberg-Marquardt Regularization Parameter for Nonlinear Equations
In this paper, we present a Levenberg-Marquardt method for solving nonlinear systems of equations. Both the objective function and the symmetric Jacobian matrix are assumed to be Lipschitz continuous. The regularization parameter is derived using a matrix-norm approach. Numerical performance on some benchmark problems is reported, demonstrating the effectiveness and efficiency of our approach and showing that the proposed algorithm is very promising.

Mathematics Subject Classification: 65H10, 65K05, 65F22, 65F35.

Keywords: nonlinear systems of equations, Levenberg-Marquardt method, regularization, matrix norm, global convergence.
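The Levenberg-Marquardt step itself can be sketched as below. The paper's matrix-norm rule for the regularization parameter is not reproduced; taking mu proportional to the residual norm ||F(x)|| is used here only as a common illustrative choice.

```python
import numpy as np

def lm_step(J, F, mu):
    """One Levenberg-Marquardt step for F(x) = 0: solve the regularized
    normal equations (J^T J + mu I) d = -J^T F for the step d.
    mu > 0 is the regularization parameter (illustrative choice:
    mu = ||F||; the paper derives it from a matrix-norm rule)."""
    n = J.shape[1]
    A = J.T @ J + mu * np.eye(n)   # regularized Gauss-Newton matrix
    return np.linalg.solve(A, -J.T @ F)
```

As mu → 0 the step approaches the Gauss-Newton step, while large mu shrinks it toward a scaled steepest-descent direction, which is the usual trade-off the regularization parameter controls.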
Scaling rank-one updating formula and its application in unconstrained optimization
This thesis deals with algorithms used to solve unconstrained optimization
problems. We analyse the properties of a scaling symmetric rank-one (SSR1) update,
prove the convergence of the matrices generated by SSR1 to the true Hessian matrix,
and show that the SSR1 algorithm possesses the quadratic termination property with
inexact line search. A new algorithm (OCSSR1) is presented, in which the scaling
parameter in SSR1 is chosen automatically by satisfying Davidon's criterion for an
optimally conditioned Hessian estimate. Numerical tests show that the new method
compares favourably with BFGS. Using the OCSSR1 update, we propose a hybrid QN
algorithm which does not need to store any matrix. Numerical results show that it is a
very promising method for solving large-scale optimization problems. In addition, some
popular techniques in unconstrained optimization are also discussed, for example the
trust region step, the descent direction with supermemory, and the detection of large
residuals in nonlinear least squares problems.
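The symmetric rank-one update underlying this work can be sketched as follows. This shows the standard unscaled SR1 formula only; the thesis's scaling parameter, chosen to satisfy Davidon's optimal-conditioning criterion, is omitted.

```python
import numpy as np

def sr1_update(B, s, y):
    """Symmetric rank-one (SR1) secant update of a Hessian
    approximation B, given step s and gradient difference y.
    The updated matrix satisfies the secant equation B_new @ s = y."""
    r = y - B @ s
    denom = r @ s
    # Skip the update when the denominator is numerically unsafe,
    # a standard safeguard for SR1.
    if abs(denom) < 1e-8 * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom
```

Unlike BFGS, the SR1 update does not guarantee positive definiteness, which is one reason scaling and conditioning criteria such as Davidon's are of interest.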
The thesis consists of two parts. The first part gives a brief survey of
unconstrained optimization. It contains four chapters and introduces basic results on
unconstrained optimization, some popular methods and their properties based on
quadratic approximations to the objective function, some methods which are suitable
for solving large-scale optimization problems, and some methods for solving nonlinear
least squares problems. The second part presents the new research results and contains five chapters. In Chapter 5, the scaling rank-one updating formula is analysed and
studied. Chapters 6, 7 and 8 discuss applications to the trust region method, large-scale optimization problems and nonlinear least squares. A final chapter
summarizes the problems used in numerical testing.
A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization
We propose a novel trust region method for solving a class of nonsmooth and
nonconvex composite-type optimization problems. The approach embeds inexact
semismooth Newton steps for finding zeros of a normal map-based stationarity
measure for the problem in a trust region framework. Based on a new merit
function and acceptance mechanism, global convergence and transition to fast
local q-superlinear convergence are established under standard conditions. In
addition, we verify that the proposed trust region globalization is compatible
with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence
results. We further derive new normal map-based representations of the
associated second-order optimality conditions that have direct connections to
the local assumptions required for fast convergence. Finally, we study the
behavior of our algorithm when the Hessian matrix of the smooth part of the
objective function is approximated by BFGS updates. We successfully link the KL
theory, properties of the BFGS approximations, and a Dennis-Moré-type
condition to show superlinear convergence of the quasi-Newton version of our
method. Numerical experiments on sparse logistic regression and image
compression illustrate the efficiency of the proposed algorithm.
Historical development of the BFGS secant method and its characterization properties
The BFGS secant method is the preferred secant method for finite-dimensional unconstrained optimization. The first part of this research consists of recounting the historical development of secant methods in general and the BFGS secant method in particular. Many people believe that the secant method arose from Newton's method using finite difference approximations to the derivative. We compile historical evidence revealing that a special case of the secant method predated Newton's method by more than 3000 years. We trace the evolution of secant methods from 18th-century B.C. Babylonian clay tablets and the Egyptian Rhind Papyrus. Modifications to Newton's method yielding secant methods are discussed and methods we believe influenced and led to the construction of the BFGS secant method are explored.
In the second part of our research, we examine the construction of several rank-two secant update classes that have not received much recognition in the literature. Our study of the underlying mathematical principles and characterizations inherent in these update classes led to theorems, and their proofs, concerning secant updates. One class of symmetric rank-two updates that we investigate is the Dennis class. We demonstrate how it can be derived from the general rank-one update formula in a purely algebraic manner, without utilizing Powell's method of iterated projections as Dennis did. The literature abounds with update classes; we show how some are related and show containment when possible. We derive a general formula that can represent all symmetric rank-two secant updates; from this, particular parameter choices yielding well-known updates and update classes are presented. We include two derivations of the Davidon class and prove that it is a maximal class. We detail known characterization properties of the BFGS secant method and describe new characterizations of several secant update classes known to contain the BFGS update. Included is a formal proof of the conjecture made by Schnabel in his 1977 Ph.D. thesis that the BFGS update is, in some asymptotic sense, the average of the DFP update and the Greenstadt update.
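All of the rank-two secant update classes discussed above share the secant equation B_new @ s = y. A sketch of the standard BFGS update, shown here only to illustrate that defining property:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS secant update of a Hessian approximation B,
    given step s and gradient difference y (requires y @ s > 0).
    The result is symmetric and satisfies B_new @ s = y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```

The DFP and Greenstadt updates satisfy the same secant equation; the characterizations surveyed above distinguish the members of these classes by additional variational and conditioning properties rather than by the secant equation itself.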