Scaling rank-one updating formula and its application in unconstrained optimization
This thesis deals with algorithms used to solve unconstrained optimization
problems. We analyse the properties of a scaling symmetric rank-one (SSR1) update,
prove the convergence of the matrices generated by SSR1 to the true Hessian matrix,
and show that the SSR1 algorithm possesses the quadratic termination property with
inexact line search. A new algorithm (OCSSR1) is presented, in which the scaling
parameter in SSR1 is chosen automatically to satisfy Davidon's criterion for an
optimally conditioned Hessian estimate. Numerical tests show that the new method
compares favourably with BFGS. Using the OCSSR1 update, we propose a hybrid
quasi-Newton (QN) algorithm which does not need to store any matrix. Numerical
results show that it is a very promising method for solving large scale optimization
problems. In addition, some popular techniques in unconstrained optimization are also
discussed, for example the trust region step, the descent direction with supermemory,
and the detection of large residuals in nonlinear least squares problems.
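As a concrete illustration of the update being analysed, the following Python sketch applies a scaled symmetric rank-one update on a model quadratic. It is an assumed reconstruction, not the thesis's SSR1 code: the function name ssr1_update, the placement of the scaling parameter gamma, and the skip tolerance are all illustrative. With gamma = 1 it reduces to the plain SR1 update, and the final print illustrates the generated matrices approaching the true Hessian, the convergence property claimed above.

```python
import numpy as np

def ssr1_update(B, s, y, gamma=1.0):
    """One scaled SR1 update; the result satisfies the secant equation B_new @ s == y."""
    r = y - gamma * (B @ s)              # residual of the scaled secant equation
    denom = r @ s
    if abs(denom) <= 1e-12 * np.linalg.norm(r) * np.linalg.norm(s):
        return gamma * B                 # skip the update when the denominator is unsafe
    return gamma * B + np.outer(r, r) / denom

rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])          # true Hessian of a model quadratic
B = np.eye(3)                            # initial Hessian estimate
for _ in range(20):
    s = rng.standard_normal(3)           # step
    y = A @ s                            # gradient difference on a quadratic: y = A s
    B = ssr1_update(B, s, y)             # gamma = 1 recovers the plain SR1 update
print(np.linalg.norm(B - A))             # ~0: the generated matrices approach A
```

On a quadratic, SR1 reproduces the exact Hessian after updates along a spanning set of directions, which is why the printed error is near machine precision.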
The thesis consists of two parts. The first part gives a brief survey of
unconstrained optimization. It contains four chapters, and introduces basic results on
unconstrained optimization, some popular methods and their properties based on
quadratic approximations to the objective function, some methods which are suitable
for solving large scale optimization problems and some methods for solving nonlinear
least squares problems. The second part presents the new research results and contains
five chapters. In Chapter 5, the scaling rank-one updating formula is analysed and
studied. Chapters 6, 7 and 8 discuss its applications to the trust region method, large
scale optimization problems and nonlinear least squares. A final chapter summarizes the
problems used in numerical testing.
Optimal Conditioning in the Convex Class of Rank Two Updates
Davidon's new quasi-Newton optimization algorithm selects the new inverse Hessian approximation H̄ at each step to be the "optimally conditioned" member of a certain one-parameter class of rank two updates to the last inverse Hessian approximation H. In this paper, we show that virtually the same goals of conditioning can be achieved while restricting H̄ to the convex class of updates. We therefore suggest that Davidon's algorithm, while using optimal conditioning, restrict the choice of H̄ to members of the convex class.
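For context, the one-parameter class referred to is the Broyden family of rank two updates; the Python sketch below is an assumed illustration, not the paper's algorithm. Here phi = 0 gives the DFP update, phi = 1 gives BFGS, and the convex class is phi in [0, 1]; the condition-number printout only gestures at the conditioning criterion that Davidon's method optimizes over the wider class.

```python
import numpy as np

def broyden_family_update(H, s, y, phi):
    """Rank two update of the inverse Hessian H; every member satisfies H_new @ y == s."""
    Hy = H @ y
    sy = s @ y                           # assumed > 0 (curvature condition)
    yHy = y @ Hy
    v = s / sy - Hy / yHy                # v is orthogonal to y, so the secant equation holds
    H_dfp = H - np.outer(Hy, Hy) / yHy + np.outer(s, s) / sy
    return H_dfp + phi * yHy * np.outer(v, v)

A = np.array([[5.0, 1.0],
              [1.0, 2.0]])               # Hessian of a model quadratic
H = np.eye(2)                            # current inverse Hessian approximation
s = np.array([1.0, -0.5])                # step
y = A @ s                                # gradient change along the step
for phi in (0.0, 0.5, 1.0):              # scan the convex class: DFP (0) to BFGS (1)
    H_new = broyden_family_update(H, s, y, phi)
    print(phi, np.linalg.cond(H_new))    # conditioning varies across the class
```

Because v is orthogonal to y, every phi yields the same action on y, so the whole family satisfies the secant equation while differing in conditioning, which is what makes the restriction to phi in [0, 1] a meaningful design choice.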