Convergence properties of the Barzilai and Borwein gradient method
In a recent paper, Barzilai and Borwein presented a new choice of steplength for the gradient method. Their choice does not guarantee descent in the objective function, yet it greatly speeds up the convergence of the method. We derive a relationship between any gradient method and the shifted power method. This relationship allows us to establish the convergence of the Barzilai and Borwein method when applied to the problem of minimizing any strictly convex quadratic function (Barzilai and Borwein considered only 2-dimensional problems). Our point of view also allows us to explain the remarkable improvement obtained by using this new choice of steplength.
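To make the steplength concrete, here is a minimal sketch of the Barzilai and Borwein iteration for a strictly convex quadratic f(x) = (1/2)x'Ax - b'x; the stopping tolerance, iteration cap, and first steplength are arbitrary choices for illustration, not taken from the paper.

    import numpy as np

    def bb_gradient(A, b, x0, tol=1e-8, max_iter=500):
        """Barzilai-Borwein gradient method for f(x) = 0.5 x'Ax - b'x, A SPD.

        The steplength alpha = s's / s'y, with s = x_k - x_{k-1} and
        y = g_k - g_{k-1}, is nonmonotone: f may increase at some iterations.
        """
        x = np.asarray(x0, dtype=float)
        g = A @ x - b
        if np.linalg.norm(g) <= tol:
            return x
        alpha = 1.0 / np.linalg.norm(g)    # arbitrary first steplength
        for _ in range(max_iter):
            x_new = x - alpha * g
            g_new = A @ x_new - b
            s = x_new - x
            y = g_new - g                  # for quadratics, y = A s
            x, g = x_new, g_new
            if np.linalg.norm(g) <= tol:
                break
            alpha = (s @ s) / (s @ y)      # BB steplength; s'y > 0 since A is SPD
        return x

    # Example usage on a random symmetric positive definite system:
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((50, 50))
    A = Q @ Q.T + 50 * np.eye(50)          # symmetric positive definite
    b = rng.standard_normal(50)
    x = bb_gradient(A, b, np.zeros(50))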
For the two-eigenvalue case we present convergence rate results: we show that our Q-rate and R-rate of convergence analysis is sharp, and we compare it with the analysis of Barzilai and Borwein.
We derive the preconditioned Barzilai and Borwein method and present preliminary numerical results indicating that it is an effective method, compared with the preconditioned conjugate gradient method, for the numerical solution of some special symmetric positive definite linear systems that arise in the numerical solution of partial differential equations.
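One plausible preconditioned variant, sketched below under stated assumptions, replaces the gradient by the preconditioned residual z = M^{-1}g and uses the exact line-search steplength delayed by one iteration (with M = I this reduces to the BB steplength on quadratics). The callable M_solve and the Jacobi choice in the example are illustrative assumptions, not necessarily the preconditioner used in the paper.

    import numpy as np

    def pbb_gradient(A, b, M_solve, x0, tol=1e-8, max_iter=500):
        """A sketch of a preconditioned BB-type iteration for A x = b, A SPD.

        M_solve(r) should return M^{-1} r for an SPD preconditioner M.
        Each steplength is the exact line-search (Cauchy) step of the
        previous iterate, reproducing the one-step retard behind BB steps.
        """
        x = np.asarray(x0, dtype=float)
        g = A @ x - b
        if np.linalg.norm(g) <= tol:
            return x
        z = M_solve(g)                          # preconditioned gradient
        alpha = (g @ z) / (z @ (A @ z))         # first step: exact line search
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol:
                break
            cauchy_here = (g @ z) / (z @ (A @ z))   # saved for the next step
            x = x - alpha * z
            g = A @ x - b
            z = M_solve(g)
            alpha = cauchy_here                 # delayed (retarded) steplength
        return x

    # Example with a simple Jacobi (diagonal) preconditioner:
    # x = pbb_gradient(A, b, lambda r: r / np.diag(A), np.zeros(len(b)))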
Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems
We address the issue of approximating the pseudoinverse of the coefficient matrix for dynamically building preconditioning strategies for the numerical solution of large dense linear least-squares problems. The new preconditioning strategies are embedded into simple and well-known iterative schemes that avoid the use of the usually ill-conditioned normal equations. We analyze a scheme to approximate the pseudoinverse, based on the Schulz iterative method, as well as different iterative schemes, based on extensions of Richardson's method and on the conjugate gradient method, that are suitable for preconditioning strategies. We present preliminary numerical results to illustrate the advantages of the proposed schemes.
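As a rough illustration of the pseudoinverse-approximation idea, the sketch below runs the classical Schulz (Newton-Schulz) iteration X_{k+1} = X_k(2I - A X_k); the starting guess X_0 = A' / (||A||_1 ||A||_inf) is a standard choice that ensures convergence to the Moore-Penrose pseudoinverse for full-rank A. The stopping rule and parameters are arbitrary, and how such an approximation is embedded in the paper's preconditioned schemes is not reproduced here.

    import numpy as np

    def schulz_pinv(A, tol=1e-10, max_iter=100):
        """Approximate the Moore-Penrose pseudoinverse by Schulz iteration.

        X_{k+1} = X_k (2I - A X_k) converges quadratically to pinv(A) when
        X_0 = alpha * A.T with 0 < alpha < 2 / sigma_max(A)^2; the cheap
        bound sigma_max(A)^2 <= ||A||_1 * ||A||_inf gives a safe alpha.
        """
        m = A.shape[0]
        alpha = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
        X = alpha * A.T                         # n-by-m initial approximation
        I = np.eye(m)
        for _ in range(max_iter):
            X_new = X @ (2.0 * I - A @ X)
            if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
                return X_new
            X = X_new
        return X

    # An approximate solution of min ||Ax - b|| is then X @ b, and X can
    # serve as a preconditioner inside an iterative least-squares scheme.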
An Adaptive Algorithm for Bound Constrained Quadratic Minimization
A general algorithm for minimizing a quadratic function with bounds on the variables is presented. The new algorithm can use different unconstrained minimization techniques on different faces. At every face, the minimization technique can be chosen according to the structure of the Hessian and the dimension of the face. The strategy for leaving the face is based on a simple scheme that exploits the properties of the "chopped gradient" introduced by Friedlander and Martínez in 1989. This strategy guarantees global convergence even in the presence of dual degeneracy, and finite identification in the nondegenerate case. A slight modification of the algorithm satisfies, in addition, an identification property in the case of dual degeneracy. Numerical experiments combining this new strategy with conjugate gradients, gradient with retards, and direct solvers are presented. Key words: quadratic programming, conjugate gradients, gradient with retards, active-set methods, sparse Cholesky factorization.
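The abstract does not define the chopped gradient or the leave-the-face test precisely, so the sketch below is only one plausible reading: the gradient is split into an "internal" part over the free variables and a "chopped" part over the bound-active components whose negative gradient points back into the box, and a face is abandoned when the chopped part dominates. The component rule, the tolerance eps, and the switching ratio eta are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def split_gradient(x, g, lower, upper, eps=0.0):
        """Split g into internal and chopped parts for min f(x), l <= x <= u.

        Internal part: components of the free (strictly interior) variables.
        Chopped part: components at active bounds where moving off the bound
        is both feasible and a descent direction.
        """
        at_lower = x <= lower + eps
        at_upper = x >= upper - eps
        free = ~(at_lower | at_upper)
        g_int = np.where(free, g, 0.0)
        # At a lower bound we may only increase x_i, which decreases f iff
        # g_i < 0; symmetrically, at an upper bound we need g_i > 0.
        g_chop = np.where((at_lower & (g < 0)) | (at_upper & (g > 0)), g, 0.0)
        return g_int, g_chop

    # One possible face-leaving test, with eta an assumed switching ratio:
    # leave_face = np.linalg.norm(g_chop) > eta * np.linalg.norm(g_int)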