9 research outputs found

    Solving Unconstrained Optimization Problems by a New Conjugate Gradient Method with Sufficient Descent Property

    Get PDF
    There have been some conjugate gradient methods with strong convergence but numerical instability, and conversely. Improving these methods is an interesting idea for producing new methods with both strong convergence and numerical stability. In this paper, a new hybrid conjugate gradient method is introduced, based on the Fletcher formula (CD), which has strong convergence, and the Liu and Storey formula (LS), which has good numerical results. The new directions satisfy the sufficient descent property, independently of the line search. Under some mild assumptions, the global convergence of the new hybrid method is proved. Numerical results on unconstrained CUTEst test problems show that the new algorithm is very robust and efficient.
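    The abstract does not give the paper's exact hybridization rule, but the CD and LS formulas it names share a denominator, which suggests a simple illustration: clamp the LS coefficient into the interval [0, β_CD]. The sketch below is a hypothetical nonlinear CG loop under that assumed rule (with an Armijo backtracking line search and a descent safeguard), not the authors' algorithm.

    ```python
    import numpy as np

    def hybrid_cg(f, grad, x0, tol=1e-6, max_iter=500):
        """Nonlinear CG with a hybrid beta clamped between 0 and beta_CD.

        Illustrative only: the clamping rule is an assumption, not the
        hybridization proposed in the paper."""
        x = x0.astype(float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # backtracking Armijo line search
            t, c = 1.0, 1e-4
            while f(x + t * d) > f(x) + c * t * (g @ d):
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            denom = -(d @ g)                        # shared CD/LS denominator
            beta_cd = (g_new @ g_new) / denom       # Fletcher (CD)
            beta_ls = (g_new @ (g_new - g)) / denom # Liu-Storey (LS)
            beta = min(max(beta_ls, 0.0), beta_cd)  # clamp LS into [0, CD]
            d = -g_new + beta * d
            if g_new @ d >= 0:                      # safeguard: restart if not descent
                d = -g_new
            x, g = x_new, g_new
        return x

    # usage: minimize a small convex quadratic with known minimizer [0.6, -0.8]
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    x_star = hybrid_cg(f, grad, np.zeros(2))
    ```

    The clamp keeps β nonnegative and no larger than the CD value, which is one common way hybrid schemes preserve the sufficient descent property mentioned in the abstract.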

    On limited-memory quasi-Newton methods for minimizing a quadratic function

    Full text link
    The main focus of this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give two classes of limited-memory quasi-Newton Hessian approximations that generate search directions parallel to those of the method of preconditioned conjugate gradients, and hence give finite termination on quadratic optimization problems. The Hessian approximations are described by a novel compact representation which provides a dynamical framework. We also discuss possible extensions of these classes and show their behavior on randomly generated quadratic optimization problems. The methods behave numerically similarly to L-BFGS. Inclusion of information from the first iteration in the limited-memory Hessian approximation and in L-BFGS significantly reduces the effects of round-off errors on the considered problems. In addition, we give our compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices and gradients only as vector components.
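    The finite-termination claim can be illustrated with a generic limited-memory BFGS two-loop recursion under exact line search: with exact linesearch on a quadratic, even the memoryless (m = 1) variant generates directions parallel to conjugate-gradient directions, so the gradient vanishes after at most n steps. This sketch is the standard two-loop recursion, not the paper's compact representation.

    ```python
    import numpy as np

    def lbfgs_quadratic(A, b, x0, m=1, n_steps=None):
        """L-BFGS with exact line search on f(x) = 0.5 x'Ax - b'x.

        Generic two-loop sketch (not the paper's compact representation):
        with exact linesearch the directions parallel those of CG, so the
        gradient vanishes after at most n steps in exact arithmetic."""
        n = len(b)
        n_steps = n if n_steps is None else n_steps
        x = x0.astype(float)
        g = A @ x - b
        S, Y = [], []                          # stored curvature pairs (s_k, y_k)
        for _ in range(n_steps):
            # two-loop recursion: r = H g, with H the limited-memory inverse Hessian
            q = g.copy()
            alphas = []
            for s, y in reversed(list(zip(S, Y))):
                a = (s @ q) / (y @ s)
                alphas.append(a)
                q -= a * y
            if S:
                s, y = S[-1], Y[-1]
                q *= (s @ y) / (y @ y)         # standard initial scaling
            r = q
            for (s, y), a in zip(zip(S, Y), reversed(alphas)):
                beta = (y @ r) / (y @ s)
                r += (a - beta) * s
            d = -r
            t = -(g @ d) / (d @ A @ d)         # exact line search for a quadratic
            s_k = t * d
            x = x + s_k
            g_new = A @ x - b
            S.append(s_k); Y.append(g_new - g)
            if len(S) > m:
                S.pop(0); Y.pop(0)
            g = g_new
        return x, g

    # usage: n steps on a small SPD quadratic should drive the gradient to zero
    rng = np.random.default_rng(0)
    n = 5
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)
    b = rng.standard_normal(n)
    x, g = lbfgs_quadratic(A, b, np.zeros(n))
    ```

    In floating point the gradient after n steps is tiny rather than exactly zero, which is why the paper's discussion of round-off effects matters.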

    A Globally Convergent Algorithm for the Run-to-Run Control of Systems with Sector Nonlinearities

    Get PDF
    Run-to-run control is a technique that exploits the repetitive nature of processes to iteratively adjust the inputs and drive the run-end outputs to their reference values. It can be used to control both static and finite-time dynamic systems. Although the run-end outputs of dynamic systems result from the integration of process dynamics during the run, the relationship between the input parameters p (fixed at the beginning of the run) and the run-end outputs z (available at the end of the run) can be seen as the static map z(p). Run-to-run control consists of computing the input parameters p∗ that lead to the reference values z_ref. Although a wide range of techniques have been reported, most of them do not guarantee global convergence, that is, convergence towards p∗ for all possible initial conditions. This paper presents a new algorithm that guarantees global convergence for the run-to-run control of both static and finite-time dynamic systems. Attention is restricted to sector nonlinearities, for which it is shown that a fixed-gain update can lead to global convergence. Furthermore, since convergence can be very slow, it is proposed to take advantage of the mathematical similarity between run-to-run control and the solution of nonlinear equations, and to combine the fixed-gain algorithm with a faster variable-gain Newton-type algorithm. Global convergence of this hybrid scheme is proven. The potential of this algorithm in the context of run-to-run optimization of dynamic systems is illustrated via the simulation of an industrial batch polymerization reactor.
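    The fixed-gain/Newton combination described above can be sketched for a scalar map. Everything here is an illustrative assumption rather than the paper's algorithm: the map z, the gain K, and the safeguard rule are made up. For a sector-bounded map with dz/dp in [a, b], 0 < a ≤ b, any gain 0 < K < 2/b makes the fixed-gain iteration p ← p + K (z_ref − z(p)) a global contraction; a secant step stands in for the Newton-type acceleration.

    ```python
    import math

    def run_to_run(z, z_ref, p0, K=1.0, tol=1e-10, max_runs=100):
        """Fixed-gain run-to-run update with a safeguarded secant acceleration.

        Hypothetical sketch: z, K, and the switching rule are assumptions,
        not the scheme proven globally convergent in the paper."""
        p = float(p0)
        p_prev = e_prev = None
        for run in range(max_runs):
            e = z_ref - z(p)                 # run-end error, known only after the run
            if abs(e) <= tol:
                return p, run
            step = K * e                     # fixed gain: slow but globally safe
            if p_prev is not None and p != p_prev:
                slope = (e - e_prev) / (p - p_prev)   # secant estimate of de/dp
                if slope != 0:
                    newton = -e / slope      # Newton-type (secant) step
                    if abs(newton) <= abs(step):      # safeguard: never exceed the safe step
                        step = newton
            p_prev, e_prev = p, e
            p += step
        return p, max_runs

    # usage: z has sector-bounded slope dz/dp in [0.5, 1.5]; drive z(p) to 2.0
    z = lambda p: p + 0.5 * math.sin(p)
    p_star, runs = run_to_run(z, 2.0, 0.0)
    ```

    The safeguard (only accept the secant step when it is no larger than the fixed-gain step) is one simple way to keep the accelerated scheme inside the globally convergent regime; the paper proves global convergence for its own hybrid rule.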

    Reduced-Hessian Quasi-Newton Methods for Unconstrained Optimization

    No full text

    A Limited-memory Multipoint Symmetric Secant Method For Bound Constrained Optimization

    No full text
    A new algorithm for solving smooth large-scale minimization problems with bound constraints is introduced. The way of dealing with active constraints is similar to the one used in some recently introduced quadratic solvers. A limited-memory multipoint symmetric secant method for approximating the Hessian is presented. Positive-definiteness of the Hessian approximation is not enforced. A combination of trust-region and conjugate-gradient approaches is used to exploit useful negative-curvature information. Global convergence is proved for a general model algorithm. Results of numerical experiments are presented.