
    Exploiting damped techniques for nonlinear conjugate gradient methods

    In this paper we propose the use of damped techniques within Nonlinear Conjugate Gradient (NCG) methods. Damped techniques were introduced by Powell and recently reproposed by Al-Baali, and until now they have been applied only in the framework of quasi-Newton methods. We extend their use to NCG methods in large-scale unconstrained optimization, aiming at possibly improving the efficiency and the robustness of the latter methods, especially when solving difficult problems. We consider both unpreconditioned and Preconditioned NCG (PNCG). In the latter case, we embed damped techniques within a class of preconditioners based on quasi-Newton updates. Our purpose is to provide efficient preconditioners which approximate, in some sense, the inverse of the Hessian matrix, while still preserving the information provided by the secant equation or some of its modifications. The results of an extensive numerical experience highlight that the proposed approach is quite promising.
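
    For reference, the damping in question replaces the gradient difference y with a convex combination of y and Bs, so that the curvature condition is safeguarded. A minimal Python sketch of this classical Powell formula, assuming a threshold sigma (the name damped_y and the default sigma=0.2 are illustrative choices, not taken from the paper):

        import numpy as np

        def damped_y(s, y, B, sigma=0.2):
            """Powell-style damping of the gradient difference y (generic sketch).

            Given the step s, the gradient difference y and a positive definite
            Hessian approximation B, return y_hat = phi*y + (1-phi)*B@s with phi
            chosen so that s^T y_hat >= sigma * s^T B s > 0, which keeps a
            BFGS-type update well defined even when s^T y > 0 fails.
            """
            Bs = B @ s
            sBs = s @ Bs
            sy = s @ y
            phi = 1.0 if sy >= sigma * sBs else (1.0 - sigma) * sBs / (sBs - sy)
            return phi * y + (1.0 - phi) * Bs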

    Quasi-Newton-Based Preconditioning and Damped Quasi-Newton Schemes for Nonlinear Conjugate Gradient Methods

    In this paper, we deal with matrix-free preconditioners for Nonlinear Conjugate Gradient (NCG) methods. In particular, we review proposals based on quasi-Newton updates which satisfy either the secant equation or a secant-like equation at some of the previous iterates. Conditions are given proving that, in some sense, the proposed preconditioners also approximate the inverse of the Hessian matrix. In particular, the structure of the preconditioners depends both on low-rank updates and on some specific parameters. The low-rank updates are obtained as a by-product of NCG iterations. Moreover, we consider the possibility of embedding damped techniques within a class of preconditioners based on quasi-Newton updates. Damped methods have proved to be effective in enhancing the performance of quasi-Newton updates in those cases where the Wolfe linesearch conditions are hardly fulfilled. The purpose is to extend the idea behind damped methods to improve NCG schemes as well, following a novel line of research in the literature. The results, which summarize an extended numerical experience using large-scale CUTEst problems, are reported, showing that these approaches can considerably improve the performance of NCG methods.
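
    A quasi-Newton-based, matrix-free preconditioner of this general kind can be sketched with the standard L-BFGS two-loop recursion applied to the gradient. The PR+-type beta below is one illustrative choice and not necessarily the update rule used in the paper:

        import numpy as np

        def apply_lbfgs_precond(g, pairs, gamma=1.0):
            """Apply an L-BFGS-style inverse Hessian approximation to g with the
            standard two-loop recursion.  `pairs` is a list of recent (s, y)
            couples, oldest first; gamma scales the initial matrix H0 = gamma*I."""
            q = g.copy()
            stack = []
            for s, y in reversed(pairs):            # newest pair first
                rho = 1.0 / (y @ s)
                a = rho * (s @ q)
                q -= a * y
                stack.append((rho, a, s, y))
            q *= gamma
            for rho, a, s, y in reversed(stack):    # oldest pair first
                b = rho * (y @ q)
                q += (a - b) * s
            return q                                # roughly H^{-1} g

        def pncg_direction(g_new, g_old, d_old, pairs):
            """One preconditioned NCG direction, with a PR+-type beta computed
            on the preconditioned gradients (an illustrative choice)."""
            Mg_new = apply_lbfgs_precond(g_new, pairs)
            Mg_old = apply_lbfgs_precond(g_old, pairs)
            beta = max(0.0, (Mg_new @ (g_new - g_old)) / (Mg_old @ g_old))
            return -Mg_new + beta * d_old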

    A Note on Using Partitioning Techniques for Solving Unconstrained Optimization Problems on Parallel Systems

    We deal with the design of parallel algorithms using variable partitioning techniques to solve nonlinear optimization problems. We propose an iterative solution method that is very efficient for separable functions, our aim being to discuss its performance for general functions. Experimental results on an illustrative example have suggested some useful modifications that, even though they improve the efficiency of our parallel method, leave some questions open for further investigation.
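
    A rough sketch of the variable-partitioning idea, assuming a Jacobi-type scheme in which each block of variables is minimized with the others frozen (the block layout, the use of scipy.optimize.minimize and the pool size are illustrative and not the method of the paper):

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from scipy.optimize import minimize

        def _minimize_block(args):
            """Minimize the objective over one block of variables, the others frozen."""
            f, x, idx = args
            def f_block(z):
                xt = x.copy()
                xt[idx] = z
                return f(xt)
            return idx, minimize(f_block, x[idx], method="BFGS").x

        def partitioned_step(f, x, blocks, workers=4):
            """One Jacobi-type sweep: every block is minimized independently and in
            parallel; for a separable f one sweep solves the problem, for general f
            the sweep is repeated.  f must be picklable (e.g. a module-level
            function) so that it can be shipped to the worker processes."""
            x_new = x.copy()
            with ProcessPoolExecutor(max_workers=workers) as ex:
                for idx, z in ex.map(_minimize_block, [(f, x, idx) for idx in blocks]):
                    x_new[idx] = z
            return x_new

    Here `blocks` could be, for instance, np.array_split(np.arange(len(x)), workers).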

    Numerical Experience with Damped Quasi-Newton Optimization Methods when the Objective Function is Quadratic

    A class of damped quasi-Newton methods for nonlinear optimization has recently been proposed by extending the damped technique of Powell for the BFGS method to the Broyden family of quasi-Newton methods. It has been shown that this damped class has the global and superlinear convergence property that a restricted class of 'undamped' methods has for convex objective functions in unconstrained optimization. To test this result, we applied several members of the Broyden family and their corresponding damped methods to a simple quadratic function and observed several useful features of the damped technique. These observations and other numerical experiences are described in this paper. The important role of the damped technique is shown not only for enforcing the above convergence property, but also for substantially improving the performance of efficient, inefficient and divergent undamped methods (significantly in the latter case). Thus, some appropriate ways of employing the damped technique are suggested.
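
    An experiment of this kind can be mimicked with a textbook Broyden-family update and Powell damping on a quadratic. The following sketch assumes an exact line search, which is available in closed form for quadratics (parameter names and the choice sigma=0.2 are illustrative):

        import numpy as np

        def broyden_update(B, s, y, theta=0.0):
            """Textbook Broyden-family update of B (theta=0: BFGS, theta=1: DFP)."""
            Bs = B @ s
            sBs, sy = s @ Bs, s @ y
            w = y / sy - Bs / sBs
            return (B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / sy
                    + theta * sBs * np.outer(w, w))

        def damped_qn_on_quadratic(A, b, x0, theta=0.0, sigma=0.2, iters=50):
            """Damped Broyden-family iteration on f(x) = 0.5*x^T A x - b^T x with an
            exact line search (closed form for quadratics).  The damping only
            activates when the curvature condition s^T y >= sigma * s^T B s fails."""
            x, B = x0.astype(float).copy(), np.eye(len(x0))
            for _ in range(iters):
                g = A @ x - b
                if np.linalg.norm(g) < 1e-10:
                    break
                d = np.linalg.solve(B, -g)
                alpha = -(g @ d) / (d @ A @ d)      # exact minimizer along d
                s = alpha * d
                y = A @ s                           # gradient difference
                Bs = B @ s                          # Powell damping of y
                sBs, sy = s @ Bs, s @ y
                phi = 1.0 if sy >= sigma * sBs else (1.0 - sigma) * sBs / (sBs - sy)
                B = broyden_update(B, s, phi * y + (1.0 - phi) * Bs, theta)
                x = x + s
            return x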

    Damped Techniques for the Limited Memory BFGS Method for Large-Scale Optimization

    This paper aims to extend a certain damped technique, suitable for the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, to the limited memory BFGS method for large-scale unconstrained optimization. It is shown that the proposed technique maintains the global convergence property of the limited memory BFGS method on uniformly convex functions. Some numerical results are described to illustrate the important role of the damped technique. Since this technique safely enforces the positive definiteness of the BFGS update for any value of the steplength, we also consider using only the first Wolfe–Powell condition on the steplength. Then, as in the backtracking framework, only one gradient evaluation is performed at each iteration. It is reported that the proposed damped methods work much better than the limited memory BFGS method in several cases.
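
    In a limited memory setting the matrix B is never formed, but with the quasi-Newton direction d = -H g and step s = alpha*d one has B s = -alpha*g, so the damping quantities remain cheap to compute. A minimal sketch under this assumption (the function name and sigma=0.2 are illustrative):

        import numpy as np

        def damped_pair(s, y, g_old, alpha, sigma=0.2):
            """Damp an (s, y) pair before storing it in the L-BFGS memory.

            With d = -H g_old and s = alpha*d one has B s = -alpha*g_old, so the
            Powell damping quantities are available without ever forming B
            (a common simplification, used here only as a sketch).
            """
            Bs = -alpha * g_old
            sBs, sy = s @ Bs, s @ y
            phi = 1.0 if sy >= sigma * sBs else (1.0 - sigma) * sBs / (sBs - sy)
            return s, phi * y + (1.0 - phi) * Bs

    The damped pair is then appended to the usual limited memory (e.g. a collections.deque with maxlen equal to the memory size) and used in the two-loop recursion exactly as an undamped pair would be.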

    3rd International Conference on Numerical Analysis and Optimization : Theory, Methods, Applications and Technology Transfer

    Presenting the latest findings in the field of numerical analysis and optimization, this volume balances pure research with practical applications of the subject. Accompanied by detailed tables, figures, and examinations of useful software tools, this volume will equip the reader to perform detailed and layered analysis of complex datasets. Many real-world complex problems can be formulated as optimization tasks. Such problems can be characterized as large-scale, unconstrained, constrained, non-convex, non-differentiable, and discontinuous, and therefore require adequate computational methods, algorithms, and software tools. These same tools are often employed by researchers working in current IT hot topics such as big data, optimization and other complex numerical algorithms on the cloud, devising special techniques for supercomputing systems. The list of topics covered includes, but is not limited to: numerical analysis, numerical optimization, numerical linear algebra, numerical differential equations, optimal control, approximation theory, applied mathematics, algorithms and software developments, derivative-free optimization methods, and programming models. The volume also examines challenging applications of various computational optimization methods, which typically arise in statistics, econometrics, finance, physics, medicine, biology, engineering and industrial sciences.

    4th International Conference on Numerical Analysis and Optimization

    This volume contains 13 selected keynote papers presented at the Fourth International Conference on Numerical Analysis and Optimization. Held every three years at Sultan Qaboos University in Muscat, Oman, this conference highlights novel and advanced applications of recent research in numerical analysis and optimization. Each peer-reviewed chapter featured in this book reports on developments in key fields, such as numerical analysis, numerical optimization, numerical linear algebra, numerical differential equations, optimal control, approximation theory, applied mathematics, derivative-free optimization methods, programming models, and challenging applications that frequently arise in statistics, econometrics, finance, physics, medicine, biology, engineering and industry. Any graduate student or researcher wishing to know the latest research in the field will be interested in this volume. This book is dedicated to the late Professors Mike J. D. Powell and Roger Fletcher, who were pioneers and leading figures in the mathematics of nonlinear optimization.

    A Class of Approximate Inverse Preconditioners Based on Krylov-Subspace Methods for Large-Scale Nonconvex Optimization

    We introduce a class of positive definite preconditioners for the solution of large symmetric indefinite linear systems, or sequences of such systems, arising in optimization frameworks. The preconditioners are iteratively constructed by collecting information on a reduced eigenspace of the indefinite matrix by means of a Krylov-subspace solver. A spectral analysis of the preconditioned matrix shows the clustering of some eigenvalues and possibly the nonexpansion of its spectrum. Extensive numerical experimentation is carried out on standard difficult linear systems and by embedding the class of preconditioners within truncated Newton methods for large-scale unconstrained optimization (the issue of major interest). Although the Krylov-based method may provide modest information on matrix eigenspaces, the results obtained show that the proposed preconditioners lead to substantial improvements in terms of efficiency and robustness, particularly on very large nonconvex problems.
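
    One simple member of this family of preconditioners can be sketched as follows: a few Lanczos steps provide Ritz pairs of the indefinite matrix, and a positive definite approximate inverse is assembled from them. The construction below, P = I + V(|Theta|^{-1} - I)V^T, is illustrative and not necessarily the exact update used in the paper:

        import numpy as np

        def lanczos_ritz(matvec, n, m, seed=0):
            """m Lanczos steps (with full reorthogonalization, for clarity) on a
            symmetric operator given through `matvec`; returns Ritz values/vectors."""
            rng = np.random.default_rng(seed)
            Q = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m)
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            for j in range(m):
                Q[:, j] = q
                w = matvec(q)
                alpha[j] = q @ w
                w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)    # reorthogonalize
                beta[j] = np.linalg.norm(w)
                if beta[j] < 1e-12:                         # invariant subspace found
                    m = j + 1
                    break
                q = w / beta[j]
            T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
            theta, S = np.linalg.eigh(T)
            return theta, Q[:, :m] @ S

        def spectral_preconditioner(theta, V, eps=1e-8):
            """Positive definite approximate inverse P = I + V(|Theta|^{-1} - I)V^T.
            On the captured Ritz directions the preconditioned eigenvalues cluster
            near +-1; on the orthogonal complement P acts as the identity."""
            d = 1.0 / np.maximum(np.abs(theta), eps) - 1.0
            return lambda r: r + V @ (d * (V.T @ r))

    The resulting operator can then be handed as a preconditioner to a conjugate gradient or truncated Newton inner solver.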