
    Nonconvex optimization using negative curvature within a modified linesearch

    This paper describes a new algorithm for the solution of nonconvex unconstrained optimization problems, with the property of converging to points satisfying second-order necessary optimality conditions. The algorithm is based on a procedure that, given two descent directions, a Newton-type direction and a direction of negative curvature, selects at each iteration the linesearch model best adapted to the properties of these directions. The paper also presents results of numerical experiments that illustrate its practical efficiency.
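    The paper's rule for selecting the linesearch model is specific to the article and is not reproduced here; as a rough illustration of the two ingredients it combines, the following Python/NumPy sketch builds a modified Newton direction and a direction of negative curvature from a dense Hessian and takes a curvilinear Armijo step along them. Function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def newton_and_negative_curvature(grad, hess, eps=1e-8):
    """Build a modified Newton direction and a direction of negative
    curvature from a dense Hessian via its eigendecomposition."""
    lam, V = np.linalg.eigh(hess)          # eigenvalues in ascending order
    # Modified Newton direction: replace small or negative eigenvalues so
    # that the modified Hessian is positive definite.
    lam_mod = np.maximum(np.abs(lam), eps)
    d_newton = -V @ ((V.T @ grad) / lam_mod)
    # Negative-curvature direction: eigenvector of the most negative
    # eigenvalue, oriented so it is non-ascending (g'd <= 0).
    d_neg = np.zeros_like(grad)
    if lam[0] < 0:
        d_neg = V[:, 0].copy()
        if d_neg @ grad > 0:
            d_neg = -d_neg
    return d_newton, d_neg

def curvilinear_armijo(f, grad_f, x, d_newton, d_neg, beta=1e-4, tau=0.5):
    """Backtracking Armijo search along the curvilinear path
    x(alpha) = x + alpha**2 * d_newton + alpha * d_neg."""
    g = grad_f(x)
    fx, alpha = f(x), 1.0
    while alpha > 1e-12:
        x_new = x + alpha**2 * d_newton + alpha * d_neg
        # Sufficient decrease measured against both directional derivatives.
        if f(x_new) <= fx + beta * (alpha**2 * (g @ d_newton) + alpha * (g @ d_neg)):
            return x_new, alpha
        alpha *= tau
    return x, 0.0
```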

    Issues on the use of a modified Bunch and Kaufman decomposition for large scale Newton’s equation

    In this work, we deal with Truncated Newton methods for solving large scale (possibly nonconvex) unconstrained optimization problems. In particular, we consider the use of a modified Bunch and Kaufman factorization for solving the Newton equation at each (outer) iteration of the method. The Bunch and Kaufman factorization of a tridiagonal matrix is an effective and stable matrix decomposition, well exploited in the widely adopted SYMMBK [2, 5, 6, 19, 20] routine. It can be used to provide conjugate directions, both in the case of 1 × 1 and 2 × 2 pivoting steps. The main drawback is that the resulting solution of Newton’s equation might not be gradient–related when the objective function is nonconvex. Here we first focus on some theoretical properties which ensure that, at each iteration of the Truncated Newton method, the search direction obtained by using an adapted Bunch and Kaufman factorization is gradient–related. This allows us to perform a standard Armijo-type linesearch procedure using a bounded descent direction. Furthermore, the results of extensive numerical experiments on large scale CUTEst problems are reported, showing the reliability and the efficiency of the proposed approach on both convex and nonconvex problems.
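    The adapted Bunch and Kaufman factorization is the technical core of the paper and is not reproduced here; the sketch below only illustrates the surrounding pattern described in the abstract, with a plain truncated conjugate-gradient loop standing in for the SYMMBK-type inner solve, a gradient-related safeguard on the resulting direction, and a standard Armijo-type linesearch. Names and tolerances are assumptions for illustration.

```python
import numpy as np

def truncated_newton_step(f, grad_f, hessvec, x, cg_tol=0.5, cg_maxiter=50,
                          c1=1e-4, eta=1e-6):
    """One outer iteration of a Truncated Newton method.

    The inner solver is a plain CG loop truncated at non-positive curvature
    (a stand-in for the adapted Bunch and Kaufman / SYMMBK solve of the
    paper); the safeguard and linesearch follow the standard pattern."""
    g = grad_f(x)
    d = -g.copy()                          # fallback: steepest descent
    # --- inner CG on H p = -g, truncated at non-positive curvature ---
    p, r = np.zeros_like(g), -g.copy()
    q = r.copy()
    for _ in range(cg_maxiter):
        Hq = hessvec(x, q)
        if q @ Hq <= 0:                    # non-positive curvature: stop
            break
        alpha = (r @ r) / (q @ Hq)
        p = p + alpha * q
        r_new = r - alpha * Hq
        if np.linalg.norm(r_new) <= cg_tol * np.linalg.norm(g):
            break                          # loose inner tolerance reached
        q = r_new + ((r_new @ r_new) / (r @ r)) * q
        r = r_new
    # --- gradient-related safeguard on the inner solution ---
    if np.any(p) and p @ g < -eta * np.linalg.norm(p) * np.linalg.norm(g):
        d = p
    # --- standard Armijo-type backtracking on the descent direction ---
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + c1 * t * (g @ d) and t > 1e-12:
        t *= 0.5
    return x + t * d
```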

    Combining and scaling descent and negative curvature directions

    The original publication is available at www.springerlink.com. The aim of this paper is to study different approaches to combine and scale, in an efficient manner, descent information for the solution of unconstrained optimization problems. We consider the situation in which different directions are available at a given iteration, and we analyze how to combine these directions in order to obtain a method that is more efficient and robust than the standard Newton approach. In particular, we focus on the scaling process that should be carried out before combining the directions. We derive some theoretical results on the conditions needed to ensure the convergence of combination procedures that follow schemes similar to our proposals. Finally, we conduct some computational experiments to compare these proposals with a modified Newton's method and with other procedures in the literature for the combination of information. Catarina P. Avelino was partially supported by Portuguese FCT postdoctoral grant SFRH/BPD/20453/2004 and by the Research Unit CM-UTAD of the University of Trás-os-Montes e Alto Douro. Javier M. Moguerza and Alberto Olivares were partially supported by Spanish grant MEC MTM2006-14961-C05-05. Francisco J. Prieto was partially supported by grant MTM2007-63140 of the Spanish Ministry of Education.
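    The paper studies and compares several scaling and combination schemes; the snippet below is only one naive heuristic, written in Python/NumPy, that rescales the negative-curvature direction so that its quadratic-model decrease is comparable to that of the Newton-type direction before adding the two. It is a sketch under those assumptions, not the paper's proposal.

```python
import numpy as np

def scale_and_combine(g, H, d_newton, d_neg):
    """Combine a Newton-type direction and a negative-curvature direction
    after a simple scaling step (one possible heuristic among many)."""
    model = lambda d: g @ d + 0.5 * d @ (H @ d)   # local quadratic model
    m_newton, m_neg = model(d_newton), model(d_neg)
    if m_neg < 0:
        # Shrink (never enlarge) the curvature direction towards a
        # comparable predicted decrease.
        d_neg = d_neg * min(1.0, abs(m_newton) / abs(m_neg))
    else:
        d_neg = np.zeros_like(d_neg)
    d = d_newton + d_neg
    # Fall back to the Newton-type direction alone if the combination is
    # not a descent direction.
    return d if g @ d < 0 else d_newton
```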

    An augmented Lagrangian interior-point method using directions of negative curvature

    The original publication is available at www.springerlink.com. We describe an efficient implementation of an interior-point algorithm for non-convex problems that uses directions of negative curvature. These directions should ensure convergence to second-order KKT points and improve the computational efficiency of the procedure. Some relevant aspects of the implementation are the strategy to combine a direction of negative curvature and a modified Newton direction, and the conditions to ensure feasibility of the iterates with respect to the simple bounds. The use of multivariate barrier and penalty parameters is also discussed, as well as the update rules for these parameters. We analyze the convergence of the procedure; both the linesearch and the update rule for the barrier parameter behave appropriately. As the main goal of the paper is the practical usage of negative curvature, a set of numerical results on small test problems is presented. Based on these results, the relevance of using directions of negative curvature is discussed. Research supported by Spanish MEC grants TIC2000-1750-C06-04 and BEC2000-0167.
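    The combination strategy and the multivariate barrier updates are specific to the paper; the fragment below merely sketches, in Python/NumPy, the classical fraction-to-the-boundary rule that such an implementation could use to keep iterates strictly feasible with respect to the simple bounds. The combined search direction d is assumed to be given.

```python
import numpy as np

def fraction_to_boundary_step(x, d, lower, upper, tau=0.995):
    """Largest step length in (0, 1] along d that keeps x + alpha * d
    strictly inside its simple bounds (classical interior-point rule)."""
    alpha = 1.0
    # Components moving towards the lower bound.
    neg = d < 0
    if np.any(neg):
        alpha = min(alpha, tau * np.min((lower[neg] - x[neg]) / d[neg]))
    # Components moving towards the upper bound.
    pos = d > 0
    if np.any(pos):
        alpha = min(alpha, tau * np.min((upper[pos] - x[pos]) / d[pos]))
    return alpha
```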

    La méthode des résidus conjugués pour calculer les directions en optimisation continue

    The conjugate gradient method (CG) is a proven method for computing directions in unconstrained nonlinear optimization. This method, described by Hestenes and Stiefel, solves symmetric positive-definite linear systems. If the operator is not positive definite, the extension proposed by Dembo and Steihaug for linesearch, and that of Steihaug for trust regions, make CG suitable again. The conjugate residual method (CR) was also proposed by Hestenes and Stiefel for positive-definite operators. Like CG, CR makes the quadratic model decrease monotonically, which makes it relevant in a trust-region context; CR also makes the residual decrease monotonically, a particularly interesting property for inexact Newton methods, which are often used in a linesearch context. In linesearch and trust-region contexts, we propose modifications of CR for the case where the operator is not positive definite, and compare their performance with that of the corresponding extensions of CG. Our tests show that CR is, for the most part, equivalent to CG. It performs better than CG on nonconvex problems, and slightly better on convex problems in a linesearch context; the advantage is often in terms of operator-vector products. Finally, we consider the CRLS variant of CR for linear least-squares problems and investigate the case of zero curvature. We perform experiments with LSMR and LSQR, which are the versions of CRLS and CGLS built from the Lanczos process, to solve the normal equation. These experiments show that LSMR performs as well as LSQR and enables slight savings in terms of residual evaluations.
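    As a point of reference for the comparison above, the following Python/NumPy sketch shows a baseline conjugate residual iteration with an early exit when non-positive curvature is detected along the search direction; the nonconvex variants actually proposed in the thesis handle that case differently.

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-8, maxiter=100):
    """Conjugate Residual (CR) iteration for a symmetric matrix A, truncated
    when zero or negative curvature is detected along a direction."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol:
            break
        if p @ Ap <= 0:            # zero or negative curvature along p:
            break                  # stop (the thesis's variants do more here)
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        Ar_new = A @ r
        rAr_new = r @ Ar_new
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar_new + beta * Ap
        Ar, rAr = Ar_new, rAr_new
    return x
```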

    Bridging the gap between Trust–Region Methods (TRMs) and Linesearch Based Methods (LBMs) for Nonlinear Programming: quadratic sub–problems

    We consider the solution of a recurrent sub–problem arising in both constrained and unconstrained Nonlinear Programming: the minimization of a quadratic function subject to linear constraints. This problem appears in a number of LBM frameworks and, to some extent, reveals a close analogy with the solution of trust–region sub–problems. In particular, we refer to a structured quadratic problem in which five linear inequality constraints are included. We show that, despite its particular structure, our proposal retains an appreciable versatility, so that a number of different real instances may be reformulated following its pattern. Moreover, we detail how to compute an exact global solution of our quadratic sub–problem by exploiting first-order KKT conditions.
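    The paper derives its exact global solution by exploiting the specific structure of the five inequality constraints; the Python/NumPy sketch below instead illustrates the generic brute-force alternative for a small, strictly convex instance, enumerating candidate active sets and checking the first-order KKT conditions. It is only a sketch under those assumptions, not the paper's method.

```python
import numpy as np
from itertools import combinations

def qp_by_active_set_enumeration(Q, c, A, b):
    """Solve  min 0.5 x'Qx + c'x  s.t.  A x <= b  for strictly convex Q and
    a small number of inequality constraints, by enumerating candidate
    active sets and checking the first-order KKT conditions."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for k in range(m + 1):
        for active in combinations(range(m), k):
            Aa = A[list(active), :]
            # Equality-constrained KKT system for this candidate active set.
            K = np.block([[Q, Aa.T], [Aa, np.zeros((k, k))]])
            rhs = np.concatenate([-c, b[list(active)]])
            try:
                sol = np.linalg.solve(K, rhs)
            except np.linalg.LinAlgError:
                continue
            x, lam = sol[:n], sol[n:]
            # KKT check: primal feasibility and nonnegative multipliers.
            if np.all(A @ x <= b + 1e-9) and np.all(lam >= -1e-9):
                val = 0.5 * x @ (Q @ x) + c @ x
                if val < best_val:
                    best_x, best_val = x, val
    return best_x, best_val
```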