14 research outputs found

    Reformulation semi-lisse appliquée au problème de complémentarité

    Get PDF
    This master's thesis reviews the elementary notions concerning the complementarity problem. It also surveys the main known methods for solving it. More precisely, we focus on the semismooth Newton method. An article proposing a slight modification of this method is presented. This new, competitive method is proved to be convergent. A second article, dealing with the iteration complexity of the method of Harker and Pang, is also introduced.

    Polyhedral Newton-min algorithms for complementarity problems

    Get PDF
    Abstract: The semismooth Newton method is a very efficient approach for computing a zero of a large class of nonsmooth equations. When the initial iterate is sufficiently close to a regular zero and the function is strongly semismooth, the generated sequence converges quadratically to that zero, while each iteration only requires solving a linear system. If the first iterate is far away from a zero, however, it is difficult to force its convergence using linesearch or trust regions, because a semismooth Newton direction may not be a descent direction of the associated least-squares merit function, unlike when the function is differentiable. We explore this question in the particular case of a nonsmooth equation reformulation of the nonlinear complementarity problem, using the minimum function. We propose a globally convergent algorithm using a modification of a semismooth Newton direction that makes it a descent direction of the least-squares function. Instead of requiring that the direction satisfy a linear system, it must be a feasible point of a convex polyhedron; hence, it can be computed in polynomial time. This polyhedron is defined by often very few inequalities, obtained by linearizing pairs of functions that have close negative values at the current iterate; hence, somehow, the algorithm feels the proximity of a “negative kink” of the minimum function and acts accordingly. In order to avoid as often as possible the extra cost of having to find a feasible point of a polyhedron, a hybrid algorithm is also proposed, in which the Newton-min direction is accepted if a sufficient-descent-like criterion is satisfied, which is often the case in practice. Global convergence to regular points is proved.
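    The min-function reformulation and the plain Newton-min step described above can be sketched for a linear complementarity problem 0 ≤ x ⊥ (Mx + q) ≥ 0. This is a minimal illustration, without the polyhedral modification or globalization proposed in the paper; the function name and data are illustrative.

```python
import numpy as np

def newton_min(M, q, x0, tol=1e-10, max_iter=50):
    """Plain Newton-min iteration for the LCP  0 <= x  and  M x + q >= 0
    with complementarity, via the nonsmooth reformulation
    H(x) = min(x, M x + q) = 0 (componentwise)."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    I = np.eye(n)
    for _ in range(max_iter):
        w = M @ x + q
        H = np.minimum(x, w)            # residual of the min reformulation
        if np.linalg.norm(H, np.inf) < tol:
            break
        # Where x_i attains the min, the Jacobian row is e_i;
        # elsewhere it is the corresponding row of M.
        active = x <= w
        J = np.where(active[:, None], I, M)
        x = x + np.linalg.solve(J, -H)  # one full (nonglobalized) Newton step
    return x

# Small P-matrix example; the solution is x* = (0.5, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = newton_min(M, q, x0=np.ones(2))
```

    For P-matrices the iteration typically identifies the correct active set in a few steps; the globalization question discussed in the abstract arises precisely when such a plain step fails to decrease the least-squares residual.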

    Algorithmes de Newton-min polyédriques pour les problèmes de complémentarité

    Get PDF
    The semismooth Newton method is a very efficient approach for computing a zero of a large class of nonsmooth equations. When the initial iterate is sufficiently close to a regular zero and the function is strongly semismooth, the generated sequence converges quadratically to that zero, while each iteration only requires solving a linear system. If the first iterate is far away from a zero, however, it is difficult to force its convergence using linesearch or trust regions, because a semismooth Newton direction may not be a descent direction of the associated least-squares merit function, unlike when the function is differentiable. We explore this question in the particular case of a nonsmooth equation reformulation of the nonlinear complementarity problem, using the minimum function. We propose a globally convergent algorithm using a modification of a semismooth Newton direction that makes it a descent direction of the least-squares function. Instead of requiring that the direction satisfy a linear system, it must be a feasible point of a convex polyhedron; hence, it can be computed in polynomial time. This polyhedron is defined by often very few inequalities, obtained by linearizing pairs of functions that have close negative values at the current iterate; hence, somehow, the algorithm feels the proximity of a "bad kink" of the minimum function and acts accordingly. In order to avoid as often as possible the extra cost of having to find a feasible point of a polyhedron, a hybrid algorithm is also proposed, in which the Newton-min direction is accepted if a sufficient-descent-like criterion is satisfied, which is often the case in practice. Global convergence to regular points is proved; the notion of regularity is associated with the algorithm and is analysed with care.

    Une caractérisation algorithmique de la P-matricité II: ajustements, raffinements et validation

    Get PDF
    The paper "An algorithmic characterization of P-matricity" (SIAM Journal on Matrix Analysis and Applications, 34:3 (2013) 904–916, by the same authors as here) implicitly assumes that the iterates generated by the Newton-min algorithm for solving a linear complementarity problem of dimension n, which reads 0 ⩽ x ⊥ (M x + q) ⩾ 0, are uniquely determined by some index subsets of [[1, n]]. Even if this is satisfied for a subset of vectors q that is dense in R^n, this assumption is improper, in particular in the statements where the vector q is not subject to restrictions. The goal of the present contribution is to show that, despite this blunder, the main result of that paper is preserved. The latter claims that a nondegenerate matrix M is a P-matrix if and only if the Newton-min algorithm does not cycle between two distinct points, whatever q is. The proof is no more complex, requiring only some adjustments, which are however essential.

    Merit functions: a bridge between optimization and equilibria

    Get PDF
    In the last decades, many problems involving equilibria, arising from engineering, physics, and economics, have been formulated as variational mathematical models. In turn, these models can be reformulated as optimization problems through merit functions. This paper reviews the literature on merit functions for variational inequalities, quasi-variational inequalities, and abstract equilibrium problems. Smoothness and convexity properties of merit functions, and solution methods based on them, are presented.
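    The bridge the abstract describes can be made concrete with the natural residual merit function for a linear complementarity problem: a nonnegative function that vanishes exactly at the problem's solutions, so solving the equilibrium problem becomes minimizing it. The map F, the data M and q, and the function names below are illustrative choices, not taken from the paper.

```python
import numpy as np

# A hypothetical affine map F(x) = M x + q, i.e. a linear complementarity
# problem  0 <= x ⊥ F(x) >= 0, used only to illustrate the merit-function idea.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])

def F(x):
    return M @ x + q

def natural_residual_merit(x):
    """theta(x) = 0.5 * ||min(x, F(x))||^2: nonnegative everywhere,
    and zero exactly at solutions of the complementarity problem."""
    r = np.minimum(x, F(x))
    return 0.5 * r @ r

# theta vanishes at the solution x* = (0.5, 0) and is positive elsewhere.
theta_star = natural_residual_merit(np.array([0.5, 0.0]))
theta_far = natural_residual_merit(np.array([1.0, 1.0]))
```

    Any unconstrained minimization method can then be applied to theta, which is the basic mechanism by which merit functions turn equilibrium problems into optimization problems; the smoothness and convexity properties surveyed in the paper determine which methods are appropriate.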

    Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities

    Full text link
    Variational inequalities are a broad and flexible class of problems that includes minimization, saddle point, and fixed point problems as special cases. Therefore, variational inequalities are used in a variety of applications, ranging from equilibrium search to adversarial learning. The increasing size of data and models demands parallel and distributed computing for real-world machine learning problems, most of which can be represented as variational inequalities. Meanwhile, most distributed approaches have a significant bottleneck: the cost of communication. The three main techniques to reduce both the total number of communication rounds and the cost of one such round are the use of similarity of local functions, compression of the transmitted information, and local updates. In this paper, we combine all these approaches. Such a triple synergy did not exist before for variational inequalities and saddle point problems, nor even for minimization problems. The methods presented in this paper have the best theoretical guarantees of communication complexity and are significantly ahead of other methods for distributed variational inequalities. The theoretical results are confirmed by adversarial learning experiments on synthetic and real datasets. Comment: 19 pages, 2 algorithms, 1 table

    Morceaux Choisis en Optimisation Continue et sur les Systèmes non Lisses

    Get PDF
    Master-level course. This course starts with the presentation of the optimality conditions of an optimization problem described in a rather abstract manner, so that these can be useful for dealing with a large variety of problems. Next, the course describes and analyzes various advanced algorithms to solve optimization problems (nonsmooth methods, linearization methods, proximal and augmented Lagrangian methods, interior point methods) and shows how they can be used to solve a few classical optimization problems (linear optimization, convex quadratic optimization, semidefinite optimization (SDO), nonlinear optimization). Along the way, various tools from convex and nonsmooth analysis are presented. Everything is conceptualized in finite dimension. The goal of the lectures is therefore to consolidate basic knowledge in optimization, on both theoretical and algorithmic aspects.

    Efficient numerical methods for hierarchical dynamic optimization with application to cerebral palsy gait modeling

    Get PDF
    This thesis aims at developing efficient mathematical methods for solving hierarchical dynamic optimization problems. The main motivation is to model processes in nature for which there is evidence to assume that they run optimally. We describe models of such processes by optimal control problems (called optimal control models (OCMs)). However, an OCM typically includes unknown parameters that cannot be derived entirely on a theoretical basis, which is in particular the case for the cost function. Therefore, we develop parameter estimation techniques to estimate the unknowns in an OCM from observation data of the process. Mathematically, this leads to a hierarchical dynamic optimization problem with a parameter estimation problem on the upper level and an optimal control problem on the lower level. We focus on multi-stage equality and inequality constrained optimal control problems based on nonlinear ordinary differential equations. The main goal of this thesis is to derive numerically efficient mathematical methods for solving hierarchical dynamic optimization problems, and to use these methods to estimate parameters in high-dimensional OCMs from real-world measurement data. We develop parameter-dependent OCMs for the gait of cerebral palsy patients and able-bodied subjects. The unknown parameters in the OCMs are then estimated from real-world motion capture data provided by the Heidelberg MotionLab of the Orthopedic University Clinic Heidelberg, using the mathematical methods developed within this work. The main novelties and contributions of this thesis to the field of hierarchical dynamic optimization are summarized herein.
    - We establish a novel mathematical method, a so-called direct all-at-once approach, for solving hierarchical dynamic optimization problems based on the direct multiple shooting method and first-order optimality conditions.
    - Furthermore, we propose an efficient numerical algorithm for large-scale hierarchical dynamic optimization problems, which fully exploits the structures inherited from both the hierarchical setting and the discretization.
    - Pontryagin's maximum principle is used to analyze solution properties of hierarchical dynamic optimization problems, like second-order optimality conditions of the lower-level problem.
    - In addition, we propose and discuss alternative methods for hierarchical dynamic optimization that are based on derivative-free optimization and a bundle approach. These methods keep the hierarchical problem setting and do not reformulate the lower-level problem using first-order optimality conditions.
    - We establish a novel lifting method for regularizing mathematical programs with complementarity constraints, which is discussed and numerically investigated by means of a well-known collection of benchmark problems.
    - Proofs of regularity and convergence results for sequential quadratic programming methods applied to lifted mathematical programs with complementarity constraints are provided.
    - Efficient state-of-the-art implementations of all mathematical methods derived in this thesis, as well as a benchmark collection of hierarchical dynamic optimization problems, are presented.
    - High-dimensional optimal control gait models for cerebral palsy patients and able-bodied subjects are developed. The mathematical methods derived in this thesis are used to estimate the unknown model parameters from real-world motion capture data provided by the Heidelberg MotionLab of the Orthopedic University Clinic Heidelberg.
    The theoretical and practical results presented in this thesis can be considered an initial motivating step towards answering open questions in current medical research in fields like treatment planning, classification of gaits, or the evaluation of surgeries by means of hierarchical dynamic optimization.