
    Deflation for semismooth equations

    Variational inequalities can in general support distinct solutions. In this paper we study an algorithm for computing distinct solutions of a variational inequality, without varying the initial guess supplied to the solver. The central idea is the combination of a semismooth Newton method with a deflation operator that eliminates known solutions from consideration. Given one root of a semismooth residual, deflation constructs a new problem for which a semismooth Newton method will not converge to the known root, even from the same initial guess. This enables the discovery of other roots. We prove the effectiveness of the deflation technique under the same assumptions that guarantee locally superlinear convergence of a semismooth Newton method. We demonstrate its utility on various finite- and infinite-dimensional examples drawn from constrained optimization, game theory, economics and solid mechanics. (24 pages, 3 figures)
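
    As a rough illustration of the deflation idea (a minimal sketch, not the authors' implementation: the shifted-inverse-power deflation factor, its exponent and shift, and the one-dimensional residual below are all assumptions chosen for illustration), the Python code below wraps a semismooth residual with a deflation factor and reruns an undamped semismooth Newton iteration from the same initial guess:

```python
import numpy as np

def semismooth_newton(F, J, x0, tol=1e-10, maxit=50):
    """Undamped Newton iteration using one element J(x) of the generalized Jacobian."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        x = x - np.linalg.solve(J(x), f)
    return x

def deflate(F, J, roots, power=2.0, shift=1.0):
    """Wrap F and J with the deflation factor M(x) = prod_r (||x - r||^(-power) + shift),
    so that Newton applied to M(x) * F(x) cannot reconverge to any known root r."""
    def F_defl(x):
        m = 1.0
        for r in roots:
            m *= np.linalg.norm(x - r) ** (-power) + shift
        return m * F(x)

    def J_defl(x):
        m, dm = 1.0, np.zeros_like(x)
        for r in roots:
            d = x - r
            nd = np.linalg.norm(d)
            mi = nd ** (-power) + shift
            dmi = -power * d * nd ** (-power - 2)   # gradient of ||x - r||^(-power)
            dm = dm * mi + m * dmi                  # product rule over the deflation factors
            m *= mi
        # d/dx [M(x) F(x)] = F(x) dM(x)^T + M(x) J(x)
        return np.outer(F(x), dm) + m * J(x)

    return F_defl, J_defl

# Toy semismooth residual with two roots, x = 1 and x = -1.
F = lambda x: np.abs(x) - 1.0
J = lambda x: np.diag(np.sign(x))          # an element of the generalized Jacobian of |x|
x0 = np.array([0.5])

r1 = semismooth_newton(F, J, x0)           # converges to  1.0
F2, J2 = deflate(F, J, [r1])
r2 = semismooth_newton(F2, J2, x0)         # same initial guess, now converges to -1.0
```

    The shift keeps the deflated residual from decaying artificially far away from the known root, so the second run is still driven toward a genuine new root rather than to infinity.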

    An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming

    Powerful commercial solvers based on interior-point methods (IPMs), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix A is dense (possibly structured), or whose normal matrix AA^T has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy of building IPM solvers on iterative linear solvers, although it avoids the explicit computation of the coefficient matrix and its factorization, is not practically viable because of the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data, or whose normal equations require expensive factorizations, we propose a semismooth-Newton-based inexact proximal augmented Lagrangian (Snipal) method. Unlike classical IPMs, each iteration of Snipal requires only the solution of simpler yet better-conditioned semismooth Newton linear systems, for which iterative methods can be used efficiently. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to have a finite termination property. Numerical comparisons with Gurobi demonstrate the encouraging potential of Snipal for handling large-scale LP problems whose constraint matrix A has a dense representation or whose normal matrix AA^T has a dense factorization even with an appropriate re-ordering. (Owing to the 1,920-character limit on the abstract field, the abstract appearing here is slightly shorter than that in the PDF file.)
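
    To make the structure of those semismooth Newton systems concrete, here is a heavily simplified Python sketch of an augmented Lagrangian method on the LP dual with a semismooth Newton inner solver. It is an assumption-laden skeleton, not Snipal itself: the actual method is a proximal, inexact ALM with careful penalty and tolerance updates, whereas this sketch uses a fixed penalty sigma, exact dense solves and a plain Armijo line search. The relevant feature it shares is that each inner linear system has the form sigma * A D A^T + eps * I with D a 0/1 diagonal matrix, so only the columns of A flagged by D enter the system.

```python
import numpy as np

def alm_lp_dual(A, b, c, sigma=10.0, eps=1e-8, outer=100, inner=50, tol=1e-9):
    """Solve min c'x s.t. Ax = b, x >= 0 via an augmented Lagrangian method on the
    dual max b'y s.t. A'y <= c, with the multiplier x acting as the primal variable.
    Each subproblem in y is minimized by a semismooth Newton method whose linear
    systems have the form sigma * A D A' + eps * I with D a 0/1 diagonal matrix."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)

    def subproblem(y):
        z = x + sigma * (A.T @ y - c)            # pre-projection multiplier estimate
        zp = np.maximum(z, 0.0)                  # projection onto the nonnegative orthant
        val = -b @ y + (zp @ zp - x @ x) / (2 * sigma)
        grad = -b + A @ zp
        return val, grad, zp

    for _ in range(outer):
        for _ in range(inner):                   # semismooth Newton on the subproblem in y
            val, grad, zp = subproblem(y)
            if np.linalg.norm(grad) < tol:
                break
            d = (zp > 0).astype(float)           # element of the generalized Jacobian of max(0, .)
            H = sigma * (A * d) @ A.T + eps * np.eye(m)
            dy = np.linalg.solve(H, -grad)       # an iterative solver would be used at scale
            t = 1.0                              # backtracking (Armijo) line search
            while t > 1e-12 and subproblem(y + t * dy)[0] > val + 1e-4 * t * (grad @ dy):
                t *= 0.5
            y = y + t * dy
        x = np.maximum(x + sigma * (A.T @ y - c), 0.0)   # multiplier (primal) update
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x, y
```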

    Using exact penalties to derive a new equation reformulation of KKT systems associated to variational inequalities

    In this paper, we present a new reformulation of the KKT system associated to a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best results are presented for nonlinear complementarity problems, where simple, verifiable conditions ensure that the penalty is exact. We also develop a semismooth Newton method for complementarity problems based on the reformulation. We close the paper by showing some preliminary computational tests comparing the proposed method with classical reformulations based on the minimum or on the Fischer-Burmeister function.
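
    For reference, the classical Fischer-Burmeister reformulation that the authors compare against can be sketched in a few lines of Python (this is the classical baseline, not the paper's penalty-based reformulation; the linear complementarity data below is a made-up example):

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = 0  iff  a >= 0, b >= 0 and a * b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def ncp_newton(F, JF, x0, tol=1e-10, maxit=50):
    """Semismooth Newton method on the reformulation Phi(x)_i = fb(x_i, F(x)_i) = 0."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        Phi = fb(x, Fx)
        if np.linalg.norm(Phi) < tol:
            break
        r = np.sqrt(x * x + Fx * Fx)
        r = np.where(r > 1e-14, r, 1.0)          # valid choice at the nondifferentiable point (0, 0)
        Da = x / r - 1.0                         # partial derivatives of fb, giving one
        Db = Fx / r - 1.0                        # element of the generalized Jacobian of Phi
        J = np.diag(Da) + np.diag(Db) @ JF(x)
        x = x - np.linalg.solve(J, Phi)
    return x

# Linear complementarity problem F(x) = M x + q; its solution is x = (1.5, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
x = ncp_newton(lambda x: M @ x + q, lambda x: M, np.array([1.0, 1.0]))
```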

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, both are computationally attractive since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
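
    As a small, self-contained illustration of the forward-backward envelope (a sketch assuming a lasso-type composite problem f(x) = 0.5*||Ax - b||^2 plus lam*||x||_1, which is not an example taken from the paper), the function below evaluates the FBE at a point; Newton-type methods such as those in the paper then minimize this continuously differentiable surrogate:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fbe(x, A, b, lam, gamma):
    """Forward-backward envelope of F(x) = 0.5*||A x - b||^2 + lam*||x||_1.
    fbe(x) <= F(x) for every x, with equality exactly at fixed points of the
    forward-backward step; for sufficiently small gamma its minimizers coincide
    with the minimizers of F, so smooth (Newton-type) machinery applies."""
    grad = A.T @ (A @ x - b)                             # gradient of the smooth part
    p = soft_threshold(x - gamma * grad, gamma * lam)    # forward-backward (proximal gradient) step
    f = 0.5 * np.linalg.norm(A @ x - b) ** 2
    return (f + lam * np.linalg.norm(p, 1)
              + grad @ (p - x)
              + np.linalg.norm(p - x) ** 2 / (2 * gamma))
```

    A truncated-Newton outer loop would repeatedly evaluate this envelope and its gradient, computing each search direction by a few CG iterations on a small linear system, which is the computational pattern the abstract refers to.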

    A primal-dual active set algorithm for three-dimensional contact problems with Coulomb friction

    In this paper, efficient algorithms for contact problems with Tresca and Coulomb friction in three dimensions are presented and analyzed. The numerical approximation is based on mortar methods for nonconforming meshes with dual Lagrange multipliers. Using a nonsmooth complementarity function for the three-dimensional friction conditions, a primal-dual active set algorithm is derived. The method determines active contact and friction nodes and, at the same time, resolves the additional nonlinearity originating from sliding nodes. No regularization and no penalization are applied, and superlinear convergence can be observed locally. In combination with a multigrid method, it defines a robust and fast strategy for contact problems with Tresca or Coulomb friction. The efficiency and flexibility of the method are illustrated by several numerical examples.
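
    As a much-reduced analogue of the active-set idea (a frictionless, discrete obstacle problem standing in for the paper's three-dimensional Coulomb-friction setting with mortar coupling and dual Lagrange multipliers; the stiffness matrix K, load f and obstacle g are assumed inputs), the sketch below derives the active/inactive splitting from the same kind of nonsmooth complementarity function and re-solves a reduced linear system in each step:

```python
import numpy as np

def pdas_obstacle(K, f, g, c=1.0, maxit=50):
    """Primal-dual active set method for the discrete obstacle problem
        min 0.5*u'Ku - f'u   subject to   u <= g,
    i.e. the complementarity system K u + lam = f, lam >= 0, u <= g, lam'(g - u) = 0,
    written as the nonsmooth equation  lam - max(0, lam + c*(u - g)) = 0."""
    n = K.shape[0]
    u, lam = np.zeros(n), np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for it in range(maxit):
        new_active = lam + c * (u - g) > 0       # active-set update from the complementarity function
        if it > 0 and np.array_equal(new_active, active):
            break                                # active set has settled: solution found
        active, inactive = new_active, ~new_active
        # Solve K u + lam = f with u = g on the active set and lam = 0 on the inactive set.
        u = np.empty(n)
        lam = np.zeros(n)
        u[active] = g[active]
        Kii = K[np.ix_(inactive, inactive)]
        rhs = f[inactive] - K[np.ix_(inactive, active)] @ g[active]
        u[inactive] = np.linalg.solve(Kii, rhs)
        lam[active] = f[active] - K[active] @ u
    return u, lam
```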

    A smoothing Newton method for the boundary-valued ODEs

    Master's thesis (Master of Science).

    Reformulation semi-lisse appliquée au problème de complémentarité

    This thesis reviews the basic notions of the complementarity problem and surveys the main methods known for solving it. More precisely, it focuses on the semismooth Newton method. A paper proposing a slight modification of this method is presented; this new, competitive method is shown to be convergent. A second paper, on the iteration complexity of the method of Harker and Pang, is also introduced.