
    Finite convergence of nonsmooth equation based methods for affine variational inequalities

    Abstract: This note shows that several nonsmooth equation based methods proposed recently for affine variational inequalities converge finitely under some standard assumptions.
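
    To make the setting concrete, recall that the affine variational inequality AVI(q, M, C) asks for x in a polyhedral set C with (Mx + q)^T (y - x) >= 0 for all y in C; a representative nonsmooth-equation reformulation (the note does not specify which reformulations it covers, so this is only one standard choice) is the natural-map residual

        F(x) = x - P_C(x - (Mx + q)) = 0,

    where P_C denotes the Euclidean projection onto C. Since P_C is piecewise affine for polyhedral C, so is F, and it is this piecewise affine structure that finite-termination results for Newton-type methods typically exploit.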

    Globalizing a nonsmooth Newton method via nonmonotone path search

    We give a framework for the globalization of a nonsmooth Newton method. In the first part we recall B. Kummer's approach to the convergence analysis of a nonsmooth Newton method and state his results on local convergence. In the second part we give a globalized version of this method; our approach uses a path search idea to control the descent. After elaborating the single steps, we analyze and prove the global convergence as well as the local superlinear or quadratic convergence of the algorithm. In the third part we illustrate the method for the nonlinear complementarity problem.
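
    To make the local ingredient concrete, the following Python sketch applies a damped nonsmooth Newton iteration to the complementarity reformulation min(x, F(x)) = 0, using plain backtracking on the least-squares residual rather than the nonmonotone path search analyzed in the paper; the test problem, tolerances, and damping rule are illustrative assumptions, not the authors' algorithm.

        import numpy as np

        def damped_nonsmooth_newton(F, JF, x0, tol=1e-10, max_iter=50):
            # Solve min(x, F(x)) = 0 (an NCP reformulation) by a nonsmooth Newton
            # step damped with simple backtracking; the paper instead controls
            # descent through a (nonmonotone) path search.
            x = x0.copy()
            for _ in range(max_iter):
                Fx = F(x)
                phi = np.minimum(x, Fx)                 # componentwise residual
                if np.linalg.norm(phi) < tol:
                    break
                # one element of the generalized Jacobian of min(x, F(x)):
                # identity rows where x_i <= F_i(x), Jacobian rows of F otherwise
                J = np.where((x <= Fx)[:, None], np.eye(len(x)), JF(x))
                d = np.linalg.solve(J, -phi)            # Newton direction
                merit = 0.5 * phi @ phi
                t = 1.0
                while (0.5 * np.sum(np.minimum(x + t * d, F(x + t * d)) ** 2)
                       > (1.0 - 1e-4 * t) * merit and t > 1e-12):
                    t *= 0.5                            # backtrack on the merit
                x = x + t * d
            return x

        # Linear complementarity example F(x) = M x + q (data chosen for illustration)
        M = np.array([[4.0, -1.0], [-1.0, 4.0]])
        q = np.array([-1.0, -2.0])
        x = damped_nonsmooth_newton(lambda x: M @ x + q, lambda x: M, np.ones(2))
        print(x, M @ x + q)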

    Generalized Newton's Method based on Graphical Derivatives

    This paper concerns developing a numerical method of the Newton type to solve systems of nonlinear equations described by nonsmooth continuous functions. We propose and justify a new generalized Newton algorithm based on graphical derivatives, which have never been used to derive a Newton-type method for solving nonsmooth equations. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the algorithm, its local superlinear convergence, and its global convergence of Kantorovich type. Our convergence results hold with no semismoothness assumption, which is illustrated by examples. The algorithm and main results obtained in the paper are compared with the well-recognized semismooth and B-differentiable versions of Newton's method for nonsmooth Lipschitzian equations.
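
    For orientation, a Newton step based on graphical derivatives for a nonsmooth equation f(x) = 0 is typically written as the inclusion

        0 ∈ f(x_k) + Df(x_k)(x_{k+1} - x_k),

    where Df(x_k) denotes the graphical (contingent) derivative of f at x_k; whether the paper's algorithm uses exactly this update or a refinement of it cannot be read off from the abstract alone.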

    Polyhedral Newton-min algorithms for complementarity problems

    Abstract: The semismooth Newton method is a very efficient approach for computing a zero of a large class of nonsmooth equations. When the initial iterate is sufficiently close to a regular zero and the function is strongly semismooth, the generated sequence converges quadratically to that zero, while each iteration only requires solving a linear system. If the first iterate is far from a zero, however, it is difficult to force convergence using linesearch or trust regions, because a semismooth Newton direction may not be a descent direction of the associated least-squares merit function, unlike in the case when the function is differentiable. We explore this question in the particular case of a nonsmooth equation reformulation of the nonlinear complementarity problem, using the minimum function. We propose a globally convergent algorithm using a modification of the semismooth Newton direction that makes it a descent direction of the least-squares merit function. Instead of requiring that the direction satisfy a linear system, it must be a feasible point of a convex polyhedron; hence it can be computed in polynomial time. This polyhedron is defined by what are often very few inequalities, obtained by linearizing pairs of functions that have close negative values at the current iterate; hence, in a sense, the algorithm feels the proximity of a "negative kink" of the minimum function and acts accordingly. In order to avoid as often as possible the extra cost of having to find a feasible point of a polyhedron, a hybrid algorithm is also proposed, in which the Newton-min direction is accepted whenever a sufficient-descent-like criterion is satisfied, which is often the case in practice. Global convergence to regular points is proved.
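
    As a small illustration of the objects involved, the Python sketch below computes the unmodified Newton-min direction for min(x, F(x)) = 0 and one simple instantiation of a sufficient-descent-like acceptance test on the least-squares merit function; the polyhedral correction applied when such a test fails, and the precise criterion used in the paper, are not reproduced, and all names and thresholds are assumptions.

        import numpy as np

        def newton_min_direction(x, Fx, JFx):
            # Plain Newton-min direction for min(x, F(x)) = 0: components with
            # F_i(x) < x_i are linearized through F, the remaining ones through x.
            use_F = Fx < x
            J = np.where(use_F[:, None], JFx, np.eye(len(x)))
            return np.linalg.solve(J, -np.where(use_F, Fx, x))

        def sufficient_descent_like(x, F, d, sigma=1e-4):
            # Accept the direction if the full step reduces the least-squares merit
            # theta(x) = 0.5 * ||min(x, F(x))||^2 by a prescribed fraction; in the
            # hybrid algorithm, failure would trigger the polyhedral correction.
            theta = 0.5 * np.sum(np.minimum(x, F(x)) ** 2)
            theta_trial = 0.5 * np.sum(np.minimum(x + d, F(x + d)) ** 2)
            return theta_trial <= (1.0 - sigma) * theta

        # Illustrative complementarity data F(x) = M x + q
        M = np.array([[2.0, 1.0], [1.0, 3.0]])
        q = np.array([-3.0, -4.0])
        x = np.array([0.5, 0.5])
        d = newton_min_direction(x, M @ x + q, M)
        print(d, sufficient_descent_like(x, lambda y: M @ y + q, d))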

    Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization

    This paper proposes and justifies two globally convergent Newton-type methods to solve unconstrained and constrained problems of nonsmooth optimization by using tools of variational analysis and generalized differentiation. Both methods are coderivative-based and employ generalized Hessians (coderivatives of subgradient mappings) associated with objective functions, which are either of class $\mathcal{C}^{1,1}$ or are represented in the form of convex composite optimization, where one of the terms may be extended-real-valued. The proposed globally convergent algorithms are of two types. The first one extends the damped Newton method and requires positive-definiteness of the generalized Hessians for its well-posedness and efficient performance, while the other algorithm is of the regularized Newton type, being well-defined when the generalized Hessians are merely positive-semidefinite. The obtained convergence rates for both methods are at least linear, but become superlinear under the semismooth$^*$ property of subgradient mappings. Problems of convex composite optimization are investigated with and without the strong convexity assumption on the smooth parts of objective functions by implementing the machinery of forward-backward envelopes. Numerical experiments are conducted for Lasso problems and for box-constrained quadratic programs, providing performance comparisons of the new algorithms with some other first-order and second-order methods that are highly recognized in nonsmooth optimization.
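
    As a rough illustration of the second variant, the sketch below runs a regularized Newton iteration on a $\mathcal{C}^{1,1}$ objective, using one positive-semidefinite generalized-Hessian element per step, a fixed shift mu*I to keep the linear system solvable, and Armijo backtracking; the coderivative machinery, the adaptive regularization, and the convergence safeguards of the paper are not reproduced, and the test problem and parameter values are assumptions.

        import numpy as np

        def regularized_newton(f, grad, gen_hess, x0, mu=1e-6, sigma=1e-4,
                               tol=1e-10, max_iter=100):
            # Each step solves (H + mu*I) d = -grad with H one (possibly only
            # positive-semidefinite) generalized-Hessian element, then damps the
            # step by Armijo backtracking on the objective value.
            x = x0.copy()
            for _ in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                H = gen_hess(x)
                d = np.linalg.solve(H + mu * np.eye(len(x)), -g)   # descent direction
                t = 1.0
                while f(x + t * d) > f(x) + sigma * t * (g @ d) and t > 1e-12:
                    t *= 0.5
                x = x + t * d
            return x

        # C^{1,1} test objective f(x) = 0.5 * ||max(Ax - b, 0)||^2 (illustrative data);
        # its gradient is piecewise linear, and the chosen generalized-Hessian element
        # A^T diag(Ax - b > 0) A is positive semidefinite but can be singular.
        A = np.array([[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]])
        b = np.array([1.0, 2.0, 0.5])
        f = lambda x: 0.5 * np.sum(np.maximum(A @ x - b, 0.0) ** 2)
        grad = lambda x: A.T @ np.maximum(A @ x - b, 0.0)
        gen_hess = lambda x: A.T @ np.diag((A @ x - b > 0.0).astype(float)) @ A
        print(regularized_newton(f, grad, gen_hess, np.array([5.0, 5.0])))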