
    Levenberg-Marquardt Method for the Eigenvalue Complementarity Problem


    Local Convergence of Newton-type Methods for Nonsmooth Constrained Equations and Applications

    In this thesis we consider constrained systems of equations. The focus is on local Newton-type methods for the solution of constrained systems which converge locally quadratically under mild assumptions implying neither local uniqueness of solutions nor differentiability of the equation function at solutions. The first aim of this thesis is to improve existing local convergence results for the constrained Levenberg-Marquardt method. To this end, we describe a general Newton-type algorithm. We then prove local quadratic convergence of this general algorithm under the same four assumptions that were recently used for the local convergence analysis of the LP-Newton method. Afterwards, we show that, besides the LP-Newton method, the constrained Levenberg-Marquardt method can be regarded as a special realization of the general Newton-type algorithm and therefore enjoys the same local convergence properties. Thus, local quadratic convergence of a nonsmooth constrained Levenberg-Marquardt method is proved without requiring conditions that imply the local uniqueness of solutions.

    The second aim of this thesis is a detailed discussion of these four convergence assumptions for the case that the equation function of the constrained system is piecewise continuously differentiable. Some of the convergence assumptions are quite technical and difficult to check, so we look for sufficient conditions which are still mild but more familiar. In particular, we prove that the whole set of convergence assumptions holds if a certain set of local error bound conditions is satisfied and, in addition, the feasible set of the constrained system excludes those zeros of the selection functions which are not zeros of the equation function itself, at least in a sufficiently small neighborhood of some fixed solution.

    We apply our results to constrained systems arising from complementarity systems, i.e., systems of equations and inequalities which contain complementarity constraints. Our new conditions are discussed for a suitable reformulation of the complementarity system as a constrained system of equations by means of the minimum function. In particular, it turns out that the whole set of convergence assumptions is implied by a certain set of local error bound conditions. In addition, we provide a new constant rank condition implying the whole set of convergence assumptions. Furthermore, we provide adapted formulations of our new conditions for special classes of complementarity systems: Karush-Kuhn-Tucker (KKT) systems arising from optimization problems, variational inequalities, or generalized Nash equilibrium problems (GNEPs), and Fritz-John (FJ) systems arising from GNEPs. Thus, for each problem class we obtain conditions which guarantee local quadratic convergence of the general Newton-type algorithm and its special realizations to a solution of the particular problem. Moreover, we prove that, generically, a certain full row rank condition is satisfied at any solution of the FJ system of a GNEP; the latter condition implies the whole set of convergence assumptions if the functions characterizing the GNEP are sufficiently smooth.

    Finally, we describe an idea for a possible globalization of our Newton-type methods, at least for the case that the constrained system arises from a certain smooth reformulation of the KKT system of a GNEP. More precisely, a hybrid method is presented whose local part is the LP-Newton method. Under appropriate conditions, the hybrid method is both globally and locally quadratically convergent.
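
To make the minimum-function reformulation concrete, here is a minimal sketch of a plain (unconstrained, unglobalized) semismooth Newton iteration applied to a small nonlinear complementarity problem rewritten as F(x) = min(x, g(x)) = 0. The map g and the starting point are invented for illustration; none of the thesis's constrained machinery or convergence safeguards is reproduced.

```python
import numpy as np

def g(x):
    # Hypothetical smooth map of the complementarity system
    # x >= 0, g(x) >= 0, x_i * g_i(x) = 0 for all i.
    return np.array([x[0] - 0.5, x[0] + x[1] - 1.0])

def g_jac(x):
    # Jacobian of g (constant here because g is affine).
    return np.array([[1.0, 0.0],
                     [1.0, 1.0]])

def semismooth_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        F = np.minimum(x, g(x))              # min-function reformulation
        if np.linalg.norm(F) < tol:
            break
        # One element of the generalized Jacobian of F: take row i of the
        # identity where x_i <= g_i(x), otherwise row i of the Jacobian of g.
        J = np.where((x <= g(x))[:, None], np.eye(x.size), g_jac(x))
        x = x + np.linalg.solve(J, -F)       # plain, undamped Newton step
    return x

print(semismooth_newton(np.array([2.0, 2.0])))   # converges to [0.5, 0.5]
```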

    Convergence and Complexity Analysis of a Levenberg–Marquardt Algorithm for Inverse Problems

    The Levenberg–Marquardt algorithm is one of the most popular algorithms for finding the solution of nonlinear least squares problems. Across its various modifications, the algorithm enjoys global convergence, a competitive worst-case iteration complexity rate, and a guaranteed rate of local convergence for both zero and nonzero small residual problems, under suitable assumptions. We introduce a novel Levenberg–Marquardt method that matches, simultaneously, the state of the art in all of these convergence properties with a single seamless algorithm. Numerical experiments confirm the theoretical behavior of our proposed algorithm.
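
As a bare-bones illustration of the basic procedure (not the paper's algorithm), the sketch below damps the Gauss-Newton system with a residual-based parameter mu_k = ||F(x_k)||^2, one common choice in local convergence analyses of zero-residual problems; the toy residual and starting point are made up.

```python
import numpy as np

def residual(x):
    # Toy zero-residual problem: intersection of a parabola and a line.
    return np.array([x[1] - x[0]**2, x[0] + x[1] - 2.0])

def jacobian(x):
    return np.array([[-2.0 * x[0], 1.0],
                     [ 1.0,        1.0]])

def levenberg_marquardt(x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        F, J = residual(x), jacobian(x)
        if np.linalg.norm(F) < tol:
            break
        mu = np.linalg.norm(F)**2            # residual-based damping parameter
        # Damped Gauss-Newton system: (J^T J + mu I) d = -J^T F
        d = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ F)
        x = x + d
    return x

print(levenberg_marquardt(np.array([1.5, 1.5])))   # converges to [1.0, 1.0]
```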

    Nonsmooth Newton Methods for Solving the Best Approximation Problem; with Applications to Linear Programming

    In this thesis, we study the effects of applying a modified Levenberg-Marquardt regularization to a nonsmooth Newton method. We extend this approach to exact and inexact nonsmooth Newton methods and apply it to the problem of best approximation constrained to a polyhedral set. We also demonstrate that linear programs can be represented as best approximation problems, extending the application of nonsmooth Newton methods to linear programming. This application provides insight into an external path-following algorithm that, like the simplex method, takes a finite number of steps on the boundary of the polyhedral set. However, unlike the simplex method, these steps do not use basic feasible solutions.
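
For concreteness, the snippet below states the best approximation problem itself, projecting a point z onto a polyhedron {x : Gx <= h}, and solves it with SciPy's generic SLSQP solver. The data are invented, and the thesis's nonsmooth Newton machinery is deliberately not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Invented data: project z onto the polyhedron {x : G x <= h},
# here the set {x : x1 + x2 <= 1, x >= 0}.
z = np.array([2.0, 1.0])
G = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
h = np.array([1.0, 0.0, 0.0])

res = minimize(
    lambda x: 0.5 * np.sum((x - z)**2),            # squared distance to z
    x0=np.zeros(2),
    jac=lambda x: x - z,
    constraints=[{"type": "ineq", "fun": lambda x: h - G @ x}],  # G x <= h
    method="SLSQP",
)
print(res.x)   # the projection, approximately [1.0, 0.0]
```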

    A globally convergent neurodynamics optimization model for mathematical programming with equilibrium constraints

    This paper introduces a neurodynamic optimization model for computing the solution of mathematical programs with equilibrium constraints (MPECs). A smoothing method based on an NCP-function is used to obtain a relaxed optimization problem. The optimal solution of the resulting global optimization problem is estimated using a new neurodynamic system, which converges in finite time to its equilibrium point. Compared to existing models, the proposed model has a simple structure with low complexity. The new dynamical system is investigated theoretically, and it is proved that the steady state of the proposed neural network is asymptotically stable and converges globally to the optimal solution of the MPEC. Numerical simulations of several MPEC examples are presented, all of which confirm the agreement between the theoretical and numerical aspects of the problem and show the effectiveness of the proposed model. Moreover, an application to a resource allocation problem shows that the new method is a simple but efficient and practical algorithm for solving real-world MPEC problems.
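
The smoothing ingredient can be sketched generically: a smoothed Fischer-Burmeister NCP-function relaxes a complementarity condition, and the resulting energy can be followed by forward-Euler integration of a gradient flow, the simplest caricature of a neurodynamic model. Everything below (objective, penalty weight, step size) is hypothetical and is not the paper's model.

```python
import numpy as np

def phi(a, b, mu):
    # Smoothed Fischer-Burmeister NCP-function: as mu -> 0, phi(a, b, mu) = 0
    # recovers the complementarity condition a >= 0, b >= 0, a * b = 0.
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

def grad_energy(x, mu, rho):
    # Gradient of the energy 0.5 * ||x - (1, 1)||^2 + (rho / 2) * phi^2,
    # i.e. "stay close to (1, 1) while becoming complementary".
    a, b = x
    r = np.sqrt(a**2 + b**2 + 2.0 * mu**2)
    p = phi(a, b, mu)
    dp = np.array([1.0 - a / r, 1.0 - b / r])      # gradient of phi
    return (x - 1.0) + rho * p * dp

# Forward-Euler integration of the flow dx/dt = -grad E(x):
x, mu, rho, dt = np.array([0.8, 0.2]), 1e-3, 100.0, 0.01
for _ in range(5000):
    x = x - dt * grad_energy(x, mu, rho)
print(x)   # roughly [1.0, 0.01]: nearly complementary, close to (1, 1)
```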

    A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima

    We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one satisfying a relative smoothness condition and the other possibly nonsmooth. A key tool of our methodology is the Bregman forward-backward envelope (BFBE), an exact and continuous penalty function with favorable first- and second-order properties that enjoys a nonlinear error bound when the objective function satisfies a Łojasiewicz-type property. The proposed algorithm performs a linesearch over the BFBE along candidate update directions; it converges subsequentially to stationary points, and globally under a KL condition. Owing to the nonlinear error bound, it can attain superlinear convergence rates even when the limit point is a nonisolated minimum, provided the directions are suitably selected.
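
One textbook instance of the Bregman forward-backward step: with the Boltzmann-Shannon entropy as the Legendre kernel and the indicator of the unit simplex as the nonsmooth term, the update reduces to the exponentiated-gradient (mirror-descent) iteration. The sketch below shows only this basic step on an invented least-squares objective, without Bella's linesearch or envelope machinery.

```python
import numpy as np

# Invented smooth part: f(x) = 0.5 * ||A x - b||^2 over the unit simplex.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 1.0])

def grad_f(x):
    return A.T @ (A @ x - b)

# Bregman forward-backward step with the Boltzmann-Shannon entropy
# h(x) = sum_i x_i * log(x_i) as kernel and g = indicator of the simplex:
# the iteration is the classical exponentiated-gradient update.
x, gamma = np.full(3, 1.0 / 3.0), 0.05
for _ in range(500):
    y = x * np.exp(-gamma * grad_f(x))   # forward (gradient) step in mirror coordinates
    x = y / y.sum()                      # backward step: Bregman projection onto the simplex
print(x, 0.5 * np.linalg.norm(A @ x - b)**2)
```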