    A Semismooth Newton Method for Tensor Eigenvalue Complementarity Problem

    In this paper, we consider the tensor eigenvalue complementarity problem, which is closely related to the optimality conditions for polynomial optimization, as well as to a class of differential inclusions with nonconvex processes. By introducing an NCP-function, we reformulate the tensor eigenvalue complementarity problem as a system of nonlinear equations. We show that this function is strongly semismooth but not differentiable, in which case the classical smoothing methods cannot be applied. Furthermore, we propose a damped semismooth Newton method for the tensor eigenvalue complementarity problem. A new procedure to evaluate an element of the generalized Jacobian is given, which turns out to be an element of the B-subdifferential under mild assumptions. As a result, the convergence of the damped semismooth Newton method is guaranteed by existing results. The numerical experiments also show that our method is efficient and promising.
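    The abstract does not state which NCP-function the authors use; a standard choice in this setting is the Fischer-Burmeister function

    \[
    \phi_{\mathrm{FB}}(a,b) \;=\; \sqrt{a^{2}+b^{2}} - a - b,
    \qquad
    \phi_{\mathrm{FB}}(a,b) = 0 \iff a \ge 0,\; b \ge 0,\; ab = 0 .
    \]

    Applying such a function componentwise to the complementary pair of the problem (in the usual formulation with tensors \mathcal{A} and \mathcal{B}, the pair x_i and ((\lambda\mathcal{B} - \mathcal{A})x^{m-1})_i), together with a normalization fixing the scale of the eigenvector, yields a square system of strongly semismooth equations to which a damped semismooth Newton method can be applied; the exact reformulation used in the paper may differ.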

    A New Inexact Non-Interior Continuation Algorithm for Second-Order Cone Programming

    Second-order cone programming has received considerable attention in the past decades because of its wide range of applications. The non-interior continuation method is one of the most popular and efficient methods for solving second-order cone programming, partly due to its superior numerical performance. In this paper, a new smoothing form of the well-known Fischer-Burmeister function is given. Based on the new smoothing function, an inexact non-interior continuation algorithm is proposed. Attractively, the new algorithm can start from an arbitrary point, and it solves only one system of linear equations inexactly and performs only one line search at each iteration. Moreover, under a mild assumption, the new algorithm has global linear and local Q-quadratic convergence. Finally, some preliminary numerical results are reported which show the effectiveness of the presented algorithm.
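    The new smoothing form itself is not reproduced in the abstract; the classical smoothing of the Fischer-Burmeister function, which such algorithms typically build on, is

    \[
    \phi_{\mu}(a,b) \;=\; \sqrt{a^{2}+b^{2}+2\mu^{2}} - a - b, \qquad \mu > 0,
    \]

    which is continuously differentiable for \mu > 0 and recovers the nonsmooth Fischer-Burmeister function as \mu \downarrow 0. In the second-order cone setting the squares and the square root are understood in the Jordan algebra associated with the cone (with the scalar 2\mu^{2} replaced by 2\mu^{2}e, where e is the identity element), and the continuation algorithm drives \mu to zero alongside the Newton iterates.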

    A regularized smoothing Newton method for symmetric cone complementarity problems

    This paper extends the regularized smoothing Newton method for vector complementarity problems to symmetric cone complementarity problems (SCCP), which include the nonlinear complementarity problem, the second-order cone complementarity problem, and the semidefinite complementarity problem as special cases. In particular, we study the strong semismoothness and Jacobian nonsingularity of the total natural residual function for SCCP. We also derive the uniform approximation property and the Jacobian consistency of the Chen–Mangasarian smoothing function of the natural residual. Based on these properties, global and quadratic convergence of the proposed algorithm are established.
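    For reference (the abstract does not spell these out), the natural residual for an SCCP posed over a symmetric cone K is

    \[
    \phi_{\mathrm{NR}}(x,y) \;=\; x - \Pi_{K}(x-y),
    \]

    where \Pi_{K} is the metric projection onto K; in the scalar case this reduces to \min(x,y). A frequently used member of the Chen–Mangasarian smoothing family is the CHKS-type function, shown here in the scalar case,

    \[
    \phi_{\mu}(a,b) \;=\; \tfrac{1}{2}\Bigl(a+b-\sqrt{(a-b)^{2}+4\mu^{2}}\Bigr) \;\longrightarrow\; \min(a,b) \quad (\mu \downarrow 0);
    \]

    the particular smoothing function analyzed in the paper may differ.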

    The Jacobian Consistency of a One-Parametric Class of Smoothing Functions for SOCCP

    Second-order cone (SOC) complementarity functions and their smoothing functions have been much studied in the solution of second-order cone complementarity problems (SOCCP). In this paper, we study the directional derivative and B-subdifferential of the one-parametric class of SOC complementarity functions, propose its smoothing function, and derive a computable formula for the Jacobian of the smoothing function. Based on these results, we prove the Jacobian consistency of the one-parametric class of smoothing functions, which plays an important role in achieving the rapid convergence of smoothing methods. Moreover, we estimate the distance between the subgradient of the one-parametric class of SOC complementarity functions and the gradient of its smoothing function, which helps to adjust a parameter appropriately in smoothing methods.
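    The one-parametric class is not reproduced in the abstract; the family commonly studied under this name for SOCCP is

    \[
    \phi_{\tau}(x,y) \;=\; \sqrt{(x-y)^{2} + \tau\, x \circ y} - x - y, \qquad \tau \in (0,4),
    \]

    where \circ is the Jordan product associated with the second-order cone and the square and square root are taken in that algebra; \tau = 2 recovers the Fischer-Burmeister SOC complementarity function, since (x-y)^{2} + 2\, x \circ y = x^{2} + y^{2}.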

    Deflation for semismooth equations

    Variational inequalities can in general support distinct solutions. In this paper we study an algorithm for computing distinct solutions of a variational inequality, without varying the initial guess supplied to the solver. The central idea is the combination of a semismooth Newton method with a deflation operator that eliminates known solutions from consideration. Given one root of a semismooth residual, deflation constructs a new problem for which a semismooth Newton method will not converge to the known root, even from the same initial guess. This enables the discovery of other roots. We prove the effectiveness of the deflation technique under the same assumptions that guarantee locally superlinear convergence of a semismooth Newton method. We demonstrate its utility on various finite- and infinite-dimensional examples drawn from constrained optimization, game theory, economics, and solid mechanics.
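    A minimal sketch of the deflation idea follows. It is illustrative only: it uses a smooth residual and a finite-difference Jacobian, whereas the paper works with semismooth residuals and generalized Jacobians, and the shifted deflation operator (1/||x - xbar||^p + sigma) used here is one common choice rather than necessarily the one analyzed in the paper. Deflating a known root lets the Newton iteration find a second root from the same initial guess.

```python
# Illustrative sketch of deflation (assumptions noted in the text above):
# smooth residual, finite-difference Jacobian, shifted deflation operator.
import numpy as np

def deflated_residual(F, x, known_roots, p=2.0, sigma=1.0):
    """Multiply F(x) by a shifted deflation factor for every known root."""
    r = F(x)
    for xbar in known_roots:
        r = (1.0 / np.linalg.norm(x - xbar) ** p + sigma) * r
    return r

def newton(F, x0, known_roots=(), tol=1e-10, max_iter=100, h=1e-7):
    """Plain (undamped) Newton iteration on the deflated residual."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = deflated_residual(F, x, known_roots)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):  # forward-difference Jacobian, column by column
            e = np.zeros(x.size)
            e[j] = h
            J[:, j] = (deflated_residual(F, x + e, known_roots) - r) / h
        x = x - np.linalg.solve(J, r)
    return x

# F has the two roots x = 1 and x = 2; from the same starting point,
# plain Newton finds x = 1, and deflating that root yields x = 2.
F = lambda x: np.array([(x[0] - 1.0) * (x[0] - 2.0)])
first = newton(F, [0.5])
second = newton(F, [0.5], known_roots=[first])
print(first, second)  # approximately [1.] and [2.]
```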