
    How to project onto extended second order cones

    The extended second order cones were introduced by S. Z. Németh and G. Zhang in [S. Z. Németh and G. Zhang. Extended Lorentz cones and variational inequalities on cylinders. J. Optim. Theory Appl., 168(3):756-768, 2016] for solving mixed complementarity problems and variational inequalities on cylinders. R. Sznajder in [R. Sznajder. The Lyapunov rank of extended second order cones. Journal of Global Optimization, 66(3):585-593, 2016] determined the automorphism groups and the Lyapunov (or bilinearity) ranks of these cones. S. Z. Németh and G. Zhang in [S. Z. Németh and G. Zhang. Positive operators of extended Lorentz cones. arXiv:1608.07455v2, 2016] found both necessary conditions and sufficient conditions for a linear operator to be a positive operator of an extended second order cone. This note gives formulas for projecting onto the extended second order cones. In the most general case the formula depends on a piecewise linear equation in one real variable, which is solved by numerical methods.
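
    As a point of reference for the projection formulas in this note, the projection onto the classical (non-extended) second order cone has a well-known closed form. The sketch below implements only that classical formula; it is not the extended-cone projection derived in the paper, which additionally requires solving a piecewise linear equation in one real variable numerically. The function name project_soc is an illustrative choice.

        import numpy as np

        def project_soc(x, t):
            # Euclidean projection of (x, t) onto the classical second order
            # (Lorentz) cone {(x, t) : ||x|| <= t}; the extended second order
            # cones of the paper generalize this set.
            nx = np.linalg.norm(x)
            if nx <= t:                       # (x, t) already lies in the cone
                return x.copy(), float(t)
            if nx <= -t:                      # (x, t) lies in the polar cone
                return np.zeros_like(x), 0.0
            alpha = 0.5 * (nx + t)            # boundary case
            return alpha * x / nx, alpha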

    On the resolution of the generalized nonlinear complementarity problem

    Minimization of a differentiable function subject to box constraints is proposed as a strategy to solve the generalized nonlinear complementarity problem (GNCP) defined on a polyhedral cone. It is not necessary to calculate projections, which complicate and sometimes even disable the implementation of algorithms for solving these kinds of problems. Theoretical results that relate stationary points of the function being minimized to solutions of the GNCP are presented. Perturbations of the GNCP are also considered, and results are obtained on the resolution of GNCPs under very general assumptions on the data. These theoretical results show that local methods for box-constrained optimization applied to the associated problem are efficient tools for solving the GNCP. Numerical experiments are presented that encourage the use of this approach.
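
    A minimal sketch of the general strategy described above, applied to a small standard complementarity problem rather than the GNCP of the paper: a continuously differentiable merit function (here the squared Fischer-Burmeister function, chosen only for illustration) is minimized over a box with an off-the-shelf box-constrained solver. The data M, q and the choice of solver are assumptions, not taken from the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Small linear complementarity problem:
        # find x >= 0 with F(x) = M x + q >= 0 and x^T F(x) = 0.
        M = np.array([[2.0, 1.0], [1.0, 2.0]])
        q = np.array([-1.0, -1.0])

        def F(x):
            return M @ x + q

        def fb_merit(x):
            # Squared Fischer-Burmeister merit function: continuously
            # differentiable, and zero exactly at solutions of the problem.
            a, b = x, F(x)
            phi = np.sqrt(a**2 + b**2) - a - b
            return 0.5 * np.dot(phi, phi)

        # Box-constrained minimization; here the box is simply x >= 0.
        res = minimize(fb_merit, x0=np.ones(2), method="L-BFGS-B",
                       bounds=[(0.0, None)] * 2)
        print(res.x)   # approximately [1/3, 1/3], the solution of this problem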

    Deflation for semismooth equations

    Variational inequalities can in general support distinct solutions. In this paper we study an algorithm for computing distinct solutions of a variational inequality, without varying the initial guess supplied to the solver. The central idea is the combination of a semismooth Newton method with a deflation operator that eliminates known solutions from consideration. Given one root of a semismooth residual, deflation constructs a new problem for which a semismooth Newton method will not converge to the known root, even from the same initial guess. This enables the discovery of other roots. We prove the effectiveness of the deflation technique under the same assumptions that guarantee locally superlinear convergence of a semismooth Newton method. We demonstrate its utility on various finite- and infinite-dimensional examples drawn from constrained optimization, game theory, economics and solid mechanics.
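
    The deflation idea can be illustrated on a scalar residual. The sketch below uses a shifted deflation factor with a plain Newton method and a smooth residual for simplicity, whereas the paper combines deflation with a semismooth Newton method; the example residual and the deflation parameters (power 2, shift 1) are illustrative choices.

        def newton(G, dG, x0, tol=1e-10, maxit=50):
            # Plain Newton iteration on a scalar residual.
            x = x0
            for _ in range(maxit):
                if abs(G(x)) < tol:
                    break
                x = x - G(x) / dG(x)
            return x

        # Residual with two roots, x = +1 and x = -1.
        f  = lambda x: x**2 - 1.0
        df = lambda x: 2.0 * x

        x_first = newton(f, df, x0=2.0)      # converges to x = 1

        # Shifted deflation factor m(x) = 1/(x - r)^2 + 1 removes the known
        # root r from the residual without vanishing far away from r.
        r  = x_first
        m  = lambda x: 1.0 / (x - r)**2 + 1.0
        dm = lambda x: -2.0 / (x - r)**3

        G  = lambda x: m(x) * f(x)                     # deflated residual
        dG = lambda x: dm(x) * f(x) + m(x) * df(x)     # its derivative

        x_second = newton(G, dG, x0=2.0)     # same initial guess, finds x = -1
        print(x_first, x_second)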

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
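
    A minimal sketch of evaluating the forward-backward envelope for a lasso-type composite problem, under the assumption f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1; the data are illustrative, and only the envelope evaluation is shown, not the proximal Newton-CG algorithms of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 10))
        b = rng.standard_normal(20)
        lam = 0.1

        f      = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
        grad_f = lambda x: A.T @ (A @ x - b)
        g      = lambda x: lam * np.sum(np.abs(x))
        # Proximal operator of gamma*g: componentwise soft-thresholding.
        prox_g = lambda y, gamma: np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

        def fbe(x, gamma):
            # Forward-backward envelope FBE_gamma(x): the value of the
            # forward-backward model at its minimizer z; for gamma < 1/L
            # (L the Lipschitz constant of grad_f) its minimizers coincide
            # with those of f + g.
            z = prox_g(x - gamma * grad_f(x), gamma)   # forward-backward step
            return (f(x) + grad_f(x) @ (z - x)
                    + 0.5 / gamma * np.dot(z - x, z - x) + g(z))

        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad_f
        print(fbe(np.zeros(10), gamma=0.9 / L))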

    Optimality conditions for abs-normal NLPs

    Structured nonsmoothness is widely present in practical optimization problems. A particularly attractive class of nonsmooth problems, both from a theoretical and from an algorithmic perspective, are nonsmooth NLPs with equality and inequality constraints in abs-normal form, so-called abs-normal NLPs. In this thesis optimality conditions for this particular class are obtained. To this aim, first the theory of Andreas Griewank and Andrea Walther for unconstrained optimization problems in abs-normal form is extended. In particular, similar necessary and sufficient conditions of first and second order are obtained that are directly based on classical Karush-Kuhn-Tucker (KKT) theory for smooth NLPs. Then, it is shown that the class of abs-normal NLPs is equivalent to the class of Mathematical Programs with Equilibrium Constraints (MPECs). Hence, the regularity assumption LIKQ introduced for the abs-normal NLP turns out to be equivalent to MPEC-LICQ, and stationarity concepts and optimality conditions under these regularity assumptions of linear independence type are equivalent up to technical assumptions. Next, well-established constraint qualifications of Mangasarian-Fromovitz, Abadie and Guignard type for MPECs are used to define corresponding concepts for abs-normal NLPs. It is then shown that kink qualifications and MPEC constraint qualifications of Mangasarian-Fromovitz and Abadie type, respectively, are equivalent. As it remains open whether this also holds for Guignard type kink and constraint qualifications, branch formulations for abs-normal NLPs and MPECs are introduced, for which the equivalence of Abadie’s and Guignard’s constraint qualifications holds for all branch problems. Throughout, a reformulation of inequalities with absolute value slacks is considered; it preserves constraint qualifications of linear independence and Abadie type, but not of Mangasarian-Fromovitz type. For Guignard type this remains an open question, although ACQ and GCQ are preserved when passing over to branch problems. Further, M-stationarity and B-stationarity concepts for abs-normal NLPs are introduced, and corresponding first order optimality conditions are proven using the corresponding concepts for MPECs. Moreover, a reformulation to extend the optimality conditions for abs-normal NLPs to those with additional nonsmooth objective functions is given, and the preservation of regularity assumptions is considered. Using this, it is shown that the unconstrained abs-normal NLP always satisfies constraint qualifications of Abadie and thus Guignard type. Hence, in this special case every local minimizer satisfies the M-stationarity and B-stationarity conditions for abs-normal NLPs.
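
    A small worked example of the kind of structure discussed above, written for the nonsmooth function y = max(x_1, x_2); the splitting of the absolute value via complementary slacks illustrates the correspondence between abs-normal descriptions and MPEC-type constraints, and is shown here only as an illustration, not as the thesis's construction.

        % max(x_1, x_2) in abs-normal form with one switching variable z:
        \begin{align*}
          z   &= x_1 - x_2, &  y &= \tfrac{1}{2}\bigl(x_1 + x_2 + |z|\bigr),\\
          % splitting the absolute value with complementary slacks u, v:
          z   &= u - v, \quad |z| = u + v, &  0 &\le u \;\perp\; v \ge 0 .
        \end{align*}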