    An Inexact Augmented Lagrangian Method for Second-order Cone Programming with Applications

    In this paper, we adopt the augmented Lagrangian method (ALM) to solve convex quadratic second-order cone programming problems (SOCPs). Fruitful results on the efficiency of the ALM have been established in the literature. Recently, it has been shown in [Cui, Sun, and Toh, {\em Math. Program.}, 178 (2019), pp. 381--415] that if the quadratic growth condition holds at an optimal solution of the dual problem, then the KKT residual converges to zero R-superlinearly when the ALM is applied to the primal problem. Moreover, Cui, Ding, and Zhao [{\em SIAM J. Optim.}, 27 (2017), pp. 2332--2355] provided sufficient conditions for the quadratic growth condition to hold under the metric subregularity and bounded linear regularity conditions for composite matrix optimization problems involving spectral functions. Here, we adopt these recent ideas to analyze the convergence properties of the ALM when applied to SOCPs; to the best of our knowledge, no similar work has been done for SOCPs so far. We first provide sufficient conditions ensuring the quadratic growth condition for SOCPs. With these theoretical guarantees, we then design an SOCP solver and apply it to various classes of SOCPs, such as minimal enclosing ball problems, classical trust-region subproblems, square-root Lasso problems, and DIMACS Challenge problems. Numerical results show that the proposed ALM-based solver is efficient and robust compared to existing highly developed solvers such as Mosek and SDPT3. (Comment: 25 pages, 0 figures.)
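
    The subproblems of such an ALM solver repeatedly require the Euclidean projection onto the second-order cone, which admits a well-known closed form. The sketch below illustrates that projection only; the function name and the split of a point into a scalar part t and a vector part x are our illustrative choices, not code from the paper.

```python
import numpy as np

def proj_soc(t, x):
    """Euclidean projection of (t, x) onto the second-order cone
    K = {(t, x) : ||x||_2 <= t}, using the standard closed form."""
    nx = np.linalg.norm(x)
    if nx <= t:                        # already inside the cone
        return t, x
    if nx <= -t:                       # inside the polar cone: project to the apex
        return 0.0, np.zeros_like(x)
    alpha = 0.5 * (t + nx)             # remaining case: project to the boundary
    return alpha, (alpha / nx) * x
```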

    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. These conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance, and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization and applications in polynomial optimization. It is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
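
    As a concrete instance of the projections surveyed in the chapter, the sketch below uses Dykstra's algorithm to project a point onto the intersection of a convex cone and an affine subspace, with the nonnegative orthant standing in for the cone; all names and the choice of Dykstra's method are illustrative assumptions, not material from the chapter.

```python
import numpy as np

def proj_affine(x, A, b):
    # Projection onto {x : A x = b}; assumes A has full row rank.
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def proj_cone(x):
    # Stand-in conic projection: the nonnegative orthant.
    return np.maximum(x, 0.0)

def dykstra(x0, A, b, iters=1000):
    """Dykstra's algorithm: projects x0 onto the intersection of the
    cone and the affine subspace by alternating corrected projections."""
    x = x0.copy()
    p = np.zeros_like(x0)
    q = np.zeros_like(x0)
    for _ in range(iters):
        y = proj_cone(x + p)           # cone step with correction p
        p = x + p - y
        x = proj_affine(y + q, A, b)   # affine step with correction q
        q = y + q - x
    return x
```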

    Development of a nonlinear equations solver with superlinear convergence at regular singularities

    In this thesis we present a new type of line search for Newton's method, based on range-space interpolation as suggested by Wedin et al. [LW84]. The resulting stabilized Newton algorithm is shown, theoretically and practically, to be efficient in the case of nonsingular roots. Moreover, it is observed to maintain a superlinear rate of convergence at simple singularities, whereas Newton's method without a line search is known to converge only linearly from almost all points near a singular root. In view of applications to complementarity problems, we also consider systems whose Jacobian is not differentiable but only semismooth. Again, our stabilized and accelerated Newton method achieves superlinear convergence at simple singularities.
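
    The interpolation-based line search of the thesis is specific to [LW84]; as a generic stand-in, the sketch below damps Newton's method by backtracking on the residual norm, which conveys the overall shape of such a stabilized iteration (all names are ours, and the actual thesis algorithm differs).

```python
import numpy as np

def newton_linesearch(F, J, x, tol=1e-10, max_iter=100):
    """Damped Newton iteration for F(x) = 0: the full step is shortened
    by backtracking until the residual norm decreases sufficiently."""
    for _ in range(max_iter):
        f = F(x)
        nf = np.linalg.norm(f)
        if nf < tol:
            break
        d = np.linalg.solve(J(x), -f)          # Newton direction
        s = 1.0
        while np.linalg.norm(F(x + s * d)) > (1.0 - 1e-4 * s) * nf:
            s *= 0.5                           # backtrack
            if s < 1e-12:
                break
        x = x + s * d
    return x
```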

    A Smoothing Newton-BICGStab Method for Least Squares Matrix Nuclear Norm Problems

    Master's thesis (Master of Science).

    A trust region-type normal map-based semismooth Newton method for nonsmooth nonconvex composite optimization

    We propose a novel trust region method for solving a class of nonsmooth and nonconvex composite-type optimization problems. The approach embeds inexact semismooth Newton steps for finding zeros of a normal map-based stationarity measure for the problem in a trust region framework. Based on a new merit function and acceptance mechanism, global convergence and transition to fast local q-superlinear convergence are established under standard conditions. In addition, we verify that the proposed trust region globalization is compatible with the Kurdyka-Łojasiewicz (KL) inequality, yielding finer convergence results. We further derive new normal map-based representations of the associated second-order optimality conditions that have direct connections to the local assumptions required for fast convergence. Finally, we study the behavior of our algorithm when the Hessian matrix of the smooth part of the objective function is approximated by BFGS updates. We successfully link the KL theory, properties of the BFGS approximations, and a Dennis-Moré-type condition to show superlinear convergence of the quasi-Newton version of our method. Numerical experiments on sparse logistic regression and image compression illustrate the efficiency of the proposed algorithm. (Comment: 56 pages.)
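
    For orientation, a normal map-based stationarity measure of the kind the paper builds on can be written in a few lines for the model problem min f(x) + ||x||_1; the sketch below is our illustration of that construction with an l1 term, not the paper's implementation.

```python
import numpy as np

def prox_l1(z, lam):
    # Proximal mapping of lam * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def normal_map_residual(z, grad_f, lam):
    """Normal map F(z) = grad_f(prox(z)) + (z - prox(z)) / lam for
    min f(x) + ||x||_1; zeros of F correspond to stationary points."""
    x = prox_l1(z, lam)
    return grad_f(x) + (z - x) / lam
```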

    A Generalized Newton Method for Subgradient Systems

    This paper proposes and develops a new Newton-type algorithm to solve subdifferential inclusions defined by subgradients of extended-real-valued prox-regular functions. The proposed algorithm is formulated in terms of the second-order subdifferential of such functions, which enjoys extensive calculus rules and can be efficiently computed for broad classes of extended-real-valued functions. Based on this and on metric regularity and subregularity properties of subgradient mappings, we establish verifiable conditions ensuring well-posedness of the proposed algorithm and its local superlinear convergence. The obtained results are also new for the class of equations defined by continuously differentiable functions with Lipschitzian derivatives ($\mathcal{C}^{1,1}$ functions), which is the underlying case of our consideration. The developed algorithm for prox-regular functions is formulated in terms of proximal mappings related to Moreau envelopes. Besides numerous illustrative examples and comparisons with known algorithms for $\mathcal{C}^{1,1}$ functions and generalized equations, the paper presents applications of the proposed algorithm to the practically important class of Lasso problems arising in statistics and machine learning. (Comment: 35 pages.)
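
    Since the algorithm is phrased via proximal mappings and Moreau envelopes, a small numerical illustration may help; the sketch below computes both objects for the l1 norm underlying Lasso problems (function names are our own).

```python
import numpy as np

def prox_l1(z, lam):
    # Proximal mapping of lam * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def moreau_envelope_l1(z, lam):
    """Moreau envelope of ||.||_1: a C^{1,1} function whose gradient
    is (z - prox(z)) / lam and which shares the original minimizers."""
    x = prox_l1(z, lam)
    return np.sum(np.abs(x)) + np.linalg.norm(z - x) ** 2 / (2.0 * lam)
```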