57 research outputs found

    Interior Point Methods for Massive Support Vector Machines

    We investigate the use of interior point methods for solving quadratic programming problems with a small number of linear constraints, where the quadratic term consists of a low-rank update to a positive semi-definite matrix. Several formulations of the support vector machine fit into this category. An interesting feature of these particular problems is the volume of data, which can lead to quadratic programs with between 10 and 100 million variables and a dense Q matrix. We use OOQP, an object-oriented interior point code, to solve these problems because it allows us to easily tailor the required linear algebra to the application. Our linear algebra implementation uses a proximal point modification to the underlying algorithm, and exploits the Sherman-Morrison-Woodbury formula and the Schur complement to facilitate efficient linear system solution. Since we target massive problems, the data is stored out-of-core and we overlap computation and I/O to reduce overhead. Results are reported for several linear support vector machine formulations, demonstrating the reliability and scalability of the method.
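
    As background for the linear-algebra strategy sketched in this abstract, the following is a minimal, hypothetical Python/NumPy illustration (not the authors' OOQP code) of the Sherman-Morrison-Woodbury formula applied to a system whose matrix is a positive diagonal plus a low-rank term, (D + V V^T) x = b. Only a small r-by-r system is factored, which is the kind of structure that makes such quadratic programs tractable.

        import numpy as np

        def smw_solve(d, V, b):
            """Solve (D + V V^T) x = b, where D = diag(d) with d > 0 and V is n-by-r.

            Sherman-Morrison-Woodbury:
              (D + V V^T)^{-1} = D^{-1} - D^{-1} V (I + V^T D^{-1} V)^{-1} V^T D^{-1},
            so only an r-by-r system is factored (O(n r^2) work instead of O(n^3))."""
            Dinv_b = b / d                                 # D^{-1} b
            Dinv_V = V / d[:, None]                        # D^{-1} V
            small = np.eye(V.shape[1]) + V.T @ Dinv_V      # I + V^T D^{-1} V, r-by-r
            return Dinv_b - Dinv_V @ np.linalg.solve(small, V.T @ Dinv_b)

        # Sanity check on a small random instance.
        rng = np.random.default_rng(0)
        n, r = 2000, 5
        d = rng.uniform(1.0, 2.0, size=n)
        V = rng.standard_normal((n, r))
        b = rng.standard_normal(n)
        x = smw_solve(d, V, b)
        assert np.allclose((np.diag(d) + V @ V.T) @ x, b)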

    Predictor-corrector interior-point algorithm for sufficient linear complementarity problems based on a new type of algebraic equivalent transformation technique

    We propose a new predictor-corrector (PC) interior-point algorithm (IPA) for solving linear complementarity problems (LCPs) with P_*(κ)-matrices. The introduced IPA uses a new type of algebraic equivalent transformation (AET) on the centering equations of the system defining the central path. The new technique was introduced by Darvay et al. [21] for linear optimization. The search direction discussed in this paper can be derived from a positive-asymptotic kernel function using the function φ(t) = t² in the new type of AET. We prove that the IPA has O((1 + 4κ)√n log((3nμ⁰)/ε)) iteration complexity, where κ is an upper bound of the handicap of the input matrix. To the best of our knowledge, this is the first PC IPA for P_*(κ)-LCPs which is based on this search direction.
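
    For orientation, Darvay's algebraic equivalent transformation (described here in its generic form; the details of the paper above may differ) rewrites the centering equation of the central path before Newton's method is applied. A LaTeX sketch for the LCP case, assuming the standard central-path system:

        % LCP central path, with componentwise products and x, s > 0:
        %   s = Mx + q, \qquad x s = \mu e.
        % Darvay-type AET: apply \varphi to both sides of the scaled centering equation
        \varphi\!\left(\frac{x s}{\mu}\right) = \varphi(e),
        % and linearize. With \varphi(t) = t^2 the Newton equation for the centering part reads
        2\,\frac{x s}{\mu}\cdot\frac{s\,\Delta x + x\,\Delta s}{\mu}
          = e - \left(\frac{x s}{\mu}\right)^{2},
        % which yields a search direction different from the classical one obtained from x s = \mu e directly.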

    On Polynomial-time Path-following Interior-point Methods with Local Superlinear Convergence

    Interior-point methods provide one of the most popular ways of solving convex optimization problems. Two advantages of modern interior-point methods over other approaches are: (1) robust global convergence, and (2) the ability to obtain high accuracy solutions in theory (and in practice, if the algorithms are properly implemented, and as long as numerical linear system solvers continue to provide high accuracy solutions) for well-posed problem instances. This second ability is typically demonstrated by asymptotic superlinear convergence properties. In this thesis, we study superlinear convergence properties of interior-point methods with proven polynomial iteration complexity. Our focus is on linear programming and semidefinite programming special cases. We provide a survey on polynomial iteration complexity interior-point methods which also achieve asymptotic superlinear convergence. We analyze the elements of superlinear convergence proofs for a dual interior-point algorithm of Nesterov and Tunçel and a primal-dual interior-point algorithm of Mizuno, Todd and Ye. We present the results of our computational experiments which observe and track superlinear convergence for a variant of Nesterov and Tunçel's algorithm.
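
    As a side note on how computational experiments like those mentioned above typically detect superlinear convergence, the following hypothetical Python snippet (not taken from the thesis) examines a sequence of complementarity gaps mu[k] produced by an interior-point run: superlinear convergence appears as ratios mu[k+1]/mu[k] tending to zero, and a local order estimate above 1.

        import math

        def convergence_diagnostics(mu):
            """Report the linear ratio mu[k+1]/mu[k] and a rough local order estimate
            log(mu[k+1]) / log(mu[k]) (meaningful once mu[k] << 1; an estimate above 1
            indicates superlinear convergence, around 2 indicates quadratic)."""
            for k in range(len(mu) - 1):
                ratio = mu[k + 1] / mu[k]
                order = math.log(mu[k + 1]) / math.log(mu[k]) if mu[k] < 1 else float("nan")
                print(f"k={k:2d}  mu={mu[k]:.3e}  ratio={ratio:.3e}  order~{order:.2f}")

        # Synthetic gap sequence with a quadratically convergent tail.
        convergence_diagnostics([1e-1, 1e-2, 1e-4, 1e-8, 1e-16])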

    On the Nesterov-Todd Direction in Semidefinite Programming

