
    SDPNAL+: A Matlab software for semidefinite programming with bound constraints (version 1.0)

    SDPNAL+ is a Matlab software package that implements an augmented Lagrangian based method for solving large-scale semidefinite programming problems with bound constraints. The implementation was initially based on a majorized semismooth Newton-CG augmented Lagrangian method; here we redesign it within an inexact symmetric Gauss-Seidel based semi-proximal ADMM/ALM (alternating direction method of multipliers / augmented Lagrangian method) framework, in order to derive simpler stopping conditions and to close the gap between the practical implementation and the theoretical algorithm. The basic code is written in Matlab, but some subroutines are written in C and incorporated via Mex files. We also design a convenient interface for users to input their SDP models into the solver. Numerous problems arising from combinatorial optimization and binary integer quadratic programming have been tested to evaluate the performance of the solver. Extensive numerical experiments conducted in [Yang, Sun, and Toh, Mathematical Programming Computation, 7 (2015), pp. 331--366] show that the proposed method is quite efficient and robust: it solves 98.9% of the 745 test instances of SDP problems arising from various applications to an accuracy of $10^{-6}$ in the relative KKT residual.
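    For reference, the problem class targeted by the solver can be sketched in the following standard form (the notation here is ours; the precise formulation, including the handling of the bound set, follows Yang, Sun, and Toh):

        \[
        \min_{X \in \mathbb{S}^n} \; \langle C, X \rangle
        \quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad X \in \mathbb{S}^n_+, \quad L \le X \le U,
        \]

    where $\mathcal{A}$ is a linear map, $\mathbb{S}^n_+$ is the cone of $n \times n$ positive semidefinite matrices, and the bounds $L \le X \le U$ hold entrywise. The relative KKT residual mentioned above measures how nearly a computed primal-dual pair satisfies the optimality conditions of this problem.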

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of the field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control, and dynamic programming. Recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control, and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in the area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different fields. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems.
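    The survey itself contains no code, but as a minimal illustration of the kind of convex relaxation it discusses, the sketch below uses the CVXPY modeling package (our choice of tool, not the authors') to form the Shor semidefinite relaxation of a small nonconvex quadratic problem, replacing the rank-one matrix $xx^{\top}$ with a positive semidefinite variable:

        # A minimal sketch, assuming CVXPY with its bundled SCS solver.
        # Shor SDP relaxation of the nonconvex problem
        #     min x^T Q x   s.t.   x_i^2 = 1,  i = 1..n.
        import numpy as np
        import cvxpy as cp

        n = 4
        rng = np.random.default_rng(0)
        Q = rng.standard_normal((n, n))
        Q = (Q + Q.T) / 2                     # symmetric cost matrix

        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0,                # X replaces x x^T, so X must be PSD
                       cp.diag(X) == 1]       # relaxes x_i^2 = 1
        prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
        prob.solve()
        print("SDP lower bound:", prob.value)

    The optimal value of the relaxation is a lower bound on the nonconvex optimum; tightening such relaxations hierarchically is the subject of the results surveyed in the paper.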

    Fast algorithms for large scale generalized distance weighted discrimination

    High-dimension, low-sample-size statistical analysis is important in a wide range of applications. In such situations, the highly appealing discrimination method, the support vector machine, can be improved to alleviate data piling at the margin. This leads naturally to the development of distance weighted discrimination (DWD), which can be modeled as a second-order cone programming problem and solved by interior-point methods when the scale (in sample size and feature dimension) of the data is moderate. Here, we design a scalable and robust algorithm for solving large-scale generalized DWD problems. Numerical experiments on real data sets from the UCI repository demonstrate that our algorithm is highly efficient in solving large-scale problems, and is sometimes even more efficient than the highly optimized LIBLINEAR and LIBSVM solvers on the corresponding SVM problems.
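    For orientation, the generalized DWD model can be sketched as follows (the notation is an assumption based on the standard formulation in the DWD literature; the exponent $q \ge 1$ is the generalization, with $q = 1$ recovering the original DWD):

        \[
        \min_{w,\,\beta,\,\xi} \;\; \frac{1}{n}\sum_{i=1}^{n} \frac{1}{r_i^{q}} + C \sum_{i=1}^{n} \xi_i
        \quad \text{s.t.} \quad r_i = y_i \bigl(x_i^{\top} w + \beta\bigr) + \xi_i > 0, \quad \xi_i \ge 0, \quad \|w\| \le 1.
        \]

    For $q = 1$, each term $1/r_i$ is second-order cone representable, which is what makes the interior-point approach viable at moderate scale and motivates the scalable algorithm developed here for larger problems.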

    Decomposition in conic optimization with partially separable structure

    Decomposition techniques for linear programming are difficult to extend to conic optimization problems with general non-polyhedral convex cones, because the conic inequalities introduce an additional nonlinear coupling between the variables. However, in many applications the convex cones have a partially separable structure that allows them to be characterized in terms of simpler lower-dimensional cones. The most important example is sparse semidefinite programming with a chordal sparsity pattern, where partial separability derives from the clique decomposition theorems that characterize positive semidefinite and positive-semidefinite-completable matrices with chordal sparsity patterns. The paper describes a decomposition method that exploits partial separability in conic linear optimization. The method is based on Spingarn's method for equality constrained convex optimization, combined with a fast interior-point method for evaluating proximal operators.
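    The clique decomposition results referenced above can be stated as follows (a standard statement from the chordal sparsity literature; the notation is ours). Let $E$ be a chordal sparsity pattern with maximal cliques $\gamma_1, \dots, \gamma_\ell$, and let $P_{\gamma_k}$ be the 0-1 matrix selecting the rows and columns indexed by $\gamma_k$. A symmetric matrix $X$ with pattern $E$ is positive semidefinite if and only if

        \[
        X = \sum_{k=1}^{\ell} P_{\gamma_k}^{\top} X_k P_{\gamma_k}, \qquad X_k \succeq 0,
        \]

    and, dually, $X$ admits a positive semidefinite completion if and only if every clique submatrix $P_{\gamma_k} X P_{\gamma_k}^{\top}$ is positive semidefinite. This is the mechanism by which one large semidefinite cone is replaced by a coupled collection of smaller ones.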