    An interior point method for solving semidefinite programs using cutting planes and weighted analytic centers

    We investigate solving semidefinite programs (SDPs) with an interior point method called SDP-CUT, which uses weighted analytic centers and cutting-plane constraints. SDP-CUT iteratively refines the feasible region to reach the optimal solution, and it uses Newton's method to compute the weighted analytic center. We investigate different techniques for determining the step size and find that Newton's method with exact line search is generally the best implementation of the algorithm. We also compared our algorithm with SDPT3 and found that SDP-CUT initially reaches a neighborhood of the optimal solution in fewer iterations on all of our test problems, and it reaches optimality in fewer iterations on many of them; however, SDPT3 required fewer iterations on most of the test problems and less time on all of them. Some theoretical properties of the convergence of SDP-CUT are also discussed.
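
    As a rough illustration of the kind of inner step such a method relies on, the sketch below computes a weighted analytic center by Newton's method with exact line search, specialized to scalar linear cuts a_i^T x <= b_i rather than full semidefinite blocks. The function names, the step-size safeguard, and the box example are illustrative assumptions, not the authors' SDP-CUT implementation.

    # Minimal sketch (assumed names and data, not the paper's code): weighted
    # analytic center of {x : A x <= b} via Newton's method with exact line search.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def barrier(x, A, b, w):
        """Weighted log-barrier: -sum_i w_i * log(b_i - a_i^T x)."""
        s = b - A @ x                            # slacks; must stay positive
        return np.inf if np.any(s <= 0) else -np.sum(w * np.log(s))

    def newton_wac(A, b, w, x0, tol=1e-8, max_iter=50):
        x = x0.copy()
        for _ in range(max_iter):
            s = b - A @ x
            g = A.T @ (w / s)                    # gradient of the barrier
            H = A.T @ ((w / s**2)[:, None] * A)  # Hessian of the barrier
            dx = np.linalg.solve(H, -g)          # Newton direction
            Adx = A @ dx
            pos = Adx > 0
            t_max = np.min(s[pos] / Adx[pos]) if np.any(pos) else 1.0
            # Exact line search over the feasible segment, as the abstract recommends.
            t = minimize_scalar(lambda t: barrier(x + t * dx, A, b, w),
                                bounds=(0.0, 0.99 * t_max), method="bounded").x
            x = x + t * dx
            if np.linalg.norm(t * dx) < tol:
                break
        return x

    # Example: the analytic center of the box 0 <= x <= 1 in R^2 with unit weights is (0.5, 0.5).
    A = np.vstack([np.eye(2), -np.eye(2)])
    b = np.array([1.0, 1.0, 0.0, 0.0])
    print(newton_wac(A, b, np.ones(4), x0=np.array([0.2, 0.7])))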

    An Analytic Center Cutting Plane Method to Determine Complete Positivity of a Matrix

    We propose an analytic center cutting plane method to determine whether a matrix is completely positive, returning a cut that separates it from the completely positive cone if it is not. This was stated as an open (computational) problem by Berman, Dür, and Shaked-Monderer [Electronic Journal of Linear Algebra, 2015]. Our method optimizes over the intersection of a ball and the copositive cone, where membership is determined by solving a mixed-integer linear program suggested by Xia, Vera, and Zuluaga [INFORMS Journal on Computing, 2018]. Thus, our algorithm can, more generally, be used to solve any copositive optimization problem, provided one knows the radius of a ball containing an optimal solution. Numerical experiments show that the number of oracle calls (matrix copositivity checks) in our implementation scales well with the matrix size, growing roughly like O(d^2) for d × d matrices. The method is implemented in Julia and is available at https://github.com/rileybadenbroek/CopositiveAnalyticCenter.jl. Comment: 16 pages, 1 figure.
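
    The separation logic behind the method can be illustrated directly: a symmetric matrix A is completely positive exactly when <A, X> >= 0 for every copositive X, so a copositive X with <A, X> < 0 is a cut certifying that A lies outside the completely positive cone. In the sketch below, the copositivity oracle is only a crude simplex grid search standing in for the mixed-integer program of Xia, Vera, and Zuluaga used by the paper and by CopositiveAnalyticCenter.jl; the Horn matrix and the 2x2 example are standard textbook illustrations, not data from the paper.

    # Illustrative sketch only; a grid search can refute copositivity but not prove it.
    import itertools
    import numpy as np

    def min_on_simplex_grid(X, resolution=20):
        """Approximate min of v^T X v over the unit simplex on a regular grid."""
        d = X.shape[0]
        best = np.inf
        for c in itertools.combinations_with_replacement(range(d), resolution):
            v = np.bincount(c, minlength=d) / resolution  # nonnegative, sums to 1
            best = min(best, v @ X @ v)
        return best

    # Horn matrix: a classical copositive matrix; its minimum over the simplex is 0.
    H = np.array([[ 1, -1,  1,  1, -1],
                  [-1,  1, -1,  1,  1],
                  [ 1, -1,  1, -1,  1],
                  [ 1,  1, -1,  1, -1],
                  [-1,  1,  1, -1,  1]], dtype=float)
    print("grid min of v'Hv on the simplex:", round(min_on_simplex_grid(H), 6))

    # Separating cut: A is PSD but has a negative entry, so it is not completely
    # positive; the entrywise-nonnegative (hence copositive) X certifies this.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    print("<A, X> =", np.trace(A @ X))  # negative => A lies outside the completely positive cone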

    A Statistical Learning Theory Approach for Uncertain Linear and Bilinear Matrix Inequalities

    In this paper, we consider the problem of minimizing a linear functional subject to uncertain linear and bilinear matrix inequalities, which depend in a possibly nonlinear way on a vector of uncertain parameters. Motivated by recent results in statistical learning theory, we show that probabilistically guaranteed solutions can be obtained by means of randomized algorithms. In particular, we show that the Vapnik-Chervonenkis dimension (VC-dimension) of the two problems is finite, and we compute upper bounds on it. In turn, these bounds allow us to derive explicitly the sample complexity of these problems. Using these bounds, in the second part of the paper we derive a sequential scheme based on a sequence of optimization and validation steps. The algorithm follows the same lines as recent schemes proposed for similar problems, but improves on them in both complexity and generality. The effectiveness of this approach is shown on a linear model of a robot manipulator subject to uncertain parameters. Comment: 19 pages, 2 figures, accepted for publication in Automatica.
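
    The optimization-then-validation pattern behind such sequential schemes can be sketched on a toy uncertain LMI: draw N scenarios of the uncertainty, solve the sampled problem, then estimate the violation probability on fresh samples. The toy constraint gamma*I - M(delta) >= 0 (positive semidefinite), the uniform uncertainty model, and the sample sizes below are illustrative assumptions; in the paper the sample sizes come from the VC-dimension bounds and the constraints are general uncertain linear and bilinear matrix inequalities.

    # Minimal sketch of "optimize on N samples, validate on fresh samples".
    # For this particular LMI the scenario solution is just the largest eigenvalue
    # over the drawn scenarios, so no SDP solver is needed.
    import numpy as np

    rng = np.random.default_rng(0)

    def M(delta):
        """Uncertain symmetric matrix: a fixed part plus a delta-scaled perturbation."""
        base = np.array([[2.0, 0.5], [0.5, 1.0]])
        pert = np.array([[1.0, 0.2], [0.2, 0.3]])
        return base + delta * pert

    def lambda_max(S):
        return np.linalg.eigvalsh(S)[-1]

    # Optimization step: smallest gamma feasible for all N sampled scenarios.
    N = 200
    gamma = max(lambda_max(M(d)) for d in rng.uniform(-1.0, 1.0, size=N))

    # Validation step: empirical violation probability on fresh scenarios.
    fresh = rng.uniform(-1.0, 1.0, size=2000)
    violation_rate = np.mean([lambda_max(M(d)) > gamma for d in fresh])
    print(f"gamma = {gamma:.4f}, empirical violation rate = {violation_rate:.4f}")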

    A Scalable Algorithm For Sparse Portfolio Selection

    The sparse portfolio selection problem is one of the most famous and frequently studied problems in the optimization and financial economics literatures. In a universe of risky assets, the goal is to construct a portfolio with maximal expected return and minimal variance, subject to an upper bound on the number of positions, linear inequality constraints, and minimum investment constraints. Existing certifiably optimal approaches to this problem do not converge within a practical amount of time at real-world problem sizes with more than 400 securities. In this paper, we propose a more scalable approach. By imposing a ridge regularization term, we reformulate the problem as a convex binary optimization problem, which is solvable via an efficient outer-approximation procedure. We propose various techniques for improving the performance of the procedure, including a heuristic that supplies high-quality warm starts, a preprocessing technique for decreasing the gap at the root node, and an analytic technique for strengthening our cuts. We also study the problem's Boolean relaxation, establish that it is second-order-cone representable, and supply a sufficient condition for its tightness. In numerical experiments, we establish that the outer-approximation procedure gives rise to dramatic speedups for sparse portfolio selection problems. Comment: Submitted to INFORMS Journal on Computing.
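
    To make the formulation concrete, the sketch below brute-forces a tiny cardinality-constrained mean-variance instance by enumerating every support of size k and solving the equality-constrained QP on each support through its KKT system. The synthetic data, the risk-return trade-off parameter, and the omission of minimum-investment and other linear constraints are simplifying assumptions; the paper's ridge-regularized outer-approximation exists precisely because this enumeration is hopeless at realistic sizes (hundreds of securities).

    # Brute-force baseline on synthetic data; illustrative only, not the paper's method.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    d, k, lam = 8, 3, 1.0                    # universe size, cardinality bound, risk-return trade-off
    mu = rng.uniform(0.02, 0.12, size=d)     # expected returns
    G = rng.normal(size=(d, d))
    Sigma = G @ G.T / d + 0.05 * np.eye(d)   # positive-definite covariance

    def solve_on_support(S):
        """Minimize 0.5*x'Sigma x - lam*mu'x s.t. sum(x) = 1, with x supported on S
        (short positions allowed in this simplified version)."""
        SigS, muS = Sigma[np.ix_(S, S)], mu[list(S)]
        m = len(S)
        KKT = np.block([[SigS, np.ones((m, 1))], [np.ones((1, m)), np.zeros((1, 1))]])
        sol = np.linalg.solve(KKT, np.concatenate([lam * muS, [1.0]]))
        xS = sol[:m]
        return 0.5 * xS @ SigS @ xS - lam * muS @ xS, xS

    # Enumerate all supports of size k and keep the best objective value.
    best = min((solve_on_support(S) + (S,) for S in combinations(range(d), k)),
               key=lambda t: t[0])
    obj, weights, support = best
    print("support:", support, "weights:", np.round(weights, 3), "objective:", round(obj, 4))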