
    Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

    It is well-known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure, and that therein lies the computational bottleneck for SOS programs: the SDPs generated by this procedure are large and costly to solve when the polynomials involved have many variables and high degree. In this paper, we review SOS optimization techniques and present two new methods for improving their computational efficiency. The first method leverages the sparsity of the underlying SDP to obtain computational speed-ups; further improvements can be obtained if the coefficients of the polynomials that describe the problem have a particular sparsity pattern, known as chordal sparsity. The second method bypasses semidefinite programming altogether and relies instead on solving a sequence of more tractable convex programs, namely linear and second-order cone programs. This raises the question of how well the cone of SOS polynomials can be approximated by second-order representable cones. In the last part of the paper, we present some recent negative results related to this question.
    Comment: Tutorial for CDC 2017
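
    To make the SOS-to-SDP reduction concrete, the following is a minimal sketch (not from the paper) that certifies the univariate polynomial p(x) = x^4 + 2x^2 + 1 as a sum of squares by finding a positive semidefinite Gram matrix Q with p(x) = z(x)^T Q z(x), where z(x) = [1, x, x^2]. It assumes the CVXPY library with an SDP-capable solver installed; the example polynomial and variable names are illustrative only.

```python
import cvxpy as cp
import numpy as np

# Gram-matrix SDP certifying that p(x) = x^4 + 2x^2 + 1 is a sum of squares.
# With monomial basis z(x) = [1, x, x^2], p is SOS iff p = z^T Q z for some Q >= 0.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                       # Gram matrix must be positive semidefinite
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 0,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 2,   # coefficient of x^2
    2 * Q[1, 2] == 0,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)               # 'optimal' -> an SOS certificate exists
print(np.round(Q.value, 4))
```

    One feasible Gram matrix here corresponds to the decomposition p(x) = (x^2 + 1)^2. In general, the size of Q grows combinatorially with the number of variables and the degree, which is precisely the bottleneck that the sparsity-exploiting and LP/SOCP-based methods discussed above aim to mitigate.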

    Positive Semidefinite Metric Learning with Boosting

    The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is ensuring that the Mahalanobis matrix remains positive semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but it does not scale well. BoostMetric is instead based on the key observation that any positive semidefinite matrix can be decomposed into a nonnegative linear combination of trace-one rank-one matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to state-of-the-art methods in terms of classification accuracy and running time.
    Comment: 11 pages, Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS 2009), Vancouver, Canada
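
    The trace-one rank-one decomposition lends itself to an AdaBoost-style loop. Below is a simplified, hypothetical NumPy sketch in the spirit of BoostMetric, not the paper's actual algorithm: a fixed step size replaces its totally-corrective weight updates. Each iteration extracts a rank-one weak learner from the top eigenvector of a weighted sum of triplet constraint matrices, then reweights the triplets by an exponential loss. All names (boost_metric, eta, n_iters) are illustrative.

```python
import numpy as np

def boost_metric(triplets, dim, n_iters=50, eta=0.5):
    """Learn M = sum_t w_t * u_t u_t^T from triplets (a, b, c),
    where a should be closer to b than to c under the metric M.
    The margin of triplet i under M is <A_i, M>, with
    A_i = (a-c)(a-c)^T - (a-b)(a-b)^T."""
    A = []
    for a, b, c in triplets:
        ac, ab = a - c, a - b
        A.append(np.outer(ac, ac) - np.outer(ab, ab))
    A = np.array(A)                          # shape (n_triplets, dim, dim)
    M = np.zeros((dim, dim))
    weights = np.ones(len(A)) / len(A)       # distribution over triplets
    for _ in range(n_iters):
        # Weak learner: rank-one Z = u u^T maximizing sum_i weights_i <A_i, Z>,
        # i.e. the top eigenvector of the weighted constraint matrix.
        H = np.tensordot(weights, A, axes=1)
        _, eigvecs = np.linalg.eigh(H)
        u = eigvecs[:, -1]
        Z = np.outer(u, u)                   # trace-one, rank-one base matrix
        margins = np.einsum('ijk,jk->i', A, Z)
        M += eta * Z                         # fixed step size for simplicity
        # Exponential-loss style reweighting: focus on violated triplets.
        weights *= np.exp(-eta * margins)
        weights /= weights.sum()
    return M
```

    Because M is built as a nonnegative combination of rank-one positive semidefinite matrices, it is positive semidefinite by construction, which is how this family of methods avoids the expensive PSD-cone projections or full semidefinite programming that a direct formulation would require.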