22,174 research outputs found

    Support-based lower bounds for the positive semidefinite rank of a nonnegative matrix

    The positive semidefinite rank of a nonnegative (m × n)-matrix S is the minimum number q such that there exist positive semidefinite (q × q)-matrices A_1, ..., A_m, B_1, ..., B_n with S(k, \ell) = tr(A_k^* B_\ell). The most important lower bound technique for nonnegative rank is based solely on the support of the matrix S, i.e., its zero/nonzero pattern. In this paper, we characterize the power of lower bounds on positive semidefinite rank that are based solely on the support.
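
    The definition above is easy to exercise numerically. The following sketch (assuming numpy; helper names are made up for illustration) builds a nonnegative matrix from randomly chosen psd factors A_1, ..., A_m and B_1, ..., B_n, which by construction certifies that its psd rank is at most q.

        import numpy as np

        def random_psd(q, rng):
            # G G^T is positive semidefinite for any real matrix G
            G = rng.standard_normal((q, q))
            return G @ G.T

        def psd_factorization_matrix(As, Bs):
            # S[k, l] = tr(A_k^* B_l); the factors are real symmetric, so A^* = A
            return np.array([[np.trace(A @ B) for B in Bs] for A in As])

        rng = np.random.default_rng(0)
        q, m, n = 3, 4, 5
        As = [random_psd(q, rng) for _ in range(m)]
        Bs = [random_psd(q, rng) for _ in range(n)]
        S = psd_factorization_matrix(As, Bs)
        assert (S >= 0).all()          # tr(A B) >= 0 whenever A and B are psd
        print("S is nonnegative and has psd rank at most", q)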

    Polytopes of Minimum Positive Semidefinite Rank

    The positive semidefinite (psd) rank of a polytope is the smallest k for which the cone of k × k real symmetric psd matrices admits an affine slice that projects onto the polytope. In this paper we show that the psd rank of a polytope is at least the dimension of the polytope plus one, and we characterize those polytopes whose psd rank equals this lower bound. We give several classes of polytopes that achieve the minimum possible psd rank, including a complete characterization in dimensions two and three.
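
    As a concrete illustration of the lower bound "dimension plus one" being attained, the two-dimensional triangle {(x, y) : x >= 0, y >= 0, x + y <= 1} is the projection of an affine slice of the 3 × 3 psd cone: map each point to the diagonal matrix diag(x, y, 1 - x - y). The sketch below (assuming numpy; this example is standard and not taken from the paper) checks the slice-and-project description, so the psd rank of the triangle is at most 3 = dim + 1.

        import numpy as np

        def slice_point(x, y):
            # Affine slice of the 3x3 psd cone: diagonal, trace-one matrices
            return np.diag([x, y, 1.0 - x - y])

        def project(X):
            # Linear projection back to the plane
            return np.array([X[0, 0], X[1, 1]])

        rng = np.random.default_rng(1)
        for _ in range(1000):
            x, y = rng.uniform(0, 1, size=2)
            if x + y <= 1.0:                                      # point of the triangle
                X = slice_point(x, y)
                assert np.all(np.linalg.eigvalsh(X) >= -1e-12)    # X is psd
                assert np.allclose(project(X), [x, y])            # and projects back to (x, y)
        print("triangle = projection of a slice of the 3x3 psd cone, so psd rank <= 3")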

    A Class of Semidefinite Programs with rank-one solutions

    We show that a class of semidefinite programs (SDP) admits a solution that is a positive semidefinite matrix of rank at most r, where r is the rank of the matrix involved in the objective function of the SDP. The optimization problems of this class are semidefinite packing problems, which are the SDP analogs of vector packing problems. Of particular interest is the case in which our result guarantees the existence of a solution of rank one: we show that the computation of this solution actually reduces to a second-order cone program (SOCP). We point out an application in statistics, to the optimal design of experiments.
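
    A minimal numerical sketch of the rank-one case, assuming numpy and cvxpy (with a conic solver such as SCS) are installed; the problem data are made up for illustration. When the objective matrix is C = cc^T, the packing SDP max tr(CX) s.t. tr(A_i X) <= b_i, X psd, should agree with the second-order cone style program max c^T x s.t. x^T A_i x <= b_i, via X = xx^T.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        d, k = 4, 3
        c = rng.standard_normal(d)
        C = np.outer(c, c)                                 # rank-one objective matrix
        As = []
        for _ in range(k):
            G = rng.standard_normal((d, d))
            As.append(G @ G.T + np.eye(d))                 # psd constraint matrices
        b = np.ones(k)

        # Packing SDP: maximize tr(C X) subject to tr(A_i X) <= b_i, X psd
        X = cp.Variable((d, d), PSD=True)
        sdp = cp.Problem(cp.Maximize(cp.trace(C @ X)),
                         [cp.trace(A @ X) <= bi for A, bi in zip(As, b)])
        sdp.solve()

        # Rank-one reformulation (X = x x^T): a second-order cone style program
        x = cp.Variable(d)
        socp = cp.Problem(cp.Maximize(c @ x),
                          [cp.quad_form(x, A) <= bi for A, bi in zip(As, b)])
        socp.solve()

        print(sdp.value, socp.value ** 2)    # the two optima should coincide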

    Regression on fixed-rank positive semidefinite matrices: a Riemannian approach

    The paper addresses the problem of learning a regression model parameterized by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear nature of the search space and on scalability to high-dimensional problems. The mathematical developments rely on the theory of gradient descent algorithms adapted to the Riemannian geometry that underlies the set of fixed-rank positive semidefinite matrices. In contrast with previous contributions in the literature, no restrictions are imposed on the range space of the learned matrix. The resulting algorithms maintain linear complexity in the problem size and enjoy important invariance properties. We apply the proposed algorithms to the problem of learning a distance function parameterized by a positive semidefinite matrix. Good performance is observed on classical benchmarks.
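
    The sketch below is a simplified stand-in for the approach described above, assuming numpy: it fits a rank-r psd matrix W = G G^T in the quadratic regression model y ≈ x^T W x by plain gradient descent on the factor G. The paper itself works with a Riemannian quotient geometry on the set of fixed-rank psd matrices; this flat parameterization only conveys the idea of optimizing over that set without ever projecting onto the psd cone.

        import numpy as np

        rng = np.random.default_rng(0)
        d, r, n = 10, 2, 500

        # Ground-truth rank-r psd matrix and noisy quadratic measurements
        G_true = rng.standard_normal((d, r))
        W_true = G_true @ G_true.T
        X = rng.standard_normal((n, d))
        y = np.einsum("ni,ij,nj->n", X, W_true, X) + 0.01 * rng.standard_normal(n)

        # Gradient descent on the factor G (W = G G^T stays psd and rank <= r by construction)
        G = rng.standard_normal((d, r))
        lr = 1e-4
        for _ in range(5000):
            resid = np.einsum("ni,ij,nj->n", X, G @ G.T, X) - y
            # d/dG of sum_i (x_i^T G G^T x_i - y_i)^2 = 4 * sum_i resid_i * x_i x_i^T G
            grad = 4.0 * (X.T @ (resid[:, None] * X)) @ G
            G -= lr * grad / n

        print("relative error:", np.linalg.norm(G @ G.T - W_true) / np.linalg.norm(W_true))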

    Positive Semidefinite Metric Learning with Boosting

    The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains positive semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on the key observation that any positive semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably with state-of-the-art methods in terms of classification accuracy and running time.
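
    The key observation that any psd matrix is a positive combination of trace-one rank-one matrices can be checked directly from an eigendecomposition, as in this small numpy sketch (illustration of the decomposition only, not the BoostMetric learning loop itself):

        import numpy as np

        rng = np.random.default_rng(0)
        G = rng.standard_normal((5, 5))
        M = G @ G.T                                   # an arbitrary psd matrix

        # Eigendecomposition M = sum_i lam_i * u_i u_i^T with lam_i >= 0
        lam, U = np.linalg.eigh(M)
        atoms = [np.outer(U[:, i], U[:, i]) for i in range(len(lam))]   # trace-one, rank-one
        weights = lam                                                    # nonnegative coefficients

        assert all(abs(np.trace(Z) - 1.0) < 1e-10 for Z in atoms)
        assert np.all(weights >= -1e-10)
        assert np.allclose(sum(w * Z for w, Z in zip(weights, atoms)), M)
        print("M written as a positive combination of", len(atoms), "trace-one rank-one matrices")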

    Some upper and lower bounds on PSD-rank

    Positive semidefinite rank (PSD-rank) is a relatively new quantity with applications to combinatorial optimization and communication complexity. We first study several basic properties of PSD-rank, and then develop new techniques for showing lower bounds on the PSD-rank. All of these bounds are based on viewing a positive semidefinite factorization of a matrix M as a quantum communication protocol. These lower bounds depend on the entries of the matrix and not only on its support (the zero/nonzero pattern), overcoming a limitation of some previous techniques. We compare these new lower bounds with known bounds, and give examples where the new ones are better. As an application we determine the PSD-rank of (approximations of) some common matrices.
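
    The communication-protocol viewpoint starts from a simple rescaling: in a psd factorization M[k, l] = tr(A_k B_l), normalizing each A_k to a density matrix rho_k = A_k / tr(A_k) expresses every entry, up to the row scaling tr(A_k), as the expectation tr(rho_k B_l) of an operator B_l on the quantum state rho_k. The sketch below (assuming numpy) only checks this trivial identity, not the paper's actual lower bounds.

        import numpy as np

        def random_psd(q, rng):
            G = rng.standard_normal((q, q))
            return G @ G.T

        rng = np.random.default_rng(0)
        q, m, n = 3, 4, 4
        As = [random_psd(q, rng) for _ in range(m)]
        Bs = [random_psd(q, rng) for _ in range(n)]
        M = np.array([[np.trace(A @ B) for B in Bs] for A in As])

        for k, A in enumerate(As):
            rho = A / np.trace(A)            # a valid quantum state: psd with unit trace
            for l, B in enumerate(Bs):
                assert np.isclose(M[k, l], np.trace(A) * np.trace(rho @ B))
        print("each entry M[k, l] equals tr(A_k) * tr(rho_k B_l)")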

    Lower bounds on matrix factorization ranks via noncommutative polynomial optimization

    We use techniques from (tracial noncommutative) polynomial optimization to formulate hierarchies of semidefinite programming lower bounds on matrix factorization ranks. In particular, we consider the nonnegative rank, the completely positive rank, and their symmetric analogues: the positive semidefinite rank and the completely positive semidefinite rank. We study the convergence properties of our hierarchies, compare them extensively to known lower bounds, and provide some (numerical) examples.
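
    For reference, the four factorization ranks mentioned above can be written as follows (standard definitions, not quoted from the paper; here S^d_+ denotes the d × d real symmetric psd matrices, with Hermitian psd matrices used in the complex variants, the first two ranks apply to an entrywise nonnegative M, and the last two to a symmetric M admitting such a factorization):

        \begin{aligned}
        \operatorname{rank}_{+}(M)            &= \min\{\, r : M = \textstyle\sum_{\ell=1}^{r} a_\ell b_\ell^{\top},\ a_\ell \in \mathbb{R}^{m}_{\ge 0},\ b_\ell \in \mathbb{R}^{n}_{\ge 0} \,\},\\
        \operatorname{rank}_{\mathrm{psd}}(M) &= \min\{\, d : M_{ij} = \operatorname{tr}(A_i B_j),\ A_i, B_j \in \mathcal{S}^{d}_{+} \,\},\\
        \operatorname{cp\text{-}rank}(M)      &= \min\{\, r : M = \textstyle\sum_{\ell=1}^{r} a_\ell a_\ell^{\top},\ a_\ell \in \mathbb{R}^{n}_{\ge 0} \,\},\\
        \operatorname{cpsd\text{-}rank}(M)    &= \min\{\, d : M_{ij} = \operatorname{tr}(A_i A_j),\ A_i \in \mathcal{S}^{d}_{+} \,\}.
        \end{aligned}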

    The positive semidefinite Grothendieck problem with rank constraint

    Given a positive integer n and a positive semidefinite matrix A = (A_{ij}) of size m x m, the positive semidefinite Grothendieck problem with rank-n constraint (SDP_n) is to maximize \sum_{i=1}^m \sum_{j=1}^m A_{ij} x_i \cdot x_j, where x_1, ..., x_m \in S^{n-1}. In this paper we design a polynomial-time approximation algorithm for SDP_n achieving an approximation ratio of \gamma(n) = \frac{2}{n}(\frac{\Gamma((n+1)/2)}{\Gamma(n/2)})^2 = 1 - \Theta(1/n). We show that under the assumption of the unique games conjecture the achieved approximation ratio is optimal: there is no polynomial-time algorithm which approximates SDP_n with a ratio greater than \gamma(n). We improve the approximation ratio of the best known polynomial-time algorithm for SDP_1 from 2/\pi to 2/(\pi\gamma(m)) = 2/\pi + \Theta(1/m), and we show a tighter approximation ratio for SDP_n when A is the Laplacian matrix of a graph with nonnegative edge weights.
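
    The approximation ratio \gamma(n) is easy to evaluate with Python's standard library (a small check, not part of the paper): \gamma(1) = 2/\pi recovers the classical ratio mentioned for SDP_1, and the values approach 1 as 1 - \Theta(1/n).

        from math import gamma, pi

        def approx_ratio(n):
            # gamma_ratio(n) = (2/n) * (Gamma((n+1)/2) / Gamma(n/2))^2
            return (2.0 / n) * (gamma((n + 1) / 2) / gamma(n / 2)) ** 2

        assert abs(approx_ratio(1) - 2 / pi) < 1e-12   # 2/pi ~ 0.6366, the ratio for SDP_1
        for n in (1, 2, 3, 10, 100):
            print(n, approx_ratio(n))
        # printed values increase toward 1, consistent with 1 - Theta(1/n)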