
    Approximation Limits of Linear Programs (Beyond Hierarchies)

    We develop a framework for proving approximation limits of polynomial-size linear programs from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that are applicable to any linear program, as opposed to only programs generated by hierarchies. Using our framework, we prove that $O(n^{1/2-\epsilon})$-approximations for CLIQUE require linear programs of size $2^{n^{\Omega(\epsilon)}}$. (This lower bound applies to linear programs using a certain encoding of CLIQUE as a linear optimization problem.) Moreover, we establish a similar result for approximations of semidefinite programs by linear programs. Our main ingredient is a quantitative improvement of Razborov's rectangle corruption lemma for the high-error regime, which gives strong lower bounds on the nonnegative rank of certain perturbations of the unique disjointness matrix.
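    For context, the central quantity in this framework is the nonnegative rank of a matrix, and the classical bridge (due to Yannakakis) between nonnegative rank and linear-program size. The following is standard textbook background stated only for orientation, not material reproduced from the paper:

```latex
% Standard background (not from the paper): nonnegative rank and its link to
% linear-program size via Yannakakis' factorization theorem.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
The nonnegative rank of an entrywise-nonnegative matrix
$M \in \mathbb{R}_{\ge 0}^{m \times n}$ is
\[
  \operatorname{rank}_+(M) = \min\Bigl\{\, r : M = \sum_{i=1}^{r} u_i v_i^{\top},\;
  u_i \in \mathbb{R}_{\ge 0}^{m},\; v_i \in \mathbb{R}_{\ge 0}^{n} \,\Bigr\}.
\]
% Yannakakis: the smallest number of inequalities in any extended formulation of a
% polytope $P$ equals $\operatorname{rank}_+$ of the slack matrix of $P$, so lower
% bounds on nonnegative rank translate into size lower bounds for every LP in the
% corresponding encoding.
\end{document}
```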

    Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints

    We study online prediction where the regret of the algorithm is measured against a benchmark defined via evolving constraints. This framework captures online prediction on graphs, as well as other prediction problems with combinatorial structure. A key aspect here is that finding the optimal benchmark predictor (even in hindsight, given all the data) might be computationally hard due to the combinatorial nature of the constraints. Despite this, we provide polynomial-time \emph{prediction} algorithms that achieve low regret against combinatorial benchmark sets. We do so by building improper learning algorithms based on two ideas that work together. The first is to alleviate part of the computational burden through random playout, and the second is to employ Lasserre semidefinite hierarchies to approximate the resulting integer program. Interestingly, for our prediction algorithms, we only need to compute the values of the semidefinite programs and not the rounded solutions. However, the integrality gap of the Lasserre hierarchy \emph{does} enter the generic regret bound in terms of the Rademacher complexity of the benchmark set. This establishes a trade-off between the computation time and the regret bound of the algorithm.
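    As a small illustration of the point that only the optimal value of a semidefinite relaxation is needed (and no rounding step), here is a hedged sketch that computes the value of a basic level-1 SDP relaxation for a Max-Cut-style benchmark. The relaxation and the toy instance are generic illustrations, not the paper's specific construction:

```python
# Minimal sketch (assumption: cvxpy and its default SDP solver are installed).
# It computes only the relaxation *value*, which is all the prediction
# algorithm described above is said to require.
import numpy as np
import cvxpy as cp

def sdp_relaxation_value(L: np.ndarray) -> float:
    """Value of the standard SDP relaxation: max 1/4 <L, X>, X PSD, diag(X) = 1."""
    n = L.shape[0]
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)), [cp.diag(X) == 1])
    prob.solve()
    return prob.value

# Toy instance: the Laplacian of a 4-cycle (max cut = 4; the SDP value is also 4).
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
print(sdp_relaxation_value(L))  # no rounded solution is ever produced
```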

    Approximating the Little Grothendieck Problem over the Orthogonal and Unitary Groups

    The little Grothendieck problem consists of maximizing $\sum_{ij} C_{ij} x_i x_j$ over binary variables $x_i \in \{\pm 1\}$, where $C$ is a positive semidefinite matrix. In this paper we focus on a natural generalization of this problem, the little Grothendieck problem over the orthogonal group. Given $C$ a $dn \times dn$ positive semidefinite matrix, the objective is to maximize $\sum_{ij} \mathrm{Tr}(C_{ij}^T O_i O_j^T)$ restricting $O_i$ to take values in the group of orthogonal matrices, where $C_{ij}$ denotes the $(ij)$-th $d \times d$ block of $C$. We propose an approximation algorithm, which we refer to as Orthogonal-Cut, to solve this problem and show a constant approximation ratio. Our method is based on semidefinite programming. For a given $d \geq 1$, we show a constant approximation ratio of $\alpha_{R}(d)^2$, where $\alpha_{R}(d)$ is the expected average singular value of a $d \times d$ matrix with random Gaussian $N(0, 1/d)$ i.i.d. entries. For $d = 1$ we recover the known $\alpha_{R}(1)^2 = 2/\pi$ approximation guarantee for the classical little Grothendieck problem. Our algorithm and analysis naturally extend to the complex-valued case, also providing a constant approximation ratio for the analogous problem over the unitary group. Orthogonal-Cut also serves as an approximation algorithm for several applications, including the Procrustes problem, where it improves over the best previously known approximation ratio of $\frac{1}{2\sqrt{2}}$. The little Grothendieck problem falls under the class of problems approximated by a recent algorithm proposed in the context of the non-commutative Grothendieck inequality. Nonetheless, our approach is simpler and provides a more efficient algorithm with better approximation ratios and matching integrality gaps. Finally, we also provide an improved approximation algorithm for the more general little Grothendieck problem over the orthogonal (or unitary) group with rank constraints.
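    The following is a hedged sketch of the kind of SDP-plus-rounding procedure the abstract describes: solve the natural semidefinite relaxation whose diagonal $d \times d$ blocks are identities, then round with a shared Gaussian and project each block onto the orthogonal group via its polar factor. The specific rounding details below (eigendecomposition-based factorization, one shared Gaussian matrix) are assumptions made for illustration, not necessarily the paper's exact Orthogonal-Cut algorithm:

```python
# Hedged sketch of an Orthogonal-Cut-style procedure (assumes numpy + cvxpy;
# the rounding details are illustrative assumptions, not the paper's exact scheme).
import numpy as np
import cvxpy as cp

def orthogonal_cut_sketch(C: np.ndarray, d: int, seed: int = 0):
    """C: dn x dn PSD cost matrix; returns a list of n orthogonal d x d matrices."""
    n = C.shape[0] // d
    X = cp.Variable((n * d, n * d), PSD=True)
    constraints = [X[i*d:(i+1)*d, i*d:(i+1)*d] == np.eye(d) for i in range(n)]
    cp.Problem(cp.Maximize(cp.trace(C @ X)), constraints).solve()

    # Factor the SDP optimum as X ~= G^T G, so block i corresponds to columns G_i.
    w, V = np.linalg.eigh(X.value)
    G = (V * np.sqrt(np.clip(w, 0, None))).T

    # Round: hit every G_i with one shared Gaussian matrix, then take the polar
    # factor (the closest orthogonal matrix, obtained from the SVD).
    R = np.random.default_rng(seed).normal(size=(n * d, d))
    rounded = []
    for i in range(n):
        M = G[:, i*d:(i+1)*d].T @ R          # d x d
        U, _, Vt = np.linalg.svd(M)
        rounded.append(U @ Vt)
    return rounded
```

    For $d = 1$ the projection step reduces to taking a sign, recovering the familiar hyperplane-style rounding for the classical $\pm 1$ problem.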

    Subsampling Mathematical Relaxations and Average-case Complexity

    We initiate a study of when the value of mathematical relaxations such as linear and semidefinite programs for constraint satisfaction problems (CSPs) is approximately preserved when restricting the instance to a sub-instance induced by a small random subsample of the variables. Let $C$ be a family of CSPs such as 3SAT, Max-Cut, etc., and let $\Pi$ be a relaxation for $C$, in the sense that for every instance $P \in C$, $\Pi(P)$ is an upper bound on the maximum fraction of satisfiable constraints of $P$. Loosely speaking, we say that subsampling holds for $C$ and $\Pi$ if for every sufficiently dense instance $P \in C$ and every $\epsilon > 0$, if we let $P'$ be the instance obtained by restricting $P$ to a sufficiently large constant number of variables, then $\Pi(P') \in (1 \pm \epsilon)\Pi(P)$. We say that weak subsampling holds if the above guarantee is replaced with $\Pi(P') = 1 - \Theta(\gamma)$ whenever $\Pi(P) = 1 - \gamma$. We show: 1. Subsampling holds for the BasicLP and BasicSDP programs. BasicSDP is a variant of the relaxation considered by Raghavendra (2008), who showed it gives an optimal approximation factor for every CSP under the unique games conjecture. BasicLP is the linear programming analog of BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of unique games type. 3. There are non-unique CSPs for which even weak subsampling fails for the above tighter semidefinite programs. Also, there are unique CSPs for which subsampling fails for the Sherali-Adams linear programming hierarchy. As a corollary of our weak subsampling for strong semidefinite programs, we obtain a polynomial-time algorithm to certify that random geometric graphs (of the type considered by Feige and Schechtman, 2002) of max-cut value $1 - \gamma$ have a cut value at most $1 - \gamma/10$.
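    A quick way to see what the subsampling property asserts numerically (purely illustrative, not from the paper): compare the normalized value of a standard Max-Cut SDP relaxation on a dense random instance with its value on the sub-instance induced by a small random subset of variables. The instance sizes, density, and the particular relaxation below are assumptions made only for this sketch:

```python
# Illustrative experiment (assumes numpy + cvxpy): normalized SDP value of a dense
# Max-Cut instance vs. the value of a random induced sub-instance.
import numpy as np
import cvxpy as cp

def normalized_maxcut_sdp_value(A: np.ndarray) -> float:
    """Standard Max-Cut SDP value divided by the number of edges."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)), [cp.diag(X) == 1])
    prob.solve()
    return prob.value / (A.sum() / 2.0)

rng = np.random.default_rng(1)
n, k = 60, 20                                        # full instance and subsample sizes
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T                                          # dense random graph
S = rng.choice(n, size=k, replace=False)             # random subset of the variables
print(normalized_maxcut_sdp_value(A))                # value on the full instance
print(normalized_maxcut_sdp_value(A[np.ix_(S, S)]))  # value on the sub-instance
```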

    Rounding Sum-of-Squares Relaxations

    We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem. Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems: 1) We give a quasipolynomial-time algorithm that approximates the maximum of a low-degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13). 2) We give a polynomial-time algorithm that, given a $d$-dimensional subspace of $\mathbb{R}^n$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v$ in the subspace satisfying $|v|_4^4 > c(k/d^{1/3}) |v|_2^2$, where $|v|_p = (E_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem. 3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1, n/d^2))$, improving for $d < n^{2/3}$ over prior methods which intrinsically required $\mu < O(1/\sqrt{d})$.
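    The $L_4$-versus-$L_2$ signal behind items 2) and 3) can be seen in a few lines of arithmetic (an illustration of the general phenomenon, not of the paper's algorithm): with the expectation-normalized norms above, a sparse vector has a much larger $|v|_4^4 / |v|_2^4$ ratio than a dense one. The sizes and distributions below are assumptions made only for this sketch:

```python
# Numerical illustration (numpy only): the L4-vs-L2 ratio separates sparse from
# dense vectors, which is the signal the recovery result above exploits.
import numpy as np

def l4_vs_l2_ratio(v: np.ndarray) -> float:
    """|v|_4^4 / |v|_2^4 with the normalized norms |v|_p = (E_i v_i^p)^{1/p}."""
    return np.mean(v ** 4) / np.mean(v ** 2) ** 2

rng = np.random.default_rng(0)
n, mu = 10_000, 0.01                       # ambient dimension and sparsity level
sparse = np.zeros(n)
support = rng.choice(n, size=int(mu * n), replace=False)
sparse[support] = rng.normal(size=support.size)
dense = rng.normal(size=n)

print(l4_vs_l2_ratio(sparse))   # about 3/mu = 300 for a Gaussian-valued sparse vector
print(l4_vs_l2_ratio(dense))    # about 3 (the fourth moment of a standard Gaussian)
```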