
    Subdeterminant Maximization via Nonconvex Relaxations and Anti-concentration

    Several fundamental problems that arise in optimization and computer science can be cast as follows: Given vectors $v_1,\ldots,v_m \in \mathbb{R}^d$ and a constraint family ${\cal B} \subseteq 2^{[m]}$, find a set $S \in {\cal B}$ that maximizes the squared volume of the simplex spanned by the vectors in $S$. A motivating example is the data-summarization problem in machine learning where one is given a collection of vectors that represent data such as documents or images. The volume of a set of vectors is used as a measure of their diversity, and partition or matroid constraints over $[m]$ are imposed in order to ensure resource or fairness constraints. Recently, Nikolov and Singh presented a convex program and showed how it can be used to estimate the value of the most diverse set when ${\cal B}$ corresponds to a partition matroid. This result was recently extended to regular matroids in works of Straszak and Vishnoi, and Anari and Oveis Gharan. The question of whether these estimation algorithms can be converted into the more useful approximation algorithms -- that also output a set -- remained open. The main contribution of this paper is to give the first approximation algorithms for both partition and regular matroids. We present novel formulations for the subdeterminant maximization problem for these matroids; this reduces them to the problem of finding a point that maximizes the absolute value of a nonconvex function over a Cartesian product of probability simplices. The technical core of our results is a new anti-concentration inequality for dependent random variables that allows us to relate the optimal value of these nonconvex functions to their value at a random point. Unlike prior work on the constrained subdeterminant maximization problem, our proofs do not rely on real-stability or convexity and could be of independent interest both in algorithms and complexity.
    Comment: in FOCS 201
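    A minimal sketch of the objective (not from the paper; it assumes Python/NumPy and a toy partition constraint): the squared volume of the span of a subset $S$ equals the determinant of the Gram matrix $V_S^\top V_S$, and the simplex volume differs from it only by a $1/|S|!$ factor, which does not change the maximizer. Brute-force enumeration below stands in for the paper's approximation algorithms and is only feasible on tiny instances.

```python
import numpy as np

def squared_volume(V, S):
    """det(V_S^T V_S): squared volume of the parallelepiped spanned by the
    columns of V indexed by S (proportional to the squared simplex volume)."""
    Vs = V[:, list(S)]           # d x |S| submatrix
    return np.linalg.det(Vs.T @ Vs)

def best_subset_brute_force(V, feasible_sets):
    """Pick the feasible set with maximum squared volume by exhaustive search
    (illustration only; the paper's algorithms avoid enumeration)."""
    return max(feasible_sets, key=lambda S: squared_volume(V, S))

# Toy partition-matroid constraint: choose one index from {0,1,2} and one from {3,4}.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 5))          # 5 vectors in R^3 as columns
feasible = [(i, j) for i in (0, 1, 2) for j in (3, 4)]
print(best_subset_brute_force(V, feasible))
```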

    Bounded Independence Fools Degree-2 Threshold Functions

    Let x be a random vector coming from any k-wise independent distribution over {-1,1}^n. For an n-variate degree-2 polynomial p, we prove that E[sgn(p(x))] is determined up to an additive epsilon for k = poly(1/epsilon). This answers an open question of Diakonikolas et al. (FOCS 2009). Using standard constructions of k-wise independent distributions, we obtain a broad class of explicit generators that epsilon-fool the class of degree-2 threshold functions with seed length log(n)*poly(1/epsilon). Our approach is quite robust: it easily extends to yield that the intersection of any constant number of degree-2 threshold functions is epsilon-fooled by poly(1/epsilon)-wise independence. Our results also hold if the entries of x are k-wise independent standard normals, implying for example that bounded independence derandomizes the Goemans-Williamson hyperplane rounding scheme. To achieve our results, we introduce a technique we dub multivariate FT-mollification, a generalization of the univariate form introduced by Kane et al. (SODA 2010) in the context of streaming algorithms. Along the way we prove a generalized hypercontractive inequality for quadratic forms which takes the operator norm of the associated matrix into account. These techniques may be of independent interest.
    Comment: Using v1 numbering: removed Lemma G.5 from the Appendix (it was wrong). Net effect is that Theorem G.6 reduces the m^6 dependence of Theorem 8.1 to m^4, not m^
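    A minimal sketch (assuming Python/NumPy; not the paper's construction) of the Goemans-Williamson hyperplane rounding mentioned above: each label is sgn(<u_i, g>) for a shared Gaussian vector g, i.e. a threshold function of g. The sketch draws g with fully independent N(0,1) entries; the paper's point is that entries that are only k-wise independent, with k = poly(1/epsilon), already fool such threshold functions (and degree-2 ones), so the rounding can be derandomized with a short seed.

```python
import numpy as np

def hyperplane_rounding(U, rng):
    """Given unit vectors u_1,...,u_n (rows of U) from the MAX-CUT SDP, cut them
    with a random hyperplane through the origin: label_i = sign(<u_i, g>)."""
    g = rng.standard_normal(U.shape[1])   # fully independent Gaussian entries in this sketch
    return np.sign(U @ g)

def cut_value(W, labels):
    """Total weight of edges crossing the cut defined by +/-1 labels."""
    cross = np.not_equal.outer(labels, labels)
    return W[cross].sum() / 2.0           # symmetric W, so each edge is counted twice

# Toy example: round arbitrary unit vectors for a small random weighted graph.
rng = np.random.default_rng(1)
U = rng.standard_normal((6, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)
W = np.abs(rng.standard_normal((6, 6)))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
print(cut_value(W, hyperplane_rounding(U, rng)))
```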