
    An Algorithm for Komlós Conjecture Matching Banaszczyk's Bound

    We consider the problem of finding a low discrepancy coloring for sparse set systems where each element lies in at most t sets. We give an efficient algorithm that finds a coloring with discrepancy O((t log n)^{1/2}), matching the best known non-constructive bound for the problem due to Banaszczyk. The previous algorithms only achieved an O(t^{1/2} log n) bound. The result also extends to the more general Komlós setting and gives an algorithmic O(log^{1/2} n) bound.
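
    A minimal sketch of the quantity being discussed (my own illustration, not the paper's algorithm; the function names and the toy set system are made up): the discrepancy of a ±1 coloring of a set system is the maximum absolute signed sum over its sets, and an independent random ±1 coloring is shown as a simple baseline.

```python
# Illustration only: evaluates the discrepancy of a coloring; it does not implement
# the paper's algorithm. Names and the toy instance are ad hoc for this sketch.
import random

def discrepancy(sets, chi):
    """Maximum absolute signed sum over all sets for the +/-1 coloring chi."""
    return max(abs(sum(chi[i] for i in S)) for S in sets)

def random_coloring(n):
    """Baseline: independent uniform +/-1 colors for the n elements."""
    return [random.choice((-1, 1)) for _ in range(n)]

# Toy sparse set system over 6 elements; each element lies in at most t = 2 sets.
sets = [[0, 1, 2], [2, 3, 4], [1, 4, 5]]
chi = random_coloring(6)
print("discrepancy of a random coloring:", discrepancy(sets, chi))
```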

    Approximate Hypergraph Coloring under Low-discrepancy and Related Promises

    A hypergraph is said to be χ-colorable if its vertices can be colored with χ colors so that no hyperedge is monochromatic. 2-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a 2-colorable k-uniform hypergraph, it is NP-hard to find a 2-coloring miscoloring fewer than a 2^{-k+1} fraction of hyperedges (which is achieved by a random 2-coloring), and the best algorithms to color the hypergraph properly require ≈ n^{1-1/k} colors, approaching the trivial bound of n as k increases. In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a 2-coloring with the fewest miscolored edges) and minimization (finding a proper coloring using the fewest number of colors) versions, when the input hypergraph is promised to have the following properties stronger than 2-colorability: (A) Low discrepancy: if the hypergraph has discrepancy ℓ ≪ √k, we give an algorithm to color it with ≈ n^{O(ℓ^2/k)} colors. However, for the maximization version, we prove NP-hardness of finding a 2-coloring miscoloring less than a 2^{-O(k)} (resp. k^{-O(k)}) fraction of the hyperedges when ℓ = O(log k) (resp. ℓ = 2). Assuming the UGC, we improve the latter hardness factor to 2^{-O(k)} for almost discrepancy-1 hypergraphs. (B) Rainbow colorability: if the hypergraph has a (k-ℓ)-coloring such that each hyperedge is polychromatic with all these colors, we give a 2-coloring algorithm that miscolors at most a k^{-Ω(k)} fraction of the hyperedges when ℓ ≪ √k, and complement this with a matching UG hardness result showing that when ℓ = √k, it is hard to even beat the 2^{-k+1} bound achieved by a random coloring. Comment: Approx 201
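
    As a point of reference for the bounds above, the sketch below (a hypothetical toy instance, not code from the paper) implements the random 2-coloring baseline: each vertex receives an independent uniform color, so a k-uniform hyperedge is monochromatic with probability 2^{-k+1}, the fraction that the hardness results say is hard to improve on.

```python
# Random 2-coloring baseline on a toy hypergraph (illustration only).
import random

def random_two_coloring(num_vertices):
    """Assign each vertex an independent uniform color in {0, 1}."""
    return [random.randint(0, 1) for _ in range(num_vertices)]

def miscolored_fraction(hyperedges, coloring):
    """Fraction of hyperedges whose vertices all received the same color."""
    mono = sum(1 for e in hyperedges if len({coloring[v] for v in e}) == 1)
    return mono / len(hyperedges)

# Toy 3-uniform hypergraph; the expected miscolored fraction is 2^{-3+1} = 1/4.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 3, 4)]
print(miscolored_fraction(edges, random_two_coloring(5)))
```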

    Semidefinite optimization in discrepancy theory

    Recently, there have been several new developments in discrepancy theory based on connections to semidefinite programming. This connection has been useful in several ways. It gives efficient polynomial time algorithms for several problems for which only non-constructive results were previously known. It also leads to several new structural results in discrepancy itself, such as tightness of the so-called determinant lower bound and improved bounds on the discrepancy of the union of set systems. We give a brief survey of these results, focusing on the main ideas and the techniques involved.
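
    As one concrete illustration of the connection (a generic sketch, not a program taken from the survey; cvxpy and the toy instance are my own choices), the standard vector relaxation of set-system discrepancy is a semidefinite program over the Gram matrix X of unit vectors assigned to the elements; its optimal value lower-bounds the minimum over ±1 colorings of the maximum squared signed sum of any set.

```python
# Generic SDP relaxation sketch (not taken from the survey). Requires cvxpy.
import cvxpy as cp

sets = [[0, 1, 2], [1, 2, 3], [0, 3]]    # hypothetical toy set system over {0, 1, 2, 3}
n = 4

X = cp.Variable((n, n), PSD=True)        # Gram matrix of the unit vectors u_1, ..., u_n
lam = cp.Variable()
constraints = [cp.diag(X) == 1]          # each vector has unit norm
for S in sets:
    # ||sum_{i in S} u_i||^2 = sum_{i, j in S} X_ij <= lam
    constraints.append(sum(X[i, j] for i in S for j in S) <= lam)

prob = cp.Problem(cp.Minimize(lam), constraints)
prob.solve()
# Any +/-1 coloring gives a rank-1 feasible X, so lam.value lower-bounds the
# squared discrepancy of the best coloring of this toy system.
print("SDP relaxation value:", lam.value)
```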

    The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues

    A classic result of Banaszczyk (Random Str. & Algor. 1997) states that given any n vectors in R^m with ℓ2-norm at most 1 and any convex body K in R^m of Gaussian measure at least half, there exists a ±1 combination of these vectors that lies in 5K. Banaszczyk’s proof of this result was non-constructive, and it was open how to find such a ±1 combination in polynomial time. In this paper, we give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for some fixed constant c > 0. This leads to new efficient algorithms for several problems in discrepancy theory.
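
    The sketch below is a compact paraphrase of the Gram-Schmidt walk as it is commonly described, not code taken from the paper: keep a fractional signing x in [-1, 1]^n, repeatedly move along a direction built from a pivot column and a least-squares combination of the other live columns, pick a mean-zero random step that freezes at least one coordinate at ±1, and output the resulting signs.

```python
# Sketch of the Gram-Schmidt walk (my paraphrase of the usual description). Requires numpy.
import numpy as np

def gram_schmidt_walk(V, rng=None):
    """V: m x n matrix whose columns have l2-norm at most 1. Returns signs in {-1, +1}^n."""
    rng = rng or np.random.default_rng()
    m, n = V.shape
    x = np.zeros(n)
    eps = 1e-12
    while True:
        alive = [i for i in range(n) if abs(x[i]) < 1 - eps]
        if not alive:
            break
        p = max(alive)                     # pivot: largest-index coordinate still fractional
        rest = [i for i in alive if i != p]
        u = np.zeros(n)
        u[p] = 1.0
        if rest:
            # coefficients minimizing || v_p + sum_{i in rest} u_i v_i ||_2
            u[rest], *_ = np.linalg.lstsq(V[:, rest], -V[:, p], rcond=None)
        # largest steps lo < 0 < hi keeping x + delta * u inside [-1, 1]^n
        lo, hi = -np.inf, np.inf
        for i in alive:
            if u[i] > eps:
                hi = min(hi, (1 - x[i]) / u[i]); lo = max(lo, (-1 - x[i]) / u[i])
            elif u[i] < -eps:
                hi = min(hi, (-1 - x[i]) / u[i]); lo = max(lo, (1 - x[i]) / u[i])
        # mean-zero step: +hi with probability |lo| / (hi + |lo|), otherwise lo
        delta = hi if rng.random() < -lo / (hi - lo) else lo
        x = np.clip(x + delta * u, -1.0, 1.0)
    return np.sign(x)

signs = gram_schmidt_walk(np.eye(4))       # toy input: four standard basis vectors
print(signs, np.linalg.norm(np.eye(4) @ signs))
```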

    A Unified Approach to Discrepancy Minimization

    We study a unified approach and algorithm for constructive discrepancy minimization based on a stochastic process. By varying the parameters of the process, one can recover various state-of-the-art results. We demonstrate the flexibility of the method by deriving a discrepancy bound for smoothed instances, which interpolates between known bounds for worst-case and random instances.
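
    Since the process is stated only in general terms here, the toy sketch below illustrates the shared template of such stochastic-process colorings rather than the specific process of this paper: evolve a fractional coloring by small unbiased random steps and freeze each coordinate once it reaches ±1.

```python
# Toy illustration of the generic stochastic-process template (not this paper's process).
import numpy as np

def random_walk_coloring(n, step=0.05, rng=None):
    """Each coordinate performs an unbiased walk in [-1, 1] until it is frozen at +/-1."""
    rng = rng or np.random.default_rng()
    x = np.zeros(n)
    while np.any(np.abs(x) < 1):
        active = np.abs(x) < 1                       # coordinates not yet frozen
        x[active] += step * rng.choice((-1.0, 1.0), size=int(active.sum()))
        x = np.clip(x, -1.0, 1.0)                    # freeze coordinates that hit +/-1
    return x

print(random_walk_coloring(8))
```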