615 research outputs found

    Approximate Hypergraph Coloring under Low-discrepancy and Related Promises

    Get PDF
    A hypergraph is said to be χ\chi-colorable if its vertices can be colored with χ\chi colors so that no hyperedge is monochromatic. 22-colorability is a fundamental property (called Property B) of hypergraphs and is extensively studied in combinatorics. Algorithmically, however, given a 22-colorable kk-uniform hypergraph, it is NP-hard to find a 22-coloring miscoloring fewer than a fraction 2k+12^{-k+1} of hyperedges (which is achieved by a random 22-coloring), and the best algorithms to color the hypergraph properly require n11/k\approx n^{1-1/k} colors, approaching the trivial bound of nn as kk increases. In this work, we study the complexity of approximate hypergraph coloring, for both the maximization (finding a 22-coloring with fewest miscolored edges) and minimization (finding a proper coloring using fewest number of colors) versions, when the input hypergraph is promised to have the following stronger properties than 22-colorability: (A) Low-discrepancy: If the hypergraph has discrepancy k\ell \ll \sqrt{k}, we give an algorithm to color the it with nO(2/k)\approx n^{O(\ell^2/k)} colors. However, for the maximization version, we prove NP-hardness of finding a 22-coloring miscoloring a smaller than 2O(k)2^{-O(k)} (resp. kO(k)k^{-O(k)}) fraction of the hyperedges when =O(logk)\ell = O(\log k) (resp. =2\ell=2). Assuming the UGC, we improve the latter hardness factor to 2O(k)2^{-O(k)} for almost discrepancy-11 hypergraphs. (B) Rainbow colorability: If the hypergraph has a (k)(k-\ell)-coloring such that each hyperedge is polychromatic with all these colors, we give a 22-coloring algorithm that miscolors at most kΩ(k)k^{-\Omega(k)} of the hyperedges when k\ell \ll \sqrt{k}, and complement this with a matching UG hardness result showing that when =k\ell =\sqrt{k}, it is hard to even beat the 2k+12^{-k+1} bound achieved by a random coloring.Comment: Approx 201

    Binary perceptrons capacity via fully lifted random duality theory

    Full text link
    We study the statistical capacity of the classical binary perceptrons with general thresholds $\kappa$. After recognizing the connection between the capacity and bilinearly indexed (bli) random processes, we utilize recent progress in studying such processes to characterize the capacity. In particular, we rely on \emph{fully lifted} random duality theory (fl RDT), established in \cite{Stojnicflrdt23}, to create a general framework for studying the perceptrons' capacities. Successful underlying numerical evaluations are required for the framework (and ultimately the entire fl RDT machinery) to become fully practically operational. We present results obtained in that direction and uncover that the capacity characterizations are achieved on the second (first non-trivial) level of \emph{stationarized} full lifting. The obtained results \emph{exactly} match the replica symmetry breaking predictions obtained through statistical physics replica methods in \cite{KraMez89}. Most notably, for the famous zero-threshold scenario, $\kappa = 0$, we uncover the well-known $\alpha \approx 0.8330786$ scaled capacity.
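
    For readers unfamiliar with the capacity notion: the binary perceptron with threshold $\kappa$ asks for $x \in \{-1,+1\}^n$ satisfying $G_i \cdot x / \sqrt{n} \geq \kappa$ for all $m = \alpha n$ random Gaussian patterns $G_i$, and the capacity is the largest density $\alpha$ for which such an $x$ typically exists. The brute-force sketch below only illustrates this definition at tiny $n$ (finite-size effects are large, so it does not reproduce the asymptotic $\alpha \approx 0.8330786$), and it has nothing to do with the fl RDT machinery; all names and parameter choices are mine.

```python
import itertools
import numpy as np

def satisfiable(G, kappa):
    """Brute-force check: does some x in {-1,+1}^n satisfy G @ x / sqrt(n) >= kappa
    for every row of G (the binary perceptron constraints)?  Exponential in n."""
    m, n = G.shape
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        x = np.array(signs)
        if np.all(G @ x / np.sqrt(n) >= kappa):
            return True
    return False

def estimate_capacity(n=10, kappa=0.0, trials=10, rng=None):
    """Crude finite-n estimate of the largest alpha = m/n that is typically satisfiable."""
    rng = rng or np.random.default_rng(0)
    alpha = 0.0
    for m in range(1, 2 * n):
        sat = sum(satisfiable(rng.standard_normal((m, n)), kappa) for _ in range(trials))
        if sat / trials < 0.5:   # majority of random instances became unsatisfiable
            break
        alpha = m / n
    return alpha

if __name__ == "__main__":
    print("finite-size capacity estimate at kappa = 0:", estimate_capacity())
    # Asymptotic prediction (Krauth-Mezard, matched by fl RDT): alpha ~ 0.8330786
```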

    Euclidean distance geometry and applications

    Full text link
    Euclidean distance geometry is the study of Euclidean geometry based on the concept of distance. This is useful in several applications where the input data consists of an incomplete set of distances, and the output is a set of points in Euclidean space that realizes the given distances. We survey some of the theory of Euclidean distance geometry and some of the most important applications: molecular conformation, localization of sensor networks and statics. Comment: 64 pages, 21 figures
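
    As a concrete instance of the realization problem described above, when all pairwise distances are known a configuration of points can be recovered by classical multidimensional scaling: double-center the squared-distance matrix to obtain a Gram matrix and read coordinates off its top eigenvectors. This is a minimal sketch of that standard complete-distance case only, not the survey's methods for incomplete or noisy distances; the function name and toy data are my own.

```python
import numpy as np

def classical_mds(D, dim):
    """Recover a point configuration from a complete matrix of pairwise Euclidean
    distances D (classical multidimensional scaling).

    Double centering turns squared distances into a Gram matrix B = -0.5 * J D^2 J,
    whose leading eigenvectors give coordinates, unique up to rigid motions.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # indices of the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((10, 3))             # ground-truth points in R^3
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Y = classical_mds(D, 3)
    D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    print("max distance error:", np.max(np.abs(D - D_rec)))   # numerical error only
```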

    Online discrepancy minimization for stochastic arrivals

    Get PDF
    In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$ arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathsf{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC '20). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our results are based on a new potential function approach. Previous techniques consider a potential that penalizes large discrepancy, and greedily choose the next color to minimize the increase in potential. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. We believe that our techniques to control the evolution of states could find other applications in stochastic processes and online algorithms. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multicolor discrepancy.
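
    To make the online setting concrete, the sketch below implements only the naive greedy baseline alluded to in the abstract: each arriving vector gets the sign that minimizes the Euclidean norm of the running discrepancy vector. It is not the paper's potential-based algorithm (which additionally enforces anti-concentration); the function name and the Komlós-style test data are my own choices.

```python
import numpy as np

def greedy_signs(vectors):
    """Online vector balancing with a naive greedy rule: assign each arriving vector
    the sign that minimizes the Euclidean norm of the running discrepancy vector.

    Minimizing ||d + s*v||^2 over s in {-1,+1} amounts to choosing s = -sign(<d, v>).
    This is a baseline to make the problem concrete, not the paper's algorithm.
    """
    d = np.zeros_like(vectors[0], dtype=float)   # discrepancy vector (signed prefix-sum)
    signs = []
    for v in vectors:
        s = -1.0 if np.dot(d, v) > 0 else 1.0
        d += s * v
        signs.append(int(s))
    return signs, d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, n = 1000, 20
    vs = rng.standard_normal((T, n))
    vs /= np.linalg.norm(vs, axis=1, keepdims=True)   # Komlos-style input: ||v_t||_2 <= 1
    signs, d = greedy_signs(list(vs))
    print("final discrepancy norm:", np.linalg.norm(d))
```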