
    Online Discrepancy Minimization for Stochastic Arrivals

    In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$ arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathsf{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC '20). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our results are based on a new potential function approach. Previous techniques consider a potential that penalizes large discrepancy, and greedily choose the next color to minimize the increase in potential. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. We believe that our techniques to control the evolution of states could find other applications in stochastic processes and online algorithms. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multi-color discrepancy.
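
    The sketch below only illustrates the generic greedy potential framework that the abstract contrasts with, not the paper's own construction (whose potential additionally constrains how the discrepancy vector evolves). The cosh-based potential, the function name greedy_sign, and the scale parameter eta are illustrative assumptions.

        import numpy as np

        def greedy_sign(vectors, eta=0.1):
            """Greedy potential-based online signing (illustrative sketch).

            At each step, assign the sign (+1 or -1) that minimizes the
            soft-max potential Phi(d) = sum_i cosh(eta * d_i) of the running
            discrepancy vector d. This is the generic "penalize large
            discrepancy, pick the greedy sign" template, not the paper's
            algorithm.
            """
            d = np.zeros(len(vectors[0]))  # running discrepancy vector
            signs = []
            for v in vectors:
                v = np.asarray(v, dtype=float)
                # Compare the potential after assigning +1 vs. -1 and keep the smaller.
                phi_plus = np.sum(np.cosh(eta * (d + v)))
                phi_minus = np.sum(np.cosh(eta * (d - v)))
                s = 1 if phi_plus <= phi_minus else -1
                d += s * v
                signs.append(s)
            return signs, d

    For example, greedy_sign(np.random.randn(1000, 16) / 4.0) signs a stream of 1000 random vectors whose expected squared $\ell_2$-norm is 1, a toy instance of the Komlós setting described above.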
