
    Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem

    An important theorem of Banaszczyk (Random Structures & Algorithms 1998) states that for any sequence of vectors of $\ell_2$ norm at most $1/5$ and any convex body $K$ of Gaussian measure $1/2$ in $\mathbb{R}^n$, there exists a signed combination of these vectors which lands inside $K$. A major open problem is to devise a constructive version of Banaszczyk's vector balancing theorem, i.e. to find an efficient algorithm which constructs the signed combination. We make progress towards this goal along several fronts. As our first contribution, we show an equivalence between Banaszczyk's theorem and the existence of $O(1)$-subgaussian distributions over signed combinations. For the case of symmetric convex bodies, our equivalence implies the existence of a universal signing algorithm (i.e. independent of the body), which simply samples from the subgaussian sign distribution and checks whether the associated combination lands inside the body. For asymmetric convex bodies, we provide a novel recentering procedure, which allows us to reduce to the case where the body is symmetric. As our second main contribution, we show that the above framework can be efficiently implemented when the vectors have length $O(1/\sqrt{\log n})$, recovering Banaszczyk's results under this stronger assumption. More precisely, we use random walk techniques to produce the required $O(1)$-subgaussian signing distributions when the vectors have length $O(1/\sqrt{\log n})$, and use a stochastic gradient ascent method to implement the recentering procedure for asymmetric bodies.
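    The universal signing algorithm described above is, at its core, rejection sampling: draw signs from a subgaussian distribution and test whether the signed combination lies in the body. The sketch below illustrates that loop under stated assumptions; uniform random signs stand in for the paper's random-walk construction of the $O(1)$-subgaussian signing distribution, and the names `universal_signing` and `in_body` are illustrative, not from the paper.

```python
import numpy as np

def universal_signing(vectors, in_body, max_tries=10_000, rng=None):
    """Rejection-sampling sketch of the universal signing algorithm.

    vectors : (T, n) array of input vectors (l2 norm at most 1/5).
    in_body : membership oracle for the symmetric convex body K,
              mapping a point in R^n to True/False.

    Uniform random signs stand in for the subgaussian signing
    distribution whose existence the paper establishes; the paper's
    efficient random-walk construction is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(vectors)
    for _ in range(max_tries):
        signs = rng.choice([-1.0, 1.0], size=T)
        if in_body(signs @ vectors):  # signed combination sum_i s_i v_i
            return signs
    raise RuntimeError("no signing found within the trial budget")

# Illustrative usage: K is a scaled l-infinity ball given as an oracle.
vecs = np.random.default_rng(0).normal(size=(50, 10))
vecs /= 5 * np.linalg.norm(vecs, axis=1, keepdims=True)  # norms = 1/5
signs = universal_signing(vecs, lambda p: np.max(np.abs(p)) <= 2.0)
```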

    Online Discrepancy Minimization for Stochastic Arrivals

    In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$ arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathrm{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC 2020). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where, given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multi-color discrepancy.
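    To make the sign-on-arrival interface concrete, here is a minimal sketch of online vector balancing that chooses each sign to minimize a plain squared-norm potential. This potential is a stand-in assumption: the paper's actual potential additionally enforces anti-concentration constraints on how the discrepancy vector evolves, which this sketch does not attempt to reproduce.

```python
import numpy as np

def online_signing(stream, n):
    """Greedy online vector balancing sketch.

    stream : iterable of vectors in R^n arriving one-by-one.
    Yields the chosen sign and a copy of the running discrepancy
    vector (the signed prefix-sum) after each arrival.

    A squared-norm potential stands in for the paper's potential;
    choosing s = -sign(<d, v>) minimizes ||d + s*v||_2 at each step.
    """
    d = np.zeros(n)  # current discrepancy vector
    for v in stream:
        s = 1.0 if np.dot(d, v) <= 0 else -1.0
        d += s * v
        yield s, d.copy()

# Illustrative usage on an i.i.d. Gaussian stream (Komlos-style input).
rng = np.random.default_rng(1)
stream = (rng.normal(size=8) / np.sqrt(8) for _ in range(100))
for sign, disc in online_signing(stream, n=8):
    pass  # sign in {-1, +1}; disc is the current signed prefix-sum
```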

    Robust 1-Bit Compressed Sensing via Hinge Loss Minimization

    This work theoretically studies the problem of estimating a structured high-dimensional signal $x_0 \in \mathbb{R}^n$ from noisy $1$-bit Gaussian measurements. Our recovery approach is based on a simple convex program which uses the hinge loss function as its data fidelity term. While such a risk minimization strategy is very natural for learning binary output models, such as in classification, its capacity to estimate a specific signal vector is largely unexplored. A major difficulty is that the hinge loss is just piecewise linear, so that its "curvature energy" is concentrated in a single point. This is substantially different from other popular loss functions considered in signal estimation, e.g., the square or logistic loss, which are at least locally strongly convex. It is therefore somewhat unexpected that we can still prove very similar types of recovery guarantees for the hinge loss estimator, even in the presence of strong noise. More specifically, our non-asymptotic error bounds show that stable and robust reconstruction of $x_0$ can be achieved with the optimal oversampling rate $O(m^{-1/2})$ in terms of the number of measurements $m$. Moreover, we permit a wide class of structural assumptions on the ground truth signal, in the sense that $x_0$ can belong to an arbitrary bounded convex set $K \subset \mathbb{R}^n$. The proofs of our main results rely on some recent advances in statistical learning theory due to Mendelson. In particular, we invoke an adapted version of Mendelson's small ball method that allows us to establish a quadratic lower bound on the error of the first order Taylor approximation of the empirical hinge loss function.
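    The convex program itself is simple to state: minimize the empirical hinge loss over the constraint set $K$. The sketch below solves it by projected subgradient descent, taking $K$ to be an $\ell_2$ ball as a concrete stand-in for the paper's arbitrary bounded convex set; the solver choice and all parameter names are illustrative assumptions, since the abstract specifies only the convex program, not an algorithm.

```python
import numpy as np

def hinge_loss_recovery(A, y, radius=1.0, steps=500, lr=0.1):
    """Projected-subgradient sketch of 1-bit recovery via hinge loss.

    Solves  min_{||x||_2 <= radius}  (1/m) sum_i max(0, 1 - y_i <a_i, x>),
    where A is the (m, n) Gaussian measurement matrix and y in {-1,+1}^m
    holds the 1-bit observations.  The l2 ball stands in for the general
    bounded convex set K of the paper; any K with an efficient Euclidean
    projection would slot in the same way.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(steps):
        margins = y * (A @ x)
        active = margins < 1.0                 # hinge is linear here
        grad = -(A[active].T @ y[active]) / m  # subgradient of the loss
        x -= lr * grad
        nrm = np.linalg.norm(x)
        if nrm > radius:                       # project back onto the ball
            x *= radius / nrm
    return x

# Illustrative usage: noisy 1-bit measurements of a unit-norm signal.
rng = np.random.default_rng(2)
x0 = rng.normal(size=64); x0 /= np.linalg.norm(x0)
A = rng.normal(size=(512, 64))
y = np.sign(A @ x0 + 0.1 * rng.normal(size=512))
x_hat = hinge_loss_recovery(A, y)
```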