Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem
An important theorem of Banaszczyk (Random Structures & Algorithms 1998) states that for any sequence of vectors of l_2 norm at most 1/5 and any convex body K of Gaussian measure 1/2 in R^n, there exists a signed combination of these vectors which lands inside K. A major open problem is to devise a constructive version of Banaszczyk's vector balancing theorem, i.e. to find an efficient algorithm which constructs the signed combination.
We make progress towards this goal along several fronts. As our first contribution, we show an equivalence between Banaszczyk's theorem and the existence of O(1)-subgaussian distributions over signed combinations. For the case of symmetric convex bodies, our equivalence implies the existence of a universal signing algorithm (i.e. independent of the body), which simply samples from the subgaussian sign distribution and checks to see if the associated combination lands inside the body. For asymmetric convex bodies, we provide a novel recentering procedure, which allows us to reduce to the case where the body is symmetric.
As our second main contribution, we show that the above framework can be efficiently implemented when the vectors have length O(1/sqrt{log n}), recovering Banaszczyk's results under this stronger assumption. More precisely, we use random walk techniques to produce the required O(1)-subgaussian signing distributions when the vectors have length O(1/sqrt{log n}), and use a stochastic gradient ascent method to implement the recentering procedure for asymmetric bodies.
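The universal signing algorithm described above amounts to rejection sampling: draw signs, form the signed combination, and test membership in the body. The following minimal sketch illustrates this, with uniform random signs standing in for the subgaussian signing distribution and an l_infinity ball standing in for the symmetric convex body K; all names and parameters here are illustrative, not the paper's.

```python
import numpy as np

def universal_signing(vectors, in_body, max_tries=10000, rng=None):
    """Rejection-sampling sketch: draw random signs until the signed
    combination of the given vectors lands inside the body."""
    rng = rng or np.random.default_rng(0)
    vectors = np.asarray(vectors)
    for _ in range(max_tries):
        signs = rng.choice([-1.0, 1.0], size=len(vectors))
        combo = signs @ vectors          # signed combination sum_i eps_i v_i
        if in_body(combo):               # membership oracle for the body K
            return signs, combo
    raise RuntimeError("no signing found within the trial budget")

# Toy instance: 50 random vectors of l_2 norm 1/5 in R^10,
# K = an l_infinity ball of radius 2 (a stand-in symmetric body).
rng = np.random.default_rng(1)
vecs = rng.normal(size=(50, 10))
vecs /= 5 * np.linalg.norm(vecs, axis=1, keepdims=True)
signs, combo = universal_signing(vecs, lambda x: np.max(np.abs(x)) <= 2.0)
```

For such short vectors the random signed sum is well concentrated, so the membership check succeeds after very few draws; the point of the paper's framework is to guarantee an O(1)-subgaussian signing distribution so that this check succeeds with constant probability for the bodies covered by Banaszczyk's theorem.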
Online Discrepancy Minimization for Stochastic Arrivals
In the stochastic online vector balancing problem, vectors
chosen independently from an arbitrary distribution in R^n
arrive one-by-one and must be immediately given a sign.
The goal is to keep the norm of the discrepancy vector, i.e., the signed
prefix-sum, as small as possible for a given target norm.
We consider some of the most well-known problems in discrepancy theory in the
above online stochastic setting, and give algorithms that match the known
offline bounds up to polylog(nT) factors. This substantially
generalizes and improves upon the previous results of Bansal, Jiang, Singla,
and Sinha (STOC '20). In particular, for the Komlós problem where
||v_t||_2 <= 1 for each t, our algorithm achieves polylog(nT)
discrepancy with high probability, improving upon the previous
bound. For Tusnády's problem of minimizing the
discrepancy of axis-aligned boxes, we obtain a polylogarithmic bound for an
arbitrary distribution over points. Previous techniques only worked for product
distributions and gave a weaker bound. We also consider the
Banaszczyk setting, where given a symmetric convex body K with Gaussian
measure at least 1/2, our algorithm achieves polylogarithmic discrepancy with
respect to the norm given by K for input distributions with sub-exponential
tails.
Our key idea is to introduce a potential that also enforces constraints on
how the discrepancy vector evolves, allowing us to maintain certain
anti-concentration properties. For the Banaszczyk setting, we further enhance
this potential by combining it with ideas from generic chaining. Finally, we
also extend these results to the setting of online multi-color discrepancy.
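The greedy skeleton behind potential-based online signing can be sketched as follows. This is only a minimal illustration using a standard exponential (cosh) potential on the discrepancy coordinates; the paper's actual potential additionally enforces constraints that maintain anti-concentration, which this sketch omits, and the parameter lam is an illustrative choice.

```python
import numpy as np

def online_signing(stream, lam=0.1):
    """For each arriving vector, choose the sign that minimizes an
    exponential potential Phi(d) = sum_i cosh(lam * d_i) of the
    running discrepancy vector d (the signed prefix-sum)."""
    phi = lambda x: np.sum(np.cosh(lam * x))
    d = None
    signs = []
    for v in stream:
        if d is None:
            d = np.zeros_like(v)
        # Greedy step: sign the vector so the potential grows least.
        eps = 1.0 if phi(d + v) <= phi(d - v) else -1.0
        d = d + eps * v
        signs.append(eps)
    return np.array(signs), d

# Toy stream: 200 i.i.d. Gaussian vectors in R^20, roughly unit norm.
rng = np.random.default_rng(0)
stream = [rng.normal(size=20) / np.sqrt(20) for _ in range(200)]
signs, d = online_signing(stream)
```

The greedy rule alone already keeps the discrepancy vector far smaller than random signing; the paper's contribution is a potential whose drift analysis survives arbitrary (rather than product) input distributions.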
Robust 1-Bit Compressed Sensing via Hinge Loss Minimization
This work theoretically studies the problem of estimating a structured
high-dimensional signal x_0 in R^n from noisy 1-bit Gaussian
measurements. Our recovery approach is based on a simple convex program which
uses the hinge loss function as data fidelity term. While such a risk
minimization strategy is very natural to learn binary output models, such as in
classification, its capacity to estimate a specific signal vector is largely
unexplored. A major difficulty is that the hinge loss is just piecewise linear,
so that its "curvature energy" is concentrated in a single point. This is
substantially different from other popular loss functions considered in signal
estimation, e.g., the square or logistic loss, which are at least locally
strongly convex. It is therefore somewhat unexpected that we can still prove
very similar types of recovery guarantees for the hinge loss estimator, even in
the presence of strong noise. More specifically, our non-asymptotic error
bounds show that stable and robust reconstruction of x_0 can be achieved with
the optimal oversampling rate O(m^{-1/2}) in terms of the number of
measurements m. Moreover, we permit a wide class of structural assumptions on
the ground truth signal, in the sense that x_0 can belong to an arbitrary
bounded convex set K in R^n. The proofs of our main results
rely on some recent advances in statistical learning theory due to Mendelson.
In particular, we invoke an adapted version of Mendelson's small ball method
that allows us to establish a quadratic lower bound on the error of the first
order Taylor approximation of the empirical hinge loss function.
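A minimal sketch of a hinge-loss estimator of this kind, assuming for illustration that the structural set K is the unit Euclidean ball and using plain projected subgradient descent; the scale factor, step size, and iteration count are illustrative tuning choices, not the paper's.

```python
import numpy as np

def hinge_recover(A, y, steps=500, lr=0.1, scale=10.0):
    """Projected-subgradient sketch for a hinge-loss estimator:
    minimize (1/m) * sum_i max(0, 1 - scale * y_i <a_i, x>)
    over the unit Euclidean ball (a stand-in for the set K)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(steps):
        margins = scale * y * (A @ x)
        active = margins < 1.0               # hinge is flat elsewhere
        grad = -(scale / m) * (A[active] * y[active, None]).sum(axis=0)
        x -= lr * grad
        nrm = np.linalg.norm(x)
        if nrm > 1.0:                        # project back onto the ball
            x /= nrm
    return x

# Toy 1-bit compressed-sensing instance with Gaussian measurements.
rng = np.random.default_rng(0)
n, m = 30, 600
x_true = rng.normal(size=n)
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)                      # noiseless 1-bit measurements
x_hat = hinge_recover(A, y)
```

Since 1-bit measurements only carry directional information, recovery is judged by how well the direction of x_hat aligns with x_true, which is exactly the regime the error bounds above address.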