An Algorithm for Komlós Conjecture Matching Banaszczyk's Bound
We consider the problem of finding a low discrepancy coloring for sparse set
systems where each element lies in at most t sets. We give an efficient
algorithm that finds a coloring with discrepancy O((t log n)^{1/2}), matching
the best known non-constructive bound for the problem due to Banaszczyk. The
previous algorithms only achieved an O(t^{1/2} log n) bound. The result also
extends to the more general Komlós setting and gives an algorithmic
O(log^{1/2} n) bound.
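The objects involved are simple to state in code. The sketch below (our illustration, not the paper's algorithm; Python with numpy assumed) builds a toy t-sparse set system as a 0/1 incidence matrix and measures the discrepancy of a uniformly random ±1 coloring. Note that plain random coloring only gives roughly (|S| log m)^{1/2} per set, so when sets are large it does not achieve the O((t log n)^{1/2}) bound above; the snippet merely fixes the definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy t-sparse set system: n elements, m sets, each element in at most t sets.
n, m, t = 200, 100, 5
A = np.zeros((m, n), dtype=int)  # incidence matrix: A[j, i] = 1 iff element i is in set j
for i in range(n):
    for j in rng.choice(m, size=t, replace=False):
        A[j, i] = 1

# Discrepancy of a +/-1 coloring x is max_j |sum_{i in S_j} x_i| = ||A x||_inf.
x = rng.choice([-1, 1], size=n)
disc = np.max(np.abs(A @ x))
```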
On a generalization of iterated and randomized rounding
We give a general method for rounding linear programs that combines the
commonly used iterated rounding and randomized rounding techniques. In
particular, we show that whenever iterated rounding can be applied to a problem
with some slack, there is a randomized procedure that returns an integral
solution that satisfies the guarantees of iterated rounding and also has
concentration properties. We use this to give new results for several classic
problems where iterated rounding has been useful, such as rounding
column-sparse LPs, makespan minimization on unrelated machines, degree-bounded
spanning trees and multi-budgeted matchings.
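For context, the randomized half of the combination is plain independent randomized rounding, sketched below (a generic illustration with toy data, not the paper's procedure; Python with numpy assumed): each coordinate of a fractional LP solution is rounded to 1 with probability equal to its value, which preserves every marginal exactly and gives Chernoff-type concentration for linear functions of the rounding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fractional LP solution x in [0,1]^n and a linear objective c (toy data).
n = 1000
x = rng.uniform(size=n)
c = rng.uniform(size=n)

# Independent randomized rounding: X_i = 1 with probability x_i.
# Every marginal is preserved exactly (E[X_i] = x_i), so E[c . X] = c . x,
# and Chernoff-type concentration holds for fixed linear functions of X.
X = (rng.uniform(size=n) < x).astype(int)

frac_val = c @ x   # LP value
round_val = c @ X  # value after rounding; concentrated around frac_val
```

Iterated rounding, by contrast, fixes variables one constraint at a time and gives worst-case guarantees but no such concentration; the result above is a procedure with both.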
Improved Algorithmic Bounds for Discrepancy of Sparse Set Systems
We consider the problem of finding a low discrepancy coloring for sparse set
systems where each element lies in at most t sets. We give an algorithm that
finds a coloring with discrepancy O((t log n log s)^{1/2}), where s is the
maximum cardinality of a set. This improves upon the previous constructive
bound of O(t^{1/2} log n) based on algorithmic variants of the partial
coloring method, and for small s (e.g. s = poly(t)) comes close to
the non-constructive O((t log n)^{1/2}) bound due to Banaszczyk. Previously,
no algorithmic results better than O(t^{1/2} log n) were known even for
s = O(t). Our method is quite robust and we give several refinements and
extensions. For example, the coloring we obtain satisfies the stronger
size-sensitive property that each set S in the set system incurs an
O((t log n log |S|)^{1/2}) discrepancy. Another variant can be used to
essentially match Banaszczyk's bound for a wide class of instances even where
s is arbitrarily large. Finally, these results also extend directly to the
more general Komlós setting.
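To make the size-sensitive guarantee concrete, the helper below (our illustration with hypothetical names, Python with numpy assumed) computes each set's discrepancy under a coloring alongside the (t log n log |S|)^{1/2}-shaped target from the abstract, constants omitted. It illustrates the statement of the guarantee, not the algorithm that achieves it.

```python
import numpy as np

def size_sensitive_profile(A, x, t):
    """Per-set discrepancies |sum_{i in S} x_i| of a +/-1 coloring x,
    next to a (t * log n * log |S|)^{1/2}-shaped target (constants omitted).
    A is the 0/1 sets-by-elements incidence matrix; t the column sparsity."""
    n = A.shape[1]
    per_set_disc = np.abs(A @ x)
    sizes = A.sum(axis=1)
    # clamp sizes at 2 so the log factor stays positive even for tiny sets
    target = np.sqrt(t * np.log(n) * np.log(np.maximum(sizes, 2)))
    return per_set_disc, target
```

The point of size sensitivity is visible here: small sets get proportionally small discrepancy targets, which a single global bound cannot express.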
Dependent randomized rounding for clustering and partition systems with knapsack constraints
Clustering problems are fundamental to unsupervised learning. There is an
increased emphasis on fairness in machine learning and AI; one representative
notion of fairness is that no single demographic group should be
over-represented among the cluster-centers. This, and much more general
clustering problems, can be formulated with "knapsack" and "partition"
constraints. We develop new randomized algorithms targeting such problems, and
study two in particular: multi-knapsack median and multi-knapsack center. Our
rounding algorithms give new approximation and pseudo-approximation algorithms
for these problems. One key technical tool, which may be of independent
interest, is a new tail bound analogous to Feige (2006) for sums of random
variables with unbounded variances. Such bounds are very useful in inferring
properties of large networks using few samples.
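The core primitive behind dependent-rounding schemes of this kind can be sketched for a single cardinality ("partition"-type) constraint, a much simpler setting than the multi-knapsack problems above (the code and names are our illustration, Python with numpy assumed): repeatedly take two fractional coordinates and shift mass between them so that the constrained sum never changes, choosing the direction with probabilities that keep each marginal correct in expectation.

```python
import numpy as np

def dependent_round(x, rng=None, eps=1e-9):
    """Round fractional x in [0,1]^n to {0,1}^n so that sum(x) is preserved
    (exactly, when it is an integer) and each marginal satisfies E[X_i] = x_i.
    Classic pipage-style dependent rounding for one cardinality constraint."""
    rng = np.random.default_rng(rng)
    x = np.array(x, dtype=float)
    while True:
        frac = np.flatnonzero((x > eps) & (x < 1 - eps))
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        d1 = min(1 - x[i], x[j])   # feasible move: x_i up, x_j down
        d2 = min(x[i], 1 - x[j])   # feasible move: x_i down, x_j up
        if rng.random() < d2 / (d1 + d2):   # probabilities chosen so that
            x[i] += d1; x[j] -= d1          # E[change in x_i] = 0
        else:
            x[i] -= d2; x[j] += d2
    x[x > 1 - eps] = 1.0   # snap numerical residue
    x[x < eps] = 0.0
    return x
```

Each step makes at least one of the two chosen coordinates integral, so the loop ends after at most n iterations, and the sum x_i + x_j never changes within a step; that invariant is exactly the partition constraint being protected.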
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
A classic result of Banaszczyk (Random Str. & Algor. 1997) states that given any n vectors in R^m with ℓ_2-norm at most 1 and any convex body K in R^m of Gaussian measure at least half, there exists a ±1 combination of these vectors that lies in 5K. Banaszczyk's proof of this result was non-constructive and it was open how to find such a ±1 combination in polynomial time. In this paper, we give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for some fixed constant c > 0. This leads to new efficient algorithms for several problems in discrepancy theory.
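A compact sketch of the Gram-Schmidt walk follows (simplified and unoptimized, with our own pivot choice and tolerances; Python with numpy assumed): maintain a fractional point in [-1,1]^n, and at each step move along the direction that keeps the pivot's coefficient equal to 1 while minimizing the norm of the resulting update to the signed sum, stepping randomly so each coordinate is a martingale, until all coordinates are frozen at ±1.

```python
import numpy as np

def gram_schmidt_walk(V, seed=None, eps=1e-9):
    """Sketch of the Gram-Schmidt walk: given columns v_1..v_n of V with
    ||v_i||_2 <= 1, walk a fractional point from 0 to a random x in
    {-1,+1}^n whose signed sum V @ x stays well controlled.
    Simplified and unoptimized; pivot choice is arbitrary."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    x = np.zeros(n)
    pivot = None
    while True:
        alive = np.flatnonzero(np.abs(x) < 1 - eps)
        if len(alive) == 0:
            break
        if pivot is None or abs(x[pivot]) >= 1 - eps:
            pivot = alive[-1]                  # replace a frozen pivot
        rest = alive[alive != pivot]
        u = np.zeros(n)
        u[pivot] = 1.0
        if len(rest):
            # direction minimizing ||V @ u|| subject to u_pivot = 1
            coef, *_ = np.linalg.lstsq(V[:, rest], -V[:, pivot], rcond=None)
            u[rest] = coef
        nz = np.flatnonzero(np.abs(u) > eps)
        up = (1 - x[nz]) / u[nz]               # step hitting x_i = +1
        dn = (-1 - x[nz]) / u[nz]              # step hitting x_i = -1
        d_plus = np.min(np.where(u[nz] > 0, up, dn))
        d_minus = np.min(np.where(u[nz] > 0, -dn, -up))
        # random step sizes chosen so each coordinate is a martingale
        if rng.random() < d_minus / (d_plus + d_minus):
            x += d_plus * u
        else:
            x -= d_minus * u
        x = np.clip(x, -1.0, 1.0)
    return np.where(x >= 0, 1.0, -1.0)
```

Every step freezes at least one coordinate at ±1, so the walk terminates after at most n iterations; the least-squares subproblem is what gives the walk its Gram-Schmidt flavor.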
Approximation-friendly discrepancy rounding
Rounding linear programs using techniques from discrepancy is a recent approach that has been very successful in certain settings. However, this method also has some limitations when compared to approaches such as randomized and iterative rounding. We provide an extension of the discrepancy-based rounding algorithm due to Lovett-Meka that (i) combines the advantages of both randomized and iterated rounding, and (ii) makes it applicable to settings with more general combinatorial structure such as matroids. As applications of this approach, we obtain new results for various classical problems such as linear system rounding, degree-bounded matroid basis and low congestion routing.