On a generalization of iterated and randomized rounding
We give a general method for rounding linear programs that combines the
commonly used iterated rounding and randomized rounding techniques. In
particular, we show that whenever iterated rounding can be applied to a problem
with some slack, there is a randomized procedure that returns an integral
solution that satisfies the guarantees of iterated rounding and also has
concentration properties. We use this to give new results for several classic
problems such as rounding column-sparse LPs, makespan minimization on unrelated
machines, degree-bounded spanning trees and multi-budgeted matchings.
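A minimal sketch of the randomized-rounding half of this combination (independent rounding only, not the iterated part or the paper's combined procedure): each fractional LP value is rounded to 1 with probability equal to its value, which preserves expectations and gives Chernoff-type concentration.

```python
import random

def randomized_rounding(x, seed=0):
    """Round each fractional value x_i in [0, 1] to 1 with probability x_i,
    independently. Expectations are preserved, and linear functions of the
    rounded solution concentrate by Chernoff bounds."""
    rng = random.Random(seed)
    return [1 if rng.random() < xi else 0 for xi in x]

x = [0.5] * 1000           # a fractional solution with total value 500
rounded = randomized_rounding(x)
total = sum(rounded)
assert abs(total - 500) < 100  # concentration around the fractional sum
```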
Proximity results and faster algorithms for Integer Programming using the Steinitz Lemma
We consider integer programming problems in standard form $\max\{c^T x : Ax = b,\ x \geq 0,\ x \in \mathbb{Z}^n\}$ where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$. We show that such an integer program can be solved in time $(m \cdot \Delta)^{O(m)} \cdot \|b\|_\infty^2$, where $\Delta$ is an upper bound on each
absolute value of an entry in $A$. This improves upon the longstanding best
bound of Papadimitriou (1981) of $(m \cdot \Delta)^{O(m^2)}$, where in addition,
the absolute values of the entries of $b$ also need to be bounded by $\Delta$.
Our result relies on a lemma of Steinitz that states that a set of vectors in
$\mathbb{R}^m$ that is contained in the unit ball of a norm and that sum up to zero can
be ordered such that all partial sums are of norm bounded by $m$. We also use
the Steinitz lemma to show that the $\ell_1$-distance of an optimal integer and
fractional solution, also under the presence of upper bounds on the variables,
is bounded by $m \cdot (2 m \cdot \Delta + 1)^m$. Here $\Delta$ is again an
upper bound on the absolute values of the entries of $A$. The novel strength of
our bound is that it is independent of $n$. We provide evidence for the
significance of our bound by applying it to general knapsack problems where we
obtain structural and algorithmic results that improve upon the recent
literature.
Comment: We achieve much milder dependence of the running time on the largest entry in $b$.
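The Steinitz lemma itself can be checked by brute force on toy instances. The sketch below (a hypothetical helper, exhaustive search, so tiny inputs only) finds the ordering minimizing the largest partial-sum norm in the ℓ∞ norm and confirms it stays within the dimension m = 2:

```python
from itertools import permutations

def best_ordering(vectors):
    """Exhaustively search orderings (toy sizes only) for the one that
    minimizes the largest l_inf norm over all partial sums."""
    best, best_val = None, float("inf")
    for perm in permutations(vectors):
        partial = [0] * len(vectors[0])
        worst = 0
        for v in perm:
            partial = [a + b for a, b in zip(partial, v)]
            worst = max(worst, max(abs(x) for x in partial))
        if worst < best_val:
            best_val, best = worst, perm
    return best, best_val

# Vectors in the l_inf unit ball of R^2 that sum to zero.
vecs = [(1, 0), (1, 0), (-1, 1), (-1, -1)]
order, bound = best_ordering(vecs)
assert bound <= 2  # Steinitz: some ordering keeps all partial sums within m = 2
```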
The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues
A classic result of Banaszczyk (Random Str. & Algor. 1997) states that given any n vectors in Rm with ℓ2-norm at most 1 and any convex body K in Rm of Gaussian measure at least half, there exists a ±1 combination of these vectors that lies in 5K. Banaszczyk’s proof of this result was non-constructive and it was open how to find such a ±1 combination in polynomial time. In this paper, we give an efficient randomized algorithm to find a ±1 combination of the vectors which lies in cK for some fixed constant c > 0. This leads to new efficient algorithms for several problems in discrepancy theory
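The ±1-combination setup (though not Banaszczyk's bound or the Gram-Schmidt walk itself) can be made concrete by exhaustive search over sign patterns on a toy instance:

```python
from itertools import product

def min_discrepancy_signs(vectors):
    """Exhaustive search (toy sizes only) for the +/-1 combination
    minimizing the l_inf norm of the signed sum of the vectors."""
    n, m = len(vectors), len(vectors[0])
    best_signs, best_val = None, float("inf")
    for signs in product((-1, 1), repeat=n):
        signed_sum = [sum(e * v[j] for e, v in zip(signs, vectors))
                      for j in range(m)]
        val = max(abs(x) for x in signed_sum)
        if val < best_val:
            best_val, best_signs = val, signs
    return best_signs, best_val

# Two identical pairs: signing each pair (+1, -1) cancels exactly.
vecs = [(1, 0), (0, 1), (1, 0), (0, 1)]
signs, disc = min_discrepancy_signs(vecs)
assert disc == 0
```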
Approximating Bin Packing within O(log OPT * log log OPT) bins
For bin packing, the input consists of n items with sizes s_1,...,s_n in
[0,1] which have to be assigned to a minimum number of bins of size 1. The
seminal Karmarkar-Karp algorithm from '82 produces a solution with at most OPT
+ O(log^2 OPT) bins.
We provide the first improvement in now 3 decades and show that one can find
a solution of cost OPT + O(log OPT * log log OPT) in polynomial time. This is
achieved by rounding a fractional solution to the Gilmore-Gomory LP relaxation
using the Entropy Method from discrepancy theory. The result is constructive
via algorithms of Bansal and Lovett-Meka
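For contrast with these OPT + polylog(OPT) guarantees, a simple classical heuristic for the same problem is first-fit decreasing (not the algorithm of this paper; shown only to make the bin packing setup concrete):

```python
def first_fit_decreasing(sizes):
    """Place each item, largest first, into the first bin of capacity 1
    that still has room; open a new bin when none fits."""
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= 1.0:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

bins = first_fit_decreasing([0.5, 0.7, 0.5, 0.3, 0.4, 0.6])
# total size is 3.0, so at least 3 bins are needed; FFD finds 3 here
assert len(bins) == 3
```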
Using and saving randomness
Randomness is ubiquitous and exceedingly useful in computer science. For example, in sparse recovery, randomized algorithms are more efficient and robust than their deterministic counterparts. At the same time, because random sources from the real world are often biased and defective with limited entropy, high-quality randomness is a precious resource. This motivates the study of pseudorandomness and randomness extraction. In this thesis, we explore the role of randomness in these areas. Our research contributions broadly fall into two categories: learning structured signals and constructing pseudorandom objects.

Learning a structured signal. One common task in audio signal processing is to compress an interval of observation by finding the dominating k frequencies in its Fourier transform. We study the problem of learning a Fourier-sparse signal from noisy samples, where [0, T] is the observation interval and the frequencies can be “off-grid”. Previous methods for this problem required the gap between frequencies to be above 1/T, which is necessary to robustly identify individual frequencies. We show that this gap is not necessary to recover the signal as a whole: for arbitrary k-Fourier-sparse signals under ℓ₂ bounded noise, we provide a learning algorithm with a constant factor growth of the noise and sample complexity polynomial in k and logarithmic in the bandwidth and signal-to-noise ratio. In addition, we introduce a general method to avoid a condition number depending on the signal family F and the measurement distribution D in the sample complexity. In particular, for any linear family F with dimension d and any distribution D over the domain of F, we show that this method provides a robust learning algorithm with O(d log d) samples.
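A toy illustration of this linear-family setting (plain least squares on random samples, not the thesis's algorithm or its sample-complexity bounds; all names below are made up for the sketch):

```python
import random

# Recover a signal from the 2-dimensional linear family {a*x + b}
# using noisy samples drawn from a distribution D (here uniform on [-1, 1]).
rng = random.Random(0)
a_true, b_true = 2.0, -1.0
xs = [rng.uniform(-1, 1) for _ in range(50)]                      # samples from D
ys = [a_true * x + b_true + 0.01 * rng.gauss(0, 1) for x in xs]   # noisy labels

# Closed-form least squares for slope and intercept.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
b_hat = my - a_hat * mx
assert abs(a_hat - a_true) < 0.1 and abs(b_hat - b_true) < 0.1
```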
Furthermore, we improve the sample complexity to O(d) via spectral sparsification (optimal up to a constant factor), which provides the best known result for a range of linear families such as low-degree multivariate polynomials. Next, we generalize this result to an active learning setting, where we receive a large number of unlabeled points from an unknown distribution and choose a small subset to label. We design a learning algorithm that optimizes both the number of unlabeled points and the number of labels.

Pseudorandomness. Next, we study hash families, which have simple forms in theory and efficient implementations in practice. The size of a hash family is crucial for many applications such as derandomization. In this thesis, we study the upper bound on the size of hash families needed to fulfill their applications in various problems. We first investigate the number of hash functions needed to constitute a randomness extractor, which is equivalent to the degree of the extractor. We present a general probabilistic method that reduces the degree of any given strong extractor to almost optimal, at least when outputting few bits. For various almost-universal hash families, including Toeplitz matrices, Linear Congruential Hash, and Multiplicative Universal Hash, this approach significantly improves the upper bound on the degree of strong extractors in these hash families. Then we consider explicit hash families and multiple-choice schemes in the classical problem of placing balls into bins. We construct explicit hash families of almost-polynomial size that derandomize two classical multiple-choice schemes, matching the maximum loads of a perfectly random hash function.
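The multiple-choice schemes being derandomized can be simulated directly with a fully random hash (a toy simulation, not the explicit constructions of the thesis): with two choices per ball the maximum load drops from roughly log n / log log n to roughly log log n.

```python
import random

def max_load(n_balls, n_bins, choices, seed=0):
    """Simulate the multiple-choice scheme: each ball samples `choices`
    bins uniformly at random and goes into the least loaded of them."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    for _ in range(n_balls):
        picks = [rng.randrange(n_bins) for _ in range(choices)]
        target = min(picks, key=lambda b: bins[b])
        bins[target] += 1
    return max(bins)

one = max_load(10000, 10000, 1)  # single choice: max load ~ log n / log log n
two = max_load(10000, 10000, 2)  # two choices: max load ~ log log n
assert two < one
```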