On a generalization of iterated and randomized rounding
We give a general method for rounding linear programs that combines the
commonly used iterated rounding and randomized rounding techniques. In
particular, we show that whenever iterated rounding can be applied to a problem
with some slack, there is a randomized procedure that returns an integral
solution that satisfies the guarantees of iterated rounding and also has
concentration properties. We use this to give new results for several classic
problems where iterated rounding has been useful.
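As background for this combination, the sketch below shows plain independent randomized rounding of a fractional LP solution (a minimal illustration of the classical building block, not the paper's procedure, which additionally preserves the guarantees of iterated rounding): each coordinate is set to 1 with probability equal to its LP value, so linear objectives are preserved in expectation and Chernoff bounds give the concentration referred to above.

import numpy as np

def randomized_round(x_frac, rng=None):
    # Independent randomized rounding: each coordinate becomes 1 with
    # probability equal to its fractional value, so any linear function
    # of the solution is preserved in expectation.
    rng = rng or np.random.default_rng()
    return (rng.random(len(x_frac)) < x_frac).astype(int)

# Toy fractional LP solution; the rounded vector has expected sum 2.2,
# and Chernoff bounds show it concentrates around that value.
x = np.array([0.9, 0.5, 0.1, 0.7])
print(randomized_round(x, np.random.default_rng(0)))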
Approximating Bin Packing within O(log OPT * log log OPT) bins
For bin packing, the input consists of n items with sizes s_1,...,s_n in
[0,1] which have to be assigned to a minimum number of bins of size 1. The
seminal Karmarkar-Karp algorithm from '82 produces a solution with at most OPT
+ O(log^2 OPT) bins.
We provide the first improvement in three decades and show that one can find
a solution of cost OPT + O(log OPT * log log OPT) in polynomial time. This is
achieved by rounding a fractional solution to the Gilmore-Gomory LP relaxation
using the Entropy Method from discrepancy theory. The result is constructive
via algorithms of Bansal and Lovett-Meka.
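To make the Gilmore-Gomory relaxation concrete, the following sketch (an illustration only, not the paper's algorithm, and nowhere near its scale) writes down the configuration LP for a tiny instance by enumerating every feasible bin pattern and solving it with SciPy; the paper's contribution lies in how a fractional solution of this LP is rounded.

from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def gilmore_gomory_lp(sizes):
    # Configuration LP: one variable x_P >= 0 per pattern P (a set of items
    # that fits in a single bin); minimize sum_P x_P subject to every item
    # being covered at least once.
    n = len(sizes)
    patterns = []
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if sum(sizes[i] for i in subset) <= 1.0:
                col = np.zeros(n)
                col[list(subset)] = 1
                patterns.append(col)
    A = np.array(patterns).T                   # item-by-pattern incidence matrix
    res = linprog(c=np.ones(len(patterns)),
                  A_ub=-A, b_ub=-np.ones(n),   # cover each item at least once
                  bounds=[(0, None)] * len(patterns))
    return res.fun                             # fractional optimum, a lower bound on OPT

# Tiny instance: the LP value here is 2, matching the integral optimum.
print(gilmore_gomory_lp([0.6, 0.5, 0.5, 0.4]))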
Rounding Sum-of-Squares Relaxations
We present a general approach to rounding semidefinite programming
relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our
approach is based on using the connection between these relaxations and the
Sum-of-Squares proof system to transform a *combining algorithm* -- an
algorithm that maps a distribution over solutions into a (possibly weaker)
solution -- into a *rounding algorithm* that maps a solution of the relaxation
to a solution of the original problem.
Using this approach, we obtain algorithms that yield improved results for
natural variants of three well-known problems:
1) We give a quasipolynomial-time algorithm that approximates the maximum of
a low degree multivariate polynomial with non-negative coefficients over the
Euclidean unit sphere. Beyond being of interest in its own right, this is
related to an open question in quantum information theory, and our techniques
have already led to improved results in this area (Brandão and Harrow, STOC
'13).
2) We give a polynomial-time algorithm that, given a d dimensional subspace
of R^n that (almost) contains the characteristic function of a set of size n/k,
finds a vector v in the subspace satisfying ||v||_4^4 >= c(k/d) ||v||_2^4, where
||v||_p denotes the L_p norm of v and c > 0 is an absolute constant. Aside from
being a natural relaxation, this
is also motivated by a connection to the Small Set Expansion problem shown by
Barak et al. (STOC 2012) and our results yield a certain improvement for that
problem.
3) We use this notion of L_4 vs. L_2 sparsity to obtain a polynomial-time
algorithm with substantially improved guarantees for recovering a planted
mu-sparse vector v in a random d-dimensional subspace of R^n. If v has mu n
nonzero coordinates, we can recover it with high probability whenever
mu <= O(min(1, n/d^2)), improving, for d <= n^(2/3), on prior methods which
intrinsically required mu <= O(1/sqrt(d)).
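As a quick numerical sanity check of the L_4 vs. L_2 sparsity notion behind items 2) and 3) (an illustration, not the paper's recovery algorithm): for the characteristic vector of a set of mu*n coordinates, ||v||_4^4 / ||v||_2^4 = 1/(mu*n), so sparser vectors have a larger ratio, while a dense Gaussian vector's ratio is only about 3/n.

import numpy as np

def l4_l2_ratio(v):
    # The analytic sparsity proxy: ||v||_4^4 / ||v||_2^4.
    return np.sum(v ** 4) / np.sum(v ** 2) ** 2

rng = np.random.default_rng(0)
n, mu = 10_000, 0.01

sparse_v = np.zeros(n)
sparse_v[: int(mu * n)] = 1.0            # characteristic vector of mu*n coordinates
dense_v = rng.standard_normal(n)         # typical dense vector

print(l4_l2_ratio(sparse_v))             # 1/(mu*n) = 0.01
print(l4_l2_ratio(dense_v))              # roughly 3/n = 0.0003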
Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration
Computing optimal transport distances such as the earth mover's distance is a
fundamental problem in machine learning, statistics, and computer vision.
Despite the recent introduction of several algorithms with good empirical
performance, it is unknown whether general optimal transport distances can be
approximated in near-linear time. This paper demonstrates that this ambitious
goal is in fact achieved by Cuturi's Sinkhorn Distances. This result relies on
a new analysis of Sinkhorn iteration, which also directly suggests a new greedy
coordinate descent algorithm, Greenkhorn, with the same theoretical guarantees.
Numerical simulations illustrate that Greenkhorn significantly outperforms the
classical Sinkhorn algorithm in practice.
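For context, here is a minimal sketch of the classical Sinkhorn iteration that the paper analyzes (variable names, the regularization value, and the toy example are illustrative choices, not taken from the paper): alternately rescale the rows and columns of the Gibbs kernel exp(-C/reg) until the transport plan's marginals match the prescribed distributions.

import numpy as np

def sinkhorn(C, r, c, reg=0.1, n_iters=500):
    # Entropically regularized OT: find diagonal scalings u, v so that the
    # plan P = diag(u) K diag(v), with K = exp(-C/reg), has row marginals r
    # and column marginals c.
    K = np.exp(-C / reg)
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(n_iters):
        u = r / (K @ v)                  # match row marginals
        v = c / (K.T @ u)                # match column marginals
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)              # transport plan and its cost

# Toy example: uniform marginals on 5 points with a random cost matrix.
rng = np.random.default_rng(0)
C = rng.random((5, 5))
r = np.ones(5) / 5
c = np.ones(5) / 5
P, cost = sinkhorn(C, r, c)
print(cost)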
- …