11 research outputs found
Subexponential LPs Approximate Max-Cut
We show that for every $\varepsilon > 0$, the degree-$n^{\varepsilon}$
Sherali-Adams linear program (with $\exp(\tilde{O}(n^{\varepsilon}))$ variables
and constraints) approximates the maximum cut problem within a factor of
$(\tfrac{1}{2}+\varepsilon')$, for some $\varepsilon'(\varepsilon) > 0$. Our
result provides a surprising converse to known lower bounds against all linear
programming relaxations of Max-Cut, and hence resolves the extension complexity
of approximate Max-Cut for approximation factors close to $\tfrac{1}{2}$ (up to
the function $\varepsilon'(\varepsilon)$). Previously, only semidefinite
programs and spectral methods were known to yield approximation factors better
than $\tfrac{1}{2}$ for Max-Cut in time $2^{o(n)}$. We also show that
constant-degree Sherali-Adams linear programs (with $\mathrm{poly}(n)$ variables
and constraints) can solve Max-Cut with approximation factor close to $1$ on
graphs of small threshold rank: this is the first connection of which we are
aware between threshold rank and linear programming-based algorithms.
Our results separate the power of Sherali-Adams versus Lov\'asz-Schrijver
hierarchies for approximating Max-Cut, since it is known that
$(\tfrac{1}{2}+\varepsilon')$-approximation of Max-Cut requires
$\Omega_{\varepsilon'}(n)$ rounds in the Lov\'asz-Schrijver hierarchy.
We also provide a subexponential-time approximation for Khot's Unique Games
problem: we show that for every $\varepsilon > 0$, the degree-$(n^{\varepsilon} \log q)$ Sherali-Adams linear program distinguishes instances of Unique Games
of value $\geq 1 - \varepsilon'$ from instances of value $\leq \varepsilon'$, for
some $\varepsilon'(\varepsilon) > 0$, where $q$ is the alphabet size. Such
guarantees are qualitatively similar to those of previous subexponential-time
algorithms for Unique Games, but our algorithm does not rely on semidefinite
programming or subspace enumeration techniques.
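For context, the Sherali-Adams relaxation referred to above can be written in the following standard form (this formulation is well-known background, not quoted from the paper):

```latex
% Degree-d Sherali-Adams relaxation of Max-Cut on G = (V, E), |V| = n.
% One variable per assignment probability: a local distribution \mu_S over
% \{\pm 1\}^S for every subset S \subseteq V with |S| \le d.
\begin{align*}
\max \quad & \sum_{\{i,j\} \in E} \Pr_{x \sim \mu_{\{i,j\}}}\bigl[x_i \neq x_j\bigr] \\
\text{s.t.} \quad & \mu_S \text{ is a probability distribution on } \{\pm 1\}^S
  && \forall\, S \subseteq V,\ |S| \le d, \\
& \mu_S\big|_{T} = \mu_T && \forall\, T \subseteq S \quad (\text{consistent marginals}).
\end{align*}
% All constraints are linear in the at most \sum_{k \le d} \binom{n}{k} 2^k \le n^{O(d)}
% probability values, so for d = n^{\varepsilon} this is a linear program with
% \exp(\tilde{O}(n^{\varepsilon})) variables and constraints, matching the abstract.
```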
Robustly Learning Mixtures of Arbitrary Gaussians
We give a polynomial-time algorithm for the problem of robustly estimating a
mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the
presence of a constant fraction of arbitrary corruptions. This resolves the
main open problem in several previous works on algorithmic robust statistics,
which addressed the special cases of robustly estimating (a) a single Gaussian,
(b) a mixture of TV-distance separated Gaussians, and (c) a uniform mixture of
two Gaussians. Our main tools are an efficient \emph{partial clustering}
algorithm that relies on the sum-of-squares method, and a novel \emph{tensor
decomposition} algorithm that allows errors in both Frobenius norm and low-rank
terms.
Comment: This version extends the previous one to yield 1) a robust proper
learning algorithm with $\mathrm{poly}(\varepsilon)$ error and 2) an
information-theoretic argument proving that the same algorithms in fact also
yield parameter recovery guarantees. The updates are included in Sections 7, 8,
and 9, and the main result from the previous version (Thm 1.4) is presented and
proved in Section
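The estimation task described above can be stated in standard robust-statistics notation (a paraphrase for the reader's orientation, not the paper's formal theorem):

```latex
% The target distribution is a mixture of k Gaussians in \mathbb{R}^d,
\[
  \mathcal{M} \;=\; \sum_{i=1}^{k} w_i \, \mathcal{N}(\mu_i, \Sigma_i),
  \qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1.
\]
% An adversary arbitrarily replaces an \varepsilon-fraction of n i.i.d. samples
% from \mathcal{M}. The algorithm must output a hypothesis mixture
% \widehat{\mathcal{M}} with d_{\mathrm{TV}}(\widehat{\mathcal{M}}, \mathcal{M})
% bounded by a function of \varepsilon alone (independent of d and n),
% in time polynomial in n and d for fixed k.
```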
Semialgebraic Proofs and Efficient Algorithm Design
Over the past several decades, an exciting interplay between proof systems and algorithms has emerged. Several prominent algorithms can be viewed as direct translations of proofs that a solution exists into algorithms for finding that solution. Perhaps nowhere is this connection more prominent than in the context of semi-algebraic proof systems and large classes of linear and semi-definite programs. The proof system perspective, in this context, has provided fundamentally new tools for both algorithm design and analysis. These new tools have helped both in designing better algorithms for well-studied problems and in proving tight lower bounds on such techniques.
This talk will focus on this connection for the Sum-of-Squares proof system. In the first half, I will develop Sum-of-Squares both as a proof system and as a meta-algorithm. In doing so, I will discuss issues such as the duality between these two perspectives, and under what conditions Sum-of-Squares can be assumed to be automatizable. The second half of the talk will survey the landscape of Sum-of-Squares. This will include how Sum-of-Squares relates to other proof systems and to other semi-definite programs. I will also survey some of the applications of the connection between these two perspectives of Sum-of-Squares to the design of efficient algorithms for a variety of optimization problems.
Non UBC
Unreviewed
Author affiliation: University of Toronto
Graduat
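As background for the proofs-to-algorithms duality mentioned above, a degree-$d$ Sum-of-Squares proof can be defined as follows (a standard definition supplied for reference, not part of the talk abstract):

```latex
% A degree-d Sum-of-Squares proof that the constraints g_1 \ge 0, \dots, g_m \ge 0
% imply p \ge 0 is a polynomial identity
\[
  p \;=\; \sigma_0 \;+\; \sum_{i=1}^{m} \sigma_i \, g_i,
\]
% where each \sigma_i is a sum of squares of polynomials and every term
% \sigma_i g_i has degree at most d. Searching for such a certificate over
% n variables is a semidefinite feasibility problem of size n^{O(d)}, which is
% the source of the duality between SoS as a proof system (the identity above)
% and SoS as a meta-algorithm (solving the corresponding SDP relaxation).
```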