On Learning Mixtures of Well-Separated Gaussians
We consider the problem of efficiently learning mixtures of a large number of
spherical Gaussians, when the components of the mixture are well separated. In
the most basic form of this problem, we are given samples from a uniform
mixture of k standard spherical Gaussians in d dimensions, and the goal is to
estimate the means up to accuracy δ using poly(k, d, 1/δ) samples.
In this work, we study the following question: what is the minimum separation
needed between the means for solving this task? The best known algorithm due to
Vempala and Wang [JCSS 2004] requires a separation of roughly
min(k, d)^{1/4} · polylog(dk/δ). On the other hand, Moitra and Valiant [FOCS 2010] showed
that with separation o(1), exponentially many samples are required. We
address the significant gap between these two bounds by showing the following
results.
1. We show that with separation o(√(log k)), super-polynomially many
samples are required. In fact, this holds even when the means of the
Gaussians are picked at random in d = O(log k) dimensions.
2. We show that with separation Ω(√(log k)), poly(k, d, 1/δ)
samples suffice. Note that the bound on the separation is independent of
δ. This result is based on a new and efficient "accuracy boosting"
algorithm that takes as input coarse estimates of the true means and, in time
poly(k, d, 1/δ), outputs estimates of the means up to arbitrary accuracy δ,
assuming the separation between the means is Ω(min(√(log k), √d)) (independently of δ).
We also present a computationally efficient algorithm in d = O(1) dimensions
with only Ω(√d) separation. These results together essentially
characterize the optimal order of separation between components that is needed
to learn a mixture of spherical Gaussians with polynomial samples.
Comment: Appeared in FOCS 2017. 55 pages, 1 figure.
Bilu-Linial Stable Instances of Max Cut and Minimum Multiway Cut
We investigate the notion of stability proposed by Bilu and Linial. We obtain
an exact polynomial-time algorithm for γ-stable Max Cut instances with
γ ≥ c√(log n) · log log n for some absolute constant c > 0. Our
algorithm is robust: it never returns an incorrect answer; if the instance is
γ-stable, it finds the maximum cut; otherwise, it either finds the
maximum cut or certifies that the instance is not γ-stable. We prove
that there is no robust polynomial-time algorithm for γ-stable instances
of Max Cut when γ < α_SC(n/2), where α_SC(n) is the best
approximation factor for Sparsest Cut with non-uniform demands.
Our algorithm is based on semidefinite programming. We show that the standard
SDP relaxation for Max Cut (with triangle inequalities) is integral
if γ ≥ D_{ℓ2²→ℓ1}(n), where D_{ℓ2²→ℓ1}(n)
is the least distortion with which every n-point metric space of negative
type embeds into ℓ1. On the negative side, we show that the SDP
relaxation is not integral when γ < D_{ℓ2²→ℓ1}(n/2).
Moreover, there is no tractable convex relaxation for γ-stable instances
of Max Cut when γ < D_{ℓ2²→ℓ1}(n/2). That suggests that solving
γ-stable instances with γ = o(√(log n)) might be difficult or
impossible.
Our results significantly improve previously known results. The best
previously known algorithm for γ-stable instances of Max Cut required
that γ ≥ c√n (for some c > 0) [Bilu, Daniely, Linial, and
Saks]. No hardness results were known for the problem. Additionally, we present
an algorithm for 4-stable instances of Minimum Multiway Cut. We also study a
relaxed notion of weak stability.
Comment: 24 pages.
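To make the stability notion concrete, the toy function below brute-forces the critical stability factor of a small weighted graph: the maximum cut remains the unique optimum whenever each edge weight is scaled by any factor in [1, gamma] with gamma below the returned value. This exponential-time enumeration is an illustration of the definition only, not the paper's SDP-based algorithm.

```python
from itertools import combinations

def cut_edges(cut_set, edges):
    """Edges crossing the cut (cut_set vs. the remaining vertices)."""
    return {(u, v) for (u, v, w) in edges if (u in cut_set) != (v in cut_set)}

def max_cut_stability(n, edges):
    """Return the critical gamma for a tiny graph given as (u, v, weight)
    triples: the instance is gamma-stable for every gamma below this value.
    Brute force over all 2^(n-1) cuts; illustration only."""
    w = {(u, v): wt for (u, v, wt) in edges}
    nodes = list(range(n))
    cuts = []
    # Enumerate all 2-partitions, fixing node 0 on one side to avoid mirrors.
    for r in range(n):
        for side in combinations(nodes[1:], r):
            s = set(side) | {0}
            value = sum(wt for (u, v, wt) in edges if (u in s) != (v in s))
            cuts.append((value, s))
    best_val, best_cut = max(cuts, key=lambda t: t[0])
    star = cut_edges(best_cut, edges)
    gamma = float('inf')
    for value, s in cuts:
        c = cut_edges(s, edges)
        if c == star:
            continue
        # Worst-case perturbation: scale the rival cut's private edges by
        # gamma, leave the optimum's private edges at weight w_e.
        only_star = sum(w[e] for e in star - c)
        only_c = sum(w[e] for e in c - star)
        if only_c == 0:
            continue  # this cut can never overtake the optimum
        gamma = min(gamma, only_star / only_c)  # stable iff only_star > gamma * only_c
    return gamma
```

On a triangle with edge weights 2, 2, 1, the optimum cuts the two heavy edges, and the critical factor works out to 2: scaling the light edge by any factor below 2 cannot change the maximum cut.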
Block Stability for MAP Inference
To understand the empirical success of approximate MAP inference, recent work
(Lang et al., 2018) has shown that some popular approximation algorithms
perform very well when the input instance is stable. The simplest stability
condition assumes that the MAP solution does not change at all when some of the
pairwise potentials are (adversarially) perturbed. Unfortunately, this strong
condition does not seem to be satisfied in practice. In this paper, we
introduce a significantly more relaxed condition that only requires blocks
(portions) of an input instance to be stable. Under this block stability
condition, we prove that the pairwise LP relaxation is persistent on the stable
blocks. We complement our theoretical results with an empirical evaluation of
real-world MAP inference instances from computer vision. We design an algorithm
to find stable blocks, and find that these real instances have large stable
regions. Our work gives a theoretical explanation for the widespread empirical
phenomenon of persistency for this LP relaxation.
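The simple (non-block) stability condition described above can be made concrete on a toy example: the sketch below brute-forces the MAP assignment of a tiny binary pairwise model and checks, on a grid of scale factors, whether it survives adversarial scaling of each pairwise potential. This is a simplified stand-in for the paper's block-stability machinery, with hypothetical potential tables chosen for illustration.

```python
from itertools import product

def map_solution(unary, pairwise):
    """Brute-force MAP for a tiny pairwise model over binary variables.
    unary[i][x] and pairwise[(i, j)][(x, y)] are potentials to maximize."""
    n = len(unary)
    def score(assign):
        s = sum(unary[i][assign[i]] for i in range(n))
        s += sum(pot[(assign[i], assign[j])]
                 for (i, j), pot in pairwise.items())
        return s
    return max(product((0, 1), repeat=n), key=score)

def is_pairwise_stable(unary, pairwise, gamma, steps=5):
    """Check (approximately, on a grid of scale factors) whether the MAP
    assignment is unchanged when each pairwise potential is scaled by any
    factor in [1/gamma, gamma] -- the simple stability condition, not the
    relaxed block-level condition of the paper."""
    base = map_solution(unary, pairwise)
    keys = list(pairwise)
    lo = 1.0 / gamma
    factors = [lo + t * (gamma - lo) / (steps - 1) for t in range(steps)]
    for combo in product(factors, repeat=len(keys)):
        pert = {k: {xy: c * v for xy, v in pairwise[k].items()}
                for k, c in zip(keys, combo)}
        if map_solution(pert and unary, pert) != base:
            return False
    return True
```

When the unary potentials dominate the pairwise ones, the MAP labeling survives all perturbations in the allowed range, which is exactly the regime where persistency-style guarantees are informative.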