Approximation Resistant Predicates From Pairwise Independence
We study the approximability of predicates on variables from a domain
, and give a new sufficient condition for such predicates to be
approximation resistant under the Unique Games Conjecture. Specifically, we
show that a predicate is approximation resistant if there exists a balanced
pairwise independent distribution over whose support is contained in
the set of satisfying assignments to
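The condition in this abstract can be illustrated with a minimal, hypothetical example: for the 3-XOR predicate, the uniform distribution over the even-parity strings in {0,1}^3 is balanced, pairwise independent, and supported on the satisfying assignments.

```python
from itertools import combinations, product

# Hypothetical illustration: the uniform distribution over the even-parity
# strings of {0,1}^3 is balanced and pairwise independent, and its support
# lies inside the satisfying assignments of the 3-XOR predicate
# P(x1, x2, x3) = 1 iff x1 ^ x2 ^ x3 == 0.
support = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
p = 1 / len(support)  # uniform weight on the support

# Balanced: every coordinate is marginally uniform over {0, 1}.
for i in range(3):
    assert sum(p for s in support if s[i] == 1) == 0.5

# Pairwise independent: every pair of coordinates is uniform over {0,1}^2.
for i, j in combinations(range(3), 2):
    for a, b in product((0, 1), repeat=2):
        mass = sum(p for s in support if (s[i], s[j]) == (a, b))
        assert mass == 0.25

# Support is contained in the predicate's satisfying assignments.
assert all((x ^ y ^ z) == 0 for x, y, z in support)
print("balanced pairwise independent distribution verified")
```

Note that no three-wise independent distribution could fit inside this support, which is exactly why pairwise independence is the interesting threshold here.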
Gaussian Bounds for Noise Correlation of Functions
In this paper we derive tight bounds on the expected value of products of
{\em low influence} functions defined on correlated probability spaces. The
proofs are based on extending Fourier theory to an arbitrary number of
correlated probability spaces, on a generalization of an invariance principle
recently obtained with O'Donnell and Oleszkiewicz for multilinear polynomials
with low influences and bounded degree and on properties of multi-dimensional
Gaussian distributions. The results derived here have a number of applications
to the theory of social choice in economics, to hardness of approximation in
computer science, and to additive combinatorics problems.
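A quick numerical sketch of the "influence" notion central to this abstract (the definitions below are standard, not taken from the paper itself): the influence of a coordinate is the probability that flipping it changes the function's value; dictators concentrate all influence on one coordinate, while majority spreads it evenly.

```python
from itertools import product

# Inf_i(f) = Pr[f(x) != f(x with bit i flipped)] over uniform x in {0,1}^n.
# Low-influence functions are the objects invariance principles apply to;
# dictator functions are the opposite extreme.
def influence(f, n, i):
    count = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1  # flip coordinate i
        if f(x) != f(tuple(y)):
            count += 1
    return count / 2 ** n

maj3 = lambda x: int(sum(x) >= 2)  # majority of 3 bits
dict1 = lambda x: x[0]             # dictator on coordinate 0

print([influence(maj3, 3, i) for i in range(3)])   # [0.5, 0.5, 0.5]
print([influence(dict1, 3, i) for i in range(3)])  # [1.0, 0.0, 0.0]
```

As the number of voters grows, each individual influence of majority tends to zero, which is what qualifies it as "low influence" in the sense of the invariance principle.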
On the NP-Hardness of Approximating Ordering Constraint Satisfaction Problems
We show improved NP-hardness of approximating Ordering Constraint
Satisfaction Problems (OCSPs). For the two most well-studied OCSPs, Maximum
Acyclic Subgraph and Maximum Betweenness, we prove inapproximability of
and .
An OCSP is said to be approximation resistant if it is hard to approximate
better than taking a uniformly random ordering. We prove that the Maximum
Non-Betweenness Problem is approximation resistant and that there are width-
approximation-resistant OCSPs accepting only a fraction of
assignments. These results provide the first examples of
approximation-resistant OCSPs subject only to P ≠ NP.
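The random-ordering baseline mentioned in this abstract can be checked exactly on a small example (the digraph below is a hypothetical instance, not from the paper): under a uniformly random vertex ordering, each arc of a Maximum Acyclic Subgraph instance is forward with probability exactly 1/2.

```python
from itertools import permutations

# Sketch of the trivial baseline an OCSP is measured against: a uniformly
# random vertex ordering. For Maximum Acyclic Subgraph, an arc (u, v) is
# satisfied when u precedes v, which happens with probability 1/2, so the
# baseline keeps half the arcs in expectation.
arcs = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 1)]  # small example digraph
n = 4

total = 0
orders = list(permutations(range(n)))
for order in orders:
    pos = {v: idx for idx, v in enumerate(order)}
    total += sum(pos[u] < pos[v] for u, v in arcs)

expected_fraction = total / (len(orders) * len(arcs))
print(expected_fraction)  # 0.5: exactly half the arcs, in expectation
```

Approximation resistance says no polynomial-time algorithm can guarantee a satisfied fraction better than this 1/2 (plus any constant), even on instances that are almost fully orderable.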
On the Usefulness of Predicates
Motivated by the pervasiveness of strong inapproximability results for
Max-CSPs, we introduce a relaxed notion of an approximate solution of a
Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to
replace the constraints of an instance by some other (possibly real-valued)
constraints, and then only needs to satisfy as many of the new constraints as
possible.
To be more precise, we introduce the following notion of a predicate
being \emph{useful} for a (real-valued) objective : given an almost
satisfiable Max- instance, there is an algorithm that beats a random
assignment on the corresponding Max- instance applied to the same sets of
literals. The standard notion of a nontrivial approximation algorithm for a
Max-CSP with predicate is exactly the same as saying that is useful for
itself.
We say that is useless if it is not useful for any . This turns out to
be equivalent to the following pseudo-randomness property: given an almost
satisfiable instance of Max- it is hard to find an assignment such that the
induced distribution on -bit strings defined by the instance is not
essentially uniform.
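The induced distribution in the pseudo-randomness property above can be made concrete with a small hypothetical sketch (the instance encoding and names below are illustrative assumptions, not the paper's notation): each constraint is a tuple of literals, and plugging an assignment into every constraint yields one k-bit string per constraint, hence an empirical distribution over {0,1}^k.

```python
from collections import Counter

# Hypothetical encoding: a constraint is a k-tuple of literals, each literal
# a pair (variable index, negated?). An assignment maps each constraint to a
# k-bit string; the instance plus assignment induce a distribution on {0,1}^k.
instance = [((0, False), (1, True), (2, False)),
            ((1, False), (2, False), (3, True)),
            ((0, True), (2, True), (3, False))]

def induced_distribution(instance, assignment):
    strings = Counter()
    for constraint in instance:
        bits = tuple(assignment[var] ^ neg for var, neg in constraint)
        strings[bits] += 1
    total = sum(strings.values())
    return {s: c / total for s, c in strings.items()}

dist = induced_distribution(instance, [1, 0, 1, 0])
print(dist)
```

Uselessness says that on almost-satisfiable instances it is hard to find any assignment whose induced distribution deviates noticeably from uniform on {0,1}^k; with such an essentially uniform distribution, no objective applied to the same literals can beat its random-assignment baseline.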
Under the Unique Games Conjecture, we give a complete and simple
characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is
useless if and only if there is a pairwise independent distribution supported
on the satisfying assignments of the predicate. It is natural to also consider
the case when no negations are allowed in the CSP instance, and we derive a
similar complete characterization (under the UGC) there as well.
Finally, we also include some results and examples shedding additional light
on the approximability of certain Max-CSPs.
Explicit Optimal Hardness via Gaussian stability results
The results of Raghavendra (2008) show that assuming Khot's Unique Games
Conjecture (2002), for every constraint satisfaction problem there exists a
generic semi-definite program that achieves the optimal approximation factor.
This result is existential as it does not provide an explicit optimal rounding
procedure, nor does it allow one to calculate exactly the Unique Games hardness of
the problem.
Obtaining an explicit optimal approximation scheme and the corresponding
approximation factor is a difficult challenge for each specific approximation
problem. An approach for determining the exact approximation factor and the
corresponding optimal rounding was established in the analysis of MAX-CUT (KKMO
2004) and the use of the Invariance Principle (MOO 2005). However, this
approach crucially relies on results explicitly proving optimal partitions in
Gaussian space. Until recently, Borell's result (Borell 1985) was the only
non-trivial Gaussian partition result known.
In this paper we derive the first explicit optimal approximation algorithm
and the corresponding approximation factor using a new result on Gaussian
partitions due to Isaksson and Mossel (2012). This Gaussian result allows us to
determine exactly the Unique Games Hardness of MAX-3-EQUAL. In particular, our
results show that Zwick's algorithm for this problem achieves the optimal
approximation factor and prove that the approximation achieved by the algorithm
is as conjectured by Zwick.
We further use the previously known optimal Gaussian partitions results to
obtain a new Unique Games hardness factor for MAX-k-CSP: using the well-known
fact that jointly normal pairwise independent random variables are fully
independent, we show that the UGC hardness of Max-k-CSP is , improving on results of Austrin and Mossel (2009).
Near-Optimal UGC-hardness of Approximating Max k-CSP_R
In this paper, we prove an almost-optimal hardness for Max -CSP based
on Khot's Unique Games Conjecture (UGC). In Max -CSP, we are given a set
of predicates each of which depends on exactly variables. Each variable can
take any value from . The goal is to find an assignment to
variables that maximizes the number of satisfied predicates.
Assuming the Unique Games Conjecture, we show that it is NP-hard to
approximate Max -CSP to within factor for any . To the best of our knowledge, this result
improves on all the known hardness of approximation results when . In this case, the previous best hardness result was
NP-hardness of approximating within a factor by Chan. When , our result matches the best known UGC-hardness result of Khot, Kindler,
Mossel and O'Donnell.
In addition, by extending an algorithm for Max 2-CSP by Kindler, Kolla
and Trevisan, we provide an -approximation algorithm
for Max -CSP. This algorithm implies that our inapproximability result
is tight up to a factor of . In comparison,
when is a constant, the previously known gap was , which is
significantly larger than our gap of .
Finally, we show that we can replace the Unique Games Conjecture assumption
with Khot's -to-1 Conjecture and still get asymptotically the same hardness
of approximation.
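The baseline implicit throughout this abstract can be computed exactly for any small predicate (the predicate below is a hypothetical example, not one from the paper): a uniformly random assignment over domain [R] satisfies a k-ary predicate P with probability |P^{-1}(1)| / R^k.

```python
from itertools import product
from fractions import Fraction

# Trivial baseline for Max k-CSP over domain [R]: a uniformly random
# assignment satisfies a predicate P with probability |P^{-1}(1)| / R^k.
# Approximation factors are measured against this quantity.
def random_assignment_value(predicate, k, R):
    accepting = sum(predicate(x) for x in product(range(R), repeat=k))
    return Fraction(accepting, R ** k)

# Hypothetical example predicate: all k variables take the same value.
all_equal = lambda x: len(set(x)) == 1
print(random_assignment_value(all_equal, 3, 4))  # R / R^k = 4/64 = 1/16
```

Hardness results of the kind in this abstract say that beating this trivial fraction by more than the stated factor is intractable under the stated conjecture.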
From average case complexity to improper learning complexity
The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of {\em improper learning} (a.k.a. representation-independent learning). The
difficulty in proving lower bounds for improper learning is that the standard
reductions from NP-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning 's is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of halfspaces is hard.