Robustly Solvable Constraint Satisfaction Problems
An algorithm for a constraint satisfaction problem is called robust if it
outputs an assignment satisfying at least a $(1-g(\varepsilon))$-fraction of the
constraints given a $(1-\varepsilon)$-satisfiable instance, where
$g(\varepsilon) \to 0$ as $\varepsilon \to 0$. Guruswami and
Zhou conjectured a characterization of constraint languages for which the
corresponding constraint satisfaction problem admits an efficient robust
algorithm. This paper confirms their conjecture.
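As a toy illustration of the robustness definition (all names here are hypothetical helpers, not from the paper): the quantity a robust algorithm must push above $1-g(\varepsilon)$ is simply the fraction of constraints an assignment satisfies.

```python
from typing import Callable, List, Tuple

# A CSP instance: each constraint is (predicate, scope of variable indices).
Constraint = Tuple[Callable[..., bool], Tuple[int, ...]]

def satisfied_fraction(constraints: List[Constraint], assignment: List[int]) -> float:
    """Fraction of constraints the assignment satisfies."""
    hits = sum(1 for pred, scope in constraints
               if pred(*(assignment[v] for v in scope)))
    return hits / len(constraints)

# Tiny Max-2SAT-style instance over Boolean variables x0..x2:
# clauses (x0 or x1), (not x1 or x2), (x0 or not x2)
cons = [
    (lambda a, b: bool(a or b), (0, 1)),
    (lambda a, b: bool((not a) or b), (1, 2)),
    (lambda a, b: bool(a or (not b)), (0, 2)),
]
print(satisfied_fraction(cons, [1, 0, 1]))  # 1.0: a fully satisfying assignment
print(satisfied_fraction(cons, [0, 1, 0]))  # 2/3: one clause violated
```

A robust algorithm is then one that, fed any instance on which some assignment scores $1-\varepsilon$, finds one scoring $1-g(\varepsilon)$.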
Galois correspondence for counting quantifiers
We introduce a new type of closure operator on the set of relations,
max-implementation, and its weaker analog max-quantification. Then we show that
approximation preserving reductions between counting constraint satisfaction
problems (#CSPs) are preserved by these two types of closure operators.
Together with some previous results this means that the approximation
complexity of counting CSPs is determined by partial clones of relations that
are additionally closed under these new types of closure operators. Galois
correspondences of various kinds have proved quite helpful in the study of
the complexity of the CSP. While we were unable to identify a Galois
correspondence for partial clones closed under max-implementation and
max-quantification, we obtain such results for a slightly different type of
closure operator, k-existential quantification. Quantifiers of this type are
known as counting quantifiers in model theory, and are often used to enhance
first-order logic languages. We characterize partial clones of relations closed under
k-existential quantification as sets of relations invariant under a set of
partial functions that satisfy the condition of k-subset surjectivity. Finally,
we give a description of Boolean max-co-clones, that is, sets of relations on
{0,1} closed under max-implementations.
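The k-existential (counting) quantifier has a simple operational reading: $\exists^{\geq k} x\, R(x,\bar{y})$ holds when at least k distinct witnesses for x exist. A small sketch for relations given as sets of tuples (function name hypothetical):

```python
from collections import defaultdict

def exists_at_least_k(relation, coord, k):
    """Apply the counting quantifier 'there exist at least k values of
    coordinate `coord`' to a relation given as a set of tuples.
    Returns the residual tuples (with `coord` projected away) that have
    at least k distinct witnesses."""
    witnesses = defaultdict(set)
    for t in relation:
        rest = t[:coord] + t[coord + 1:]
        witnesses[rest].add(t[coord])
    return {rest for rest, ws in witnesses.items() if len(ws) >= k}

# Binary relation on {0,1}: R = {(0,0), (0,1), (1,0)}
R = {(0, 0), (0, 1), (1, 0)}
# With k=1 this is ordinary existential quantification over x;
# with k=2 only y=0 has two distinct witnesses x.
print(sorted(exists_at_least_k(R, 0, 1)))  # [(0,), (1,)]
print(sorted(exists_at_least_k(R, 0, 2)))  # [(0,)]
```

Closure of a set of relations under this operator is what the characterization via k-subset-surjective partial functions captures.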
Entropy landscape and non-Gibbs solutions in constraint satisfaction problems
We study the entropy landscape of solutions for the bicoloring problem in
random graphs, a representative difficult constraint satisfaction problem. Our
goal is to classify which types of clusters of solutions are addressed by
different algorithms. In the first part of the study we use the cavity method
to obtain the number of clusters with a given internal entropy and determine
the phase diagram of the problem, e.g. dynamical, rigidity and SAT-UNSAT
transitions. In the second part of the paper we analyze different algorithms
and locate their behavior in the entropy landscape of the problem. For
instance, we show that a smoothed version of a decimation strategy based on
Belief Propagation is able to find solutions belonging to sub-dominant clusters
even beyond the so-called rigidity transition, where the thermodynamically
relevant clusters become frozen. These non-equilibrium solutions belong to the
most probable unfrozen clusters.
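As a toy illustration of decimation (not the authors' smoothed BP algorithm; all names hypothetical): the loop below repeatedly fixes the most-biased free variable to its majority value, using exact enumeration marginals on a tiny not-all-equal instance as a stand-in for the Belief Propagation estimates one would use at scale.

```python
import itertools

def nae(*vals):
    # not-all-equal constraint: satisfied unless all values coincide
    return len(set(vals)) > 1

def marginals(n, clauses, fixed):
    """Exact solution-count marginals by enumeration (a stand-in for BP)."""
    counts = [[0, 0] for _ in range(n)]
    total = 0
    for assign in itertools.product([0, 1], repeat=n):
        if any(assign[v] != s for v, s in fixed.items()):
            continue
        if all(nae(*(assign[v] for v in cl)) for cl in clauses):
            total += 1
            for v in range(n):
                counts[v][assign[v]] += 1
    return counts, total

def decimate(n, clauses):
    """Fix the most polarized free variable to its majority value, repeat."""
    fixed = {}
    while len(fixed) < n:
        counts, _ = marginals(n, clauses, fixed)
        free = [v for v in range(n) if v not in fixed]
        v = max(free, key=lambda u: abs(counts[u][1] - counts[u][0]))
        fixed[v] = 1 if counts[v][1] >= counts[v][0] else 0
    return fixed

clauses = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]
sol = decimate(4, clauses)
print(sol, all(nae(*(sol[v] for v in cl)) for cl in clauses))
```

With exact marginals decimation always stays inside the solution set; the paper's point is precisely what happens when BP marginals replace exact ones on large random instances.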
Sketching Cuts in Graphs and Hypergraphs
Sketching and streaming algorithms are at the forefront of current research
directions for cut problems in graphs. In the streaming model, we show that a
$(1-\varepsilon)$-approximation for Max-Cut must use $n^{1-O(\varepsilon)}$ space;
moreover, beating a $4/5$-approximation requires polynomial space. For the
sketching model, we show that $r$-uniform hypergraphs admit a
$(1+\varepsilon)$-cut-sparsifier (i.e., a weighted subhypergraph that
approximately preserves all the cuts) with $O(\varepsilon^{-2} n (r + \log n))$
edges. We also make first steps towards sketching general CSPs (Constraint
Satisfaction Problems).
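For orientation, the quantity a cut-sparsifier must approximately preserve is the total weight of hyperedges crossing a bipartition; a minimal sketch (names hypothetical):

```python
def hypergraph_cut_weight(hyperedges, weights, side):
    """Weight of hyperedges crossing the cut: a hyperedge is cut iff it
    has vertices on both sides of the bipartition `side` (vertex -> 0/1)."""
    total = 0.0
    for edge, w in zip(hyperedges, weights):
        labels = {side[v] for v in edge}
        if len(labels) == 2:  # vertices on both sides
            total += w
    return total

edges = [(0, 1, 2), (2, 3, 4), (0, 4)]
w = [1.0, 2.0, 3.0]
side = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}
print(hypergraph_cut_weight(edges, w, side))  # 4.0: edges 1 and 3 are cut
```

A $(1+\varepsilon)$-cut-sparsifier is a reweighted subhypergraph for which this value stays within a $(1\pm\varepsilon)$ factor for every bipartition simultaneously.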
On the Usefulness of Predicates
Motivated by the pervasiveness of strong inapproximability results for
Max-CSPs, we introduce a relaxed notion of an approximate solution of a
Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to
replace the constraints of an instance by some other (possibly real-valued)
constraints, and then only needs to satisfy as many of the new constraints as
possible.
To be more precise, we introduce the following notion of a predicate $P$
being \emph{useful} for a (real-valued) objective $Q$: given an almost
satisfiable Max-$P$ instance, there is an algorithm that beats a random
assignment on the corresponding Max-$Q$ instance applied to the same sets of
literals. The standard notion of a nontrivial approximation algorithm for a
Max-CSP with predicate $P$ is exactly the same as saying that $P$ is useful for
itself.
We say that $P$ is useless if it is not useful for any $Q$. This turns out to
be equivalent to the following pseudo-randomness property: given an almost
satisfiable instance of Max-$P$, it is hard to find an assignment such that the
induced distribution on $k$-bit strings defined by the instance is not
essentially uniform.
Under the Unique Games Conjecture, we give a complete and simple
characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is
useless if and only if there is a pairwise independent distribution supported
on the satisfying assignments of the predicate. It is natural to also consider
the case when no negations are allowed in the CSP instance, and we derive a
similar complete characterization (under the UGC) there as well.
Finally, we also include some results and examples shedding additional light
on the approximability of certain Max-CSPs.
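The characterization can be checked by hand on the classic example of 3-XOR: the uniform distribution over its satisfying assignments is pairwise independent, so Max-3-XOR is useless under the UGC. A small verification sketch (helper names hypothetical):

```python
import itertools

def parity_pred(x, y, z):
    # 3-XOR predicate: x + y + z = 0 (mod 2)
    return (x + y + z) % 2 == 0

sat = [a for a in itertools.product([0, 1], repeat=3) if parity_pred(*a)]

def pairwise_uniform(support):
    """Check that the uniform distribution on `support` is pairwise
    independent: every pair of coordinates is jointly uniform on {0,1}^2."""
    n = len(support[0])
    for i, j in itertools.combinations(range(n), 2):
        for a, b in itertools.product([0, 1], repeat=2):
            p = sum(1 for t in support if t[i] == a and t[j] == b) / len(support)
            if abs(p - 0.25) > 1e-9:
                return False
    return True

# 3-XOR has 4 satisfying assignments; each pair of coordinates hits each
# of the 4 patterns exactly once, so the distribution is pairwise uniform.
print(pairwise_uniform(sat))  # True
```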
AM with Multiple Merlins
We introduce and study a new model of interactive proofs: AM(k), or
Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known
MIP, here the assumption is that each Merlin receives an independent random
challenge from Arthur. One motivation for this model (which we explore in
detail) comes from the close analogies between it and the quantum complexity
class QMA(k), but the AM(k) model is also natural in its own right.
We illustrate the power of multiple Merlins by giving an AM(2) protocol for
3SAT, in which the Merlins' challenges and responses consist of only
n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the
Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP
with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms
nearly matching this lower bound are known, but their running times had never
been previously explained. Brandao and Harrow have also recently used our 3SAT
protocol to show quasipolynomial hardness for approximating the values of
certain entangled games.
In the other direction, we give a simple quasipolynomial-time approximation
algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT
protocol is essentially optimal. More generally, we show that multiple Merlins
never provide more than a polynomial advantage over one: that is, AM(k)=AM for
all k=poly(n). The key to this result is a subsampling theorem for free games,
which follows from powerful results by Alon et al. and Barak et al. on
subsampling dense CSPs, and which says that the value of any free game can be
closely approximated by the value of a logarithmic-sized random subgame.
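A toy numerical illustration of free games and subsampling (not the authors' protocol or proof; all names hypothetical): since the payoff is linear in each player's strategy distribution, a free game's value is attained at deterministic strategies, so tiny instances can be solved exactly and compared against a random subgame.

```python
import itertools, random

def game_value(payoff, X, Y, A, B):
    """Exact value of a free game by brute force over deterministic
    strategies a: X -> A and b: Y -> B (sufficient for free games)."""
    best = 0.0
    for a in itertools.product(A, repeat=len(X)):
        for b in itertools.product(B, repeat=len(Y)):
            total = sum(payoff(x, y, a[i], b[j])
                        for i, x in enumerate(X)
                        for j, y in enumerate(Y))
            best = max(best, total / (len(X) * len(Y)))
    return best

random.seed(0)
X = Y = list(range(4))
A = B = [0, 1]
# Random 0/1 payoff table over (question, question, answer, answer)
table = {(x, y, a, b): float(random.random() < 0.5)
         for x in X for y in Y for a in A for b in B}

def payoff(x, y, a, b):
    return table[(x, y, a, b)]

full = game_value(payoff, X, Y, A, B)
# Value of a random subgame on half the questions per player
subX, subY = random.sample(X, 2), random.sample(Y, 2)
sub = game_value(payoff, subX, subY, A, B)
print(full, sub)
```

The subsampling theorem says that, at scale, a logarithmic-sized random subgame's value closely tracks the full value; this toy run only illustrates the objects involved.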
Sum of squares lower bounds for refuting any CSP
Let $P : \{0,1\}^k \to \{0,1\}$ be a nontrivial $k$-ary predicate. Consider a
random instance of the constraint satisfaction problem $\mathrm{CSP}(P)$ on $n$
variables with $\Delta n$ constraints, each being $P$ applied to $k$ randomly
chosen literals. Provided the constraint density satisfies $\Delta \gg 1$, such
an instance is unsatisfiable with high probability. The \emph{refutation}
problem is to efficiently find a proof of unsatisfiability.
We show that whenever the predicate $P$ supports a $t$-\emph{wise uniform}
probability distribution on its satisfying assignments, the sum of squares
(SOS) algorithm of degree $d = \Theta(\frac{n}{\Delta^{2/(t-1)} \log \Delta})$
(which runs in time $n^{O(d)}$) \emph{cannot} refute a random instance of
$\mathrm{CSP}(P)$. In particular, the polynomial-time SOS algorithm requires
$\widetilde{\Omega}(n^{(t+1)/2})$ constraints to refute random instances of
$\mathrm{CSP}(P)$ when $P$ supports a $t$-wise uniform distribution on its satisfying
assignments. Together with recent work of Lee et al. [LRS15], our result also
implies that \emph{any} polynomial-size semidefinite programming relaxation for
refutation requires at least $\widetilde{\Omega}(n^{(t+1)/2})$ constraints.
Our results (which also extend with no change to CSPs over larger alphabets)
subsume all previously known lower bounds for semialgebraic refutation of
random CSPs. For every constraint predicate $P$, they give a three-way hardness
tradeoff between the density of constraints, the SOS degree (hence running
time), and the strength of the refutation. By recent algorithmic results of
Allen et al. [AOW15] and Raghavendra et al. [RRS16], this full three-way
tradeoff is \emph{tight}, up to lower-order factors.
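The notion of $t$-wise uniformity driving the tradeoff can be checked directly on small predicates; for example, the satisfying assignments of 4-XOR support a 3-wise (but not 4-wise) uniform distribution. A small sketch (names hypothetical):

```python
import itertools

def t_wise_uniform(support, t, q=2):
    """Check that the uniform distribution over `support` (tuples over
    {0..q-1}) is t-wise uniform: every t coordinates are jointly uniform."""
    n = len(support[0])
    for coords in itertools.combinations(range(n), t):
        for vals in itertools.product(range(q), repeat=t):
            count = sum(1 for s in support
                        if all(s[c] == v for c, v in zip(coords, vals)))
            # exact check: count / |support| must equal q^{-t}
            if count * q**t != len(support):
                return False
    return True

# Satisfying assignments of 4-XOR: x1 + x2 + x3 + x4 = 0 (mod 2)
sat = [s for s in itertools.product([0, 1], repeat=4) if sum(s) % 2 == 0]
print(t_wise_uniform(sat, 3))  # True: any 3 coordinates are jointly uniform
print(t_wise_uniform(sat, 4))  # False: all 4 coordinates determine each other
```

In the paper's terms, $t = 3$ here would govern how many constraints random 4-XOR needs before polynomial-time SOS can refute it.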