
    Robustly Solvable Constraint Satisfaction Problems

    An algorithm for a constraint satisfaction problem is called robust if it outputs an assignment satisfying at least a $(1-g(\varepsilon))$-fraction of the constraints given a $(1-\varepsilon)$-satisfiable instance, where $g(\varepsilon) \rightarrow 0$ as $\varepsilon \rightarrow 0$. Guruswami and Zhou conjectured a characterization of the constraint languages for which the corresponding constraint satisfaction problem admits an efficient robust algorithm. This paper confirms their conjecture.
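
    The guarantee above concerns only the fraction of satisfied constraints. The following minimal Python sketch makes that quantity concrete; the instance encoding (a list of (scope, allowed-tuples) constraints) and the helper name are illustrative assumptions, not taken from the paper.

```python
from fractions import Fraction

def satisfied_fraction(constraints, assignment):
    """Fraction of constraints satisfied by `assignment`.

    Each constraint is (scope, allowed): a tuple of variable names and a
    set of value tuples that satisfy it.
    """
    ok = sum(1 for scope, allowed in constraints
             if tuple(assignment[v] for v in scope) in allowed)
    return Fraction(ok, len(constraints))

# A robust algorithm, given a (1 - eps)-satisfiable instance, must return an
# assignment whose satisfied fraction is at least 1 - g(eps), g(eps) -> 0.
constraints = [(("x", "y"), {(0, 1), (1, 0), (1, 1)}),   # x OR y
               (("y", "z"), {(0, 0), (1, 1)})]           # y == z
print(satisfied_fraction(constraints, {"x": 1, "y": 0, "z": 0}))  # 1
```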

    Galois correspondence for counting quantifiers

    We introduce a new type of closure operator on the set of relations, max-implementation, and its weaker analog, max-quantification. We then show that approximation-preserving reductions between counting constraint satisfaction problems (#CSPs) are preserved by these two types of closure operators. Together with some previous results, this means that the approximation complexity of counting CSPs is determined by partial clones of relations that are additionally closed under these new types of closure operators. Galois correspondences of various kinds have proved quite helpful in the study of the complexity of the CSP. While we were unable to identify a Galois correspondence for partial clones closed under max-implementation and max-quantification, we obtain such results for a slightly different type of closure operator, k-existential quantification. Quantifiers of this type are known as counting quantifiers in model theory and are often used to enhance first-order logic. We characterize partial clones of relations closed under k-existential quantification as sets of relations invariant under a set of partial functions that satisfy the condition of k-subset surjectivity. Finally, we give a description of Boolean max-co-clones, that is, sets of relations on {0,1} closed under max-implementations.
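
    The paper's characterization is stated in terms of partial functions satisfying k-subset surjectivity. As background only, the hedged Python sketch below illustrates the basic invariance (preservation) relation between relations and operations on which such Galois correspondences are built, restricted to total operations and with function names of my own choosing.

```python
from itertools import product

def preserves(f, arity_f, relation):
    """Check whether operation `f` (of arity `arity_f`) preserves `relation`.

    `relation` is a set of equal-length tuples over some domain.  `f`
    preserves it if applying `f` coordinatewise to any `arity_f` tuples
    of the relation yields a tuple that is again in the relation.
    """
    for rows in product(relation, repeat=arity_f):
        image = tuple(f(*column) for column in zip(*rows))
        if image not in relation:
            return False
    return True

# The Boolean relation "x OR y" is preserved by max but not by min.
r_or = {(0, 1), (1, 0), (1, 1)}
print(preserves(max, 2, r_or))  # True
print(preserves(min, 2, r_or))  # False
```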

    Entropy landscape and non-Gibbs solutions in constraint satisfaction problems

    We study the entropy landscape of solutions for the bicoloring problem in random graphs, a representative difficult constraint satisfaction problem. Our goal is to classify which types of clusters of solutions are addressed by different algorithms. In the first part of the study we use the cavity method to obtain the number of clusters with a given internal entropy and determine the phase diagram of the problem, e.g., the dynamical, rigidity, and SAT-UNSAT transitions. In the second part of the paper we analyze different algorithms and locate their behavior in the entropy landscape of the problem. For instance, we show that a smoothed version of a decimation strategy based on Belief Propagation is able to find solutions belonging to sub-dominant clusters even beyond the so-called rigidity transition, where the thermodynamically relevant clusters become frozen. These non-equilibrium solutions belong to the most probable unfrozen clusters.
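
    For readers unfamiliar with the decimation strategy mentioned above, here is a minimal Python sketch of plain BP-guided decimation for bicoloring, assuming a hypergraph encoding in which no hyperedge may be monochromatic; it is my own toy illustration of the general idea, not the smoothed procedure analysed in the paper.

```python
import random
from math import prod

def bp_decimate(n_vars, edges, iters=50, seed=0):
    """BP-guided decimation for bicoloring: no hyperedge may be monochromatic.

    Messages are stored as the probability of colour 1.  Toy sketch of the
    general strategy only.
    """
    rng = random.Random(seed)
    var_to_con = {(i, a): rng.random() for a, e in enumerate(edges) for i in e}
    con_to_var = {(a, i): 0.5 for a, e in enumerate(edges) for i in e}
    fixed = {}

    def bp_round():
        # Constraint -> variable: P(colour 1) given the edge is not monochromatic.
        for a, e in enumerate(edges):
            for i in e:
                others = [j for j in e if j != i]
                p1 = 1.0 - prod(var_to_con[(j, a)] for j in others)
                p0 = 1.0 - prod(1.0 - var_to_con[(j, a)] for j in others)
                con_to_var[(a, i)] = p1 / max(p1 + p0, 1e-12)
        # Variable -> constraint: product of incoming messages from other edges.
        for (i, a) in list(var_to_con):
            if i in fixed:
                var_to_con[(i, a)] = float(fixed[i])
                continue
            inc = [con_to_var[(b, i)] for b, e in enumerate(edges)
                   if i in e and b != a]
            p1 = prod(inc)
            p0 = prod(1.0 - m for m in inc)
            var_to_con[(i, a)] = p1 / max(p1 + p0, 1e-12)

    while len(fixed) < n_vars:
        for _ in range(iters):
            bp_round()
        # Marginals of the still-free variables; fix the most biased one.
        marg = {}
        for i in range(n_vars):
            if i in fixed:
                continue
            msgs = [con_to_var[(a, i)] for a, e in enumerate(edges) if i in e]
            p1 = prod(msgs)
            p0 = prod(1.0 - m for m in msgs)
            marg[i] = p1 / max(p1 + p0, 1e-12)
        i = max(marg, key=lambda v: abs(marg[v] - 0.5))
        fixed[i] = int(marg[i] >= 0.5)
    return fixed

# Tiny instance: every hyperedge must receive both colours.
print(bp_decimate(4, [(0, 1, 2), (1, 2, 3), (0, 2, 3)]))
```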

    Sketching Cuts in Graphs and Hypergraphs

    Sketching and streaming algorithms are at the forefront of current research directions for cut problems in graphs. In the streaming model, we show that $(1-\epsilon)$-approximation for Max-Cut must use $n^{1-O(\epsilon)}$ space; moreover, beating $4/5$-approximation requires polynomial space. For the sketching model, we show that $r$-uniform hypergraphs admit a $(1+\epsilon)$-cut-sparsifier (i.e., a weighted subhypergraph that approximately preserves all the cuts) with $O(\epsilon^{-2} n (r+\log n))$ edges. We also take first steps towards sketching general CSPs (Constraint Satisfaction Problems).
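
    To make the sparsifier guarantee concrete, the Python sketch below defines hypergraph cut values and a brute-force check of the $(1+\epsilon)$ preservation property for a weighted subhypergraph; the encodings and function names are my own illustration, not the paper's construction.

```python
from itertools import combinations

def cut_value(hyperedges, side_a, weights=None):
    """Total weight of hyperedges crossing the bipartition (side_a, rest).

    A hyperedge is cut if it has at least one vertex on each side.
    """
    weights = weights if weights is not None else {e: 1.0 for e in hyperedges}
    return sum(weights[e] for e in hyperedges
               if any(v in side_a for v in e) and any(v not in side_a for v in e))

def is_cut_sparsifier(vertices, edges, sub_edges, sub_weights, eps):
    """Brute-force check of the (1 + eps)-cut-sparsifier property over all cuts."""
    for r in range(1, len(vertices)):
        for side in combinations(vertices, r):
            exact = cut_value(edges, set(side))
            approx = cut_value(sub_edges, set(side), sub_weights)
            if not exact / (1 + eps) <= approx <= exact * (1 + eps):
                return False
    return True

edges = [(0, 1, 2), (1, 2, 3), (0, 3)]
print(cut_value(edges, {0, 1}))  # 3.0: all three hyperedges are cut
print(is_cut_sparsifier([0, 1, 2, 3], edges, edges,
                        {e: 1.0 for e in edges}, 0.1))  # True (trivial sparsifier)
```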

    On the Usefulness of Predicates

    Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate $P$ being \emph{useful} for a (real-valued) objective $Q$: given an almost satisfiable Max-$P$ instance, there is an algorithm that beats a random assignment on the corresponding Max-$Q$ instance applied to the same sets of literals. The standard notion of a nontrivial approximation algorithm for a Max-CSP with predicate $P$ is exactly the same as saying that $P$ is useful for $P$ itself. We say that $P$ is useless if it is not useful for any $Q$. This turns out to be equivalent to the following pseudo-randomness property: given an almost satisfiable instance of Max-$P$ it is hard to find an assignment such that the induced distribution on $k$-bit strings defined by the instance is not essentially uniform. Under the Unique Games Conjecture, we give a complete and simple characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well. Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.
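
    The pairwise-independence criterion can be checked mechanically for a small predicate. The Python sketch below (my own illustration) tests only the uniform distribution on the satisfying assignments; that already suffices for 3-XOR, whose uniform satisfying distribution is pairwise independent, so by the characterization above Max-3-XOR is useless (under the UGC). Note that the criterion allows any supported distribution, so a failure of the uniform one, as for 3-OR below, decides nothing on its own.

```python
from itertools import product, combinations

def uniform_sat_is_pairwise_independent(predicate, k):
    """Check whether the uniform distribution on the satisfying assignments of a
    k-ary Boolean predicate is pairwise independent with uniform marginals."""
    sat = [x for x in product((0, 1), repeat=k) if predicate(x)]
    for i, j in combinations(range(k), 2):
        counts = {}
        for x in sat:
            counts[(x[i], x[j])] = counts.get((x[i], x[j]), 0) + 1
        # Each of the four pair values must occur, and equally often.
        if len(counts) != 4 or len(set(counts.values())) != 1:
            return False
    return True

xor3 = lambda x: (x[0] ^ x[1] ^ x[2]) == 0
or3 = lambda x: any(x)
print(uniform_sat_is_pairwise_independent(xor3, 3))  # True
print(uniform_sat_is_pairwise_independent(or3, 3))   # False: uniform distribution fails here
```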

    AM with Multiple Merlins

    We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right. We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only $n^{1/2+o(1)}$ bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take $n^{(\log n)^{1-o(1)}}$ time. Algorithms nearly matching this lower bound are known, but their running times had never been previously explained. Brandao and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games. In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k) = AM for all k = poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame.
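
    A free game of the kind discussed above is a two-prover game in which the two questions are drawn independently; its value is the maximum acceptance probability over deterministic strategies. The Python sketch below computes that value by brute force for a tiny game (the encoding and the example verifier are my own, not taken from the paper); this is feasible only for toy sizes, but it makes explicit the quantity that the subsampling theorem approximates.

```python
from itertools import product

def free_game_value(xs, ys, answers, verifier):
    """Exact value of a free game by brute force over deterministic strategies.

    Questions x and y are drawn independently and uniformly from `xs` and
    `ys`; each Merlin answers as a function of its own question only.
    """
    best = 0.0
    for fa in product(answers, repeat=len(xs)):        # Merlin 1's strategy
        for fb in product(answers, repeat=len(ys)):    # Merlin 2's strategy
            wins = sum(verifier(x, y, fa[i], fb[j])
                       for i, x in enumerate(xs) for j, y in enumerate(ys))
            best = max(best, wins / (len(xs) * len(ys)))
    return best

# Toy game: Arthur accepts iff the answers XOR to x AND y (CHSH-like).
verifier = lambda x, y, a, b: (a ^ b) == (x & y)
print(free_game_value([0, 1], [0, 1], [0, 1], verifier))  # 0.75
```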

    Sum of squares lower bounds for refuting any CSP

    Let $P:\{0,1\}^k \to \{0,1\}$ be a nontrivial $k$-ary predicate. Consider a random instance of the constraint satisfaction problem $\mathrm{CSP}(P)$ on $n$ variables with $\Delta n$ constraints, each being $P$ applied to $k$ randomly chosen literals. Provided the constraint density satisfies $\Delta \gg 1$, such an instance is unsatisfiable with high probability. The \emph{refutation} problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate $P$ supports a $t$-\emph{wise uniform} probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree $d = \Theta(\frac{n}{\Delta^{2/(t-1)} \log \Delta})$ (which runs in time $n^{O(d)}$) \emph{cannot} refute a random instance of $\mathrm{CSP}(P)$. In particular, the polynomial-time SOS algorithm requires $\widetilde{\Omega}(n^{(t+1)/2})$ constraints to refute random instances of $\mathrm{CSP}(P)$ when $P$ supports a $t$-wise uniform distribution on its satisfying assignments. Together with recent work of Lee et al. [LRS15], our result also implies that \emph{any} polynomial-size semidefinite programming relaxation for refutation requires at least $\widetilde{\Omega}(n^{(t+1)/2})$ constraints. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate $P$, they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of Allen et al. [AOW15] and Raghavendra et al. [RRS16], this full three-way tradeoff is \emph{tight}, up to lower-order factors.
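
    As a concrete instantiation of the statement above (a worked example of my own, not taken from the abstract): for the 3-XOR predicate, the uniform distribution on its satisfying assignments is $2$-wise uniform, so $t = 2$ and the bounds specialize to
    \[
        d = \Theta\!\left(\frac{n}{\Delta^{2} \log \Delta}\right), \qquad \widetilde{\Omega}\bigl(n^{(t+1)/2}\bigr) = \widetilde{\Omega}\bigl(n^{3/2}\bigr),
    \]
    i.e., degree-$d$ SOS cannot refute random 3-XOR at density $\Delta$, and polynomial-time (constant-degree) SOS needs roughly $n^{3/2}$ constraints, consistent with the long-known behaviour of 3-XOR refutation.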