
    Approximation Resistant Predicates From Pairwise Independence

    We study the approximability of predicates on $k$ variables from a domain $[q]$, and give a new sufficient condition for such predicates to be approximation resistant under the Unique Games Conjecture. Specifically, we show that a predicate $P$ is approximation resistant if there exists a balanced pairwise independent distribution over $[q]^k$ whose support is contained in the set of satisfying assignments to $P$.
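    As a concrete illustration of this criterion (a minimal sketch of my own, not code from the paper), consider 3-XOR: the uniform distribution over the even-parity strings in $\{0,1\}^3$ is balanced and pairwise independent and is supported on the satisfying assignments, so the criterion applies. The Python check below verifies both properties by brute force.

```python
from itertools import product

# Hypothetical illustration (mine, not the paper's code): verify that the
# uniform distribution over satisfying assignments of 3-XOR,
# x1 + x2 + x3 = 0 (mod 2), is balanced and pairwise independent.
k, q = 3, 2
P = lambda x: (x[0] ^ x[1] ^ x[2]) == 0
support = [x for x in product(range(q), repeat=k) if P(x)]
p = 1.0 / len(support)  # uniform distribution over the support

def marginal(coords):
    """Distribution the support distribution induces on the given coordinates."""
    dist = {}
    for x in support:
        key = tuple(x[i] for i in coords)
        dist[key] = dist.get(key, 0.0) + p
    return dist

# Balanced: each single coordinate is uniform over [q].
assert all(abs(v - 1 / q) < 1e-9
           for i in range(k) for v in marginal((i,)).values())
# Pairwise independent: each pair of coordinates is uniform over [q]^2.
assert all(abs(v - 1 / q**2) < 1e-9
           for i in range(k) for j in range(i + 1, k)
           for v in marginal((i, j)).values())
print("3-XOR meets the criterion, hence is approximation resistant under the UGC")
```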

    On the Usefulness of Predicates

    Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate $P$ being \emph{useful} for a (real-valued) objective $Q$: given an almost satisfiable Max-$P$ instance, there is an algorithm that beats a random assignment on the corresponding Max-$Q$ instance applied to the same sets of literals. The standard notion of a nontrivial approximation algorithm for a Max-CSP with predicate $P$ is exactly the same as saying that $P$ is useful for $P$ itself. We say that $P$ is useless if it is not useful for any $Q$. This turns out to be equivalent to the following pseudo-randomness property: given an almost satisfiable instance of Max-$P$, it is hard to find an assignment such that the induced distribution on $k$-bit strings defined by the instance is not essentially uniform. Under the Unique Games Conjecture, we give a complete and simple characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well. Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.
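    The pseudo-randomness property has a direct computational reading. Below is a hedged sketch (the encoding and names are mine, not the paper's): given an instance viewed as $k$-tuples of literals and an assignment, it computes the induced distribution on $k$-bit strings and its distance from uniform; for a useless predicate, the characterization says no efficient algorithm can make this distance non-negligible on almost satisfiable instances.

```python
import itertools, random

def induced_distribution(instance, assignment, k):
    """Empirical distribution of k-bit literal patterns over the constraints."""
    counts = {s: 0 for s in itertools.product((0, 1), repeat=k)}
    for clause in instance:               # clause: k pairs (variable, negated?)
        s = tuple(assignment[v] ^ neg for v, neg in clause)
        counts[s] += 1
    n = len(instance)
    return {s: c / n for s, c in counts.items()}

def distance_from_uniform(dist, k):
    return max(abs(p - 2 ** -k) for p in dist.values())

# Toy usage: 1000 random 3-literal constraints over 50 Boolean variables.
rng = random.Random(0)
instance = [tuple((rng.randrange(50), rng.randrange(2)) for _ in range(3))
            for _ in range(1000)]
assignment = [rng.randrange(2) for _ in range(50)]
dist = induced_distribution(instance, assignment, 3)
print(distance_from_uniform(dist, 3))     # near 0: essentially uniform
```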

    A Characterization of Approximation Resistance for Even $k$-Partite CSPs

    A constraint satisfaction problem (CSP) is said to be \emph{approximation resistant} if it is hard to approximate better than the trivial algorithm which picks a uniformly random assignment. Assuming the Unique Games Conjecture, we give a characterization of approximation resistance for $k$-partite CSPs defined by an even predicate.
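    For reference, the trivial algorithm's baseline is easy to compute for any fixed predicate. The sketch below is my own illustration, with "even" read as invariance under negating all inputs (my understanding of the term here); it evaluates the baseline for Not-All-Equal on three bits.

```python
from itertools import product

def random_assignment_value(P, k):
    """Expected fraction of constraints a uniformly random assignment satisfies."""
    return sum(P(x) for x in product((0, 1), repeat=k)) / 2 ** k

def is_even(P, k):
    # "Even" as assumed here: P is invariant under negating all inputs.
    return all(P(x) == P(tuple(1 - b for b in x))
               for x in product((0, 1), repeat=k))

nae = lambda x: int(len(set(x)) > 1)    # Not-All-Equal on 3 bits
print(random_assignment_value(nae, 3))  # 0.75: the trivial algorithm's ratio
print(is_even(nae, 3))                  # True
```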

    On the NP-Hardness of Approximating Ordering Constraint Satisfaction Problems

    We show improved NP-hardness of approximating Ordering Constraint Satisfaction Problems (OCSPs). For the two most well-studied OCSPs, Maximum Acyclic Subgraph and Maximum Betweenness, we prove inapproximability factors of $14/15+\epsilon$ and $1/2+\epsilon$, respectively. An OCSP is said to be approximation resistant if it is hard to approximate better than taking a uniformly random ordering. We prove that the Maximum Non-Betweenness Problem is approximation resistant and that there are width-$m$ approximation-resistant OCSPs accepting only a fraction $1/(m/2)!$ of assignments. These results provide the first examples of approximation-resistant OCSPs subject only to $\mathrm{P} \neq \mathrm{NP}$.
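    To make the random-ordering baseline concrete (a sketch of my own, not the paper's construction): a width-$m$ ordering predicate accepts some subset of the $m!$ relative orderings of its arguments, and a uniformly random ordering satisfies each constraint with probability equal to the accepted fraction. The snippet below computes this fraction for the three OCSPs mentioned.

```python
from itertools import permutations

def random_ordering_value(accepts, m):
    """Fraction of the m! relative orderings accepted by the predicate."""
    patterns = list(permutations(range(m)))    # pattern p: p[i] = position of argument i
    return sum(accepts(p) for p in patterns) / len(patterns)

# Maximum Acyclic Subgraph: edge (u, v) is satisfied when u precedes v.
print(random_ordering_value(lambda p: p[0] < p[1], 2))   # 0.5
# Maximum Betweenness: the middle argument lies strictly between the other two.
betw = lambda p: p[0] < p[1] < p[2] or p[2] < p[1] < p[0]
print(random_ordering_value(betw, 3))                    # 1/3
# Maximum Non-Betweenness is its complement.
print(random_ordering_value(lambda p: not betw(p), 3))   # 2/3
```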

    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of \emph{improper learning} (a.k.a. representation independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard.
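    For intuition on the refutation problem behind Feige's assumption, here is a toy sketch (my own encoding, not the paper's formalism): at a large constant clause density, a random 3-SAT instance is unsatisfiable with high probability, and a refutation algorithm must efficiently certify unsatisfiability for most such instances; the assumption is that no efficient algorithm does so.

```python
import random

def random_3sat(n, delta, rng):
    """Sample delta*n random 3-clauses over n Boolean variables."""
    return [tuple((v, rng.randrange(2)) for v in rng.sample(range(n), 3))
            for _ in range(delta * n)]

def fraction_satisfied(clauses, assignment):
    # A literal (v, neg) is true when assignment[v] differs from neg.
    return sum(any(assignment[v] ^ neg for v, neg in clause)
               for clause in clauses) / len(clauses)

rng = random.Random(1)
instance = random_3sat(200, 20, rng)             # density 20: UNSAT w.h.p.
assignment = [rng.randrange(2) for _ in range(200)]
print(fraction_satisfied(instance, assignment))  # around 7/8 for any fixed assignment
```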