
Near-Optimal UGC-hardness of Approximating Max k-CSP_R

Abstract

In this paper, we prove almost-optimal hardness for Max $k$-CSP$_R$ based on Khot's Unique Games Conjecture (UGC). In Max $k$-CSP$_R$, we are given a set of predicates, each of which depends on exactly $k$ variables. Each variable can take any value from $\{1, 2, \dots, R\}$. The goal is to find an assignment to the variables that maximizes the number of satisfied predicates. Assuming the Unique Games Conjecture, we show that it is NP-hard to approximate Max $k$-CSP$_R$ to within a factor of $2^{O(k \log k)}(\log R)^{k/2}/R^{k-1}$ for any $k, R$. To the best of our knowledge, this result improves on all known hardness of approximation results when $3 \leq k = o(\log R/\log \log R)$. In this regime, the previous best hardness result was NP-hardness of approximating within a factor of $O(k/R^{k-2})$ due to Chan. When $k = 2$, our result matches the best known UGC-hardness result of Khot, Kindler, Mossel and O'Donnell. In addition, by extending an algorithm for Max 2-CSP$_R$ by Kindler, Kolla and Trevisan, we provide an $\Omega(\log R/R^{k-1})$-approximation algorithm for Max $k$-CSP$_R$. This algorithm implies that our inapproximability result is tight up to a factor of $2^{O(k \log k)}(\log R)^{k/2 - 1}$. In comparison, when $3 \leq k$ is a constant, the previously known gap was $O(R)$, which is significantly larger than our gap of $O(\mathrm{polylog}\, R)$. Finally, we show that we can replace the Unique Games Conjecture assumption with Khot's $d$-to-1 Conjecture and still obtain asymptotically the same hardness of approximation.
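As a quick sanity check of the tightness claim, the stated gap follows by dividing the UGC-hardness factor by the approximation guarantee (this uses only the two bounds quoted above):

\[
  \frac{2^{O(k \log k)}\,(\log R)^{k/2}/R^{k-1}}{\Omega\!\left(\log R / R^{k-1}\right)}
  \;=\;
  2^{O(k \log k)}\,(\log R)^{k/2 - 1}.
\]

For constant $k$, the factor $2^{O(k \log k)}$ is a constant and $(\log R)^{k/2 - 1}$ is polylogarithmic in $R$, which is the $O(\mathrm{polylog}\, R)$ gap referred to in the abstract.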
