    A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs

    A $(k \times l)$-birthday repetition $\mathcal{G}^{k \times l}$ of a two-prover game $\mathcal{G}$ is a game in which the two provers are sent random sets of questions from $\mathcal{G}$ of sizes $k$ and $l$ respectively. These two sets are sampled independently and uniformly among all sets of questions of those particular sizes. We prove the following birthday repetition theorem: when $\mathcal{G}$ satisfies some mild conditions, $\mathrm{val}(\mathcal{G}^{k \times l})$ decreases exponentially in $\Omega(kl/n)$, where $n$ is the total number of questions. Our result positively resolves an open question posed by Aaronson, Impagliazzo and Moshkovitz (CCC 2014). As an application of our birthday repetition theorem, we obtain new fine-grained hardness of approximation results for dense CSPs. Specifically, we establish a tight trade-off between running time and approximation ratio for dense CSPs by showing conditional lower bounds, integrality gaps and approximation algorithms. In particular, for any sufficiently large $i$ and for every $k \geq 2$, we show the following results:
    - We exhibit an $O(q^{1/i})$-approximation algorithm for dense Max $k$-CSPs with alphabet size $q$ via $O_k(i)$ levels of the Sherali-Adams relaxation.
    - Through our birthday repetition theorem, we obtain an integrality gap of $q^{1/i}$ for the $\tilde\Omega_k(i)$-level Lasserre relaxation for fully-dense Max $k$-CSP.
    - Assuming that there is a constant $\epsilon > 0$ such that Max 3SAT cannot be approximated to within $(1-\epsilon)$ of the optimum in sub-exponential time, our birthday repetition theorem implies that any algorithm that approximates fully-dense Max $k$-CSP to within a $q^{1/i}$ factor takes $(nq)^{\tilde\Omega_k(i)}$ time, almost tightly matching the algorithmic result based on the Sherali-Adams relaxation.
    Comment: 45 pages
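    The $kl/n$ exponent is a birthday-paradox quantity: for a game whose question pairs form a sparse graph on $n$ questions, the provers' random $k$- and $l$-sets already contain a question pair of the original game with probability roughly $1 - e^{-kl/n}$. Below is a minimal Monte Carlo sketch of that collision probability, not from the paper; the toy cycle game and all names (`sample_birthday_collision`, etc.) are illustrative assumptions.

```python
import random

def sample_birthday_collision(questions, pairs, k, l, trials=10000):
    """Estimate the probability that prover 1's random k-set S and prover 2's
    random l-set T contain some question pair (u, v) of the original game,
    with u in S and v in T -- the 'birthday collision' behind the kl/n decay."""
    pair_set = set(pairs)
    hits = 0
    for _ in range(trials):
        S = set(random.sample(questions, k))  # prover 1's questions
        T = set(random.sample(questions, l))  # prover 2's questions
        if any((u, v) in pair_set for u in S for v in T):
            hits += 1
    return hits / trials

# Toy game: n questions on a cycle, question pairs = directed cycle edges,
# so the expected number of collisions is about kl/n.
n, k, l = 100, 10, 10
questions = list(range(n))
pairs = [(i, (i + 1) % n) for i in range(n)]
print(sample_birthday_collision(questions, pairs, k, l))  # ~ 1 - e^{-1} ≈ 0.63
```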

    AM with Multiple Merlins

    We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right. We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms nearly matching this lower bound are known, but their running times had never been previously explained. Brandao and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games. In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k)=AM for all k=poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame.
    Comment: 48 pages
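    A minimal sketch of the two objects in the last sentences, assuming a free game given by a 0/1 payoff predicate `V(x, y, a, b)` over question sets `X`, `Y` and answer sets `A`, `B` (all names are illustrative; this is the generic brute-force evaluation and random-subgame idea, not the paper's exact algorithm).

```python
import itertools
import random

def game_value(V, X, Y, A, B):
    """Exact value of the free game restricted to questions X, Y: the maximum,
    over deterministic strategies a: X -> A and b: Y -> B, of the average payoff.
    Brute force over |A|^|X| * |B|^|Y| strategy pairs -- feasible only for small
    (sub)games, which is exactly why subsampling to logarithmic size helps."""
    best = 0.0
    for a_strat in itertools.product(A, repeat=len(X)):
        for b_strat in itertools.product(B, repeat=len(Y)):
            payoff = sum(V(x, y, a_strat[i], b_strat[j])
                         for i, x in enumerate(X)
                         for j, y in enumerate(Y))
            best = max(best, payoff / (len(X) * len(Y)))
    return best

def subsampled_value(V, X, Y, A, B, s):
    """Approximate the game's value by the exact value of a random s-by-s
    subgame, in the spirit of the subsampling theorem for free games."""
    return game_value(V, random.sample(X, s), random.sample(Y, s), A, B)
```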

    From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More

    We consider questions that arise from the intersection between the areas of polynomial-time approximation algorithms, subexponential-time algorithms, and fixed-parameter tractable algorithms. The questions, which have been asked several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and Minimum Dominating Set (DomSet) problems parameterized by the size of the optimal solution. In particular, letting $\text{OPT}$ be the optimum and $N$ be the size of the input, is there an algorithm that runs in $t(\text{OPT}) \cdot \text{poly}(N)$ time and outputs a solution of size $f(\text{OPT})$, for any functions $t$ and $f$ that are independent of $N$ (for Clique, we want $f(\text{OPT}) = \omega(1)$)? In this paper, we show that both Clique and DomSet admit no non-trivial FPT-approximation algorithm, i.e., there is no $o(\text{OPT})$-FPT-approximation algorithm for Clique and no $f(\text{OPT})$-FPT-approximation algorithm for DomSet, for any function $f$ (e.g., this holds even if $f$ is the Ackermann function). In fact, our results imply something even stronger: the best way to solve Clique and DomSet, even approximately, is to essentially enumerate all possibilities. Our results hold under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which states that no $2^{o(n)}$-time algorithm can distinguish between a satisfiable 3SAT formula and one which is not even $(1 - \epsilon)$-satisfiable for some constant $\epsilon > 0$. Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and Maximum Induced Matching in bipartite graphs. Additionally, we rule out $k^{o(1)}$-FPT-approximation algorithms for Densest $k$-Subgraph, although this ratio does not yet match the trivial $O(k)$-approximation algorithm.
    Comment: 43 pages. To appear in FOCS'17
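    To make "essentially enumerate all possibilities" concrete, here is the trivial $N^{O(\text{OPT})}$-time baseline for Clique that, by the results above, cannot be substantially beaten even by FPT approximation algorithms (a sketch; the adjacency encoding `adj`, mapping each vertex to its neighbor set, is an assumed convention).

```python
import itertools

def find_clique(vertices, adj, k):
    """Brute-force search for a clique of size k: test every k-subset,
    ~ N^k time. Per the paper, even approximating the maximum clique in
    FPT time cannot do substantially better than this enumeration."""
    for cand in itertools.combinations(vertices, k):
        if all(v in adj[u] for u, v in itertools.combinations(cand, 2)):
            return cand
    return None
```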

    ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network

    We study 2-ary constraint satisfaction problems (2-CSPs), which can be stated as follows: given a constraint graph $G=(V,E)$, an alphabet set $\Sigma$ and, for each $\{u, v\} \in E$, a constraint $C_{uv} \subseteq \Sigma \times \Sigma$, the goal is to find an assignment $\sigma: V \to \Sigma$ that satisfies as many constraints as possible, where a constraint $C_{uv}$ is satisfied if $(\sigma(u), \sigma(v)) \in C_{uv}$. While the approximability of 2-CSPs is quite well understood when $|\Sigma|$ is constant, many problems are still open when $|\Sigma|$ becomes super constant. One such problem is whether it is hard to approximate 2-CSPs to within a polynomial factor of $|\Sigma||V|$. Bellare et al. (1993) suggested that the answer to this question might be positive. Alas, despite efforts to resolve this conjecture, it remains open to this day. In this work, we separate $|V|$ and $|\Sigma|$ and ask a related but weaker question: is it hard to approximate 2-CSPs to within a polynomial factor of $|V|$ (while $|\Sigma|$ may be super-polynomial in $|V|$)? Assuming the exponential time hypothesis (ETH), we answer this question positively by showing that no polynomial-time algorithm can approximate 2-CSPs to within a factor of $|V|^{1 - o(1)}$. Note that our ratio is almost linear, which is almost optimal as a trivial algorithm gives a $|V|$-approximation for 2-CSPs. Thanks to a known reduction, our result implies ETH-hardness of approximating Directed Steiner Network with ratio $k^{1/4 - o(1)}$, where $k$ is the number of demand pairs. This ratio is roughly the square root of the best known ratio achieved by polynomial-time algorithms (Chekuri et al., 2011; Feldman et al., 2012). Additionally, under Gap-ETH, our reduction for 2-CSPs rules out not only polynomial-time algorithms but also FPT algorithms parameterized by $|V|$. A similar statement applies to DSN parameterized by $k$.
    Comment: 36 pages. A preliminary version appeared in ITCS'18
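    For concreteness, here is the problem exactly as defined above together with its $|\Sigma|^{|V|}$-time brute force (a sketch; representing an instance as a `constraints` dictionary keyed by ordered edges is an assumed convention, not from the paper).

```python
import itertools

def max_2csp(V, Sigma, constraints):
    """Exact 2-CSP solver: try every assignment sigma: V -> Sigma and count
    satisfied constraints. `constraints` maps an edge (u, v) to the allowed
    pair set C_uv, a subset of Sigma x Sigma. Runs in |Sigma|^|V| time."""
    best, best_sigma = -1, None
    for labels in itertools.product(Sigma, repeat=len(V)):
        sigma = dict(zip(V, labels))
        satisfied = sum((sigma[u], sigma[v]) in C
                        for (u, v), C in constraints.items())
        if satisfied > best:
            best, best_sigma = satisfied, sigma
    return best, best_sigma

# Tiny instance: a triangle whose endpoints must receive different labels.
V, Sigma = ["x", "y", "z"], [0, 1]
diff = {(a, b) for a in Sigma for b in Sigma if a != b}
constraints = {("x", "y"): diff, ("y", "z"): diff, ("x", "z"): diff}
print(max_2csp(V, Sigma, constraints))  # 2 of the 3 constraints are satisfiable
```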

    Inapproximability of Maximum Biclique Problems, Minimum k-Cut and Densest At-Least-k-Subgraph from the Small Set Expansion Hypothesis

    The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small subset of vertices whose edge expansion is almost zero and one in which all small subsets of vertices have expansion almost one. In this work, we prove inapproximability results for the following graph problems based on this hypothesis:
    - Maximum Edge Biclique (MEB): given a bipartite graph $G$, find a complete bipartite subgraph of $G$ with the maximum number of edges.
    - Maximum Balanced Biclique (MBB): given a bipartite graph $G$, find a balanced complete bipartite subgraph of $G$ with the maximum number of vertices.
    - Minimum $k$-Cut: given a weighted graph $G$, find a set of edges with minimum total weight whose removal partitions $G$ into $k$ connected components.
    - Densest At-Least-$k$-Subgraph (DAL$k$S): given a weighted graph $G$, find a set $S$ of at least $k$ vertices such that the induced subgraph on $S$ has maximum density (the ratio between the total weight of edges and the number of vertices).
    We show that, assuming SSEH and NP $\nsubseteq$ BPP, no polynomial-time algorithm gives an $n^{1 - \varepsilon}$-approximation for MEB or MBB for any constant $\varepsilon > 0$. Moreover, assuming SSEH, we show that it is NP-hard to approximate Minimum $k$-Cut and DAL$k$S to within a $(2 - \varepsilon)$ factor of the optimum for every constant $\varepsilon > 0$. The ratios in our results are essentially tight since trivial algorithms give $n$-approximations to both MEB and MBB and efficient 2-approximation algorithms are known for Minimum $k$-Cut [SV95] and DAL$k$S [And07, KS09]. Our first result is proved by combining a technique developed by Raghavendra et al. [RST12] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [BK09], whereas our second result is shown via elementary reductions.
    Comment: A preliminary version of this work appeared at ICALP 2017 under a different title, "Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis"
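    To see why "trivial algorithms give $n$-approximation" for MEB, note that a maximum-degree vertex together with all of its neighbors is a complete bipartite subgraph $K_{1,d}$, and any biclique $K_{a,b}$ (with $a \le b$) forces some vertex to have degree at least $b$. A minimal sketch of this folklore baseline, under an assumed adjacency-list encoding (not the paper's reduction machinery):

```python
def star_biclique(adj):
    """Trivial n-approximation for Maximum Edge Biclique: return the star
    K_{1,d} around a maximum-degree vertex. If the optimum is K_{a,b} with
    a <= b, then d >= b, so OPT = a*b <= a*d <= n*d, i.e., ratio <= n."""
    center = max(adj, key=lambda v: len(adj[v]))
    return {center}, set(adj[center])

# Toy bipartite graph given by adjacency lists.
adj = {"u1": ["v1", "v2"], "u2": ["v1"], "v1": ["u1", "u2"], "v2": ["u1"]}
print(star_biclique(adj))  # ({'u1'}, {'v1', 'v2'}): a K_{1,2} with 2 edges
```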

    New Tools and Connections for Exponential-Time Approximation

    In this paper, we develop new tools and connections for exponential-time approximation. In this setting, we are given a problem instance and an integer $r > 1$, and the goal is to design an approximation algorithm with the fastest possible running time. We give randomized algorithms that establish an approximation ratio of
    1. $r$ for maximum independent set in $O^*(\exp(\tilde{O}(n/(r \log^2 r) + r \log^2 r)))$ time,
    2. $r$ for chromatic number in $O^*(\exp(\tilde{O}(n/(r \log r) + r \log^2 r)))$ time,
    3. $(2 - 1/r)$ for minimum vertex cover in $O^*(\exp(n/r^{\Omega(r)}))$ time, and
    4. $(k - 1/r)$ for minimum $k$-hypergraph vertex cover in $O^*(\exp(n/(kr)^{\Omega(kr)}))$ time.
    (Throughout, $\tilde{O}$ and $O^*$ omit $\mathrm{polyloglog}(r)$ factors and factors polynomial in the input size, respectively.) The best known time bounds for all of these problems were $O^*(2^{n/r})$ (Bourgeois et al. in Discret Appl Math 159(17):1954–1970, 2011; Cygan et al. in Exponential-time approximation of hard problems, 2008). For maximum independent set and chromatic number, these bounds were complemented by $\exp(n^{1-o(1)}/r^{1+o(1)})$ lower bounds under the Exponential Time Hypothesis (ETH) (Chalermsook et al. in Foundations of Computer Science, FOCS, pp. 370–379, 2013; Laekhanukit in Inapproximability of combinatorial problems in subexponential-time, Ph.D. thesis, 2014). Our results show that the natural-looking $O^*(2^{n/r})$ bounds are not tight for any of these problems. The key to these results is a sparsification procedure that reduces a problem to a bounded-degree variant, allowing the use of approximation algorithms for bounded-degree graphs. To obtain the first two results, we introduce a new randomized branching rule. Finally, we show a connection between PCP parameters and exponential-time approximation algorithms. This connection, together with our independent set algorithm, refutes the possibility of overly reducing the size of Chan's PCP (Chan in J. ACM 63(3):27:1–27:32, 2016). It also implies that a (significant) improvement over our result would refute the Gap-ETH conjecture (Dinur in Electron Colloq Comput Complex (ECCC) 23:128, 2016; Manurangsi and Raghavendra in A birthday repetition theorem and complexity of approximating dense CSPs, 2016)
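    For contrast with the new bounds, here is a sketch of the classic $O^*(2^{n/r})$-time $r$-approximation for maximum independent set attributed above to Bourgeois et al. and Cygan et al.: split the vertices into $r$ blocks of size about $n/r$, solve each block exactly, and return the best block solution (illustrative Python; the helper names are assumptions).

```python
import itertools

def mis_exact(vertices, adj):
    """Exact maximum independent set by brute force, 2^|vertices| poly time:
    try subsets from largest to smallest, return the first independent one."""
    for size in range(len(vertices), 0, -1):
        for cand in itertools.combinations(vertices, size):
            cset = set(cand)
            if all(not (cset & set(adj[v])) for v in cand):
                return list(cand)
    return []

def mis_block_approx(vertices, adj, r):
    """Classic r-approximation in O*(2^{n/r}) time: an optimum solution puts
    at least OPT/r of its vertices into some block, and an independent set
    restricted to a block stays independent, so the best block answer has
    size >= OPT/r."""
    n = len(vertices)
    block = max(1, -(-n // r))  # ceil(n / r)
    best = []
    for i in range(0, n, block):
        cand = mis_exact(vertices[i:i + block], adj)
        if len(cand) > len(best):
            best = cand
    return best
```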

    Imperfect Gaps in Gap-ETH and PCPs

    We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness into one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r,q] ⊆ PCP_{1,s'}[r+O(1), q+O(r)] for c-s = Omega(1), which in turn implies that one can convert imperfect completeness to perfect completeness in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest. We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e., MAX 3SAT(1-epsilon, 1-delta), delta > epsilon, has 2^{o(n)} algorithms if and only if MAX 3SAT(1, 1-delta) has 2^{o(n)} algorithms. We also relate the time complexities of these two problems in a more fine-grained way to show that T_2(n) <= T_1(n(log log n)^{O(1)}), where T_1(n), T_2(n) denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness respectively
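    Restated in display form (notation exactly as in the abstract; this is a restatement of the two statements above, not a new result):

```latex
% Completeness transfer: any constant-gap PCP can be made perfectly complete
% at the cost of O(1) extra randomness and O(r) extra queries.
\mathrm{PCP}_{c,s}[r,\,q] \;\subseteq\; \mathrm{PCP}_{1,\,s'}[\,r+O(1),\; q+O(r)\,]
  \qquad \text{whenever } c - s = \Omega(1).

% Fine-grained consequence for MAX 3SAT, where T_1 and T_2 are the randomized
% time complexities with perfect and imperfect completeness, respectively:
T_2(n) \;\le\; T_1\!\left(n\,(\log\log n)^{O(1)}\right).
```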

    Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis

    The Small Set Expansion Hypothesis (SSEH) is a conjecture which roughly states that it is NP-hard to distinguish between a graph with a small set of vertices whose expansion is almost zero and one in which all small sets of vertices have expansion almost one. In this work, we prove conditional inapproximability results for the following graph problems based on this hypothesis:
    - Maximum Edge Biclique (MEB): given a bipartite graph G, find a complete bipartite subgraph of G with the maximum number of edges. We show that, assuming SSEH and that NP != BPP, no polynomial-time algorithm gives an n^{1 - epsilon}-approximation for MEB for any constant epsilon > 0.
    - Maximum Balanced Biclique (MBB): given a bipartite graph G, find a balanced complete bipartite subgraph of G with the maximum number of vertices. Similarly to MEB, we prove n^{1 - epsilon}-ratio inapproximability for MBB for every epsilon > 0, assuming SSEH and that NP != BPP.
    - Minimum k-Cut: given a weighted graph G, find a set of edges with minimum total weight whose removal splits the graph into k components. We prove that this problem is NP-hard to approximate to within a (2 - epsilon) factor of the optimum for every epsilon > 0, assuming SSEH.
    The ratios in our results are essentially tight since trivial algorithms give n-approximations to both MEB and MBB and 2-approximation algorithms are known for Minimum k-Cut [Saran and Vazirani, SIAM J. Comput., 1995]. Our first two results are proved by combining a technique developed by Raghavendra, Steurer and Tulsiani [Raghavendra et al., CCC, 2012] to avoid locality of gadget reductions with a generalization of Bansal and Khot's long code test [Bansal and Khot, FOCS, 2009], whereas our last result is shown via an elementary reduction

    Tight Hardness Results for Training Depth-2 ReLU Networks

    We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (that may include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem is NP-hard already for a network with a single ReLU. We also prove NP-hardness for outputting a weighted sum of $k$ ReLUs minimizing the squared error (for $k > 1$) even in the realizable setting (i.e., when the labels are consistent with an unknown depth-2 ReLU network). We are also able to obtain lower bounds on the running time in terms of the desired additive error $\epsilon$. To obtain our lower bounds, we use the Gap Exponential Time Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of approximating the well-known Densest $\kappa$-Subgraph problem in subexponential time (these hypotheses are used separately in proving different lower bounds). For example, we prove that under reasonable hardness assumptions, any proper learning algorithm for finding the best-fitting ReLU must run in time exponential in $1/\epsilon^2$. Together with previous work on improperly learning a ReLU (Goel et al., COLT'17), this implies the first separation between proper and improper algorithms for learning a ReLU. We also study the problem of properly learning a depth-2 network of ReLUs with bounded weights, giving new (worst-case) upper bounds on the running time needed to learn such networks both in the realizable and agnostic settings. Our upper bounds on the running time essentially match our lower bounds in terms of the dependency on $\epsilon$.
    Comment: To appear in ITCS'21
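    A minimal sketch of the objects involved, assuming the natural matrix encoding (the names `depth2_relu` and `square_loss` and the toy data are illustrative, not the paper's notation): the hypothesis class is weighted sums of $k$ ReLUs, and the hard objective is the square loss over a training set.

```python
import numpy as np

def depth2_relu(X, W, a):
    """Depth-2 ReLU network: predictions sum_i a_i * relu(w_i . x).
    X: (m, d) inputs; W: (k, d) inner weights; a: (k,) outer coefficients,
    which may be negative."""
    return np.maximum(X @ W.T, 0.0) @ a

def square_loss(X, y, W, a):
    """The training objective the paper proves NP-hard to minimize,
    already for a single ReLU (k = 1)."""
    return np.mean((depth2_relu(X, W, a) - y) ** 2)

# Realizable toy instance: labels generated by a known 2-ReLU network,
# so the generating parameters achieve zero loss.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
W_true, a_true = rng.standard_normal((2, 3)), np.array([1.0, -1.0])
y = depth2_relu(X, W_true, a_true)
print(square_loss(X, y, W_true, a_true))  # 0.0
```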