36 research outputs found

    Entropy Samplers and Strong Generic Lower Bounds For Space Bounded Learning

    With any hypothesis class one can associate a bipartite graph whose vertices are the hypotheses H on one side and all possible labeled examples X on the other side, where a hypothesis is connected to all the labeled examples that are consistent with it. We call this graph the hypotheses graph. We prove that any hypothesis class whose hypotheses graph is mixing cannot be learned using fewer than Omega(log^2 |H|) memory bits unless the learner uses at least |H|^Omega(1) labeled examples. Our work builds on a combinatorial framework that we suggested in a previous work for proving lower bounds on space-bounded learning. The strong lower bound is obtained by defining a new notion of pseudorandomness, the entropy sampler. Raz obtained a similar result using different ideas.
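
    To make the definition of the hypotheses graph concrete, the sketch below builds it for a toy class of threshold functions on a tiny domain (the class, domain, and names are illustrative assumptions, not taken from the paper): one side is the hypothesis class, the other side is every labeled example (x, b), and an edge records that h(x) = b.

```python
from itertools import product

# Toy "hypotheses graph": hypotheses on one side, all labeled examples
# (x, b) on the other, and an edge whenever the hypothesis is consistent
# with the example, i.e. h(x) == b. (Illustrative class: thresholds.)
domain = range(8)
hypotheses = {f"thr_{t}": (lambda x, t=t: int(x >= t)) for t in range(9)}
labeled_examples = list(product(domain, (0, 1)))

edges = {
    name: [(x, b) for (x, b) in labeled_examples if h(x) == b]
    for name, h in hypotheses.items()
}

for name, consistent in edges.items():
    print(name, "->", consistent)
```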

    Approximating Dense Max 2-CSPs

    In this paper, we present a polynomial-time algorithm that approximates sufficiently high-value Max 2-CSPs on sufficiently dense graphs to within an O(N^ε) approximation ratio for any constant ε > 0. Using this algorithm, we also achieve similar results for free games, projection games on sufficiently dense random graphs, and the Densest k-Subgraph problem with sufficiently dense optimal solution. Note, however, that algorithms with similar guarantees to the last algorithm were in fact discovered prior to our work by Feige et al. and Suzuki and Tokuyama. In addition, our idea for the above algorithms yields the following by-product: a quasi-polynomial time approximation scheme (QPTAS) for satisfiable dense Max 2-CSPs with better running time than the known algorithms.
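
    For readers unfamiliar with the terminology: a Max 2-CSP instance consists of variables over a finite alphabet and constraints on pairs of variables, its value is the largest fraction of constraints any single assignment satisfies, and an instance on N variables is dense when it has on the order of N^2 constraints. A minimal brute-force sketch on a made-up toy instance (purely to fix the definitions; this is not the paper's approximation algorithm):

```python
from itertools import product

# Toy Max 2-CSP over alphabet {0, 1, 2}: each constraint is a pair of
# variables plus the set of value pairs it accepts. The value of the
# instance is the best fraction of constraints satisfied by a single
# assignment (found here by brute force on a tiny instance).
alphabet = (0, 1, 2)
n = 4
constraints = [
    (0, 1, {(0, 1), (1, 2), (2, 0)}),
    (1, 2, {(a, a) for a in alphabet}),
    (2, 3, {(0, 0), (1, 2)}),
    (0, 3, {(2, 1), (1, 1), (0, 2)}),
]

def value(assignment):
    satisfied = sum((assignment[i], assignment[j]) in allowed
                    for i, j, allowed in constraints)
    return satisfied / len(constraints)

best = max(value(a) for a in product(alphabet, repeat=n))
print(f"optimal value = {best:.2f}")
```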

    Strong Parallel Repetition for Unique Games on Small Set Expanders

    The strong parallel repetition problem for unique games is to efficiently reduce the 1-delta vs. 1-C*delta gap problem of Boolean unique games (where C > 1 is a sufficiently large constant) to the 1-epsilon vs. epsilon gap problem of unique games over a large alphabet. Due to its importance to the Unique Games Conjecture, this problem garnered a great deal of interest from the research community. There are positive results for certain easy unique games (e.g., unique games on expanders), and an impossibility result for hard unique games. In this paper we show how to bypass the impossibility result by enlarging the alphabet sufficiently before repetition. We consider the case of unique games on small set expanders for two setups: (i) strong small set expanders that yield easy unique games; (ii) weaker small set expanders underlying possibly hard unique games, as long as the game is mildly fortified. We show how to fortify unique games in both cases, i.e., how to transform the game so that sufficiently large induced sub-games have bounded value. We then prove strong parallel repetition for the fortified games. Prior to this work, fortification was known for projection games but seemed hopeless for unique games.
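
    For reference, the standard notions the abstract relies on (background definitions, not statements from the paper): a unique game is a constraint graph whose constraints are permutations of the label alphabet, its value is the best fraction of satisfiable constraints, and a (d-regular) graph is a small set expander when every sufficiently small vertex set has near-perfect edge expansion.

```latex
\[
  \text{Unique game } G=(V,E,\Sigma,\{\pi_e\}_{e\in E}):\qquad
  \mathrm{val}(G)\;=\;\max_{\ell\colon V\to\Sigma}\;
  \Pr_{(u,v)\in E}\bigl[\ell(v)=\pi_{(u,v)}(\ell(u))\bigr].
\]
\[
  \text{Small set expansion (}d\text{-regular }G\text{):}\qquad
  \Phi(S)\;=\;\frac{|E(S,V\setminus S)|}{d\,|S|}\;\ge\;1-\eta
  \quad\text{for all } S\subseteq V,\ |S|\le \delta|V|.
\]
```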

    A No-Go Theorem for Derandomized Parallel Repetition: Beyond Feige-Kilian

    In this work we show a barrier to proving a randomness-efficient parallel repetition theorem, a promising avenue for achieving many tight inapproximability results. Feige and Kilian (STOC'95) proved an impossibility result for randomness-efficient parallel repetition for two-prover games with small degree, i.e., when each prover has only a few possibilities for the question of the other prover. In recent years, there have been indications that randomness-efficient parallel repetition (also called derandomized parallel repetition) might be possible for games with large degree, circumventing the impossibility result of Feige and Kilian. In particular, Dinur and Meir (CCC'11) construct games with large degree whose repetition can be derandomized using a theorem of Impagliazzo, Kabanets and Wigderson (SICOMP'12). However, obtaining derandomized parallel repetition theorems that would yield optimal inapproximability results has remained elusive. This paper presents an explanation for the current impasse by proving a limitation on derandomized parallel repetition. We formalize two properties which we call "fortification-friendliness" and "yields robust embeddings". We show that any proof of derandomized parallel repetition achieving almost-linear blow-up cannot both (a) be fortification-friendly and (b) yield robust embeddings. Unlike Feige and Kilian, we do not require the small-degree assumption. Given that virtually all existing proofs of parallel repetition, including the derandomized parallel repetition result of Dinur and Meir, share these two properties, our no-go theorem highlights a major barrier to achieving almost-linear derandomized parallel repetition.

    AM with Multiple Merlins

    We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right. We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms nearly matching this lower bound are known, but their running times had not previously been explained. Brandao and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games. In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k) = AM for all k = poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame.
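
    The subsampling idea at the end is easy to sketch: restrict the free game to small random subsets of the two question sets and solve the restricted game exhaustively. The toy game, sizes, and predicate below are arbitrary assumptions chosen so everything can be brute-forced; the sketch illustrates the idea only, not the paper's quantitative bounds.

```python
import random
from itertools import product

# Toy "free game": independent question sets X and Y, Boolean answers,
# and an acceptance predicate V(x, y, a, b). The value of the game is
# the best fraction of question pairs accepted over all strategies
# a: X -> {0, 1} and b: Y -> {0, 1}.
X, Y = list(range(8)), list(range(8))

def V(x, y, a, b):
    return (a ^ b) == ((x * y) % 2)   # arbitrary toy predicate

def game_value(xs, ys):
    best = 0.0
    for a_bits in product((0, 1), repeat=len(xs)):
        a = dict(zip(xs, a_bits))
        for b_bits in product((0, 1), repeat=len(ys)):
            b = dict(zip(ys, b_bits))
            accepted = sum(V(x, y, a[x], b[y]) for x in xs for y in ys)
            best = max(best, accepted / (len(xs) * len(ys)))
    return best

# Subsampling: the value of a small random subgame approximates the
# value of the full game.
xs, ys = random.sample(X, 5), random.sample(Y, 5)
print("random subgame value:", game_value(xs, ys))
print("full game value:     ", game_value(X, Y))
```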

    Parallel Repetition From Fortification

    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two-prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games – "fortification" – and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from a PCP Theorem with soundness error bounded away from 1, we get a PCP with arbitrarily small constant soundness error. In particular, starting with the combinatorial PCP of Dinur, we get a combinatorial PCP with low error. The latter can be used for hardness of approximation as in the work of Hastad. (2) Starting from the work of the author and Raz, we get a projection PCP theorem with the smallest soundness error known today. The theorem yields nearly a quadratic improvement in the size compared to previous work. We then discuss the problem of derandomizing parallel repetition, and the limitations of the fortification idea in this setting. We point out a connection between the problem of derandomizing parallel repetition and the problem of composition. This connection could shed light on the so-called Projection Games Conjecture, which asks for a projection PCP with minimal error.
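
    Schematically, writing G^{⊗k} for the k-fold repeated (tensored) game in which the verifier sends k independently sampled question pairs and accepts only if all k answer pairs are accepted, one natural reading of the abstract's statement is the bound below (a paraphrase of the abstract, not the paper's precise theorem):

```latex
\[
  \mathrm{val}\bigl(G^{\otimes k}\bigr)\;\le\;\mathrm{val}(G)^{k}+\varepsilon
  \qquad\text{for fortified } G \text{ and arbitrarily small } \varepsilon>0 .
\]
```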

    The Projection Games Conjecture and the NP-Hardness of ln n-Approximating Set-Cover

    We suggest the research agenda of establishing new hardness of approximation results based on the "projection games conjecture", i.e., an instantiation of the Sliding Scale Conjecture of Bellare, Goldwasser, Lund and Russell to projection games. We pursue this line of research by establishing a tight NP-hardness result for the Set-Cover problem. Specifically, we show that under the projection games conjecture (in fact, under a quantitative version of the conjecture that is only slightly beyond the reach of current techniques), it is NP-hard to approximate Set-Cover on instances of size N to within (1 − α)ln N for arbitrarily small α > 0. Our reduction establishes a tight trade-off between the approximation accuracy α and the time 2^{N^{Ω(α)}} required for the approximation, assuming Sat requires exponential time. The reduction is obtained by modifying Feige's reduction. The latter only provides a lower bound of 2^{N^{Ω(α/log log N)}} on the time required for (1 − α)ln N-approximating Set-Cover assuming Sat requires exponential time (note that N^{1/log log N} = N^{o(1)}). The modification uses a combinatorial construction of a bipartite graph in which any coloring of the first side that uses no color for more than a small fraction of the vertices makes most vertices on the other side have all their neighbors colored in distinct colors.
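
    The sense in which the hardness is tight: the classical greedy algorithm approximates Set-Cover to within ln n + 1 in polynomial time, so (1 − α)ln N is essentially the best ratio one could hope to rule out. A minimal sketch of that textbook algorithm (standard background, not taken from the paper):

```python
def greedy_set_cover(universe, sets):
    """Classical greedy Set-Cover: repeatedly pick the set covering the
    most still-uncovered elements. On a universe of size n the cover
    returned is at most (ln n + 1) times larger than the optimum."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

universe = range(1, 10)
sets = [set(range(1, 7)), set(range(5, 10)), {1, 4, 7}, {2, 5, 8}, {3, 6, 9}]
print(greedy_set_cover(universe, sets))
```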

    Tighter MA/1 Circuit Lower Bounds from Verifier Efficient PCPs for PSPACE
