
    The Quantum PCP Conjecture

    The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns, providing fascinating vistas on the foundations of quantum mechanics, the global nature of entanglement and its topological properties, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status of this ongoing story, tailored to a general theory-of-CS audience. (Comment: 45 pages, 4 figures; an enhanced version of the SIGACT guest column from Volume 44, Issue 2, June 2013.)

    An exponential separation between MA and AM proofs of proximity

    Interactive proofs of proximity allow a sublinear-time verifier to check that a given input is close to the language, using a small amount of communication with a powerful (but untrusted) prover. In this work we consider two natural minimally interactive variants of such proof systems, in which the prover only sends a single message, referred to as the proof. The first variant, known as MA-proofs of Proximity (MAP), is fully non-interactive, meaning that the proof is a function of the input only. The second variant, known as AM-proofs of Proximity (AMP), allows the proof to additionally depend on the verifier's (entire) random string. The complexity of both MAPs and AMPs is the total number of bits that the verifier observes, namely the sum of the proof length and the query complexity. Our main result is an exponential separation between the power of MAPs and AMPs. Specifically, we exhibit an explicit and natural property Pi that admits an AMP with complexity O(log n), whereas any MAP for Pi has complexity Omega~(n^{1/4}), where n denotes the length of the input in bits. Our MAP lower bound also yields an alternative proof, which is more general and arguably much simpler, of a recent result of Fischer et al. (ITCS, 2014). Lastly, we also consider the notion of oblivious proofs of proximity, in which the verifier's queries are oblivious to the proof. In this setting we show that AMPs can only be quadratically stronger than MAPs. As an application of this result, we show an exponential separation between the power of public and private coins for oblivious interactive proofs of proximity.
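    To make the model concrete, here is a minimal Python sketch of an MAP for a toy property (the constant string); the property and all names are illustrative and are not the property Pi from the paper. The key structural point is that the proof is a function of the input alone; in an AMP, the prover would additionally see the verifier's random string before sending it.

```python
import random

# Toy MAP for the illustrative property "x is a constant string".
# The proof is computed from the input alone (fully non-interactive);
# complexity = proof length (1 bit) + number of queries (O(1/eps)).

def honest_proof(x):
    return x[0]  # the claimed constant bit

def map_verifier(query, n, proof, eps):
    """`query(i)` returns x[i]. Accepts every constant string paired
    with its honest proof; rejects w.h.p. any x that is eps-far from
    constant, whichever proof bit a cheating prover sends."""
    for _ in range(int(2 / eps) + 1):
        if query(random.randrange(n)) != proof:
            return False
    return True

# e.g. map_verifier(lambda i: x[i], len(x), honest_proof(x), 0.1)
# In an AMP, honest_proof would also receive the verifier's random
# string as an argument; here it may depend only on x.
```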

    Proofs of Proximity for Distribution Testing

    Distribution testing is an area of property testing that studies algorithms that receive a few samples from a probability distribution D and decide whether D has a certain property or is far (in total variation distance) from all distributions with that property. Most natural properties of distributions, however, require a large number of samples to test, which motivates the question of whether there are natural settings wherein fewer samples suffice. We initiate a study of proofs of proximity for properties of distributions. In their basic form, these proof systems consist of a tester that not only has sample access to a distribution but also explicit access to a proof string that depends on the distribution. We refer to these as NP distribution testers, or MA distribution testers if the tester is a probabilistic algorithm. We also study the more general notion of IP distribution testers, in which the tester interacts with an all-powerful untrusted prover. We investigate the power and limitations of proofs of proximity for distributions and chart a landscape that, surprisingly, is significantly different from that of proofs of proximity for functions. Our main results include showing that MA distribution testers can be quadratically stronger than standard distribution testers, but no stronger than that; in contrast, IP distribution testers can be exponentially stronger than standard distribution testers, but when restricted to public coins they can be at best quadratically stronger.
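    As an illustration of the basic (NP) form of such proof systems, here is a hedged Python sketch of a tester for the hypothetical property "D is supported on at most k elements"; the property and the names below are ours, chosen for illustration, not a construction from the paper.

```python
# Toy NP distribution tester for the illustrative property
# "D is supported on at most k elements". The proof is a claimed
# support set; an MA tester is the same with internal randomness.

def np_dist_tester(sample, k, claimed_support, eps):
    """`sample()` draws from D. If D is supported on the claimed set
    (of size <= k), the honest proof is always accepted. If D is
    eps-far in total variation distance from every such distribution,
    then D places mass >= eps outside *any* set of size <= k, so each
    sample escapes S with probability >= eps and O(1/eps) samples
    catch an escapee with constant probability."""
    S = set(claimed_support)
    if len(S) > k:
        return False
    return all(sample() in S for _ in range(int(2 / eps) + 1))
```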

    Guest Column: The Quantum PCP Conjecture

    The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns, providing fascinating vistas on the foundations of quantum mechanics and multipartite entanglement, topology and the so-called phenomenon of topological order, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status of this ongoing story, tailored to a general theory-of-CS audience.

    Relaxed Locally Correctable Codes

    Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only a few bits of a noisy codeword. These codes have found numerous applications both in theory and in practice. A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the decoder to reject (i.e., refuse to answer) in case it detects that the codeword is corrupt. They call such a decoder a relaxed decoder and construct a constant-query relaxed LDC with almost-linear blocklength, which is sub-exponentially better than what is known for (full-fledged) LDCs in the constant-query regime. We consider an analogous relaxation for local correction. Thus, a relaxed local corrector reads only a few bits of a (possibly) corrupt codeword and either recovers the desired bit of the codeword, or rejects in case it detects a corruption. We give constructions of relaxed LCCs in two regimes, where the first optimizes the query complexity and the second optimizes the rate:
    1. Constant query complexity: a relaxed LCC with polynomial blocklength whose corrector reads only a constant number of bits of the codeword. This is a sub-exponential improvement over the best constant-query (full-fledged) LCCs that are known.
    2. Constant rate: a relaxed LCC with constant rate (i.e., linear blocklength) and quasi-polylogarithmic query complexity. This is a nearly sub-exponential improvement over the query complexity of a recent (full-fledged) constant-rate LCC of Kopparty et al. (STOC, 2016).
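    For contrast with the relaxed notion, the following sketches the standard 2-query local corrector for the Hadamard code (a textbook construction, not one of the constructions above); a relaxed corrector is additionally allowed to output a reject symbol when it detects corruption.

```python
import random

# Standard 2-query local corrector for the Hadamard code, whose
# positions are indexed by a in {0,1}^m with C(x)[a] = <x,a> mod 2.
# If the oracle w is delta-close to some codeword C(x), the output
# equals C(x)[a] with probability at least 1 - 2*delta.

def hadamard_correct(w, m, a):
    r = random.getrandbits(m)  # uniformly random index in {0,1}^m
    # By linearity, <x,a> = <x,r> XOR <x, a XOR r>; each of the two
    # queried positions is individually uniform, so each hits a
    # corrupted entry with probability at most delta.
    return w(r) ^ w(a ^ r)
```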

    A Hierarchy Theorem for Interactive Proofs of Proximity

    The number of rounds, or round complexity, used in an interactive protocol is a fundamental resource. In this work we consider the significance of round complexity in the context of Interactive Proofs of Proximity (IPPs). Roughly speaking, IPPs are interactive proofs in which the verifier runs in sublinear time and is only required to reject inputs that are far from the language. Our main result is a round hierarchy theorem for IPPs, showing that the power of IPPs grows with the number of rounds. More specifically, we show that there exists a gap function g(r) = Theta(r^2) such that for every constant r ≥ 1 there exists a language that (1) has a g(r)-round IPP with verification time t = t(n,r), but (2) does not have an r-round IPP with verification time t (or even verification time t' = poly(t)). In fact, we prove a stronger result by exhibiting a single language L such that, for every constant r ≥ 1, there is an O(r^2)-round IPP for L with t = n^{O(1/r)} verification time, whereas the verifier in any r-round IPP for L must run in time at least t^{100}. Moreover, we show an IPP for L with a poly-logarithmic number of rounds and only poly-logarithmic verification time, yielding a sub-exponential separation between the power of constant-round IPPs and general (unbounded-round) IPPs. From our hierarchy theorem we also derive implications for standard interactive proofs (in which the verifier can run in polynomial time). Specifically, we show that the round reduction technique of Babai and Moran (JCSS, 1988) is (almost) optimal among all blackbox transformations, and we show a connection to the algebrization framework of Aaronson and Wigderson (TOCT, 2009).
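    In symbols, the stronger single-language form of the theorem stated above reads as follows (a compact restatement of the abstract, with IPP[·] denoting the class of languages admitting an IPP within the given round and verification-time budgets):

```latex
\exists L \;\forall \text{constant } r \ge 1:\quad
  L \in \mathsf{IPP}\big[\,O(r^2)\ \text{rounds},\ t = n^{O(1/r)}\,\big]
  \quad\text{and}\quad
  L \notin \mathsf{IPP}\big[\,r\ \text{rounds},\ \text{time} < t^{100}\,\big]
```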

    Computational integrity with a public random string from quasi-linear PCPs

    A party running a computation remotely may benefit from misreporting its output, say, to lower its tax. Cryptographic protocols that detect and prevent such falsities hold the promise to enhance the security of decentralized systems with stringent computational integrity requirements, like Bitcoin [Nak09]. To gain public trust it is imperative to use publicly verifiable protocols that have no “backdoors” and which can be set up using only a short public random string. Probabilistically Checkable Proof (PCP) systems [BFL90, BFLS91, AS98, ALM+98] can be used to construct astonishingly efficient protocols [Kil92, Mic00] of this nature, but some of the main components of such systems — proof composition [AS98] and low-degree testing via PCPs of Proximity (PCPPs) [BGH+05, DR06] — have been considered efficient only asymptotically, for unrealistically large computations; recent cryptographic alternatives [PGHR13, BCG+13a] suffer from a non-public setup phase. This work introduces SCI, the first implementation of a scalable PCP system (one that uses both PCPPs and proof composition). We used SCI to prove correctness of executions of up to 2^{20} cycles of a simple processor (Figure 1) and calculated its break-even point (Figure 2) [SVP+12, SMBW12]. The significance of our findings is two-fold: (i) they mark the transition of core PCP techniques (like proof composition and PCPs of Proximity) from mathematical theory to practical system engineering, and (ii) the thresholds obtained are nearly achievable and hence show that PCP-supported computational integrity is closer to reality than previously assumed.
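    The break-even point referred to above is the computation size at which verifying a proof first becomes cheaper than naively re-running the computation. A toy calculation under a hypothetical cost model (the constants below are placeholders, not SCI's measured figures):

```python
import math

# Toy break-even computation: verification pays a large fixed setup
# cost plus polylogarithmic overhead, while naive re-execution grows
# linearly in the number of cycles T. All constants are hypothetical.

def break_even(verify_cost, rerun_cost, T_max=2**40):
    T = 1
    while T <= T_max:
        if verify_cost(T) < rerun_cost(T):
            return T  # first power of two past the break-even point
        T *= 2
    return None

print(break_even(lambda T: 1e6 + 1e3 * math.log2(T) ** 3,
                 lambda T: T))
```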

    Distribution-Free Proofs of Proximity

    Motivated by the fact that input distributions are often unknown in advance, distribution-free property testing considers a setting in which the algorithmic task is to accept functions f : [n] → {0,1} having a certain property Π and reject functions that are ϵ-far from Π, where the distance is measured according to an arbitrary and unknown input distribution D over [n]. As usual in property testing, the tester is required to do so while making only a sublinear number of input queries, but as the distribution is unknown, we also allow a sublinear number of samples from the distribution D. In this work we initiate the study of distribution-free interactive proofs of proximity (df-IPPs), in which the distribution-free testing algorithm is assisted by an all-powerful but untrusted prover. Our main result is a df-IPP for any problem Π ∈ NC, with Õ(√n) communication, sample, query, and verification complexities, for any proximity parameter ϵ > 1/√n. For such proximity parameters, this result matches the parameters of the best-known general-purpose IPPs in the standard uniform setting, and is optimal under reasonable cryptographic assumptions. For general values of the proximity parameter ϵ, our distribution-free IPP has optimal query complexity O(1/ϵ), but its communication complexity is Õ(ϵ·n + 1/ϵ), which is worse than what is known for uniform IPPs when ϵ < 1/√n. With the aim of improving on this gap, we further show that for IPPs over specialised, but large, distribution families, such as sufficiently smooth distributions and product distributions, the communication complexity can be reduced to ϵ·n·(1/ϵ)^{o(1)} (keeping the query complexity roughly the same as before), matching the communication complexity of the uniform case.
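    The distance measure underlying the statement above is the standard distribution-free one: distance from the property is measured with respect to the unknown distribution D,

```latex
\mathrm{dist}_D(f,\Pi) \;=\; \min_{g \in \Pi} \Pr_{x \sim D}\bigl[f(x) \neq g(x)\bigr],
\qquad
f \text{ is } \epsilon\text{-far from } \Pi
\iff
\mathrm{dist}_D(f,\Pi) > \epsilon .
```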

    A structural theorem for local algorithms with applications to coding, testing, and privacy

    We prove a general structural theorem for a wide family of local algorithms, which includes property testers, local decoders, and PCPs of proximity. Namely, we show that every algorithm that makes q adaptive queries and satisfies a natural robustness condition admits a sample-based algorithm with sample complexity n^{1−1/O(q^2 log^2 q)}, following the definition of Goldreich and Ron (TOCT 2016). We prove that this transformation is nearly optimal. Our theorem also admits a scheme for constructing privacy-preserving local algorithms. Using the unified view that our structural theorem provides, we obtain results regarding various types of local algorithms, including the following. We strengthen the state-of-the-art lower bound for relaxed locally decodable codes, obtaining an exponential improvement in the dependency on query complexity; this resolves an open problem raised by Gur and Lachish (SODA 2020). We show that any (constant-query) testable property admits a sample-based tester with sublinear sample complexity; this resolves a problem left open in a work of Fischer, Lachish, and Vasudev (FOCS 2015) by extending their main result to adaptive testers. We prove that the known separation between proofs of proximity and testers is essentially maximal; this resolves a problem left open by Gur and Rothblum (ECCC 2013, Computational Complexity 2018) regarding sublinear-time delegation of computation. Our techniques strongly rely on relaxed sunflower lemmas and the Hajnal–Szemerédi theorem.
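    The two access models the theorem connects can be summarised in a few lines of Python (an interface sketch following the Goldreich–Ron definition cited above; the function names are ours):

```python
import random

# Query access: the local algorithm chooses which positions to read.
def query_access(x, i):
    return x[i]

# Sample access (Goldreich--Ron, TOCT 2016): the algorithm has no
# control over the positions; it only receives uniformly random
# labeled samples (i, x[i]).
def sample_access(x):
    i = random.randrange(len(x))
    return i, x[i]
```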