
    Strong inapproximability of the shortest reset word

    The Černý conjecture states that every $n$-state synchronizing automaton has a reset word of length at most $(n-1)^2$. We study the hardness of finding short reset words. It is known that the exact version of the problem, i.e., finding the shortest reset word, is NP-hard and coNP-hard, and complete for the DP class, and that approximating the length of the shortest reset word within a factor of $O(\log n)$ is NP-hard [Gerbush and Heeringa, CIAA'10], even for the binary alphabet [Berlinkov, DLT'13]. We significantly improve on these results by showing that, for every $\epsilon > 0$, it is NP-hard to approximate the length of the shortest reset word within a factor of $n^{1-\epsilon}$. This is essentially tight, since a simple $O(n)$-approximation algorithm exists. (Comment: extended abstract to appear in MFCS 201)
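
    The objects involved are easy to experiment with. Below is a minimal sketch (our own toy code, not from the paper) that finds a shortest reset word by breadth-first search over subsets of states; the function name and transition encoding are illustrative choices, and the search is exponential in the worst case.

```python
from collections import deque

def shortest_reset_word(n, delta):
    """Shortest reset word of an n-state automaton via BFS over state subsets.

    delta maps each letter to a tuple giving its transition function
    (state -> next state). Returns None if the automaton is not
    synchronizing. Worst-case exponential in n; fine for toy examples.
    """
    start = frozenset(range(n))
    seen = {start}
    queue = deque([(start, "")])
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:          # all states merged: word resets
            return word
        for letter, f in delta.items():
            image = frozenset(f[s] for s in subset)
            if image not in seen:
                seen.add(image)
                queue.append((image, word + letter))
    return None

# Cerny automaton C_4: 'a' cycles the states, 'b' merges states 0 and 1.
delta = {"a": (1, 2, 3, 0), "b": (1, 1, 2, 3)}
word = shortest_reset_word(4, delta)
print(word, len(word))  # a shortest reset word, of length (4-1)^2 = 9
```

    The simple $O(n)$-approximation mentioned at the end works by greedily appending, at most $n-1$ times, a shortest word that merges two of the remaining states; each such word is no longer than the optimal reset word, so the output is within a factor of $n-1$ of the optimum.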

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.

    Rigid Matrices From Rectangular PCPs

    We introduce a variant of PCPs, which we refer to as rectangular PCPs, wherein proofs are thought of as square matrices, and the random coins used by the verifier can be partitioned into two disjoint sets, one determining the row of each query and the other determining the column. We construct PCPs that are efficient, short, smooth and (almost-)rectangular. As a key application, we show that proofs for hard languages in $\mathrm{NTIME}(2^n)$, when viewed as matrices, are rigid infinitely often. This strengthens and simplifies a recent result of Alman and Chen [FOCS, 2019] constructing explicit rigid matrices in FNP. Namely, we prove the following theorem: there is a constant $\delta \in (0,1)$ such that there is an FNP-machine that, for infinitely many $N$, on input $1^N$ outputs $N \times N$ matrices with entries in $\mathbb{F}_2$ that are $\delta N^2$-far (in Hamming distance) from matrices of rank at most $2^{\log N/\Omega(\log \log N)}$. Our construction of rectangular PCPs starts with an analysis of how randomness yields queries in the Reed–Muller-based outer PCP of Ben-Sasson, Goldreich, Harsha, Sudan and Vadhan [SICOMP, 2006; CCC, 2005]. We then show how to preserve rectangularity under PCP composition and a smoothness-inducing transformation. This warrants refined and stronger notions of rectangularity, which we prove for the outer PCP and its transforms. (Comment: 36 pages, 3 figures)
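
    For readers unfamiliar with the term, the rigidity statement can be written out explicitly. This is the standard definition going back to Valiant, in our notation rather than the paper's:

```latex
% Rigidity: R_M(r) is the minimum number of entries that must be changed
% in M to bring its rank down to at most r.
\[
  R_M(r) \;=\; \min_{\operatorname{rank}(A) \le r}
               \bigl|\{ (i,j) : M_{ij} \neq A_{ij} \}\bigr|.
\]
% The theorem above produces, for infinitely many N, explicit N x N
% matrices M over F_2 with R_M(r) >= delta N^2 for
% r = 2^{\log N / \Omega(\log\log N)}.
```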

    Hard Properties with (Very) Short PCPPs and Their Applications

    We show that there exist properties that are maximally hard for testing, while still admitting PCPPs with a proof size very close to linear. Specifically, for every fixed $\ell$, we construct a property $P^{(\ell)} \subseteq \{0,1\}^n$ satisfying the following: any testing algorithm for $P^{(\ell)}$ requires $\Omega(n)$ many queries, and yet $P^{(\ell)}$ has a constant-query PCPP whose proof size is $O(n \cdot \log^{(\ell)} n)$, where $\log^{(\ell)}$ denotes the $\ell$ times iterated log function (e.g., $\log^{(2)} n = \log \log n$). The best previously known upper bound on the PCPP proof size for a maximally hard-to-test property was $O(n \cdot \mathrm{polylog}(n))$. As an immediate application, we obtain stronger separations between the standard testing model and both the tolerant testing model and the erasure-resilient testing model: for every fixed $\ell$, we construct a property that has a constant-query tester, but requires $\Omega(n/\log^{(\ell)}(n))$ queries for every tolerant or erasure-resilient tester.
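
    The proof-size bound is dominated by the iterated logarithm, which grows extraordinarily slowly. A few lines of code (ours, purely for intuition, using base-2 logs) make the "very close to linear" claim tangible:

```python
import math

def iterated_log(x: float, ell: int) -> float:
    """log^(ell) x: apply log2 ell times (log^(2) x = log log x, etc.).

    Illustrative only; assumes the intermediate values stay above 1 so
    every application of log2 is defined.
    """
    for _ in range(ell):
        x = math.log2(x)
    return x

n = 2.0 ** 64
for ell in (1, 2, 3):
    print(ell, iterated_log(n, ell))
# 1 64.0
# 2 6.0
# 3 2.584962500721156
```

    Even for astronomically large $n$, $\log^{(3)} n$ is a tiny constant, so a proof size of $O(n \cdot \log^{(\ell)} n)$ is nearly linear already for small $\ell$.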

    ZK-PCPs from Leakage-Resilient Secret Sharing

    Zero-Knowledge PCPs (ZK-PCPs; Kilian, Petrank, and Tardos, STOC '97) are PCPs with the additional zero-knowledge guarantee that the view of any (possibly malicious) verifier making a bounded number of queries to the proof can be efficiently simulated up to a small statistical distance. Similarly, ZK-PCPs of Proximity (ZK-PCPPs; Ishai and Weiss, TCC '14) are PCPPs in which the view of an adversarial verifier can be efficiently simulated with few queries to the input. Previous ZK-PCP constructions obtained an exponential gap between the query complexity $q$ of the honest verifier and the bound $q^*$ on the queries of a malicious verifier (i.e., $q = \mathrm{polylog}(q^*)$), but required either exponential-time simulation or adaptive honest verification. This should be contrasted with standard PCPs, which can be verified non-adaptively (i.e., with a single round of queries to the proof). The problem of constructing such ZK-PCPs, even when $q^* = q$, has remained open since they were first introduced more than two decades ago. This question is also open for ZK-PCPPs, for which no construction with non-adaptive honest verification is known (not even with exponential-time simulation). We resolve this question by constructing the first ZK-PCPs and ZK-PCPPs which simultaneously achieve efficient zero-knowledge simulation and non-adaptive honest verification. Our schemes have a square-root query gap, namely $q^*/q = O(\sqrt{n})$, where $n$ is the input length. Our constructions combine the "MPC-in-the-head" technique (Ishai et al., STOC '07) with leakage-resilient secret sharing. Specifically, we use the MPC-in-the-head technique to construct a ZK-PCP variant over a large alphabet, then employ leakage-resilient secret sharing to design a new alphabet reduction for ZK-PCPs which preserves zero-knowledge.
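
    The secret-sharing ingredient is easiest to see in its plainest form. The sketch below (our own toy code, not the paper's construction) implements additive XOR sharing of a byte, where any $k-1$ shares are jointly uniform; the paper needs the stronger leakage-resilient property, where the secret stays hidden even given partial leakage from every share.

```python
import secrets
from functools import reduce

def share(secret: int, k: int) -> list[int]:
    """Split one byte into k additive (XOR) shares.

    Any k-1 shares are uniformly random and jointly independent of the
    secret; XORing all k shares recovers it.
    """
    parts = [secrets.randbelow(256) for _ in range(k - 1)]
    last = reduce(lambda a, b: a ^ b, parts, secret)
    return parts + [last]

def reconstruct(shares: list[int]) -> int:
    return reduce(lambda a, b: a ^ b, shares)

shares = share(0x2A, 5)
assert reconstruct(shares) == 0x2A
# Dropping any single share leaves the remaining four uniformly random --
# the (non-leakage-resilient) baseline that the paper strengthens.
```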

    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
    - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
    - A PCP is smooth if each location in a proof is queried with equal probability.
    We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard and Reed–Muller based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
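
    The two features admit crisp one-line formalizations. The following is a hedged paraphrase in our own notation (the paper's exact quantifiers differ in detail): $V$ is the verifier, $\pi$ an alleged proof for input $x$, and $\Pi(x)$ the set of proofs that $V$ accepts with probability 1.

```latex
% Smooth: every proof location is queried with equal probability, i.e.,
% for all locations j, j' of the proof,
\[
  \Pr[\, V^{\pi}(x) \text{ queries } j \,]
  \;=\;
  \Pr[\, V^{\pi}(x) \text{ queries } j' \,].
\]
% Strong: the rejection probability scales with the distance from
% correctness; there is a constant c > 0 such that for every pi,
\[
  \Pr[\, V^{\pi}(x) \text{ rejects} \,]
  \;\ge\;
  c \cdot \delta(\pi, \Pi(x)),
\]
% where delta is the relative Hamming distance to the nearest correct proof.
```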

    Derandomized Parallel Repetition via Structured PCPs

    A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications for inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the "manifold vs. point" construction, which is the only construction in the literature for this parameter range. The "manifold vs. point" PCP is based on a low degree test, while our construction is based on a direct product test. We also extend our construction to yield a decodable PCP (dPCP) with the same parameters. By plugging this dPCP into the scheme of Dinur and Harsha (FOCS 2009), one gets an alternative construction of the result of Moshkovitz and Raz (FOCS 2008), namely a construction of two-query PCPs with small soundness error and small alphabet size. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC 2009) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.
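
    To see why derandomization matters here, consider the cost of naive $k$-fold parallel repetition. The toy accounting below is our own back-of-the-envelope sketch, not the paper's construction; the true soundness decay is $\epsilon^{\Omega(k)}$ rather than exactly $\epsilon^k$, and constants are glossed over.

```python
def naive_parallel_repetition(r_bits: int, eps: float, k: int) -> dict:
    """Idealized cost of k-fold parallel repetition of a 2-query PCP.

    Toy accounting: queries stay at two (now to k-tuples of symbols),
    randomness grows by a factor of k, soundness is modeled as eps**k.
    """
    return {
        "queries": 2,                     # two queries, to k-tuples
        "alphabet_size_exponent": k,      # each symbol is a k-tuple
        "randomness_bits": k * r_bits,    # k fresh random seeds
        "soundness_error": eps ** k,      # idealized decay
    }

# Sub-constant soundness forces k to grow with the input length, and the
# proof length is roughly exponential in the verifier's randomness, so it
# blows up super-polynomially unless the k seeds are derived from few
# truly random bits -- which is what derandomized repetition provides.
print(naive_parallel_repetition(r_bits=30, eps=0.5, k=16))
```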

    Tighter MA/1 Circuit Lower Bounds from Verifier Efficient PCPs for PSPACE
