
    Hard Properties with (Very) Short PCPPs and Their Applications

    We show that there exist properties that are maximally hard for testing, while still admitting PCPPs with a proof size very close to linear. Specifically, for every fixed ℓ, we construct a property P^(ℓ) ⊆ {0,1}^n satisfying the following: any testing algorithm for P^(ℓ) requires Ω(n) queries, and yet P^(ℓ) has a constant-query PCPP whose proof size is O(n·log^(ℓ) n), where log^(ℓ) denotes the ℓ times iterated log function (e.g., log^(2) n = log log n). The best previously known upper bound on the PCPP proof size for a maximally hard-to-test property was O(n·polylog(n)). As an immediate application, we obtain stronger separations between the standard testing model and both the tolerant testing model and the erasure-resilient testing model: for every fixed ℓ, we construct a property that has a constant-query tester, but requires Ω(n/log^(ℓ) n) queries for every tolerant or erasure-resilient tester.
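
    To unpack the log^(ℓ) notation, here is a minimal sketch (ours, not the paper's; base 2 is an arbitrary choice made for concreteness, as the abstract leaves the base unspecified):

    ```python
    import math

    def iterated_log(n: float, ell: int) -> float:
        """Apply log2 ell times: log^(1) n = log n, log^(2) n = log log n, ..."""
        x = float(n)
        for _ in range(ell):
            x = math.log2(x)
        return x

    # For n = 2**16: iterated_log(n, 1) = 16.0, iterated_log(n, 2) = 4.0,
    # iterated_log(n, 3) = 2.0
    ```

    Since log^(ℓ) n grows extremely slowly for any fixed ℓ, a proof size of O(n·log^(ℓ) n) is "very close to linear" in the sense claimed above.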

    Smooth and Strong PCPs

    Probabilistically checkable proofs (PCPs) can be verified based only on a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the given alleged proof). We consider two possible features of PCPs:
    - A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
    - A PCP is smooth if each location in a proof is queried with equal probability.
    We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard- and Reed–Muller-based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth strong canonical PCP of proximity (PCPP), meaning that there is an efficiently computable bijection between NP witnesses and correct proofs. This improves on the recent construction of Dinur, Gur and Goldreich (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth. Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
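
    A hedged formalization of the two features (our notation, not the paper's): for a verifier V on input x, alleged proof π of length m, and set Π(x) of correct proofs,

    ```latex
    % Strong: rejection probability scales with the relative Hamming
    % distance \delta(\cdot,\cdot) to the nearest correct proof
    \Pr\bigl[V^{\pi}(x) \text{ rejects}\bigr]
        \;\ge\; c \cdot \min_{\pi^{*} \in \Pi(x)} \delta(\pi, \pi^{*})
        \qquad \text{for some constant } c > 0.

    % Smooth: every proof location is equally likely to be probed
    \Pr[\,V \text{ queries location } j\,] = \Pr[\,V \text{ queries location } j'\,]
        \qquad \text{for all } j, j' \in [m].
    ```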

    Derandomized Parallel Repetition via Structured PCPs

    A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications to inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the "manifold vs. point" construction, which is the only construction in the literature for this parameter range. The "manifold vs. point" PCP is based on a low-degree test, while our construction is based on a direct product test. We also extend our construction to yield a decodable PCP (dPCP) with the same parameters. By plugging this dPCP into the scheme of Dinur and Harsha (FOCS 2009) one gets an alternative construction of the result of Moshkovitz and Raz (FOCS 2008), namely: a construction of two-query PCPs with small soundness error and small alphabet size. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC 2009) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.
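
    For intuition, here is one round of a plain (non-derandomized) two-query direct product consistency test, a simplified sketch of ours rather than the paper's derandomized test: the oracle F claims to list the values of some global function on every k-subset of coordinates, and the test checks two overlapping subsets for agreement on their deliberately shared indices.

    ```python
    import random

    def direct_product_test_round(F, n: int, k: int, m: int) -> bool:
        """One round of a two-query direct product consistency test (sketch).
        F maps a sorted k-tuple of indices in range(n) to a tuple of claimed
        values. Accept iff the two answers agree on the m shared indices."""
        shared = random.sample(range(n), m)
        rest = [i for i in range(n) if i not in shared]
        A = tuple(sorted(shared + random.sample(rest, k - m)))
        B = tuple(sorted(shared + random.sample(rest, k - m)))
        a, b = F(A), F(B)
        return all(a[A.index(i)] == b[B.index(i)] for i in shared)

    # An honest oracle derived from a global table g always passes:
    # F = lambda S: tuple(g[i] for i in S)
    ```

    The derandomization in the paper replaces the fully random choice of subsets with a structured, much smaller collection; that replacement is the technical heart of the result.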

    Quantum Locally Testable Codes

    We initiate the study of quantum Locally Testable Codes (qLTCs). We provide a definition together with a simplification, denoted sLTCs, for the special case of stabilizer codes, together with some basic results using those definitions. The most crucial parameter of such codes is their soundness, R(δ), namely, the probability that a randomly chosen constraint is violated as a function of the distance of a word from the code (δ, the relative distance from the code, is called the proximity). We then proceed to study limitations on qLTCs. In our first main result we prove a surprising, inherently quantum, property of sLTCs: for small values of proximity, the better the small-set expansion of the interaction graph of the constraints, the less sound the qLTC becomes. This phenomenon, which can be attributed to monogamy of entanglement, stands in sharp contrast to the classical setting. The complementary, more intuitive, result also holds: an upper bound on the soundness when the code is defined on poor small-set expanders (a bound which turns out to be far more difficult to show in the quantum case). Together we arrive at a quantum upper bound on the soundness of stabilizer qLTCs set on any graph, which does not hold in the classical case. Many open questions are raised regarding what possible parameters are achievable for qLTCs. In the appendix we also define a quantum analogue of PCPs of proximity (PCPPs) and point out that the result of Ben-Sasson et al., by which PCPPs imply LTCs with related parameters, carries over to the sLTCs. This creates a first link between qLTCs and quantum PCPs.
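
    One hedged way to write the soundness parameter described above (our paraphrase of the abstract's verbal definition, not necessarily the paper's exact formalization), with C the code, dist the relevant distance, and the constraint drawn uniformly:

    ```latex
    % Soundness R(\delta): the guaranteed violation probability at proximity \delta
    R(\delta) \;=\; \min_{w \,:\; \mathrm{dist}(w,\, \mathcal{C}) \,=\, \delta}\;
        \Pr_{c \,\sim\, \text{constraints}}\bigl[\, w \text{ violates } c \,\bigr]
    ```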

    Every set in P is strongly testable under a suitable encoding

    We show that every set in P is strongly testable under a suitable encoding. By "strongly testable" we mean having a (proximity-oblivious) tester that makes a constant number of queries and rejects with probability proportional to the distance of the tested object from the property. By a "suitable encoding" we mean one that is polynomial-time computable and invertible. This result stands in contrast to the known fact that some sets in P are extremely hard to test, providing another demonstration of the crucial role of representation in the context of property testing. The testing result is proved by showing that any set in P has a strong canonical PCP, where canonical means that (for yes-instances) there exists a single proof that is accepted with probability 1 by the system, whereas all other potential proofs are rejected with probability proportional to their distance from this proof. In fact, we show that UP equals the class of sets having strong canonical PCPs (of logarithmic randomness), whereas the class of sets having strong canonical PCPs with polynomial proof length equals "unambiguous-MA". Actually, for the testing result, we use a PCP-of-proximity version of the foregoing notion and an analogous positive result (i.e., strong canonical PCPPs of logarithmic randomness for any set in UP).
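
    To make "proximity-oblivious" concrete, here is a minimal one-query tester for the toy property {0^n} (our illustration, not the paper's construction): it has no proximity parameter, yet it rejects with probability exactly equal to the input's relative Hamming distance from the property.

    ```python
    import random

    def po_test_all_zeros(x: str) -> bool:
        """One-query proximity-oblivious tester for the property {0^n}.
        Accepts an all-zeros input with probability 1, and rejects any other
        input with probability exactly (number of ones in x) / len(x)."""
        return x[random.randrange(len(x))] == '0'

    # Repeating the test O(1/eps) times turns it into a standard eps-tester.
    ```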

    Combinatorial Construction of Locally Testable Codes

    An error-correcting code is said to be locally testable (an LTC) if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. While the best known construction of LTCs by Ben-Sasson and Sudan (STOC 2005) and Dinur (J. ACM 54(3)) achieves very efficient parameters, it relies heavily on algebraic tools and on PCP machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit.
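
    The classic (algebraic) example of an LTC is the Hadamard code, whose codewords are truth tables of GF(2)-linear functions; the 3-query Blum-Luby-Rubinfeld test below is a standard illustration of local testability (ours, unrelated to the combinatorial construction in the paper).

    ```python
    import random

    def blr_test_round(f, n: int) -> bool:
        """One round of the 3-query BLR linearity test.
        f maps an n-bit integer (a point of {0,1}^n) to 0 or 1.
        Hadamard codewords, i.e. truth tables of linear f, always pass;
        words far from the code are rejected with constant probability."""
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        return f(x) ^ f(y) == f(x ^ y)

    # An honest codeword: parity of the bits selected by a fixed mask a, e.g.
    # f = lambda x: bin(a & x).count('1') % 2
    ```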

    NEEXP is Contained in MIP*

    We study multiprover interactive proof systems. The power of classical multiprover interactive proof systems, in which the provers do not share entanglement, was characterized in a famous work by Babai, Fortnow, and Lund (Computational Complexity 1991), whose main result was the equality MIP = NEXP. The power of quantum multiprover interactive proof systems, in which the provers are allowed to share entanglement, has proven to be much more difficult to characterize. The best known lower bound on MIP* is NEXP ⊆ MIP*, due to Ito and Vidick (FOCS 2012). As for upper bounds, MIP* could be as large as RE, the class of recursively enumerable languages. The main result of this work is the inclusion NEEXP = NTIME[2^(2^poly(n))] ⊆ MIP*. This is an exponential improvement over the prior lower bound and shows that proof systems with entangled provers are at least exponentially more powerful than classical provers. In our protocol the verifier delegates a classical, exponentially large MIP protocol for NEEXP to two entangled provers: the provers obtain their exponentially large questions by measuring their shared state, and use a classical PCP to certify the correctness of their exponentially long answers. For the soundness of our protocol, it is crucial that each player should not only sample its own question correctly but also avoid performing measurements that would reveal the other player's sampled question. We ensure this by commanding the players to perform a complementary measurement, relying on the Heisenberg uncertainty principle to prevent the forbidden measurements from being performed.

    Adoption of Electronic Health Record Systems Within Primary Care Practices

    Primary care physicians (PCPPs) have been slow to implement electronic health records (EHRs), even though there is a U.S. federal requirement to implement EHRs. The purpose of this phenomenological study was to determine why PCPPs have been slow to adopt EHR systems despite the potential to increase the efficiency and quality of health care. Complex adaptive systems (CAS) theory served as the conceptual framework for this study. Twenty-six PCPPs were interviewed from primary care practices (PCPs) based in southwestern Ohio. The data were collected through a semistructured interview format and analyzed using a modified van Kaam method. Several themes emerged as barriers to EHR implementation, including staff training on the new EHR system, the decrease in productivity experienced by PCP staff adapting to the new EHR system, and system usability and technical support after adoption. The findings may contribute to the body of knowledge regarding EHR system implementation and assist healthcare providers who are slow to adopt EHRs. Additionally, the findings could contribute to social change by reducing healthcare costs, increasing patient access to care, and improving the efficacy of patient diagnosis and treatment.

    On Tolerant Testing and Tolerant Junta Testing

    Over the past few decades property testing has become an active field of study in theoretical computer science. The algorithmic task is to determine, given access to an unknown large object (e.g., a function, graph, or probability distribution), whether it has some fixed property or is far from any object having the property. The approximate nature of these algorithms allows in many cases a significant saving in running time, yielding sublinear algorithms. Nevertheless, in various settings and applications, accepting only inputs that exactly have a certain property is too restrictive, and it is more beneficial to distinguish between inputs that are close to having the property and those that are far from it. The framework of tolerant testing tackles this exact problem. In this thesis, we focus on one of the most fundamental properties of Boolean functions: the property of being a k-junta (i.e., depending on at most k variables). The first chapter focuses on algorithms for tolerant junta testing. In particular, we show that there exists a poly(k)-query algorithm distinguishing functions close to k-juntas from functions that are far from 2k-juntas. We also show how to obtain a trade-off between the "tolerance" of the algorithm and its query complexity. The second chapter focuses on establishing a query lower bound for tolerant junta testing. In particular, we show that any non-adaptive tolerant junta tester is required to make at least Ω(k^2/polylog k) queries. The third chapter considers tolerant testing in a more general context, and asks whether tolerant testing is strictly harder than standard testing. In particular, we show that for every constant ℓ ∈ ℕ there exists a property P_ℓ such that P_ℓ can be tested with O(1) queries, but any tolerant tester for P_ℓ is required to make at least Ω(n/log^(ℓ) n) queries (where log^(ℓ) denotes the ℓ times iterated log function). The final chapter focuses on applications. We show how to leverage the techniques developed in previous chapters to obtain results on tolerant isomorphism testing, unateness testing, and erasure-resilient testing.
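
    To illustrate the tolerant-testing task in the simplest possible setting (our sketch for the toy property {0^n}, far simpler than the thesis's junta testers): estimate the relative distance by sampling and compare it to the midpoint of the two thresholds.

    ```python
    import math
    import random

    def tolerant_test_all_zeros(x: str, eps1: float, eps2: float,
                                delta: float = 0.1) -> bool:
        """Tolerant tester for the property {0^n} (sketch).
        Accepts (w.h.p.) if x is eps1-close to all-zeros, rejects if eps2-far;
        the sample size depends only on the gap eps2 - eps1 and on delta."""
        assert 0 <= eps1 < eps2 <= 1
        gap = eps2 - eps1
        m = math.ceil(2 * math.log(2 / delta) / gap ** 2)  # Hoeffding bound
        ones = sum(x[random.randrange(len(x))] == '1' for _ in range(m))
        return ones / m < (eps1 + eps2) / 2

    # The hard part in general (e.g., for k-juntas) is that distance to the
    # property cannot simply be read off from a few samples as in this toy case.
    ```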