    On the Power of Many One-Bit Provers

    We study the class of languages, denoted $\mathsf{MIP}[k, 1-\epsilon, s]$, which have $k$-prover games where each prover just sends a \emph{single} bit, with completeness $1-\epsilon$ and soundness error $s$. For the case that $k=1$ (i.e., for the case of interactive proofs), Goldreich, Vadhan and Wigderson ({\em Computational Complexity'02}) demonstrate that $\mathsf{SZK}$ exactly characterizes languages having 1-bit proof systems with "non-trivial" soundness (i.e., $1/2 < s \leq 1-2\epsilon$). We demonstrate that for the case that $k \geq 2$, 1-bit $k$-prover games exhibit a significantly richer structure:
    + (Folklore) When $s \leq \frac{1}{2^k} - \epsilon$, $\mathsf{MIP}[k, 1-\epsilon, s] = \mathsf{BPP}$;
    + When $\frac{1}{2^k} + \epsilon \leq s < \frac{2}{2^k} - \epsilon$, $\mathsf{MIP}[k, 1-\epsilon, s] = \mathsf{SZK}$;
    + When $s \geq \frac{2}{2^k} + \epsilon$, $\mathsf{AM} \subseteq \mathsf{MIP}[k, 1-\epsilon, s]$;
    + For $s \leq 0.62\,k/2^k$ and sufficiently large $k$, $\mathsf{MIP}[k, 1-\epsilon, s] \subseteq \mathsf{EXP}$;
    + For $s \geq 2k/2^k$, $\mathsf{MIP}[k, 1-\epsilon, s] = \mathsf{NEXP}$.
    As such, 1-bit $k$-prover games yield a natural "quantitative" approach to relating complexity classes such as $\mathsf{BPP}$, $\mathsf{SZK}$, $\mathsf{AM}$, $\mathsf{EXP}$, and $\mathsf{NEXP}$. We leave open the question of whether a more fine-grained hierarchy (between $\mathsf{AM}$ and $\mathsf{NEXP}$) can be established for the case when $s \geq \frac{2}{2^k} + \epsilon$.
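    The folklore threshold in the first item admits a one-line justification; the sketch below is our own gloss, not taken from the paper. If the $k$ provers ignore their questions and answer independent uniform bits, then, writing $\mathrm{Acc}(r) \subseteq \{0,1\}^k$ for the set of answer tuples the verifier accepts on randomness $r$,

    $\Pr[\text{accept}] \;=\; \mathbb{E}_r\!\left[\frac{|\mathrm{Acc}(r)|}{2^k}\right] \;\geq\; \frac{1}{2^k}\,\Pr_r\big[\mathrm{Acc}(r) \neq \emptyset\big].$

    Soundness error $s \leq \frac{1}{2^k} - \epsilon$ therefore forces $\Pr_r[\mathrm{Acc}(r) \neq \emptyset] \leq 2^k s \leq 1 - 2^k\epsilon$ on no-instances, while completeness forces this probability to be at least $1-\epsilon$ on yes-instances; the verifier can thus ignore the provers and estimate $\Pr_r[\mathrm{Acc}(r) \neq \emptyset]$ by sampling, placing the language in $\mathsf{BPP}$.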

    Perfect zero knowledge for quantum multiprover interactive proofs

    In this work we consider the interplay between multiprover interactive proofs, quantum entanglement, and zero knowledge proofs - notions that are central pillars of complexity theory, quantum information and cryptography. In particular, we study the relationship between the complexity class MIP*, the set of languages decidable by multiprover interactive proofs with quantumly entangled provers, and the class PZKMIP*, which is the set of languages decidable by MIP* protocols that furthermore possess the perfect zero knowledge property. Our main result is that the two classes are equal, i.e., MIP* = PZKMIP*. This result provides a quantum analogue of the celebrated result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC 1988), who show that MIP = PZKMIP (in other words, all classical multiprover interactive protocols can be made zero knowledge). We prove our result by showing that every MIP* protocol can be efficiently transformed into an equivalent zero knowledge MIP* protocol in a manner that preserves the completeness-soundness gap. Combining our transformation with previous results by Slofstra (Forum of Mathematics, Pi 2019) and Fitzsimons, Ji, Vidick and Yuen (STOC 2019), we obtain the corollary that all co-recursively enumerable languages (which include undecidable problems as well as all decidable problems) have zero knowledge MIP* protocols with vanishing promise gap.

    Zero-Knowledge Proofs of Proximity

    Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language). In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input. Our main focus is the setting of statistical zero-knowledge, where we show that the following hold unconditionally (where N denotes the input length):
    - Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): we show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester.
    - Statistical ZKPPs can be sub-exponentially less efficient than IPPs: we show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier.
    - Statistical ZKPPs with polylog(N) time verifiers exist for some graph-based properties, such as promise versions of expansion and bipartiteness in the bounded degree graph model.
    Lastly, we also consider the computational setting, where we show that:
    - Assuming the existence of one-way functions, every language computable either in (logspace uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier.
    - Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
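    One natural way to formalize the zero-knowledge guarantee sketched above (our paraphrase, not necessarily the paper's exact definition): for every malicious verifier $V^*$ making at most $q$ queries to the input $x \in \{0,1\}^N$, there is a simulator $S$, itself restricted to a comparable number of queries to $x$ and comparable running time, such that

    $\Delta\Big( S^{x}(1^{N}),\ \mathrm{View}_{V^*}\big(\langle P, V^* \rangle(x)\big) \Big) \;\leq\; \mathrm{negl}(N)$

    in the statistical setting (with computational indistinguishability replacing the statistical distance $\Delta$ in the computational setting), capturing that $V^*$ learns nothing beyond membership and whatever a few queries to $x$ already reveal.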

    Zero Knowledge Protocols from Succinct Constraint Detection

    We study the problem of constructing proof systems that achieve both soundness and zero knowledge unconditionally (without relying on intractability assumptions). Known techniques for this goal are primarily *combinatorial*, despite the fact that constructions of interactive proofs (IPs) and probabilistically checkable proofs (PCPs) heavily rely on *algebraic* techniques to achieve their properties. We present simple and natural modifications of well-known algebraic IP and PCP protocols that achieve unconditional (perfect) zero knowledge in recently introduced models, overcoming limitations of known techniques.
    1. We modify the PCP of Ben-Sasson and Sudan [BS08] to obtain zero knowledge for NEXP in the model of Interactive Oracle Proofs [BCS16,RRR16], where the verifier, in each round, receives a PCP from the prover.
    2. We modify the IP of Lund, Fortnow, Karloff, and Nisan [LFKN92] to obtain zero knowledge for #P in the model of Interactive PCPs [KR08], where the verifier first receives a PCP from the prover and then interacts with it.
    The simulators in our zero knowledge protocols rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the *succinct constraint detection* problem, and which consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:
    * An algorithm to detect constraints for Reed--Muller codes of exponential length. This algorithm exploits the Raz--Shpilka [RS05] deterministic polynomial identity testing algorithm and shows, to our knowledge, a first connection of algebraic complexity theory with zero knowledge.
    * An algorithm to detect constraints for PCPs of Proximity of Reed--Solomon codes [BS08] of exponential degree. This algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are locally spanned by a small number of small-support constraints.
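    For context, the LFKN protocol modified in item 2 is the sumcheck protocol, whose standard round structure (textbook material, not a detail specific to this work) is what the zero-knowledge simulator must reproduce. To verify the claim $\sum_{b \in \{0,1\}^m} g(b) = v$ for an $m$-variate polynomial $g$ of low individual degree, in round $i$ the prover sends the univariate polynomial

    $g_i(X) \;=\; \sum_{b_{i+1}, \ldots, b_m \in \{0,1\}} g(r_1, \ldots, r_{i-1}, X, b_{i+1}, \ldots, b_m),$

    the verifier checks that $g_1(0) + g_1(1) = v$ and that $g_i(0) + g_i(1) = g_{i-1}(r_{i-1})$ for $i > 1$, replies with a uniformly random field element $r_i$, and after round $m$ checks $g_m(r_m) = g(r_1, \ldots, r_m)$ with a single evaluation of $g$.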

    Quasi-Linear Size Zero Knowledge from Linear-Algebraic PCPs

    The seminal result that every language having an interactive proof also has a zero-knowledge interactive proof assumes the existence of one-way functions. Ostrovsky and Wigderson (ISTCS 1993) proved that this assumption is necessary: if one-way functions do not exist, then only languages in BPP have zero-knowledge interactive proofs. Ben-Or et al. (STOC 1988) proved that, nevertheless, every language having a multi-prover interactive proof also has a zero-knowledge multi-prover interactive proof, unconditionally. Their work led to, among many other things, a line of work studying zero knowledge without intractability assumptions. In this line of work, Kilian, Petrank, and Tardos (STOC 1997) defined and constructed zero-knowledge probabilistically checkable proofs (PCPs). While PCPs with quasilinear-size proof length, but without zero knowledge, are known, no such result is known for zero knowledge PCPs. In this work, we show how to construct "2-round" PCPs that are zero knowledge and of length \tilde{O}(K), where K is the number of queries made by a malicious polynomial time verifier. Previous solutions required PCPs of length at least K^6 to maintain zero knowledge. In this model, which we call *duplex PCP* (DPCP), the verifier first receives an oracle string from the prover, then replies with a message, and then receives another oracle string from the prover; a malicious verifier can make up to K queries in total to both oracles. Deviating from previous works, our constructions do not invoke the PCP Theorem as a black box but instead rely on certain algebraic properties of a specific family of PCPs. We show that if the PCP has a certain linear-algebraic structure --- which many central constructions can be shown to possess, including [BFLS91,ALMSS98,BS08] --- we can add the zero knowledge property at virtually no cost (up to additive lower order terms) while introducing only minor modifications in the algorithms of the prover and verifier. We believe that our linear-algebraic characterization of PCPs may be of independent interest, as it gives a simplified way to view previous well-studied PCP constructions.
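    Schematically, the duplex PCP interaction described above is (our rendering of the model):

    $P \xrightarrow{\;\pi_1\;} V, \qquad V \xrightarrow{\;\rho\;} P, \qquad P \xrightarrow{\;\pi_2\;} V,$

    after which $V$ makes at most $K$ queries in total into the oracles $\pi_1$ and $\pi_2$ and decides; both the zero-knowledge guarantee and the $\tilde{O}(K)$ length bound are stated with respect to this combined query budget.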

    An exponential separation between MA and AM proofs of proximity

    Interactive proofs of proximity allow a sublinear-time verifier to check that a given input is close to the language, using a small amount of communication with a powerful (but untrusted) prover. In this work we consider two natural minimally interactive variants of such proof systems, in which the prover only sends a single message, referred to as the proof. The first variant, known as MA-proofs of Proximity (MAP), is fully non-interactive, meaning that the proof is a function of the input only. The second variant, known as AM-proofs of Proximity (AMP), allows the proof to additionally depend on the verifier's (entire) random string. The complexity of both MAPs and AMPs is the total number of bits that the verifier observes - namely, the sum of the proof length and query complexity. Our main result is an exponential separation between the power of MAPs and AMPs. Specifically, we exhibit an explicit and natural property Pi that admits an AMP with complexity O(log n), whereas any MAP for Pi has complexity Omega~(n^{1/4}), where n denotes the length of the input in bits. Our MAP lower bound also yields an alternate proof, which is more general and arguably much simpler, for a recent result of Fischer et al. (ITCS, 2014). Lastly, we also consider the notion of oblivious proofs of proximity, in which the verifier's queries are oblivious to the proof. In this setting we show that AMPs can only be quadratically stronger than MAPs. As an application of this result, we show an exponential separation between the power of public and private coins for oblivious interactive proofs of proximity.

    Quantum Computing: Automata, Games and Complexity

    Advisor: Arnaldo Vieira Moura. Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Since its inception, Theoretical Computer Science has dealt with models of computation primarily in a very abstract and mathematical way. The notion of efficient computation was investigated using these models mainly without seeking to understand the inherent capabilities and limitations of the actual physical world. In this regard, Quantum Computing represents a rupture with respect to this paradigm. Rooted in the postulates of Quantum Mechanics, it is able to attribute a precise physical notion to computation as far as our understanding of nature goes. These postulates give rise to fundamentally different properties, one of which, namely entanglement, is of central importance to computation and information processing tasks. Entanglement captures a notion of correlation unique to quantum models. This quantum correlation can be stronger than any classical one, thus being at the heart of some quantum super-classical capabilities. In this thesis, we investigate entanglement from the perspective of quantum computational complexity. More precisely, we study a well-known complexity class, defined in terms of proof verification, in which a verifier has access to multiple unentangled quantum proofs (QMA(k)). Assuming the proofs do not exhibit quantum correlations seems to be a non-trivial hypothesis, potentially making this class larger than the one in which only a single proof is given. Notwithstanding, finding tight complexity bounds for QMA(k) has been a central open question in quantum complexity for over a decade. In this context, our contributions are threefold. Firstly, we study closely related classes, showing how computational resources may affect their power, in order to shed some light on QMA(k) itself. Secondly, we establish a relationship between classical Probabilistically Checkable Proofs and QMA(k), allowing us to recover known results in a unified and simplified way, besides exposing the interplay between them. Thirdly, we show that some paths to settling this open question are obstructed by computational hardness. We then turn our attention to restricted models of quantum computation, more specifically, quantum finite automata. A model known as Two-way Quantum Classical Finite Automaton (2QCFA) is the main object of our inquiry. Its study is intended to reveal the computational power provided by finite-dimensional quantum memory. We extend this automaton with the capability of placing a finite number of markers on the input tape. For any number of markers, we show that this extension is more powerful than its classical deterministic and probabilistic analogues. Besides bringing advances to these two complementary lines of inquiry, this thesis also provides a broad exposition of both subjects: computational complexity and automata theory.

    Practical Witness-Key-Agreement for Blockchain-based Dark Pools Financial Trading

    We introduce a new cryptographic scheme, Witness Key Agreement (WKA), that allows a party to securely agree on a secret key with a counterparty holding publicly committed information, but only if the counterparty also owns a secret witness in a desired (arithmetic) relation with the committed information. Our motivating applications are over-the-counter (OTC) markets and dark pools, popular trading mechanisms. In such pools, investors wish to communicate only with trading partners whose transaction conditions and asset holdings satisfy some constraints. The investor must establish a secure, authenticated channel with eligible traders whose committed information matches the desired relation. At the same time, traders should be able to show eligibility while keeping their financial information secret. We construct a WKA scheme for languages of statements proven in a designated-verifier zero-knowledge Succinct Non-interactive ARgument of Knowledge (zk-SNARK) proof system. We illustrate the practical feasibility of our construction on some arithmetic circuits of practical interest, using data from US$-denominated corporate securities traded on Bloomberg Tradebook.
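    As a concrete toy, the Python sketch below realizes the WKA interface for the simplest possible relation, knowledge of a discrete logarithm of a public commitment; the group parameters, function names, and Diffie-Hellman-style instantiation are our illustrative assumptions, and the paper's actual construction instead builds on designated-verifier zk-SNARKs for general arithmetic relations over committed financial data.

        # Toy witness key agreement (WKA) for the relation
        #   R = {(C, w) : C = G^w mod P},
        # i.e. the counterparty holds a secret witness w for its public
        # commitment C. The two sides derive the same key exactly when the
        # counterparty knows w. This Diffie-Hellman-style toy illustrates
        # the WKA interface only; it is NOT the paper's zk-SNARK-based
        # construction, and the parameters are far too small to be secure.
        import secrets
        from hashlib import sha256

        P = 2**127 - 1   # a Mersenne prime; toy-sized, insecure
        G = 3            # toy generator

        def commit(w: int) -> int:
            """Publicly committed information: C = G^w mod P."""
            return pow(G, w, P)

        def initiator_round(C: int) -> tuple[int, bytes]:
            """Investor: sample ephemeral x, publish G^x, derive key from C^x."""
            x = secrets.randbelow(P - 1)
            msg = pow(G, x, P)
            key = sha256(pow(C, x, P).to_bytes(16, "big")).digest()
            return msg, key

        def counterparty_round(msg: int, w: int) -> bytes:
            """Trader: only a holder of the witness w can compute (G^x)^w = C^x."""
            return sha256(pow(msg, w, P).to_bytes(16, "big")).digest()

        if __name__ == "__main__":
            w = secrets.randbelow(P - 1)    # trader's secret witness
            C = commit(w)                   # published commitment
            msg, k_investor = initiator_round(C)
            k_trader = counterparty_round(msg, w)
            assert k_investor == k_trader   # keys agree iff the witness is known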

    Doubly-efficient zkSNARKs without trusted setup

    We present a zero-knowledge argument for NP with low communication complexity, low concrete cost for both the prover and the verifier, and no trusted setup, based on standard cryptographic assumptions. Communication is proportional to $d \cdot \log G$ (for $d$ the depth and $G$ the width of the verifying circuit) plus the square root of the witness size. When applied to batched or data-parallel statements, the prover's runtime is linear and the verifier's is sub-linear in the verifying circuit size, both with good constants. In addition, witness-related communication can be reduced, at the cost of increased verifier runtime, by leveraging a new commitment scheme for multilinear polynomials, which may be of independent interest. These properties represent a new point in the tradeoffs among setup, complexity assumptions, proof size, and computational cost. We apply the Fiat-Shamir heuristic to this argument to produce a zero-knowledge succinct non-interactive argument of knowledge (zkSNARK) in the random oracle model, based on the discrete log assumption, which we call Hyrax. We implement Hyrax and evaluate it against five state-of-the-art baseline systems. Our evaluation shows that, even for modest problem sizes, Hyrax gives smaller proofs than all but the most computationally costly baseline, and that its prover and verifier are each faster than three of the five baselines.
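    The commitment scheme for multilinear polynomials mentioned above operates on multilinear extensions; as background (standard material, not a contribution specific to this paper), the unique multilinear extension of $f : \{0,1\}^n \to \mathbb{F}$ is

    $\tilde{f}(x_1, \ldots, x_n) \;=\; \sum_{b \in \{0,1\}^n} f(b) \prod_{i=1}^{n} \big( x_i b_i + (1 - x_i)(1 - b_i) \big).$

    One way to read the square-root term in the communication (our reading, consistent with but not stated in the abstract) is to arrange the $2^n$ values $f(b)$ into a $2^{n/2} \times 2^{n/2}$ matrix and commit to it row by row, so that evaluating $\tilde{f}$ at a point reduces to a vector-matrix-vector product against the committed rows.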