Probabilistically Checkable Proofs of Proximity with Zero-Knowledge
A Probabilistically Checkable Proof (PCP) allows a randomized verifier, with oracle access to a purported proof, to probabilistically verify an input statement of the form "x ∈ L" by querying only a few bits of the proof. A PCP of proximity (PCPP) has the additional feature of allowing the verifier to query only a few bits of the input x, where if the input is accepted then the verifier is guaranteed that (with high probability) the input is close to some x' ∈ L.
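To make the query model concrete, here is a minimal toy sketch (our illustration, not a construction from the paper) of a PCPP verifier for the trivial language L = {0^n, 1^n}; the proof oracle holds a single claimed bit b, and acceptance with high probability guarantees the input is close to b^n:

```python
import random

# Toy PCPP verifier for L = {0^n, 1^n} (illustrative only; the language
# and parameter names are ours, not the paper's). The proof is a single
# claimed bit b; the verifier makes 1 proof query and q input queries.
def pcpp_verifier(input_oracle, proof_oracle, n, q=10):
    b = proof_oracle(0)                        # one query to the proof
    for _ in range(q):                         # a few queries to the input
        if input_oracle(random.randrange(n)) != b:
            return False                       # reject on any mismatch
    return True

x = [1] * 20                                   # an input in L
assert pcpp_verifier(lambda i: x[i], lambda _: 1, len(x))
```

If the input is delta-far from both 0^n and 1^n, each input query detects a mismatch with probability at least delta, so the verifier rejects with probability at least 1 - (1 - delta)^q.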
Motivated by their usefulness for sublinear-communication cryptography, we initiate the study of a natural zero-knowledge variant of PCPP (ZKPCPP), where the view of any verifier making a bounded number of queries can be efficiently simulated by making the same number of queries to the input oracle alone. This new notion provides a useful extension of the standard notion of zero-knowledge PCPs. We obtain two types of results.
1. Constructions. We obtain the first constructions of query-efficient ZKPCPPs via a general transformation which combines standard query-efficient PCPPs with protocols for secure multiparty computation. As a byproduct, our construction provides a conceptually simpler alternative to a previous construction of honest-verifier zero-knowledge PCPs due to Dwork et al. (Crypto '92).
2. Applications. We motivate the notion of ZKPCPPs by applying it towards sublinear-communication implementations of commit-and-prove functionalities. Concretely, we present the first sublinear-communication commit-and-prove protocols which make black-box use of a collision-resistant hash function, and the first such multiparty protocols which offer information-theoretic security in the presence of an honest majority.
Probabilistic Proof Systems
Various types of probabilistic proof systems have played a central role in the development of computer science in the last decade. In this exposition, we concentrate on three such proof systems -- interactive proofs, zero-knowledge proofs, and probabilistically checkable proofs -- stressing the essential role of randomness in each of them. This exposition is an expanded version of a survey written for the proceedings of the International Congress of Mathematicians (ICM94) held in Zurich in 1994. It is hoped that this exposition may be accessible to a broad audience of computer scientists and mathematicians.
State of the Art Report: Verified Computation
This report describes the state of the art in verifiable computation. The
problem being solved is the following:
The Verifiable Computation Problem (also called the Verifiable Computing Problem). Suppose we have two computing agents. The first agent is the verifier, and the second agent is the prover. The verifier wants the prover to perform a computation. The verifier sends a description of the computation to the prover. Once the prover has completed the task, the prover returns the output to the verifier. The output will contain a proof. The verifier can use this proof to check whether the prover computed the output correctly. The check is not required to verify the algorithm used in the computation. Instead, it is a check that the prover computed the output using the computation specified by the verifier. The effort required for the check should be much less than that required to perform the computation.
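As a toy illustration of this pattern (ours, not an example from the report), take the computation to be sorting: the prover does the O(n log n) work, while the verifier's check costs only O(n) comparisons plus a multiset-equality test, strictly less effort than redoing the sort:

```python
from collections import Counter

# Hypothetical toy instance of the verifier/prover pattern described
# above; here the claimed sorted output itself serves as the "proof".
def prover(task):
    return sorted(task)                # the prover does the heavy work

def verifier(task, output):
    in_order = all(a <= b for a, b in zip(output, output[1:]))
    same_elements = Counter(task) == Counter(output)  # nothing added/dropped
    return in_order and same_elements  # cheap check, not a re-computation

assert verifier([3, 1, 2], prover([3, 1, 2]))
```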
This state-of-the-art report surveys 128 papers from the literature
comprising more than 4,000 pages. Other papers and books were surveyed but were
omitted. The papers surveyed were overwhelmingly mathematical. We have
summarised the major concepts that form the foundations for verifiable
computation. The report contains two main sections. The first, larger section
covers the theoretical foundations for probabilistically checkable and
zero-knowledge proofs. The second section contains a description of the current
practice in verifiable computation. Two further reports will cover (i) military
applications of verifiable computation and (ii) a collection of technical
demonstrators. The first of these is intended to be read by those who want to
know what applications are enabled by the current state of the art in
verifiable computation. The second is for those who want to see practical tools
and conduct experiments themselves.
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct "hard-core predicates" for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
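A standard concrete example of both notions (our illustrative sketch, not taken from the survey) is the Hadamard code, which encodes a k-bit message m as the list of all parities <m, x>: it is locally decodable with 2 queries and locally testable with the 3-query BLR linearity test:

```python
import random

# Hadamard code sketch (illustrative; helper names are ours). A message,
# packed into the integer m, is encoded as the parity <m, x> for every
# x in {0,1}^k, i.e. the parity of the bitwise AND of m and x.
def encode(m, k):
    return [bin(m & x).count("1") % 2 for x in range(1 << k)]

def local_decode(oracle, k, i):
    # 2-query local decoding: <m, r> XOR <m, r XOR e_i> = m_i, and a
    # lightly corrupted word is unlikely to be wrong at both positions.
    r = random.randrange(1 << k)
    return oracle(r) ^ oracle(r ^ (1 << i))

def linearity_test(oracle, k):
    # 3-query BLR local test: a word far from every Hadamard codeword
    # violates f(x) XOR f(y) == f(x XOR y) with noticeable probability.
    x, y = random.randrange(1 << k), random.randrange(1 << k)
    return oracle(x) ^ oracle(y) == oracle(x ^ y)

k, m = 4, 0b1011
word = encode(m, k)
assert local_decode(lambda x: word[x], k, 1) == (m >> 1) & 1
assert linearity_test(lambda x: word[x], k)
```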
Efficient holographic proofs
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 1996. Includes bibliographical references (p. 57-63). By Alexander Craig Russell.
Fast Reed-Solomon Interactive Oracle Proofs of Proximity
The family of Reed-Solomon (RS) codes plays a prominent role in the construction of quasilinear probabilistically checkable proofs (PCPs) and interactive oracle proofs (IOPs) with perfect zero knowledge and polylogarithmic verifiers. The large concrete computational complexity required to prove membership in RS codes is one of the biggest obstacles to deploying such PCP/IOP systems in practice.
To advance on this problem we present a new interactive oracle proof of proximity (IOPP) for RS codes; we call it the Fast RS IOPP (FRI) because (i) it resembles the ubiquitous Fast Fourier Transform (FFT) and (ii) the arithmetic complexity of its prover is strictly linear and that of the verifier is strictly logarithmic (in comparison, FFT arithmetic complexity is quasi-linear but not strictly linear). Prior RS IOPPs and PCPs of proximity (PCPPs) required super-linear proving time even for polynomially large query complexity.
For codes of block-length N, the arithmetic complexity of the (interactive) FRI prover is less than 6 * N, while the (interactive) FRI verifier has arithmetic complexity <= 21 * log N, query complexity 2 * log N, and constant soundness: words that are delta-far from the code are rejected with probability min{delta * (1 - o(1)), delta_0}, where delta_0 is a positive constant that depends mainly on the code rate. The particular combination of query complexity and soundness obtained by FRI is better than that of the quasilinear PCPP of [Ben-Sasson and Sudan, SICOMP 2008], even with the tighter soundness analysis of [Ben-Sasson et al., STOC 2013; ECCC 2016]; consequently, FRI is likely to facilitate better concretely efficient zero-knowledge proof and argument systems.
Previous concretely efficient PCPPs and IOPPs suffered a constant multiplicative factor loss in soundness with each round of "proof composition" and thus used at most O(log log N) rounds. We show that when delta is smaller than the unique decoding radius of the code, FRI suffers only a negligible additive loss in soundness. This observation allows us to increase the number of "proof composition" rounds to Theta(log N) and thereby reduce prover and verifier running time for fixed soundness.
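To give a feel for the folding step at the heart of FRI (a schematic toy over a small field, with our own parameters rather than the paper's actual setting), one round applies the even/odd decomposition f(x) = f_even(x^2) + x * f_odd(x^2) and a verifier challenge, halving both the degree and the evaluation domain:

```python
# One FRI-style folding round over a toy prime field (illustrative sketch).
P = 97   # small prime modulus, for illustration only
W = 33   # element of order 8 in F_97 (W^4 == -1 mod 97)

def fold(evals, domain, challenge):
    # Writes f(x) = f_even(x^2) + x * f_odd(x^2) and returns evaluations
    # of f_even + challenge * f_odd on the squared, half-size domain.
    # Assumes domain[i + half] == -domain[i] (mod P).
    half = len(domain) // 2
    inv2 = pow(2, P - 2, P)                               # 1/2 in F_P
    folded = []
    for i in range(half):
        x, fx, fmx = domain[i], evals[i], evals[i + half]
        f_even = (fx + fmx) * inv2 % P                    # (f(x) + f(-x)) / 2
        f_odd = (fx - fmx) * inv2 * pow(x, P - 2, P) % P  # (f(x) - f(-x)) / 2x
        folded.append((f_even + challenge * f_odd) % P)
    return folded, [x * x % P for x in domain[:half]]

domain = [pow(W, i, P) for i in range(8)]   # multiplicative coset, size 8
f = lambda x: (3 + 2 * x + 5 * x**2 + 7 * x**3) % P
folded, sq_domain = fold([f(x) for x in domain], domain, challenge=11)
# degree dropped 3 -> 1: folded agrees with (3 + 5y) + 11 * (2 + 7y)
assert all(v == (3 + 5 * y + 11 * (2 + 7 * y)) % P
           for v, y in zip(folded, sq_domain))
```

Iterating such rounds Theta(log N) times, with per-round consistency checks, is what underlies the strictly linear prover and strictly logarithmic verifier costs quoted above.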
PCPs and Instance Compression from a Cryptographic Lens
Modern cryptography fundamentally relies on the assumption that the adversary trying to break the scheme is computationally bounded. This assumption lets us construct cryptographic protocols and primitives that are known to be impossible otherwise. In this work we explore the effect of bounding the adversary's power in other information-theoretic proof systems and show how to use this assumption to bypass impossibility results.
We first consider the question of constructing succinct PCPs. These are PCPs whose length is polynomial only in the length of the original NP witness (in contrast to standard PCPs whose length is proportional to the non-deterministic verification time).
Unfortunately, succinct PCPs are known to be impossible to construct under standard complexity assumptions. Assuming the sub-exponential hardness of the learning with errors (LWE) problem, we construct succinct probabilistically checkable arguments or PCAs (Zimand 2001, Kalai and Raz 2009), which are PCPs in which soundness is guaranteed against efficiently generated false proofs. Our PCA construction is for every NP relation that can be verified by a small-depth circuit (e.g., SAT, clique, TSP, etc.) and in contrast to prior work is publicly verifiable and has constant query complexity. Curiously, we also show, as a proof-of-concept, that such publicly-verifiable PCAs can be used to derive hardness of approximation results.
Second, we consider the notion of Instance Compression (Harnik and Naor, 2006). An instance compression scheme lets one compress, for example, a CNF formula phi with n variables and m >> n clauses to a new formula phi' with only poly(n) clauses, so that phi is satisfiable if and only if phi' is satisfiable. Instance compression has been shown to be closely related to succinct PCPs and is similarly highly unlikely to exist. We introduce a computational analog of instance compression in which we require that if phi is unsatisfiable then phi' is effectively unsatisfiable, in the sense that it is computationally infeasible to find a satisfying assignment for phi' (although such an assignment may exist). Assuming the same sub-exponential LWE assumption, we construct such computational instance compression schemes for every bounded-depth NP relation. As an application, this lets one compress t formulas phi_1, ..., phi_t into a single short formula phi that is effectively satisfiable if and only if at least one of the original formulas was satisfiable.