Comparing Entropies in Statistical Zero Knowledge with Applications to the Structure of SZK
We consider the following (promise) problem, denoted ED (for Entropy Difference): the input is a pair of circuits, and YES instances (resp., NO instances) are such pairs in which the first (resp., second) circuit generates a distribution with noticeably higher entropy.
On one hand, we show that any language having an (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result by which HVSZK (i.e., honest-verifier statistical zero knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK is closed under complementation (since ED easily reduces to its complement). Among the new results obtained is an equivalence of a weak notion of statistical zero knowledge to the standard one.
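For concreteness, the promise gap in the standard formulation of ED (following the usual convention in this line of work, with a unit entropy gap):
\[
\mathrm{ED}_{\mathrm{YES}} = \{ (C_1, C_2) : \mathrm{H}(C_1) \ge \mathrm{H}(C_2) + 1 \}, \qquad
\mathrm{ED}_{\mathrm{NO}} = \{ (C_1, C_2) : \mathrm{H}(C_2) \ge \mathrm{H}(C_1) + 1 \},
\]
where H(C) denotes the Shannon entropy of the distribution generated by circuit C on uniformly random input.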
Brief Announcement: Zero-Knowledge Protocols for Search Problems
We consider natural ways to extend the notion of Zero-Knowledge (ZK) Proofs beyond decision problems. Specifically, we consider search problems, and define zero-knowledge proofs in this context as interactive protocols in which the prover can establish the correctness of a solution to a given instance without the verifier learning anything beyond the intended solution, even if the verifier deviates from the protocol.
The goal of this work is to initiate a study of Search Zero-Knowledge (search-ZK), the class of search problems for which such systems exist. This class trivially contains search problems where the validity of a solution can be efficiently verified (using a single message proof containing only the solution). A slightly less obvious, but still straightforward, way to obtain zero-knowledge proofs for search problems is to let the prover send a solution and prove in zero-knowledge that the instance-solution pair is valid. However, there may be other ways to obtain such zero-knowledge proofs, and they may be more advantageous.
In fact, we prove that there are search problems for which the aforementioned approach fails, but search zero-knowledge protocols nevertheless exist. On the other hand, we show sufficient conditions on search problems under which some form of zero-knowledge can be obtained via the straightforward approach.
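As a sketch of the zero-knowledge requirement in this setting (notation mine; the formal definition is in the paper): for a search problem given by a relation R and every efficient verifier strategy V*, there should be an efficient simulator S such that, for every instance x on which the prover outputs the intended solution y,
\[
\mathrm{View}_{V^*}\langle P, V^* \rangle(x) \approx S(x, y),
\]
i.e., whatever the (possibly deviating) verifier sees can be reproduced from the instance and the intended solution alone.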
Communication Complexity of Statistical Distance
We prove nearly matching upper and lower bounds on the randomized communication complexity of the following problem: Alice and Bob are each given a probability distribution over n elements, and they wish to estimate within ±epsilon the statistical (total variation) distance between their distributions. For some range of parameters, there is up to a log(n) factor gap between the upper and lower bounds, and we identify a barrier to using information complexity techniques to improve the lower bound in this case. We also prove a side result that we discovered along the way: the randomized communication complexity of n-bit Majority composed with n-bit Greater-Than is Theta(n log n).
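For reference, the distance being estimated is the standard one:
\[
\mathrm{TV}(p, q) = \frac{1}{2} \sum_{i=1}^{n} |p_i - q_i| = \max_{S \subseteq [n]} \bigl( p(S) - q(S) \bigr),
\]
and the task is to output an additive ±epsilon approximation of TV(p, q) using as little communication as possible.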
From Laconic Zero-Knowledge to Public-Key Cryptography
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and, in particular, to base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK.
In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, the messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). In fact, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers.
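To pin down "laconic" (one natural reading of "sufficiently sub-logarithmic"; the exact bound is a parameter of the paper's theorem and not fixed by the abstract above): if the honest prover sends messages totaling \ell(n) bits on instances of length n, the requirement has the shape
\[
\ell(n) = o(\log n),
\]
together with each prover message being computable in polynomial time given the NP witness.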
Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE.
We also show several extensions of our result. First, a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
On the Relationship between Statistical Zero-Knowledge and Statistical Randomized Encodings
Statistical zero-knowledge proofs (Goldwasser, Micali and Rackoff, SICOMP 1989) allow a computationally-unbounded server to convince a computationally-limited client that an input is in a language without revealing any additional information that the client cannot compute by herself. A randomized encoding (RE) of a function (Ishai and Kushilevitz, FOCS 2000) allows a computationally-limited client to publish a single (randomized) message, Enc(x), from which the server learns whether x is in the language and nothing else.
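A minimal sketch of the randomized-encoding guarantees (the standard Ishai-Kushilevitz formulation, paraphrased; the paper's variants refine it): a randomized encoding of a function f is a function \hat{f}(x; r) such that
\[
\exists\, \mathrm{Dec}:\ \mathrm{Dec}(\hat{f}(x; r)) = f(x) \quad \text{(correctness)}, \qquad
\exists\, \mathrm{Sim}:\ \mathrm{Sim}(f(x)) \approx \hat{f}(x; U) \quad \text{(privacy)},
\]
so the encoding determines f(x) but statistically hides everything else about x.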
It is known that SRE, the class of problems that admit a statistically private randomized encoding with a polynomial-time client and a computationally-unbounded server, is contained in SZK, the class of problems that have a statistical zero-knowledge proof. However, the exact relation between these two classes, and, in particular, the possibility of equivalence, was left as an open problem.
In this paper, we explore the relationship between SRE and SZK, and derive the following results:
* In a non-uniform setting, statistical randomized encoding with one-sided privacy (1RE) is equivalent to non-interactive statistical zero-knowledge (NISZK). These variants were studied in the past as natural relaxations/strengthenings of the original notions. Our theorem shows that proving SRE = SZK is equivalent to showing that SRE = 1RE and NISZK = SZK. The latter is a well-known open problem (Goldreich, Sahai, Vadhan, CRYPTO 1999).
* If SRE is non-trivial (i.e., not contained in BPP), then infinitely-often one-way functions exist. The analogous hypothesis for SZK yields only auxiliary-input one-way functions (Ostrovsky, Structure in Complexity Theory, 1991), which is believed to be a significantly weaker implication.
* If there exists an average-case hard language with a perfect randomized encoding, then collision-resistant hash functions (CRH) exist. Again, a similar assumption for SZK implies only constant-round statistically-hiding commitments, a primitive which seems weaker than CRH.
We believe that our results sharpen the relationship between SRE and SZK and illuminate the core differences between these two classes.
Minimum Circuit Size, Graph Isomorphism, and Related Problems
We study the computational power of deciding whether a given truth-table can be described by a circuit of a given size (the Minimum Circuit Size Problem, or MCSP for short), and of the variant denoted MKTP where circuit size is replaced by a polynomially-related Kolmogorov measure. All prior reductions from supposedly-intractable problems to MCSP / MKTP hinged on the power of MCSP / MKTP to distinguish random distributions from distributions produced by hardness-based pseudorandom generator constructions. We develop a fundamentally different approach inspired by the well-known interactive proof system for the complement of Graph Isomorphism (GI). It yields a randomized reduction with zero-sided error from GI to MKTP. We generalize the result and show that GI can be replaced by any isomorphism problem for which the underlying group satisfies some elementary properties. Instantiations include Linear Code Equivalence, Permutation Group Conjugacy, and Matrix Subspace Conjugacy. Along the way we develop encodings of isomorphism classes that are efficiently decodable and achieve compression that is at or near the information-theoretic optimum; those encodings may be of independent interest.
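For reference, the underlying decision problem in its standard formulation:
\[
\mathrm{MCSP} = \bigl\{ (T, s) : T \text{ is the } 2^n\text{-bit truth-table of a function computable by a Boolean circuit of size at most } s \bigr\}.
\]
MKTP is the analogous problem with circuit size replaced by the time-bounded Kolmogorov measure KT.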
Quantum state testing beyond the polarizing regime and quantum triangular discrimination
The complexity class Quantum Statistical Zero-Knowledge (QSZK) captures the computational difficulty of the time-bounded quantum state testing problem with respect to the trace distance, known as the Quantum State Distinguishability Problem (QSDP) introduced by Watrous (FOCS 2002). However, QSDP is known to be in QSZK merely within the constant polarizing regime, similar to its classical counterpart shown by Sahai and Vadhan (JACM 2003), due to the polarization lemma (error reduction for SDP). Recently, Berman, Degwekar, Rothblum, and Vasudevan (TCC 2019) extended the SZK containment for SDP beyond the polarizing regime via time-bounded distribution testing problems with respect to the triangular discrimination and the Jensen-Shannon divergence. Our work introduces proper quantum analogs for these problems by defining quantum counterparts of the triangular discrimination. We investigate whether the quantum analogs behave similarly to their classical counterparts and examine the limitations of existing approaches to polarization with respect to quantum distances. These new QSZK-complete problems improve the containments for QSDP beyond the polarizing regime and establish a simple QSZK-hardness proof for the quantum entropy difference problem (QEDP) defined by Ben-Aroya, Schwartz, and Ta-Shma (ToC 2010). Furthermore, we prove that QSDP with some exponentially small errors is in PP, while the same problem without error is in NQP.
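For reference, the two distances in play (standard definitions; the paper's new quantum counterparts of the triangular discrimination are not reproduced here):
\[
\mathrm{td}(\rho, \sigma) = \tfrac{1}{2}\,\|\rho - \sigma\|_1, \qquad
\Delta(p, q) = \sum_i \frac{(p_i - q_i)^2}{p_i + q_i},
\]
where td is the trace distance between quantum states and \Delta is the classical triangular discrimination between distributions, which satisfies 2\,\mathrm{TV}(p,q)^2 \le \Delta(p,q) \le 2\,\mathrm{TV}(p,q) and is thus polynomially related to the total variation distance.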
A study of statistical zero-knowledge proofs
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 1999. Includes bibliographical references (p. 181-190). By Salil Pravin Vadhan.
On the impossibility of entropy reversal, and its application to zero-knowledge proofs
Zero-knowledge proof systems have been widely studied in cryptography. In the statistical setting, two classes of proof systems studied are Statistical Zero Knowledge (SZK) and Non-Interactive Statistical Zero Knowledge (NISZK), where the difference is that in NISZK only very limited communication is allowed between the verifier and the prover. It is an open problem whether these two classes are in fact equal. In this paper, we rule out efficient black-box reductions between SZK and NISZK.
We achieve this by studying algorithms which can reverse the entropy of a function. The problem of estimating the entropy of a circuit is complete for NISZK. Hence, reversing the entropy of a function is equivalent to a black-box reduction of NISZK to its complement, which is known to be equivalent to a black-box reduction of SZK to NISZK [Goldreich et al., CRYPTO 1999]. We show that any such black-box algorithm incurs an exponential loss of parameters, and hence cannot be implemented efficiently.
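For context, the NISZK-complete problem referred to above is Entropy Approximation (following Goldreich, Sahai, and Vadhan; the unit gap is the conventional normalization):
\[
\mathrm{EA}_{\mathrm{YES}} = \{ (C, k) : \mathrm{H}(C) \ge k + 1 \}, \qquad
\mathrm{EA}_{\mathrm{NO}} = \{ (C, k) : \mathrm{H}(C) \le k - 1 \},
\]
where H(C) is the Shannon entropy of the circuit's output distribution on uniform input. A black-box reduction from EA to its complement must "reverse entropy": map a circuit to one whose output entropy is high precisely when the original's is low.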