Zero-Knowledge Password Policy Check from Lattices
Passwords are ubiquitous and most commonly used to authenticate users when
logging into online services. Using high-entropy passwords is critical to
prevent unauthorized access, and password policies emerged to enforce this
requirement. However, with current methods of password storage,
poor practices and server breaches have leaked many passwords to the public. To
protect one's sensitive information in case of such events, passwords should be
hidden from servers. Verifier-based password authenticated key exchange,
proposed by Bellovin and Merritt (IEEE S&P, 1992), allows authenticated secure
channels to be established with a hash of a password (verifier). Unfortunately,
this restricts password policies as passwords cannot be checked from their
verifier. To address this issue, Kiefer and Manulis (ESORICS 2014) proposed
zero-knowledge password policy check (ZKPPC). A ZKPPC protocol allows users to
prove in zero knowledge that a hash of the user's password satisfies the
password policy required by the server. Unfortunately, their proposal is not
quantum resistant with the use of discrete logarithm-based cryptographic tools
and there are currently no other viable alternatives. In this work, we
construct the first post-quantum ZKPPC using lattice-based tools. To this end,
we introduce a new randomised password hashing scheme for ASCII-based passwords
and design an accompanying zero-knowledge protocol for policy compliance.
Interestingly, our proposal does not follow the framework established by Kiefer
and Manulis and offers an alternate construction without homomorphic
commitments. Although our protocol is not ready to be used in practice, we
think it is an important first step towards a quantum-resistant
privacy-preserving password-based authentication and key exchange system.
Commitment and Oblivious Transfer in the Bounded Storage Model with Errors
The bounded storage model restricts the memory of an adversary in a
cryptographic protocol, rather than restricting its computational power, making
information theoretically secure protocols feasible. We present the first
protocols for commitment and oblivious transfer in the bounded storage model
with errors, i.e., the model where the public random sources available to the
two parties are not exactly the same, but instead are only required to have a
small Hamming distance between themselves. Commitment and oblivious transfer
protocols were known previously only for the error-free variant of the bounded
storage model, which is harder to realize.
Finding Collisions in Interactive Protocols -- A Tight Lower Bound on the Round Complexity of Statistically-Hiding Commitments
We study the round complexity of various cryptographic protocols. Our main result is a tight lower bound on the round complexity of any fully-black-box construction of a statistically-hiding commitment scheme from one-way permutations, and even from trapdoor permutations. This lower bound matches the round complexity of the statistically-hiding commitment scheme due to Naor, Ostrovsky, Venkatesan and Yung (CRYPTO '92). As a corollary, we derive similar tight lower bounds for several other cryptographic protocols, such as single-server private information retrieval, interactive hashing, and oblivious transfer that guarantees statistical security for one of the parties.
Our techniques extend the collision-finding oracle due to Simon (EUROCRYPT '98) to the setting of interactive protocols (our extension also implies an alternative proof for the main property of the original oracle). In addition, we substantially extend the reconstruction paradigm of Gennaro and Trevisan (FOCS '00). In both cases, our extensions are quite delicate and may be found useful in proving additional black-box separation results.
Unconditional Relationships within Zero Knowledge
Zero-knowledge protocols enable one party, called a prover, to "convince" another party, called a verifier, of the validity of a mathematical statement such that the verifier "learns nothing" other than the fact that the proven statement is true. The different ways of formulating the terms "convince" and "learns nothing" give rise to four classes of languages having zero-knowledge protocols: statistical zero-knowledge proof systems, computational zero-knowledge proof systems, statistical zero-knowledge argument systems, and computational zero-knowledge argument systems.
We establish complexity-theoretic characterizations of the classes of languages in NP having zero-knowledge argument systems. Using these characterizations, we show that for languages in NP:
-- Instance-dependent commitment schemes are necessary and sufficient for zero-knowledge protocols. Instance-dependent commitment schemes for a given language are commitment schemes that can depend on the instance of the language, and where the hiding and binding properties are required to hold only on the YES and NO instances of the language, respectively.
-- Computational zero knowledge and computational soundness (a property held by argument systems) are symmetric properties. Namely, we show that the class of languages in NP intersect co-NP having zero-knowledge arguments is closed under complement, and that a language in NP has a statistical zero-knowledge **argument** system if and only if its complement has a **computational** zero-knowledge proof system.
-- A method of transforming any zero-knowledge protocol that is secure only against an honest verifier that follows the prescribed protocol into one that is secure against malicious verifiers. In addition, our transformation gives us protocols with desirable properties like having public coins, being black-box simulatable, and having an efficient prover.
The novelty of our results above is that they are **unconditional**, meaning that they do not rely on any unproven complexity assumptions such as the existence of one-way functions. Moreover, in establishing our complexity-theoretic characterizations, we give the first construction of statistical zero-knowledge argument systems for NP based on any one-way function.
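As a toy illustration of the *syntax* of an instance-dependent commitment, here is a minimal Python sketch. The names and the hash-based placeholder are ours: this placeholder is hiding and binding on all instances, so it only shows the interface (commit and verify take the instance as an extra input), not the instance-dependent security itself.

```python
import hashlib
import secrets
from typing import Tuple

# Toy sketch of the instance-dependent commitment *interface*: commit and
# verify both take the language instance as input. In a real scheme, hiding
# would be guaranteed only on YES instances and binding only on NO instances;
# this hash-based placeholder does not realize that distinction.

def commit(instance: bytes, bit: int) -> Tuple[bytes, bytes]:
    r = secrets.token_bytes(16)                                # fresh randomness
    com = hashlib.sha256(instance + bytes([bit]) + r).digest()
    return com, r                                              # commitment, opening

def verify(instance: bytes, com: bytes, bit: int, r: bytes) -> bool:
    return hashlib.sha256(instance + bytes([bit]) + r).digest() == com
```

With the same instance and opening, `verify` accepts only the originally committed bit.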
Proofs of Quantumness from Trapdoor Permutations
Assume that Alice can do only classical probabilistic polynomial-time computing while Bob can do quantum polynomial-time computing. Alice and Bob communicate over only classical channels, and finally Bob gets a state $|x_0\rangle + |x_1\rangle$ with some bit strings $x_0$ and $x_1$. Is it possible that Alice can know $(x_0, x_1)$ but Bob cannot? Such a task, called {\it remote state preparation}, is indeed possible under some complexity assumptions, and is the basis of many quantum cryptographic primitives such as proofs of quantumness, (classical-client) blind quantum computing, (classical) verification of quantum computing, and quantum money. A typical technique to realize remote state preparation is to use 2-to-1 trapdoor collision resistant hash functions: Alice sends a 2-to-1 trapdoor collision resistant hash function $f$ to Bob, and Bob evaluates it coherently, i.e., Bob generates $\sum_x |x\rangle |f(x)\rangle$. Bob measures the second register to get the measurement result $y$, and sends $y$ to Alice. Bob's post-measurement state is $|x_0\rangle + |x_1\rangle$, where $f(x_0) = f(x_1) = y$. With the trapdoor, Alice can learn $(x_0, x_1)$ from $y$, but due to the collision resistance, Bob cannot. This advantage of Alice can be leveraged to realize the quantum cryptographic primitives listed above. It seems that the collision resistance is essential here. In this paper, surprisingly, we show that the collision resistance is not necessary for a restricted case: we show that (non-verifiable) remote state preparation of $|x_0\rangle + |x_1\rangle$ secure against {\it classical} probabilistic polynomial-time Bob can be constructed from classically-secure (full-domain) trapdoor permutations. Trapdoor permutations are not likely to imply the collision resistance, because black-box reductions from collision-resistant hash functions to trapdoor permutations are known to be impossible. As an application of our result, we construct proofs of quantumness from classically-secure (full-domain) trapdoor permutations.
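The 2-to-1 trapdoor technique can be mimicked classically in a toy Python sketch. The construction below is ours, for illustration only: a secret XOR shift `t` plays the role of the trapdoor, the domain is tiny, and nothing here is actually collision resistant; it only shows why the trapdoor lets Alice recover both preimages of `y`.

```python
import hashlib
import secrets

n = 8                                    # toy domain {0,1}^n
t = secrets.randbelow(2**n - 1) + 1      # secret nonzero shift: the "trapdoor"

# Public 2-to-1 function f with f(x) = f(x ^ t): hash the canonical
# representative of the pair {x, x ^ t}. Alice keeps the inverse table.
table, inverse = {}, {}
for x in range(2**n):
    c = min(x, x ^ t)
    if c not in table:
        v = hashlib.sha256(bytes([c]) + t.to_bytes(2, "big")).hexdigest()[:8]
        table[c], inverse[v] = v, c

def f(x: int) -> str:
    return table[min(x, x ^ t)]

# Bob: evaluate f on a random input and report y (the classical stand-in for
# measuring the second register of sum_x |x>|f(x)>).
x_bob = secrets.randbelow(2**n)
y = f(x_bob)

# Alice: the trapdoor recovers BOTH preimages from y alone.
c = inverse[y]
preimages = {c, c ^ t}
assert x_bob in preimages and f(c) == f(c ^ t) == y
```

In the real protocol the hash is trapdoor-invertible and collision resistant, so only Alice can perform the last step.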
Multi-Collision Resistant Hash Functions and their Applications
Collision resistant hash functions are functions that shrink their input, but for which it is computationally infeasible to find a collision, namely two strings that hash to the same value (although collisions are abundant).
In this work we study multi-collision resistant hash functions (MCRH), a natural relaxation of collision resistant hash functions in which it is difficult to find a t-way collision (i.e., t strings that hash to the same value) although finding (t-1)-way collisions could be easy. We show the following:
1. The existence of MCRH follows from the average case hardness of a variant of the Entropy Approximation problem. The goal in the entropy approximation problem (Goldreich, Sahai and Vadhan, CRYPTO '99) is to distinguish circuits whose output distribution has high entropy from those having low entropy.
2. MCRH imply the existence of constant-round statistically hiding (and computationally binding) commitment schemes. As a corollary, using a result of Haitner et al. (SICOMP, 2015), we obtain a black-box separation of MCRH from any one-way permutation.
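To make the gap between 2-way and t-way collisions concrete, here is a small Python experiment with a truncated hash (the function names and parameters are ours). By the generalized birthday bound, a t-way collision in an n-bit hash takes roughly 2^(n(t-1)/t) evaluations, so each increment of t costs noticeably more work.

```python
import hashlib
import itertools

def H(msg: bytes, out_bits: int = 16) -> int:
    """Toy shrinking hash: SHA-256 truncated to its top out_bits bits."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") >> (256 - out_bits)

def find_t_way_collision(t: int, out_bits: int = 16) -> list:
    """Hash counter values until some output has been hit t times."""
    buckets = {}
    for i in itertools.count():
        m = i.to_bytes(8, "big")
        bucket = buckets.setdefault(H(m, out_bits), [])
        bucket.append(m)
        if len(bucket) == t:
            return bucket

pair = find_t_way_collision(2)    # ~2^8 hashes for 16-bit outputs
triple = find_t_way_collision(3)  # noticeably more: ~2^10.7 hashes
assert len(set(triple)) == 3 and len({H(m) for m in triple}) == 1
```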
Quantum Advantage from One-Way Functions
Showing quantum advantage based on weaker and standard classical complexity assumptions is one of the most important goals in quantum information science. In this paper, we demonstrate quantum advantage from basic assumptions, specifically, based only on the existence of classically-secure one-way functions. We introduce inefficient-verifier proofs of quantumness (IV-PoQ), and construct it from statistically-hiding and computationally-binding classical bit commitments. IV-PoQ is an interactive protocol between a verifier and a quantum polynomial-time prover consisting of two phases. In the first phase, the verifier is classical probabilistic polynomial-time, and it interacts with the quantum polynomial-time prover over a classical channel. In the second phase, the verifier becomes inefficient, and makes its decision based on the transcript of the first phase. If the quantum prover is honest, the inefficient verifier accepts with high probability, but any classical probabilistic polynomial-time malicious prover only has a small probability of being accepted by the inefficient verifier. In our construction, the inefficient verifier can be a classical deterministic polynomial-time algorithm that queries an oracle. Our construction demonstrates the following results based on the known constructions of statistically-hiding and computationally-binding commitments from one-way functions or distributional collision-resistant hash functions:
(1) If one-way functions exist, then IV-PoQ exist.
(2) If distributional collision-resistant hash functions exist (which exist if hard-on-average problems in SZK exist), then constant-round IV-PoQ exist.
We also demonstrate quantum advantage based on worst-case-hard assumptions. We define auxiliary-input IV-PoQ (AI-IV-PoQ) that only require that for any malicious prover, there exist infinitely many auxiliary inputs under which the prover cannot cheat. We construct AI-IV-PoQ from an auxiliary-input version of commitments in a similar way, showing that
(1) If auxiliary-input one-way functions exist (which exist if ), then AI-IV-PoQ exist.
(2) If auxiliary-input collision-resistant hash functions exist (which is equivalent to ) or , then constant-round AI-IV-PoQ exist.
Finally, we show that some variants of PoQ can be constructed from quantum-evaluation one-way functions (QE-OWFs), which are similar to classically-secure classical one-way functions except that the evaluation algorithm is not classical but quantum. QE-OWFs appear to be weaker than classically-secure classical one-way functions.
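The two-phase shape of an IV-PoQ can be sketched as a plain Python skeleton. The class and function names are ours, and the dummy instantiation at the end only exercises the message flow, not any quantum behaviour.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Transcript:
    messages: List[bytes] = field(default_factory=list)

def phase_one(verifier_step: Callable[[Transcript], bytes],
              prover_step: Callable[[Transcript], bytes],
              rounds: int) -> Transcript:
    """Phase 1: an efficient classical verifier exchanges messages with
    the prover (quantum if honest, classical if malicious)."""
    t = Transcript()
    for _ in range(rounds):
        t.messages.append(verifier_step(t))  # PPT verifier's challenge
        t.messages.append(prover_step(t))    # prover's response
    return t

def phase_two(decide: Callable[[Transcript], bool], t: Transcript) -> bool:
    """Phase 2: the possibly inefficient verifier decides from the
    transcript alone, with no further interaction."""
    return decide(t)

# Dummy instantiation just to exercise the structure.
t = phase_one(lambda tr: b"challenge", lambda tr: b"response", rounds=3)
accept = phase_two(lambda tr: len(tr.messages) == 6, t)
assert accept
```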
Distributional Collision Resistance Beyond One-Way Functions
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x,y) where x is uniformly random and y is uniformly random conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well-understood. On one hand, distributional collision resistant hash functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way functions.
Assuming distributional collision resistant hash functions, we construct a constant-round statistically hiding commitment scheme. Such commitments are not known based on one-way functions and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner et al. (STOC '09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the two notions.
A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class SZK (which is known to imply distributional collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more direct than the one going through distributional collision resistance.
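The sampling task in the distributional collision resistance definition can be spelled out in a toy Python experiment (the parameters are ours, and the exhaustive preimage table is feasible only because the toy domain is tiny; for a real hash this sampling is exactly what should be infeasible).

```python
import hashlib
import random

N_IN, OUT_BITS = 12, 6   # 2^12 inputs compressed into 2^6 outputs

def H(x: int) -> int:
    """Toy compressing hash."""
    digest = hashlib.sha256(x.to_bytes(2, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << OUT_BITS)

# Exhaustive preimage table: only possible because the domain is tiny.
preimages = {}
for v in range(1 << N_IN):
    preimages.setdefault(H(v), []).append(v)

def sample_distributional_collision():
    x = random.randrange(1 << N_IN)       # x uniformly random
    y = random.choice(preimages[H(x)])    # y uniform conditioned on H(y) = H(x)
    return x, y                           # note: y = x is allowed by the definition

x, y = sample_distributional_collision()
assert H(x) == H(y)
```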