
    Cryptography with Auxiliary Input and Trapdoor from Constant-Noise LPN

    Dodis, Kalai and Lovett (STOC 2009) initiated the study of the Learning Parity with Noise (LPN) problem with (static) exponentially hard-to-invert auxiliary input. In particular, they showed that under a new assumption (called Learning Subspace with Noise) the problem is quasi-polynomially hard in the high (polynomially close to uniform) noise regime. Inspired by the "sampling from subspace" technique of Yu (ePrint 2009/467) and Goldwasser et al. (ITCS 2010), we show that standard LPN can work in a mode (reducible to itself) in which constant-noise LPN, with its matrix sampled from a random subspace, is robust against sub-exponentially hard-to-invert auxiliary input, with security comparable to the underlying LPN. Plugging this into the framework of [DKL09], we obtain the same applications as considered there (i.e., CPA/CCA secure symmetric encryption schemes, average-case obfuscators, and reusable and robust extractors) with resilience to a more general class of leakages, improved efficiency, and better security under standard assumptions. As a main contribution, under constant-noise LPN with certain sub-exponential hardness (i.e., $2^{\omega(n^{1/2})}$ for secret size $n$) we obtain a variant of LPN that remains secure on poly-logarithmic entropy sources, which in turn implies CPA/CCA secure public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. Prior to this, basing PKE and OT on constant-noise LPN had been an open problem since Alekhnovich's work (FOCS 2003).
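
    As a point of reference, the following minimal Python sketch shows what an LPN sample oracle looks like, together with the sampling-from-a-random-subspace mode the abstract describes; the parameter choices and helper names are ours, purely for illustration.

    ```python
    import secrets
    import random

    def lpn_sample(s, tau=0.125):
        """One standard LPN sample (a, <a, s> + e mod 2), Bernoulli(tau) noise."""
        a = [secrets.randbelow(2) for _ in range(len(s))]
        e = 1 if random.random() < tau else 0
        return a, (sum(x * y for x, y in zip(a, s)) + e) % 2

    def subspace_lpn_sample(s, basis, tau=0.125):
        """Same, but a is drawn from the span of `basis` (a random subspace):
        the mode the abstract uses to tolerate hard-to-invert auxiliary input."""
        coeffs = [secrets.randbelow(2) for _ in range(len(basis))]
        a = [sum(c * row[j] for c, row in zip(coeffs, basis)) % 2
             for j in range(len(s))]
        e = 1 if random.random() < tau else 0
        return a, (sum(x * y for x, y in zip(a, s)) + e) % 2
    ```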

    Cryptography based on the Hardness of Decoding

    This thesis provides progress in the fields of lattice-based and coding-based cryptography. The first contribution consists of constructions of IND-CCA2 secure public-key cryptosystems from both the McEliece and the low-noise learning parity with noise (LPN) assumptions. The second contribution is a novel instantiation of the lattice-based learning with errors (LWE) problem which uses uniform errors.

    A Framework for Efficient Adaptively Secure Composable Oblivious Transfer in the ROM

    Oblivious Transfer (OT) is a fundamental cryptographic protocol that finds a number of applications, in particular as an essential building block for two-party and multi-party computation. We construct a round-optimal (2-round) universally composable (UC) protocol for oblivious transfer, secure against active adaptive adversaries, from any OW-CPA secure public-key encryption scheme with certain properties in the random oracle model (ROM). In terms of computation, our protocol only requires the generation of a public/secret key pair, two encryption operations, and one decryption operation, apart from a few calls to the random oracle. In terms of communication, our protocol only requires the transfer of one public key, two ciphertexts, and three binary strings of roughly the same size as the message. Next, we show how to instantiate our construction under the low-noise LPN, McEliece, QC-MDPC, LWE, and CDH assumptions. Our instantiations based on the low-noise LPN, McEliece, and QC-MDPC assumptions are the first UC-secure OT protocols based on coding assumptions to achieve: 1) adaptive security, 2) optimal round complexity, and 3) low communication and computational complexities. Previous results in this setting only achieved static security and used costly cut-and-choose techniques. Our instantiation based on CDH achieves adaptive security at the small cost of communicating only two more group elements compared to the gap-DH based Simplest OT protocol of Chou and Orlandi (Latincrypt 2015), which only achieves static security in the ROM.
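
    Since the abstract benchmarks against the Simplest OT protocol of Chou and Orlandi, here is a toy Python sketch of that protocol's message flow in the ROM; the group parameters are insecure demo values, and the paper's own PKE-based protocol is a different construction.

    ```python
    import hashlib
    import secrets

    p = 10007                 # toy safe prime p = 2q + 1 (demo only, NOT secure)
    q = (p - 1) // 2          # prime order of the subgroup of squares
    g = 4                     # generator of that subgroup

    def H(x):
        """Random oracle, modelled by SHA-256, truncated to a one-byte key."""
        return hashlib.sha256(str(x).encode()).digest()[0]

    # Sender: a <- Zq, publish A = g^a.
    a = secrets.randbelow(q)
    A = pow(g, a, p)

    # Receiver with choice bit c: B = g^b if c == 0, else A * g^b.
    c = 1
    b = secrets.randbelow(q)
    B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

    # Sender: derive one key per message and send both masked messages.
    k0 = H(pow(B, a, p))                           # k0 = H(B^a)
    k1 = H(pow(B, a, p) * pow(A, -a, p) % p)       # k1 = H((B/A)^a)
    m0, m1 = 42, 99
    e0, e1 = m0 ^ k0, m1 ^ k1

    # Receiver: recovers exactly m_c with k_c = H(A^b).
    mc = (e0 if c == 0 else e1) ^ H(pow(A, b, p))
    assert mc == (m0, m1)[c]
    ```

    The receiver's B reveals nothing about c, while only one of the two sender keys equals H(A^b); that asymmetry is what the protocol exploits.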

    Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH

    We present a new framework for building round-optimal (two-round) adaptively secure MPC. We show that a relatively weak notion of OT that we call indistinguishability OT with receiver oblivious sampleability (r-iOT) is enough to build two-round, adaptively secure MPC against malicious adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN. Our results allow us to build two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
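
    The one genuinely new notion here, receiver oblivious sampleability, can be pinned down as an interface; the sketch below is our reconstruction from the abstract (the names are hypothetical), not the paper's formal definition.

    ```python
    from abc import ABC, abstractmethod

    class ReceiverObliviousSampleableOT(ABC):
        """Indistinguishability OT with receiver oblivious sampleability (r-iOT)."""

        @abstractmethod
        def receiver_message(self, crs, choice_bit):
            """Honest first message, computed from the receiver's choice bit."""

        @abstractmethod
        def oblivious_sample(self, crs):
            """Sample a string indistinguishable from an honest receiver message
            without knowing any choice bit or receiver randomness. A simulator
            can hand an adaptively corrupted receiver such a message and later
            explain it for either choice bit; this equivocation is also what
            drives the non-committing encryption (NCE) construction above."""
    ```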

    LNCS

    We extend a commitment scheme based on the learning with errors over rings (RLWE) problem, and present efficient companion zero-knowledge proofs of knowledge. Our scheme maps elements from the ring (or equivalently, n elements fro…
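
    The abstract is cut off in the source, so the Python sketch below only illustrates the generic shape of an RLWE-style commitment, c = a·r + m + e over R_q = Z_q[x]/(x^N + 1); the scheme shape and all parameters are our illustrative guesses, not the paper's construction.

    ```python
    import secrets

    N, Q = 16, 12289                  # toy ring dimension and modulus

    def ring_mul(f, g):
        """Multiply in Z_Q[x]/(x^N + 1) (negacyclic convolution)."""
        res = [0] * N
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                if i + j < N:
                    res[i + j] = (res[i + j] + fi * gj) % Q
                else:                 # reduce with x^N = -1
                    res[i + j - N] = (res[i + j - N] - fi * gj) % Q
        return res

    def small_poly():
        """Coefficients in {-1, 0, 1}, standing in for a proper error sampler."""
        return [secrets.randbelow(3) - 1 for _ in range(N)]

    def commit(a, m):
        """Toy commitment c = a*r + m + e; (r, e) is the opening information."""
        r, e = small_poly(), small_poly()
        c = [(x + mi + ei) % Q for x, mi, ei in zip(ring_mul(a, r), m, e)]
        return c, (r, e)

    a = [secrets.randbelow(Q) for _ in range(N)]   # public uniform ring element
    m = [secrets.randbelow(2) for _ in range(N)]   # binary message polynomial
    c, opening = commit(a, m)
    ```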

    Publicly-Verifiable Deletion via Target-Collapsing Functions

    We build quantum cryptosystems that support publicly-verifiable deletion from standard cryptographic assumptions. We introduce target-collapsing as a weakening of collapsing for hash functions, analogous to how second-preimage resistance weakens collision resistance; that is, target-collapsing requires indistinguishability between superpositions and mixtures of preimages of an honestly sampled image. We show that target-collapsing hashes enable publicly-verifiable deletion (PVD), proving conjectures from [Poremba, ITCS'23] and demonstrating that the Dual-Regev encryption (and corresponding fully homomorphic encryption) schemes support PVD under the LWE assumption. We further build on this framework to obtain a variety of primitives supporting publicly-verifiable deletion from weak cryptographic assumptions, including:
    - Commitments with PVD, assuming the existence of injective one-way functions or, more generally, almost-regular one-way functions. Along the way, we demonstrate that (variants of) target-collapsing hashes can be built from almost-regular one-way functions.
    - Public-key encryption with PVD, assuming trapdoored variants of injective (or almost-regular) one-way functions. We also demonstrate that the encryption scheme of [Hhan, Morimae, and Yamakawa, Eurocrypt'23] based on pseudorandom group actions has PVD.
    - $X$ with PVD for $X \in \{$attribute-based encryption, quantum fully-homomorphic encryption, witness encryption, time-revocable encryption$\}$, assuming $X$ and trapdoored variants of injective (or almost-regular) one-way functions.
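
    In symbols, one plausible reading of the target-collapsing experiment, reconstructed from the abstract's wording with our own notation, is:

    ```latex
    % Informal reconstruction: for an honestly sampled input x* and target
    % y = h(x*), no QPT adversary tells the preimage superposition from its
    % computational-basis measurement M.
    \[
      \Bigl|\Pr\bigl[\mathcal{A}(y, |\psi_y\rangle) = 1\bigr]
          - \Pr\bigl[\mathcal{A}(y, M(|\psi_y\rangle)) = 1\bigr]\Bigr|
      \le \mathrm{negl}(\lambda),
      \qquad
      |\psi_y\rangle \propto \sum_{x \,:\, h(x) = y} |x\rangle .
    \]
    ```

    Full collapsing quantifies over adversarially prepared superpositions; restricting to an honestly sampled target $y$ is the weakening that, per the abstract, becomes achievable from almost-regular one-way functions.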

    Securing Systems with Scarce Entropy: LWE-Based Lossless Computational Fuzzy Extractor for the IoT

    With the advent of the Internet of Things, lightweight devices necessitate secure and cost-efficient key storage. Since traditional secure key storage is expensive, novel solutions have been developed based on the idea of deriving the key from noisy entropy sources. Such sources, when combined with fuzzy extractors, allow cryptographically strong key derivation. Information-theoretic fuzzy extractors require large amounts of input entropy to account for the entropy loss in the key extraction process. It has been shown by Fuller et al. (ASIACRYPT'13) that the entropy loss can be reduced if the requirement is relaxed to computational security based on the hardness of the Learning with Errors (LWE) problem. Using this computational fuzzy extractor, we show how to construct a device-server authentication system providing outsider chosen perturbation security and pre-application robustness. We present the first implementation of a lossless computational fuzzy extractor, where the entropy of the source equals the entropy of the key, on a constrained device. The implementation needs only 1.45 KB of SRAM and 9.8 KB of Flash memory on an 8-bit microcontroller. Furthermore, we show how a device-server authentication system can be constructed and efficiently implemented in our system, and we compare our implementation to existing work in terms of security, while incurring no entropy loss.
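
    To make the "lossless" idea concrete: the noisy reading itself plays the role of the LWE error vector, so no syndrome or code offset leaks source entropy. The binary toy sketch below (our construction and names, with exhaustive-search recovery at toy sizes; the paper's LWE parameters and decoder are different) shows the Gen/Rep interface.

    ```python
    import secrets
    from itertools import product

    n, m, t = 8, 48, 4        # key bits, reading length, tolerated bit flips

    def gen(w):
        """Gen: helper data (A, A*s + w mod 2); the key is s itself."""
        s = [secrets.randbelow(2) for _ in range(n)]
        A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]
        b = [(sum(row[j] * s[j] for j in range(n)) + wi) % 2
             for row, wi in zip(A, w)]
        return s, (A, b)

    def rep(w2, helper):
        """Rep: the s whose residual b - A*s - w2 has Hamming weight <= t."""
        A, b = helper
        for cand in product([0, 1], repeat=n):
            weight = sum((bi - sum(row[j] * cand[j] for j in range(n)) - wi) % 2
                         for row, bi, wi in zip(A, b, w2))
            if weight <= t:
                return list(cand)
        return None

    w = [secrets.randbelow(2) for _ in range(m)]   # enrollment reading
    s, helper = gen(w)
    w2 = list(w)
    w2[0] ^= 1                                     # re-reading with two
    w2[5] ^= 1                                     # flipped bits
    assert rep(w2, helper) == s
    ```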

    Cryptographic Assumptions: A Position Paper

    The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yardstick for the value of our proposals. Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions that utilize them. We believe that the lack of standards for what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for measures according to which we classify and compare assumptions, as to which are safe and which are not. In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows in the footsteps of Naor (Crypto 2003). Our governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.

    Collision Resistant Hashing from Sub-exponential Learning Parity with Noise

    The Learning Parity with Noise (LPN) problem has recently found many cryptographic applications, such as authentication protocols, pseudorandom generators/functions, and even asymmetric tasks including public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. It however remains a long-standing open problem whether LPN implies collision resistant hash (CRH) functions. Based on the recent work of Applebaum et al. (ITCS 2017), we introduce a general framework for constructing CRH from LPN for various parameter choices. We show that, just to mention a few notable ones, under any of the following hardness assumptions (for the two most common variants of LPN):
    1) constant-noise LPN is $2^{n^{0.5+\epsilon}}$-hard for any constant $\epsilon > 0$;
    2) constant-noise LPN is $2^{\Omega(n/\log n)}$-hard given $q = \mathrm{poly}(n)$ samples;
    3) low-noise LPN (of noise rate $1/\sqrt{n}$) is $2^{\Omega(\sqrt{n}/\log n)}$-hard given $q = \mathrm{poly}(n)$ samples;
    there exist CRH functions with constant (or even poly-logarithmic) shrinkage, which can be implemented using polynomial-size depth-3 circuits with NOT, (unbounded fan-in) AND, and XOR gates. Our technical route LPN $\rightarrow$ bSVP $\rightarrow$ CRH is reminiscent of the known reductions for the large-modulus analogue, i.e., LWE $\rightarrow$ SIS $\rightarrow$ CRH, where the binary Shortest Vector Problem (bSVP) was recently introduced by Applebaum et al. (ITCS 2017); it enables CRH in a manner similar to Ajtai's CRH functions based on the Short Integer Solution (SIS) problem. Furthermore, under additional (arguably minimal) idealized assumptions, such as small-domain random functions or random permutations (which trivially imply collision resistance), we still salvage a simple and elegant collision-resistance-preserving domain extender that is (asymptotically) more parallel and efficient than previously known. In particular, assuming $2^{n^{0.5+\epsilon}}$-hard constant-noise LPN or $2^{n^{0.25+\epsilon}}$-hard low-noise LPN, we obtain a polynomially shrinking collision resistant hash function that evaluates in parallel only a single layer of small-domain random functions (or random permutations) and produces their XOR sum as output.
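
    The shape of that final construction, one parallel layer of small-domain random functions whose outputs are XORed, can be sketched as follows; SHA-256 stands in for the idealized random functions, sizes are illustrative, and the toy omits the parameter constraints the actual security proof needs.

    ```python
    import hashlib

    OUT_BYTES = 32            # digest length; each block is one input byte

    def small_random_function(index, block):
        """f_index : one byte -> 32 bytes, modelled as a random function."""
        return hashlib.sha256(index.to_bytes(4, "big") + bytes([block])).digest()

    def hash_xor_layer(msg):
        """H(x) = f_1(x_1) XOR f_2(x_2) XOR ... over the byte blocks of x;
        all f_i calls are independent and can run in parallel."""
        acc = bytes(OUT_BYTES)
        for i, block in enumerate(msg):
            acc = bytes(a ^ b
                        for a, b in zip(acc, small_random_function(i, block)))
        return acc

    digest = hash_xor_layer(b"an input noticeably longer than the 32-byte output")
    assert len(digest) == OUT_BYTES
    ```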

    CCA Security and Trapdoor Functions via Key-Dependent-Message Security

    We study the relationship among public-key encryption (PKE) satisfying indistinguishability against chosen plaintext attacks (IND-CPA security), PKE satisfying indistinguishability against chosen ciphertext attacks (IND-CCA security), and trapdoor functions (TDFs). Specifically, we aim to find a unified approach, and some additional requirement, for realizing IND-CCA secure PKE and TDFs based on IND-CPA secure PKE, and we show the following two main results. As the first main result, we show how to achieve IND-CCA security via a weak form of key-dependent-message (KDM) security. More specifically, we construct an IND-CCA secure PKE scheme based on an IND-CPA secure PKE scheme and a secret-key encryption (SKE) scheme satisfying one-time KDM security with respect to projection functions (projection-KDM security). Projection functions are elementary functions with respect to which KDM security has been widely studied. Since the existence of projection-KDM secure PKE implies that of the above two building blocks, as a corollary of this result, we see that the existence of IND-CCA secure PKE is implied by that of projection-KDM secure PKE. As the second main result, we extend the above construction of IND-CCA secure PKE into one of TDFs by additionally imposing a mild requirement on each building block. Our TDF satisfies adaptive one-wayness. We can instantiate our TDF based on a wide variety of computational assumptions. In particular, we obtain the first TDF (with adaptive one-wayness) based on the sub-exponential hardness of the constant-noise learning-parity-with-noise (LPN) problem. In addition, we show that by extending the above constructions we can obtain PKE schemes satisfying advanced security notions under CCA, namely optimal-rate leakage-resilience under CCA and selective-opening security under CCA. As a result, we obtain the first PKE schemes satisfying these security notions based on the computational Diffie-Hellman (CDH) assumption or the low-noise LPN assumption.
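
    As a quick illustration of why one-time projection-KDM security is a real extra requirement, note that the one-time pad, although perfectly secure for key-independent messages, fails it immediately, since the identity map on the key is a tuple of projection functions:

    ```python
    import secrets

    def otp_enc(key, msg):
        """One-time pad: perfectly one-time secure for key-independent messages."""
        return bytes(a ^ b for a, b in zip(key, msg))

    k = secrets.token_bytes(16)
    # Encrypting f(k) = k (each output bit projects one key bit) gives the
    # all-zero ciphertext, which any adversary detects: the OTP is not even
    # one-time projection-KDM secure.
    assert otp_enc(k, k) == bytes(16)
    ```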