
    Learning with Errors is easy with quantum samples

    Learning with Errors (LWE) is one of the fundamental problems in computational learning theory and has in recent years become a cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the LWE problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm suggests a possible route for attacking LWE-based encryption, namely using classical samples to approximate the quantum sample state and then applying our quantum learning algorithm to solve LWE.
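
    The algorithm above works with quantum samples, i.e., a superposition over classical LWE pairs weighted by the error distribution. For contrast, here is a minimal sketch of the classical LWE sample oracle, with toy parameters chosen purely for illustration (far too small for security):

    ```python
    import numpy as np

    # Toy LWE parameters (illustrative only; far too small for security).
    n, q, sigma = 8, 97, 1.0

    rng = np.random.default_rng(0)
    s = rng.integers(0, q, size=n)              # secret vector in Z_q^n

    def lwe_sample():
        """Return one classical LWE sample (a, b) with b = <a, s> + e mod q."""
        a = rng.integers(0, q, size=n)
        e = int(np.rint(rng.normal(0, sigma)))  # rounded-Gaussian error term
        b = (int(a @ s) + e) % q
        return a, b

    print(lwe_sample())
    ```

    Approximating the quantum sample state from many such classical draws is precisely the open attack avenue the abstract points to.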

    Affine Determinant Programs: A Framework for Obfuscation and Witness Encryption

    An affine determinant program ADP: {0,1}^n → {0,1} is specified by a tuple (A, B_1, ..., B_n) of square matrices over F_q and a function Eval: F_q → {0,1}, and is evaluated on input x ∈ {0,1}^n by computing Eval(det(A + Σ_{i ∈ [n]} x_i B_i)). In this work, we suggest ADPs as a new framework for building general-purpose obfuscation and witness encryption. We provide evidence to suggest that constructions following our ADP-based framework may one day yield secure, practically feasible obfuscation. As a proof of concept, we give a candidate ADP-based construction of indistinguishability obfuscation (iO) for all circuits, along with a simple witness encryption candidate. We provide cryptanalysis demonstrating that our schemes resist several potential attacks, and leave further cryptanalysis to future work. Lastly, we explore practically feasible applications of our witness encryption candidate, such as public-key encryption with near-optimal key generation.
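
    Since the evaluation rule is fully explicit, it can be sketched directly. Below is a minimal toy evaluator over F_q; the matrices, modulus, and Eval function are hypothetical placeholders, not the paper's candidate construction:

    ```python
    import numpy as np

    q = 101  # a prime modulus (illustrative)

    def det_mod(M, q):
        """Determinant of an integer matrix over F_q via Gaussian elimination."""
        M = M.copy() % q
        n = M.shape[0]
        det = 1
        for col in range(n):
            piv = next((r for r in range(col, n) if M[r, col]), None)
            if piv is None:
                return 0                         # singular matrix
            if piv != col:
                M[[col, piv]] = M[[piv, col]]    # row swap flips the sign
                det = -det % q
            det = det * int(M[col, col]) % q
            inv = pow(int(M[col, col]), -1, q)   # modular inverse (Python 3.8+)
            for r in range(col + 1, n):
                M[r] = (M[r] - int(M[r, col]) * inv * M[col]) % q
        return det

    def evaluate_adp(A, Bs, x, eval_fn):
        """Eval(det(A + sum_i x_i B_i)), the ADP evaluation rule above."""
        M = A + sum(xi * Bi for xi, Bi in zip(x, Bs))
        return eval_fn(det_mod(M, q))

    # Hypothetical tiny instance: Eval maps nonzero determinants to 1.
    rng = np.random.default_rng(1)
    A = rng.integers(0, q, (3, 3))
    Bs = [rng.integers(0, q, (3, 3)) for _ in range(4)]
    print(evaluate_adp(A, Bs, [0, 1, 1, 0], lambda d: int(d != 0)))
    ```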

    Verifiable Registration-Based Encryption

    In a recent work, Garg, Hajiabadi, Mahmoody, and Rahimi (TCC 18) introduced a new encryption framework, which they referred to as Registration-Based Encryption (RBE). The central motivation behind RBE was to provide a novel methodology for solving the well-known key-escrow problem in Identity-Based Encryption (IBE) systems. Informally, in an RBE system there is no private-key generator as in IBE systems; it is replaced with a public key accumulator. Every user in an RBE system samples its own public-secret key pair and sends the public key to the accumulator for registration. The key accumulator has no secret state and is only responsible for compressing all the registered user identity-key pairs into a short public commitment. Here the encryptor only requires the compressed parameters along with the target identity, whereas a decryptor requires supplementary key material along with the secret key associated with the registered public key. The initial construction by Garg et al. (TCC 18), based on standard assumptions, only provided weak efficiency properties. In a follow-up work, Garg, Hajiabadi, Mahmoody, Rahimi, and Sekar (PKC 19) gave an efficient RBE construction from standard assumptions. However, both these works considered the key accumulator to be honest, which might be too strong an assumption in real-world scenarios. In this work, we initiate a formal study of RBE systems with malicious key accumulators. To that end, we introduce a strengthening of the RBE framework which we call Verifiable RBE (VRBE). A VRBE system additionally gives users the capability to obtain short proofs from the key accumulator proving correct (and unique) registration for every registered user, as well as proving non-registration for any yet-unregistered identity. We construct VRBE systems which provide succinct proofs of registration and non-registration from standard assumptions (such as CDH, Factoring, and LWE). Our proof systems also naturally allow a much more efficient audit process, which can be performed by any non-participating third party as well. A by-product of our approach is a more efficient RBE construction than that of the prior work of Garg et al. (PKC 19). Lastly, we initiate a study of extensions of VRBE to a wider range of access and trust structures.
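
    To make the accumulator's role concrete, here is a minimal structural sketch with no secret state, using a hash chain as a stand-in commitment. Everything here is hypothetical: real (V)RBE constructions use tree-structured commitments precisely so that registration and non-registration proofs stay succinct, which a flat chain cannot offer:

    ```python
    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"|".join(parts)).digest()

    class Accumulator:
        """Toy transparent key accumulator: compresses (identity, pk) pairs
        into a short public commitment, holding no secret state."""

        def __init__(self):
            self.log = []                  # public, append-only registration log

        def register(self, identity: str, pk: bytes) -> int:
            self.log.append((identity, pk))
            return len(self.log) - 1       # registration index serves as a receipt

        def commitment(self) -> bytes:
            acc = b"\x00"
            for ident, pk in self.log:     # hash chain over all registrations
                acc = H(acc, ident.encode(), pk)
            return acc

    acc = Accumulator()
    acc.register("alice@example.com", b"pk-alice")
    print(acc.commitment().hex())
    ```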

    Sanitization of FHE ciphertexts

    By definition, fully homomorphic encryption (FHE) schemes support homomorphic decryption, and all known FHE constructions are bootstrapped from a Somewhat Homomorphic Encryption (SHE) scheme via this technique. Additionally, when a public key is provided, ciphertexts are also re-randomizable, e.g., by adding fresh encryptions of 0 to them. From these two operations we devise an algorithm to sanitize a ciphertext by making its distribution canonical. In particular, the distribution of the ciphertext does not depend on the circuit that led to it via homomorphic evaluation, thus providing circuit privacy in the honest-but-curious model. Unlike the previous approach based on noise flooding, our approach does not significantly degrade the security/efficiency trade-off of the underlying FHE scheme. The technique can be applied to all lattice-based FHE schemes proposed so far, without substantially affecting their concrete parameters.
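
    The sanitization procedure itself has a simple shape: alternate re-randomization and bootstrapping a small number of times. A structural sketch over a hypothetical FHE interface, with both operations stubbed out so the sketch runs:

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class FHE:
        """Hypothetical FHE interface exposing the two operations used above."""
        rerandomize: Callable  # c -> c + Enc(0): fresh randomness, same plaintext
        bootstrap: Callable    # homomorphic decryption: c -> lower-noise c

    def sanitize(fhe: FHE, c, rounds: int = 3):
        """Wash the ciphertext until its distribution is (close to) canonical,
        i.e., independent of the circuit that produced it."""
        for _ in range(rounds):
            c = fhe.bootstrap(fhe.rerandomize(c))
        return c

    # Identity stand-ins; a real instantiation would plug in an actual
    # lattice-based FHE scheme here.
    toy = FHE(rerandomize=lambda c: c, bootstrap=lambda c: c)
    print(sanitize(toy, "ciphertext"))
    ```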

    Impact of the modulus switching technique on some attacks against learning problems

    The modulus switching technique has been used in some cryptographic applications as well as in cryptanalysis. For cryptanalysis against the learning with errors (LWE) problem and the learning with rounding (LWR) problem, it has remained unclear whether the technique is actually beneficial. This work supplies a complete view of the impact of this technique on the decoding attack, the dual attack, and the primal attack against both LWE and LWR. For each attack, the authors give the optimal formula for the switching modulus. The formulas involve the number of LWE/LWR samples, which distinguishes them from the known formulas in the literature. The authors also derive the corresponding sufficient conditions indicating when one should utilise the technique. Surprisingly, for LWE/LWR instances in which the secret vector is much shorter than the error vector, they also show that performing modulus switching before the so-called rescaling technique in the dual attack and the primal attack makes these attacks worse than exploiting the rescaling technique alone, as reported by Bai and Galbraith at the Australasian Conference on Information Security and Privacy (ACISP) 2014. As an application, they theoretically assess the influence of modulus switching on the LWE/LWR-based second-round NIST PQC submissions.
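
    The operation at the heart of this analysis is easy to state: scale an LWE sample from modulus q down to a smaller modulus p and round. The sketch below uses hypothetical parameters and does not attempt the paper's sample-dependent choice of the optimal switching modulus:

    ```python
    import numpy as np

    def modulus_switch(a, b, q, p):
        """Map an LWE sample (a, b) mod q to one mod p by scaling and rounding;
        the noise b - <a, s> shrinks by p/q but picks up a rounding error
        that stays small when the secret s is short."""
        a2 = np.rint(a * (p / q)).astype(int) % p
        b2 = int(np.rint(b * (p / q))) % p
        return a2, b2

    rng = np.random.default_rng(2)
    n, q, p = 8, 3329, 257           # illustrative moduli, not attack-optimal
    s = rng.integers(-1, 2, size=n)  # short secret, the small-secret regime
    a = rng.integers(0, q, size=n)
    e = int(rng.integers(-2, 3))
    b = (int(a @ s) + e) % q

    a2, b2 = modulus_switch(a, b, q, p)
    err = (b2 - int(a2 @ s)) % p     # residual error after switching
    print(min(err, p - err))         # centered magnitude: small relative to p
    ```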

    Optimal Key Consensus in Presence of Noise

    In this work, we abstract some key ingredients of previous key exchange protocols based on LWE and its variants by introducing and formalizing a building tool referred to as key consensus (KC), together with its asymmetric variant AKC. KC and AKC allow two communicating parties to reach consensus from close values obtained by some secure information exchange. We then derive upper bounds on the parameters achievable by any KC and AKC. KC and AKC are fundamental to lattice-based cryptography, in the sense that a list of cryptographic primitives based on LWE and its variants (including key exchange, public-key encryption, and more) can be modularly constructed from them. As a conceptual contribution, this significantly simplifies the design and analysis of these cryptosystems in the future. We then design and analyze both general and highly practical KC and AKC schemes, referred to as OKCN and AKCN respectively for presentation simplicity. Based on KC and AKC, we present generic constructions of key exchange (KE) from LWR, LWE, RLWE, and MLWE. The generic construction allows versatile instantiations with our OKCN and AKCN schemes, for which we elaborate on evaluating and choosing concrete parameters in order to achieve a well-balanced performance among security, computational cost, bandwidth efficiency, error rate, and operational simplicity.
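
    As a toy illustration of the asymmetric variant AKC (this is not the paper's OKCN/AKCN schemes): party A chooses the key bit and hides it in a hint using its value v1; party B recovers the bit from its close value v2. Correctness holds whenever |v1 - v2| mod q is below roughly q/4:

    ```python
    def akc_conceal(v1: int, k: int, q: int) -> int:
        """Party A hides the key bit k in a hint using its value v1."""
        return (k * (q // 2) - v1) % q

    def akc_recover(v2: int, hint: int, q: int) -> int:
        """Party B recovers k; (hint + v2) mod q is about k*q/2 + (v2 - v1),
        so rounding to the nearest multiple of q/2 recovers k."""
        w = (hint + v2) % q
        return ((2 * w + q // 2) // q) % 2   # round(2w/q) mod 2

    q, v1 = 3329, 1234
    v2 = (v1 + 57) % q                       # B's value is close to A's
    for k in (0, 1):
        assert akc_recover(v2, akc_conceal(v1, k, q), q) == k
    print("consensus reached")
    ```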

    Bringing Theory Closer to Practice in Post-quantum and Leakage-resilient Cryptography

    Modern cryptography has pushed forward the need for provable security. Whereas early cryptography relied only on heuristic assumptions and the secrecy of designs, researchers nowadays aim to base the security of schemes on mathematical problems which are believed hard to solve. In such proofs, the capabilities of potential adversaries are modeled formally. For instance, the black-box model assumes that an adversary does not learn anything from the inner state of a construction. While this assumption makes sense in some practical scenarios, it was shown that one can sometimes learn information by other means, e.g., by timing how long the computation takes. In this thesis, we focus on two different areas of cryptography. In both parts, we first take a theoretical point of view to obtain a result. We then adapt our results so that they are easily usable by implementers and by researchers working in practical cryptography.

    In the first part of this thesis, we look at post-quantum cryptography, i.e., at cryptographic primitives that are believed secure even if (reasonably big) quantum computers are built. We introduce HELEN, a new public-key cryptosystem based on the hardness of the learning parity with noise (LPN) problem. To make our results more concrete, we suggest some practical instances which make the system easily implementable. As stated above, the design of cryptographic primitives usually relies on some well-studied hard problems. However, to suggest concrete parameters for these primitives, one needs to know the precise complexity of the algorithms solving the underlying hard problem. In this thesis, we focus on two recent hard problems that have become very popular in post-quantum cryptography: the learning with errors (LWE) problem and the learning with rounding (LWR) problem. We introduce a new algorithm that solves both problems and provide a careful complexity analysis, so that these problems can be used to construct practical cryptographic primitives.

    In the second part, we look at leakage-resilient cryptography, which studies adversaries able to obtain side-channel information from a cryptographic primitive. In the past, two main disjoint models were considered. The first one, the threshold probing model, assumes that the adversary can place a limited number of probes in a circuit and then learns all the values going through these probes. This model has been used mostly by theoreticians, as it allows very elegant and convenient proofs. The second model, the noisy-leakage model, assumes that every component of the circuit leaks but that the observed signal is noisy; typically, some Gaussian noise is added to it. According to experiments, this model closely reflects the real behaviour of circuits, and hence it is favoured by the practical cryptographic community. In this thesis, we show that a security proof in the first model implies one in the second model, which unifies the two models and reconciles both communities. We then look at this result from a more practical point of view and show how it can help in the process of evaluating the security of a chip based solely on the more standard mutual information metric.
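
    The two leakage models unified in the thesis can be contrasted in a few lines. This toy simulation is purely illustrative of the difference in what the adversary observes:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    wire = 1                    # a sensitive intermediate value in the circuit

    def probing_leakage(value):
        """Threshold probing model: a probed wire leaks its exact value."""
        return value

    def noisy_leakage(value, sigma=2.0):
        """Noisy-leakage model: every wire leaks, but through Gaussian noise."""
        return value + rng.normal(0.0, sigma)

    print(probing_leakage(wire), noisy_leakage(wire))
    ```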

    Dimension-Preserving Reductions from LWE to LWR

    The Learning with Rounding (LWR) problem was first introduced by Banerjee, Peikert, and Rosen (Eurocrypt 2012) as a derandomized form of the standard Learning with Errors (LWE) problem. The original motivation of LWR was as a building block for constructing efficient, low-depth pseudorandom functions on lattices. It has since been used to construct reusable computational extractors, lossy trapdoor functions, and deterministic encryption. In this work we show two (incomparable) dimension-preserving reductions from LWE to LWR in the case of a polynomial-size modulus. Prior works either required a superpolynomial modulus q, or lost at least a factor log(q) in the dimension of the reduction. A direct consequence of our improved reductions is an improvement in parameters (i.e., security and efficiency) for each of the known applications of poly-modulus LWR. Our results directly generalize to the ring setting. Indeed, our formal analysis is performed over "module lattices," as defined by Langlois and Stehlé (DCC 2015), which generalize both the general lattice setting of LWE and the ideal lattice setting of RLWE as the single notion M-LWE. We hope that taking this broader perspective will lead to further insights of independent interest.
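
    The derandomization is easy to see side by side: LWR replaces LWE's explicit random error with deterministic rounding of the inner product. A minimal sketch with toy parameters (this illustrates the two sample distributions, not the paper's reductions):

    ```python
    import numpy as np

    n, q, p = 8, 3329, 17        # toy parameters; note the polynomial-size q
    rng = np.random.default_rng(4)
    s = rng.integers(0, q, size=n)

    def lwe_sample(sigma=1.0):
        """LWE: b = <a, s> + e mod q, with an explicit random error e."""
        a = rng.integers(0, q, size=n)
        e = int(np.rint(rng.normal(0, sigma)))
        return a, (int(a @ s) + e) % q

    def lwr_sample():
        """LWR: b = round((p/q) * <a, s>) mod p; the error is deterministic."""
        a = rng.integers(0, q, size=n)
        b = int(np.rint((int(a @ s) % q) * (p / q))) % p
        return a, b

    print(lwe_sample(), lwr_sample())
    ```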