33 research outputs found

    Robustness of the Learning with Errors Assumption

    Starting with the work of Ishai-Sahai-Wagner and Micali-Reyzin, a new goal has been set within the theory of cryptography community: to design cryptographic primitives that are secure against large classes of side-channel attacks. Recently, many works have focused on designing various cryptographic primitives that are robust (retain security) even when the secret key is “leaky”, under various intractability assumptions. In this work we propose to take a step back and ask a more basic question: which of our cryptographic assumptions (rather than cryptographic schemes) are robust in the presence of leakage of their underlying secrets? Our main result is that the hardness of the learning with errors (LWE) problem implies its hardness with leaky secrets. More generally, we show that the standard LWE assumption implies that LWE is secure even if the secret is taken from an arbitrary distribution with sufficient entropy, and even in the presence of hard-to-invert auxiliary inputs. We exhibit various applications of this result. 1. Under the standard LWE assumption, we construct a symmetric-key encryption scheme that is robust to secret-key leakage, and more generally maintains security even if the secret key is taken from an arbitrary distribution with sufficient entropy (and even in the presence of hard-to-invert auxiliary inputs). 2. Under the standard LWE assumption, we construct a (weak) obfuscator for the class of point functions with multi-bit output. We note that in most schemes known to be robust to leakage, the parameters of the scheme depend on the maximum leakage the system can tolerate, and hence the efficiency degrades with the maximum anticipated leakage, even if no leakage occurs at all! In contrast, the fact that we rely on a robust assumption allows us to construct a single symmetric-key encryption scheme, with parameters that are independent of the anticipated leakage, that is robust to any leakage (as long as the secret key has sufficient entropy left over). Namely, for any k < n (where n is the size of the secret key), if the secret key has only entropy k, then the security relies on the LWE assumption with secret size roughly k.
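
    To make the setting concrete, the sketch below generates decision-LWE samples (A, A·s + e mod q) with the secret s drawn from a biased, lower-entropy distribution rather than the uniform one, which is the "leaky secret" regime the abstract refers to. The parameters and the biased distribution are illustrative placeholders, not the ones analyzed in the paper.

        # Toy, non-secure illustration: decision-LWE samples whose secret comes
        # from a non-uniform, lower-entropy distribution.  Parameters are
        # placeholders chosen only for readability.
        import numpy as np

        q, n, m = 4093, 64, 128                    # modulus, secret length, samples

        def lwe_samples(secret, sigma=3.0, rng=None):
            rng = rng or np.random.default_rng()
            A = rng.integers(0, q, size=(m, n))
            e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)
            return A, (A @ secret + e) % q

        rng = np.random.default_rng(0)
        # Hypothetical "leaky" secret: biased bits, so min-entropy is well below n.
        s_leaky = rng.choice([0, 1], size=n, p=[0.75, 0.25]).astype(np.int64)
        A, b = lwe_samples(s_leaky, rng=rng)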

    Information retrieval of mass encrypted data over multimedia networking with N-level vector model-based relevancy ranking

    With the explosive growth in the deployment of networked applications over the Internet, searching encrypted information for what a user needs becomes increasingly important. However, search precision is quite low when the vector space model is used for mass information retrieval, because long documents with poor similarity values are poorly represented in the vector space model, and the order in which terms appear in a document is lost in the vector space representation under intuitive weighting. To address these problems, this study proposed an N-level vector model (NVM)-based relevancy ranking scheme that introduces a new term-weighting formula taking the location of a feature term in the document into account so as to describe the document's content properly, investigated ways of ranking encrypted documents using the proposed scheme, and conducted a realistic simulation of information retrieval of mass encrypted data over multimedia networking. Results indicated that the time for index building, the most costly part of the relevancy ranking scheme, increased with both the size and the multimedia content of the documents being searched, which is in agreement with expectations. Performance evaluation demonstrated that our specially designed NVM-based encrypted information retrieval system is effective in ranking encrypted documents transmitted over multimedia networks, with a high recall ratio and high retrieval precision.
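
    The abstract does not reproduce the new term-weighting formula itself, so the following is only a hypothetical sketch of the general idea: a standard TF-IDF weight scaled by a location factor that rewards feature terms appearing early in a document. The function name and the exact scaling are assumptions made for illustration, not the paper's formula.

        # Hypothetical position-aware term weight in the spirit of the NVM scheme:
        # TF-IDF multiplied by a factor that grows when the term first appears
        # early in the document.  Not the paper's actual formula.
        import math

        def position_weight(tf, df, n_docs, first_pos, doc_len):
            tfidf = tf * math.log((n_docs + 1) / (df + 1))
            location = 1.0 + (1.0 - first_pos / max(doc_len, 1))   # earlier => larger
            return tfidf * location

        # A term occurring 3 times, in 10 of 1000 documents, first seen at token 5
        # of a 200-token document.
        w = position_weight(tf=3, df=10, n_docs=1000, first_pos=5, doc_len=200)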

    Obfuscating Conjunctions under Entropic Ring LWE

    We show how to securely obfuscate conjunctions, which are functions f(x_1, . . . , x_n) = ∧_{i∈I} y_i, where I ⊆ [n] and each literal y_i is either x_i or ¬x_i, e.g., f(x_1, . . . , x_n) = x_1 ∧ ¬x_3 ∧ ¬x_7 ∧ · · · ∧ x_{n−1}. Whereas prior work of Brakerski and Rothblum (CRYPTO 2013) showed how to achieve this using a non-standard object called cryptographic multilinear maps, our scheme is based on an “entropic” variant of the Ring Learning with Errors (Ring LWE) assumption. As our core tool, we prove that hardness assumptions on the recent multilinear map construction of Gentry, Gorbunov and Halevi (TCC 2015) can be established based on entropic Ring LWE. We view this as a first step towards proving the security of additional multilinear-map-based constructions, and in particular program obfuscators, under standard assumptions. Our scheme satisfies virtual black box (VBB) security, meaning that the obfuscated program reveals nothing more than black-box access to f as an oracle, at least as long as (essentially) the conjunction is chosen from a distribution having sufficient entropy.
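
    Functionally, a conjunction over n Boolean inputs can be viewed as a pattern in {0, 1, *}^n: positions in I must match the stated bit and positions outside I are wildcards. The sketch below shows only this plain evaluation, not the obfuscation; the representation and names are illustrative assumptions.

        # Plain (unobfuscated) evaluation of a conjunction given as a pattern over
        # "01*": '1' demands x_i, '0' demands ¬x_i, '*' leaves x_i unconstrained.
        def make_conjunction(pattern):
            def f(x):                              # x: bit string of the same length
                return all(p == "*" or p == b for p, b in zip(pattern, x))
            return f

        f = make_conjunction("1**0*1")             # x_1 ∧ ¬x_4 ∧ x_6
        assert f("110011") and not f("110111")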

    Classical Verification of Quantum Computations

    We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to use a quantum prover as a trusted measurement device. The protocol forces the prover to behave as follows: the prover must construct an n-qubit state of his choice, measure each qubit in the Hadamard or standard basis as directed by the verifier, and report the measurement results to the verifier. The soundness of this protocol is enforced based on the assumption that the learning with errors problem is computationally intractable for efficient quantum machines.
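
    As a purely schematic illustration of the measurement step described above, the sketch below has the verifier pick a basis (standard or Hadamard) per qubit and an honest prover report outcomes; for simplicity the prover's state is assumed here to be a product of |0> and |+> qubits, whereas the real protocol allows an arbitrary n-qubit state and enforces honest behaviour cryptographically.

        # Schematic message flow only; none of the protocol's cryptographic
        # soundness machinery is modelled here.
        import random

        def measure_qubit(prep, basis):
            # prep is "0" (the |0> state) or "+" (the |+> state); the outcome is
            # deterministic when state and basis match, a fair coin flip otherwise.
            if (prep, basis) in (("0", "standard"), ("+", "hadamard")):
                return 0
            return random.randint(0, 1)

        n = 8
        state = [random.choice("0+") for _ in range(n)]                      # prover's choice
        bases = [random.choice(["standard", "hadamard"]) for _ in range(n)]  # verifier's choice
        outcomes = [measure_qubit(p, b) for p, b in zip(state, bases)]       # reported bits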

    Provably Secure Group Signature Schemes from Code-Based Assumptions

    We solve an open question in code-based cryptography by introducing two provably secure group signature schemes from code-based assumptions. Our basic scheme satisfies the CPA-anonymity and traceability requirements in the random oracle model, assuming the hardness of the McEliece problem, the Learning Parity with Noise problem, and a variant of the Syndrome Decoding problem. The construction produces smaller key and signature sizes than previous group signature schemes from lattices, as long as the cardinality of the underlying group does not exceed 2^24, which is roughly comparable to the current population of the Netherlands. We develop the basic scheme further to achieve the strongest anonymity notion, i.e., CCA-anonymity, with a small overhead in terms of efficiency. The feasibility of the two proposed schemes is supported by implementation results. Our two schemes are the first in their respective classes of provably secure group signature schemes. Additionally, the techniques introduced in this work might be of independent interest: a new verifiable encryption protocol for the randomized McEliece encryption and a novel approach to designing formal security reductions from the Syndrome Decoding problem. Comment: Full extension of an earlier work published in the proceedings of ASIACRYPT 201
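
    For reference, the Syndrome Decoding problem mentioned above asks, given a binary parity-check matrix H and a syndrome s, for a low-weight error vector e with H·e = s (mod 2). The sketch below only verifies a candidate solution (finding one is the hard direction); the dimensions and weight bound are illustrative.

        # Check a candidate Syndrome Decoding solution: e must be low weight and
        # satisfy H·e = s over GF(2).  Toy dimensions only.
        import numpy as np

        def is_sd_solution(H, s, e, weight_bound):
            return int(e.sum()) <= weight_bound and np.array_equal((H @ e) % 2, s)

        rng = np.random.default_rng(1)
        H = rng.integers(0, 2, size=(32, 64))
        e = np.zeros(64, dtype=np.int64)
        e[rng.choice(64, size=4, replace=False)] = 1   # a planted weight-4 error
        s = (H @ e) % 2
        assert is_sd_solution(H, s, e, weight_bound=4)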

    Efficient Chosen-Ciphertext Secure Public Key Encryption Scheme From Lattice Assumption


    A Cryptographic Test of Quantumness and Certifiable Randomness from a Single Quantum Device

    We give a protocol for producing certifiable randomness from a single untrusted quantum device that is polynomial-time bounded. The randomness is certified to be statistically close to uniform from the point of view of any computationally unbounded quantum adversary, which may share entanglement with the quantum device. The protocol relies on the existence of post-quantum secure trapdoor claw-free functions, and introduces a new primitive for constraining the power of an untrusted quantum device. We then show how to construct this primitive based on the hardness of the learning with errors (LWE) problem. The randomness protocol can also be used as the basis for an efficiently verifiable "quantum supremacy" proposal, thus answering an outstanding challenge in the field.

    A Cryptographic Test of Quantumness and Certifiable Randomness from a Single Quantum Device

    We give a protocol for producing certifiable randomness from a single untrusted quantum device that is polynomial-time bounded. The randomness is certified to be statistically close to uniform from the point of view of any computationally unbounded quantum adversary, which may share entanglement with the quantum device. The protocol relies on the existence of post-quantum secure trapdoor claw-free functions, and introduces a new primitive for constraining the power of an untrusted quantum device. We show how to construct this primitive based on the hardness of the learning with errors (LWE) problem, and prove that it has a crucial adaptive hardcore bit property. The randomness protocol can be used as the basis for an efficiently verifiable "test of quantumness", thus answering an outstanding challenge in the field. Comment: 45 page
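
    As a very rough caricature of the trapdoor claw-free structure both versions of this abstract refer to (ignoring the noise terms and the adaptive hardcore bit that the actual LWE-based construction needs), one can picture the pair f_0(x) = A·x mod q and f_1(x) = A·x + A·s mod q: a claw, i.e. a pair (x_0, x_1) with f_0(x_0) = f_1(x_1), reveals the trapdoor s. The parameters below are illustrative only.

        # Noise-free caricature of an LWE-style trapdoor claw-free pair.  The real
        # construction is noisy, works with distributions over preimages, and
        # additionally has an adaptive hardcore bit; none of that appears here.
        import numpy as np

        q, n, m = 97, 8, 16
        rng = np.random.default_rng(2)
        A = rng.integers(0, q, size=(m, n))
        s = rng.integers(0, q, size=n)              # trapdoor

        f0 = lambda x: (A @ x) % q
        f1 = lambda x: (A @ x + A @ s) % q

        x1 = rng.integers(0, q, size=n)
        x0 = (x1 + s) % q
        assert np.array_equal(f0(x0), f1(x1))       # (x0, x1) form a claw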

    Two-party authenticated key exchange protocol using lattice-based cryptography

    An authenticated key exchange (AKE) protocol is an important cryptographic primitive that helps entities communicating over an insecure network establish a shared session key for protecting their subsequent communication. Lattice-based cryptographic primitives are believed to provide resilience against attacks by quantum computers. In this paper an efficient AKE protocol with a smaller module over ideal lattices is constructed, which nicely inherits the design idea of the high-performance secure Diffie-Hellman protocol. Under the ring learning with errors (RLWE) hardness assumption, the security of the proposed protocol is proved in the Bellare-Rogaway model.
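
    The Diffie-Hellman-like structure that such RLWE key exchanges inherit can be sketched, in a toy noise-free form, as follows: both parties multiply a public ring element a by their own secret, exchange the products, and multiply what they receive by their secret again, so both ends arrive at a·s_A·s_B. The real protocol adds small error terms (so the two values only approximately agree and must be reconciled) as well as authentication; the parameters below are illustrative.

        # Toy, noise-free RLWE-style exchange in Z_q[x]/(x^n + 1).  Without the
        # error terms this is insecure; it only shows the DH-like data flow.
        import numpy as np

        q, n = 12289, 8

        def mul(f, g):                              # multiply in Z_q[x]/(x^n + 1)
            acc = np.zeros(2 * n, dtype=np.int64)
            for i, fi in enumerate(f):
                acc[i:i + n] = (acc[i:i + n] + fi * g) % q
            return (acc[:n] - acc[n:]) % q          # reduce using x^n = -1

        rng = np.random.default_rng(3)
        a = rng.integers(0, q, size=n)              # public ring element
        sA = rng.integers(0, 5, size=n)             # Alice's small secret
        sB = rng.integers(0, 5, size=n)             # Bob's small secret

        pA, pB = mul(a, sA), mul(a, sB)             # exchanged messages
        kA, kB = mul(pB, sA), mul(pA, sB)           # both compute a·sA·sB
        assert np.array_equal(kA, kB)               # equal only because noise is omitted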
