
    Lattice-based Signatures with Tight Adaptive Corruptions and More

    We construct the first tightly secure signature schemes in the multi-user setting with adaptive corruptions from lattices. In stark contrast to previous tight constructions, whose security is based solely on number-theoretic assumptions, our schemes are based on the Learning with Errors (LWE) assumption, which is conjectured to be post-quantum secure. The security of our scheme is independent of the number of users and signing queries, and it holds in the non-programmable random oracle model. Our LWE-based scheme is compact: its signatures contain only a constant number of lattice vectors. At the core of our construction are a new abstraction of existing lossy identification (ID) schemes using dual-mode commitment schemes and a refinement of the framework of Diemert et al. (PKC 2021), which transforms a lossy ID scheme into a signature scheme using sequential OR proofs. In combination, we obtain a tight generic construction of signatures from dual-mode commitments in the multi-user setting. Improving on the work of Diemert et al., our new approach can be instantiated using not only the LWE assumption but also an isogeny-based assumption. We stress that our LWE-based lossy ID scheme in the intermediate step uses a conceptually different idea from previous lattice-based ones. Of independent interest, we formally rule out the possibility that the aforementioned ``ID-to-Signature'' methodology can work tightly using parallel OR proofs. In addition to the results of Fischlin et al. (EUROCRYPT 2020), our impossibility result shows a qualitative difference between the two forms of OR proofs in terms of tightness.
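
    To make the central abstraction a little more concrete, the following is a minimal, interface-only sketch of the dual-mode commitment syntax the construction builds on. The method names and types are assumptions reflecting the standard notion, not the paper's concrete LWE- or isogeny-based instantiation.

```python
# Interface-only sketch of a dual-mode commitment (assumption: standard syntax,
# not the paper's concrete lattice or isogeny instantiation).
from typing import Protocol, Any


class DualModeCommitment(Protocol):
    def keygen_binding(self) -> Any:
        """Binding mode: each commitment determines the committed message."""
        ...

    def keygen_hiding(self) -> Any:
        """Hiding/lossy mode: commitments statistically hide the message."""
        ...

    def commit(self, ck: Any, message: bytes, randomness: bytes) -> bytes:
        """Commit to `message` under commitment key `ck` using `randomness`."""
        ...

# Security hinges on the two key distributions being computationally
# indistinguishable; switching modes inside the security proof is what
# underlies the lossy ID abstraction used in the paper.
```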

    CRYSTALS-Dilithium: A lattice-based digital signature scheme

    In this paper, we present the lattice-based signature scheme Dilithium, which is a component of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) suite that was submitted to NIST’s call for post-quantum cryptographic standards. The design of the scheme avoids all uses of discrete Gaussian sampling and is easily implementable in constant time. For the same security levels, our scheme has a public key that is 2.5× smaller than those of the previously most efficient lattice-based schemes that did not use Gaussians, while having essentially the same signature size. In addition to the new design, we significantly improve the running time of the main component of many lattice-based constructions, the number-theoretic transform. Our AVX2-based implementation results in a speed-up of roughly a factor of 2 over the best previous algorithms that appear in the literature. The techniques for obtaining this speed-up also have applications to other lattice-based schemes.
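
    Since the abstract singles out the number-theoretic transform (NTT) as the main computational component, here is a minimal, unoptimized sketch of a radix-2 NTT used for polynomial multiplication. The toy parameters (q = 17, n = 4, ω = 4) are illustrative assumptions; Dilithium itself works with q = 8380417, n = 256, a negacyclic NTT over X^256 + 1, and heavily vectorized AVX2 code.

```python
# Minimal radix-2 NTT sketch for cyclic polynomial multiplication mod q.
# Toy parameters (assumption): q = 17, n = 4, omega = 4 is a primitive 4th root of unity mod 17.

def ntt(a, omega, q):
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % q, q)   # NTT of even-index coefficients
    odd = ntt(a[1::2], omega * omega % q, q)    # NTT of odd-index coefficients
    out = [0] * n
    w = 1
    for k in range(n // 2):                      # Cooley-Tukey butterfly
        t = w * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + n // 2] = (even[k] - t) % q
        w = w * omega % q
    return out

def intt(a, omega, q):
    n = len(a)
    n_inv = pow(n, -1, q)
    return [x * n_inv % q for x in ntt(a, pow(omega, -1, q), q)]

q, omega = 17, 4
a, b = [1, 2, 3, 4], [5, 6, 7, 8]
# Pointwise product in the NTT domain equals cyclic convolution in the coefficient domain.
c = intt([x * y % q for x, y in zip(ntt(a, omega, q), ntt(b, omega, q))], omega, q)
schoolbook = [sum(a[i] * b[(k - i) % 4] for i in range(4)) % q for k in range(4)]
assert c == schoolbook
```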

    Lattice Signatures Without Trapdoors

    We provide an alternative method for constructing lattice-based digital signatures which does not use the ``hash-and-sign'' methodology of Gentry, Peikert, and Vaikuntanathan (STOC 2008). Our resulting signature scheme is secure, in the random oracle model, based on the worst-case hardness of the $\tilde{O}(n^{1.5})$-SIVP problem in general lattices. The secret key, public key, and signature size of our scheme are smaller than in all previous instantiations of hash-and-sign signatures, and our signing algorithm is also much simpler, requiring just a few matrix-vector multiplications and rejection-sampling steps. We then also show that by slightly changing the parameters, one can obtain even more efficient signatures that are based on the hardness of the Learning With Errors problem. Our construction transfers naturally to the ring setting, where the size of the public and secret keys can be significantly shrunk, which results in the most practical provably secure lattice-based signature scheme to date.
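
    As a rough illustration of the "few matrix-vector multiplications plus rejection sampling" signing flow described above, here is a toy, deliberately insecure sketch of the Fiat-Shamir-with-aborts pattern. All dimensions, the box-shaped (rather than Gaussian) distributions, and the hash-to-challenge mapping are simplifying assumptions, not the paper's actual scheme or parameters.

```python
# Toy Fiat-Shamir-with-aborts signature sketch (assumption: insecure demo parameters,
# uniform "box" sampling instead of the paper's discrete Gaussians).
import hashlib, secrets

q, n, m, k = 2**13, 8, 16, 4     # toy dimensions (assumption)
ETA, B = 1, 200                   # secret-key coefficient bound and masking bound (toy)

def rand_small(bound):
    return secrets.randbelow(2 * bound + 1) - bound

def hash_to_challenge(w, msg):
    h = hashlib.sha3_256(repr(w).encode() + msg).digest()
    return [(h[i] % 3) - 1 for i in range(k)]          # challenge in {-1, 0, 1}^k

def keygen():
    A = [[secrets.randbelow(q) for _ in range(m)] for _ in range(n)]
    S = [[rand_small(ETA) for _ in range(k)] for _ in range(m)]      # small secret
    T = [[sum(A[r][i] * S[i][c] for i in range(m)) % q for c in range(k)] for r in range(n)]
    return (A, T), S

def sign(S, pk, msg):
    A, _ = pk
    while True:                                         # abort and retry: the rejection step
        y = [rand_small(B) for _ in range(m)]           # masking vector
        w = [sum(A[r][i] * y[i] for i in range(m)) % q for r in range(n)]
        c = hash_to_challenge(w, msg)
        z = [y[i] + sum(S[i][j] * c[j] for j in range(k)) for i in range(m)]
        if all(abs(zi) <= B - ETA * k for zi in z):     # keep z independent of S
            return z, c

def verify(pk, msg, sig):
    A, T = pk
    z, c = sig
    if not all(abs(zi) <= B - ETA * k for zi in z):
        return False
    w = [(sum(A[r][i] * z[i] for i in range(m)) - sum(T[r][j] * c[j] for j in range(k))) % q
         for r in range(n)]
    return hash_to_challenge(w, msg) == c               # A*z - T*c = A*y (mod q)

pk, sk = keygen()
assert verify(pk, b"hello", sign(sk, pk, b"hello"))
```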

    On Improving Communication Complexity in Cryptography

    Cryptography has grown to be much more than "the study of secret writing". Modern cryptography is concerned with establishing properties such as privacy, integrity, and authenticity in protocols for secure communication and computation. This comes at a price: cryptographic tools usually introduce an overhead, both in terms of communication complexity (that is, the number and size of messages transmitted) and computational efficiency (that is, the time and memory required). As communication between the parties involved is the bottleneck in many settings, this thesis is concerned with improving communication complexity in cryptographic protocols.

    One direction towards this goal is scalable cryptography: in many cryptographic schemes currently deployed, security degrades linearly with the number of instances (e.g. encrypted messages) in the system. As this number can be huge in contexts like cloud computing, the parameters of the scheme have to be chosen considerably larger, and in particular depending on the expected number of instances in the system, to maintain security guarantees. We advance the state of the art in scalable cryptography by constructing schemes whose security guarantees are independent of the number of instances. This allows smaller parameters to be chosen, even when the expected number of instances is immense.
    - We construct the first scalable encryption scheme with security against active adversaries which has both compact public keys and compact ciphertexts. In particular, we significantly reduce the size of the public key to only about 3% of the key size of the previously most efficient scalable encryption scheme. (Gay, Hofheinz, and Kohl, CRYPTO, 2017)
    - We present a scalable structure-preserving signature scheme which improves on the previously best construction in terms of both public-key and signature size, to about 40% and 56% of those sizes, respectively. (Gay, Hofheinz, Kohl, and Pan, EUROCRYPT, 2018)

    Another important area of cryptography is secure multi-party computation, where the goal is to jointly evaluate some function while keeping each party's input private. In traditional approaches to secure multi-party computation, either the communication complexity scales linearly in the size of the function, or the computational efficiency is poor. To overcome this issue, Boyle, Gilboa, and Ishai (CRYPTO, 2016) introduced the notion of homomorphic secret sharing. Here, inputs are shared between parties such that no party learns anything about the input, and such that the parties can locally evaluate functions on the shares. Homomorphic secret sharing implies secure computation in which the communication complexity depends only on the size of the inputs, which is typically much smaller than the size of the function. A different approach towards efficient secure computation is to split the protocol into an input-independent preprocessing phase, where long correlated strings are generated, and a very efficient online phase. One example of a useful correlation is authenticated Beaver triples, which allow efficient multiplications to be performed in the online phase such that the privacy of the inputs is preserved and parties deviating from the protocol can be detected (a minimal sketch of Beaver-triple multiplication follows this abstract). The currently most efficient protocols implementing the preprocessing phase require communication linear in the number of triples to be generated. This typically results in high communication costs, as the online phase requires at least one authenticated Beaver triple per multiplication.

    We advance the state of the art regarding efficient protocols for secure computation with low communication complexity as follows.
    - We construct the first homomorphic secret sharing scheme for computing arbitrary functions in NC1 (that is, functions computable by circuits of logarithmic depth) which supports message spaces of arbitrary size, has only negligible correctness error, and does not require expensive multiplications on ciphertexts. (Boyle, Kohl, and Scholl, EUROCRYPT, 2019)
    - We introduce the notion of a pseudorandom correlation generator for general correlations. Pseudorandom correlation generators allow short correlated seeds to be locally expanded into long pseudorandom correlated strings. We show that pseudorandom correlation generators can replace the preprocessing phase in many protocols, leading to a preprocessing phase with sublinear communication complexity. We show connections to homomorphic secret sharing schemes and give the first instantiation of pseudorandom correlation generators for authenticated Beaver triples at reasonable computational efficiency. (Boyle, Couteau, Gilboa, Ishai, Kohl, and Scholl, CRYPTO, 2019)
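
    The following is a minimal two-party sketch of how a Beaver triple is consumed in the online phase to multiply additively shared values. It uses a plain (unauthenticated) triple over a toy prime modulus, so the MACs that make the triples "authenticated" and allow detecting deviating parties are omitted; all names and parameters are illustrative assumptions.

```python
# Two-party Beaver-triple multiplication over Z_p (assumption: toy prime, no MACs/authentication).
import secrets

P = 2**61 - 1                      # toy prime modulus (assumption)

def share(v):
    """Additively share v between two parties."""
    r = secrets.randbelow(P)
    return [r, (v - r) % P]

def reconstruct(sh):
    return sum(sh) % P

def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
    # The preprocessed triple satisfies c = a * b. The parties open the masked values
    # d and e; since a and b are uniform and secret, d and e reveal nothing about x and y.
    d = reconstruct([(x_sh[i] - a_sh[i]) % P for i in range(2)])   # d = x - a
    e = reconstruct([(y_sh[i] - b_sh[i]) % P for i in range(2)])   # e = y - b
    # xy = c + d*b + e*a + d*e, computed locally on shares; only party 0 adds the public d*e.
    return [(c_sh[i] + d * b_sh[i] + e * a_sh[i] + (d * e if i == 0 else 0)) % P
            for i in range(2)]

x, y = 12345, 67890
a, b = secrets.randbelow(P), secrets.randbelow(P)
c = a * b % P
z_sh = beaver_multiply(share(x), share(y), share(a), share(b), share(c))
assert reconstruct(z_sh) == x * y % P
```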

    (One) Failure Is Not an Option: Bootstrapping the Search for Failures in Lattice-Based Encryption Schemes

    Lattice-based encryption schemes are often subject to the possibility of decryption failures, in which valid encryptions are decrypted incorrectly. Such failures, in large number, leak information about the secret key, enabling an attack strategy alternative to pure lattice reduction. Extending the ``failure boosting'' technique of D'Anvers et al. (PKC 2019), we propose an approach that we call ``directional failure boosting'' that uses previously found ``failing ciphertexts'' to accelerate the search for new ones. We analyse in detail the case where the lattice is defined over polynomial ring modules quotiented by $X^N + 1$, and demonstrate it on a simple Mod-LWE-based scheme parametrized à la Kyber768/Saber. We show that, using our technique, for a given secret key (single-target setting), the cost of searching for additional failing ciphertexts after one or more have already been found can be sped up dramatically. We thus demonstrate that, in this single-target model, these schemes should be designed so that it is hard to even obtain one decryption failure. Moreover, in a wider security model where there are many target secret keys (multi-target setting), our attack greatly improves over the state of the art.
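
    To make the notion of a decryption failure concrete, the toy Monte-Carlo sketch below treats decryption of an LWE-style ciphertext as failing whenever the accumulated noise exceeds q/4. The modulus, noise width, and threshold are illustrative assumptions; the noise is deliberately exaggerated so that failures become visible, whereas real parameter sets make them cryptographically rare, which is exactly why failure boosting is needed to find even one.

```python
# Toy illustration of decryption failures (assumption: simplified model where a
# ciphertext decrypts correctly iff the combined noise stays below q/4).
import random

q = 3329            # toy modulus (assumption, chosen only for familiarity)
SIGMA = 320.0       # deliberately exaggerated noise width so failures show up (assumption)
TRIALS = 100_000

failures = 0
for _ in range(TRIALS):
    e_total = random.gauss(0, SIGMA)      # stand-in for the combined key/ciphertext noise
    if abs(e_total) >= q / 4:             # the message bit is flipped: a decryption failure
        failures += 1

print(f"estimated failure rate: {failures / TRIALS:.4%}")
# Real schemes pick parameters so this rate is negligible; the attack in the paper is
# about finding such rare failing ciphertexts faster once a few are already known.
```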

    Quantum-secure message authentication via blind-unforgeability

    Formulating and designing unforgeable authentication of classical messages in the presence of quantum adversaries has been a challenge, as the familiar classical notions of unforgeability do not directly translate into meaningful notions in the quantum setting. A particular difficulty is how to fairly capture the notion of "predicting an unqueried value" when the adversary can query in quantum superposition. In this work, we uncover serious shortcomings in existing approaches and propose a new definition. We then support its viability by a number of constructions and characterizations. Specifically, we demonstrate a function which is secure according to the existing definition by Boneh and Zhandry, but is clearly vulnerable to a quantum forgery attack, whereby a query supported only on inputs that start with 0 divulges the value of the function on an input that starts with 1. We then propose a new definition, which we call "blind-unforgeability" (or BU). This notion matches "intuitive unpredictability" in all examples studied thus far. It defines a function to be predictable if there exists an adversary which can use "partially blinded" oracle access to predict values in the blinded region. Our definition (BU) coincides with standard unpredictability (EUF-CMA) in the classical-query setting. We show that quantum-secure pseudorandom functions are BU-secure MACs. In addition, we show that BU satisfies a composition property (Hash-and-MAC) using "Bernoulli-preserving" hash functions, a new notion which may be of independent interest. Finally, we show that BU is amenable to security reductions by giving a precise bound on the extent to which quantum algorithms can deviate from their usual behavior due to the blinding in the BU security experiment.
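
    A purely classical sketch of the "partially blinded" oracle at the heart of the BU definition is given below: each message is placed in the blinded region independently with probability ε, and queries inside that region return ⊥. The stand-in MAC, the blinding probability, and the lazy sampling are illustrative assumptions; the actual definition lets the adversary query this oracle in quantum superposition.

```python
# Classical sketch of a partially blinded MAC oracle (assumption: stand-in SHA3-based MAC,
# illustrative blinding probability; the real BU game allows superposition queries).
import hashlib, random

EPSILON = 0.25                              # blinding probability epsilon (assumption)

def mac(key: bytes, msg: bytes) -> bytes:
    return hashlib.sha3_256(key + msg).digest()

def make_blinded_oracle(key: bytes, epsilon: float, seed: int):
    rng = random.Random(seed)
    blinded = {}                            # lazily sampled blinding set B_epsilon
    def oracle(msg: bytes):
        if msg not in blinded:              # each message is blinded independently w.p. epsilon
            blinded[msg] = rng.random() < epsilon
        return None if blinded[msg] else mac(key, msg)
    return oracle

# The adversary interacts with the blinded oracle and wins the BU game by producing a
# valid (message, tag) pair for a message that lies inside the blinded region.
oracle = make_blinded_oracle(b"secret key", EPSILON, seed=0)
print(oracle(b"a query that may or may not be blinded"))
```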

    Weak is Better: Tightly Secure Short Signatures from Weak PRFs

    The Boyen-Li signature scheme [Asiacrypt'16] is a major theoretical breakthrough. Via a clever homomorphic evaluation of a pseudorandom function over their verification key, they achieve a reduction loss in security that is linear in the underlying security parameter and entirely independent of the number of message queries made, while still maintaining short signatures (consisting of a single short lattice vector). All previous schemes with such an independent reduction loss in security required a linear number of such lattice vectors, and even in the classical world, the only schemes achieving short signatures relied on non-standard assumptions. We improve on their result, providing a verification key smaller by a linear factor, a significantly tighter reduction with only a constant loss, and signing and verification algorithms that could plausibly run in about 1 second. Our main idea is to change the scheme in a manner that allows us to replace the pseudorandom function evaluation with an evaluation of a much more efficient weak pseudorandom function. As a matter of independent interest, we give an improved method of randomized inversion of the G gadget matrix [MP12], which reduces the noise growth rate in homomorphic evaluations performed in a large number of lattice-based cryptographic schemes, without incurring the high cost of sampling discrete Gaussians.
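
    For context on the gadget matrix mentioned above, here is a small sketch of the gadget vector g = (1, 2, 4, ..., 2^{k-1}) and its standard deterministic inverse (plain bit decomposition). The paper's contribution is a randomized inversion with smaller, better-distributed output; this sketch shows only the baseline deterministic decomposition it improves on, with toy parameters as assumptions.

```python
# Gadget vector g = (1, 2, 4, ..., 2^{k-1}) and its deterministic bit-decomposition inverse.
# Toy parameters (assumption): q a power of two; the paper's randomized inversion is NOT shown.
import numpy as np

k = 10
q = 2 ** k
g = np.array([2 ** i for i in range(k)], dtype=np.int64)

def g_inverse_bits(u: int) -> np.ndarray:
    """Return the 0/1 vector x with <g, x> = u (mod q): plain binary decomposition."""
    return np.array([(u >> i) & 1 for i in range(k)], dtype=np.int64)

u = 723
x = g_inverse_bits(u % q)
assert int(g @ x) % q == u % q        # the decomposition really is a preimage under g
# The full gadget matrix is G = I_n ⊗ g^T; randomized inversion replaces the fixed 0/1
# preimage with a small random preimage, which slows noise growth in homomorphic evaluations.
```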

    GCKSign: Simple and Efficient Signatures from Generalized Compact Knapsacks

    In 2009, Lyubashevsky proposed a lattice-based signature scheme by applying the Fiat-Shamir transformation and proved its security under the generalized compact knapsack (GCK) problem. This scheme has a simple structure but large signature and key sizes, due to the requirements of its security reduction. Dilithium, which was submitted to the NIST Post-Quantum Cryptography standardization and selected as one of the final candidates, is an improvement of Lyubashevsky's signature scheme that decreases key and signature sizes by modifying the form of the public key and including additional steps in the key generation, signing, and verification algorithms. Thus, Dilithium has a more complex structure to implement compared to Lyubashevsky's scheme. To combine the strengths of both signature schemes, we modify Lyubashevsky's signature scheme and present a new security proof that removes this requirement. As a result, we propose GCKSign, a simple and practical signature scheme based on the hardness of a new GCK assumption, called target-modified one-wayness of the GCK function. Compared to Dilithium at NIST security level III, the signature size of our scheme decreases by 40 percent, the sum of the signature and public key sizes decreases by 25 percent, and the secret key size decreases by 90 percent. Furthermore, thanks to the simplicity of our structure, the key generation, signing, and verification algorithms of our scheme run 2.4×, 1.7×, and 2.0× faster than those of Dilithium, respectively.

    Frontiers in Lattice Cryptography and Program Obfuscation

    In this dissertation, we explore the frontiers of the theory of cryptography along two lines. In the first direction, we explore Lattice Cryptography, which is the primary sub-area of post-quantum cryptographic research. Our first contribution is the construction of a deniable attribute-based encryption scheme from lattices. A deniable encryption scheme is secure not only against eavesdropping attacks, as required by semantic security, but also against stronger coercion attacks performed after the fact. An attribute-based encryption scheme allows ``fine-grained'' access to ciphertexts, allowing a decryption access policy to be embedded in ciphertexts and keys. We achieve both properties simultaneously from lattices for the first time. Our second contribution is the construction of a digital signature scheme that enjoys both short signatures and a completely tight security reduction from lattices. As a matter of independent interest, we give an improved method of randomized inversion of the G gadget matrix, which reduces the noise growth rate in homomorphic evaluations performed in a large number of lattice-based cryptographic schemes, without incurring the high cost of sampling discrete Gaussians. In the second direction, we explore Cryptographic Program Obfuscation. A program obfuscator is a type of cryptographic software compiler that outputs executable code with the guarantee that ``whatever can be hidden about the internal workings of program code, is hidden.'' Indeed, program obfuscation can be viewed as a ``universal and cryptographically complete'' tool. Our third contribution is the first full-scale implementation of secure program obfuscation in software. Our toolchain takes code written in a C-like programming language, specialized for cryptography, and produces secure, obfuscated software. Our fourth contribution is a new cryptanalytic attack against a variety of ``early'' program obfuscation candidates. We provide a general, efficiently testable property for any two branching programs, called partial inequivalence, which we show is sufficient for launching an ``annihilation attack'' against several obfuscation candidates based on Garg-Gentry-Halevi multilinear maps.