
    Additively Homomorphic Encryption with d-Operand Multiplications

    The search for encryption schemes that allow the evaluation of functions (or circuits) over encrypted data has attracted a lot of attention since the seminal work on this subject by Rivest, Adleman and Dertouzos in 1978. In this work we define a theoretical object, chained encryption schemes, which allow an efficient evaluation of polynomials of degree d over encrypted data. Chained encryption schemes are generically constructed by concatenating cryptosystems with the appropriate homomorphic properties; such schemes are common in lattice-based cryptography. As a particular instantiation we propose a chained encryption scheme whose IND-CPA security is based on a worst-case/average-case reduction from uSVP.

    Somewhat Homomorphic Encryption based on Random Codes

    We present a secret-key encryption scheme based on random rank-metric ideal linear codes with a simple decryption circuit. It supports unlimited homomorphic additions and plaintext absorptions, as well as a fixed but arbitrary number of homomorphic multiplications. We study a candidate bootstrapping algorithm that requires no multiplications, only additions and plaintext absorptions. Bootstrapping is therefore very efficient in our scheme, whereas it is usually the main performance bottleneck in other fully homomorphic encryption schemes. However, the security reduction of our scheme restricts the number of independent ciphertexts that can be published. In particular, this prevents securely evaluating the bootstrapping algorithm, as the number of ciphertexts in the key-switching material is too large. Our scheme is nonetheless the first somewhat homomorphic encryption scheme based on random ideal codes and a first step towards full homomorphism. Random ideal codes give stronger security guarantees than existing constructions based on highly structured codes. We give concrete parameters showing that our scheme achieves competitive sizes and performance, with a key size of 3.7 kB and a ciphertext size of 0.9 kB when a single multiplication is allowed.

    Sampling From Arbitrary Centered Discrete Gaussians For Lattice-Based Cryptography

    Non-centered discrete Gaussian sampling is a fundamental building block in many lattice-based cryptographic constructions, such as signature and identity-based encryption schemes. On the one hand, center-dependent approaches, e.g. cumulative distribution tables (CDT), Knuth-Yao, the alias method, discrete Ziggurat and their variants, are the fastest known algorithms for sampling from a discrete Gaussian distribution. However, they require a relatively large precomputed table for each possible real center in [0,1), making them impractical for non-centered discrete Gaussian sampling. On the other hand, rejection sampling can sample from a discrete Gaussian distribution for any real center without prohibitive precomputation cost, but it needs costly floating-point arithmetic and several trials per sample. In this work, we study how to reduce the number of centers for which tables must be precomputed, and propose a non-centered CDT algorithm whose precomputed tables have practical size and which is as fast as its centered variant. Finally, we provide experimental results for our open-source C++ implementation indicating that our sampler increases the rate of Peikert’s algorithm for sampling from arbitrary lattices (and cosets) by a factor of 3, with precomputation storage up to 6.2 MB.
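
    As a sketch of the CDT technique this abstract builds on, the following minimal C++ sampler precomputes the cumulative distribution of a truncated centered discrete Gaussian and answers each query with a binary search. The tail cut, names and floating-point arithmetic are illustrative assumptions, not the paper's implementation (which, notably, targets the harder non-centered case).

    // Minimal centered CDT sampler (illustrative sketch, not the paper's code).
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <random>
    #include <vector>

    class CDTSampler {
        std::vector<double> cdt_;  // cumulative probabilities for x = -tail..tail
        int tail_;                 // support truncated to [-tail, tail]
    public:
        explicit CDTSampler(double sigma, double tailcut = 10.0)
            : tail_(static_cast<int>(std::ceil(tailcut * sigma))) {
            std::vector<double> rho;
            double sum = 0.0;
            for (int x = -tail_; x <= tail_; ++x) {
                rho.push_back(std::exp(-static_cast<double>(x) * x / (2.0 * sigma * sigma)));
                sum += rho.back();
            }
            double acc = 0.0;
            for (double r : rho) {  // normalize and accumulate into the CDT
                acc += r / sum;
                cdt_.push_back(acc);
            }
        }
        // Draw u uniform in [0,1) and binary-search the table for its position.
        int sample(std::mt19937_64& rng) const {
            std::uniform_real_distribution<double> unif(0.0, 1.0);
            auto it = std::lower_bound(cdt_.begin(), cdt_.end(), unif(rng));
            if (it == cdt_.end()) --it;  // guard against rounding at the tail
            return static_cast<int>(it - cdt_.begin()) - tail_;
        }
    };

    A production sampler would replace the doubles with fixed-point integers and the branching search with a constant-time scan; the paper's contribution is making such tables practical when the center varies.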

    Classification of extremal and s-extremal binary self-dual codes of length 38

    In this paper we classify all extremal and s-extremal binary self-dual codes of length 38. There are exactly 2744 extremal [38,19,8] self-dual codes, two s-extremal [38,19,6] codes, and 1730 s-extremal [38,19,8] codes. We obtain our results from a recursive algorithm used in the recent classification of all extremal self-dual codes of length 36, and from a generalization of this recursive algorithm for the shadow. The classification of s-extremal [38,19,6] codes completes the classification of all s-extremal codes with d=6.
    Comment: revised version; submitted (4/4/2011) to IEEE Trans. Information and accepted 20/10/201

    NFLlib: NTT-based Fast Lattice Library

    Recent years have witnessed an increased interest in lattice cryptography. Besides its strong security guarantees, its simplicity and versatility make this powerful theoretical tool a promising competitive alternative to classical cryptographic schemes. In this paper, we introduce NFLlib, an efficient and open-source C++ library dedicated to ideal lattice cryptography in the widespread polynomial ring Z_p[x]/(x^n + 1) for n a power of 2. The library combines algorithmic optimizations (Chinese Remainder Theorem, optimized Number Theoretic Transform) with programming optimization techniques (SSE and AVX2 specializations, C++ expression templates, etc.), and will be fully available under the GPL license. The library compares very favorably to other libraries used in ideal lattice cryptography implementations (namely the generic number theory libraries NTL and flint, which implement polynomial arithmetic, and HElib, an optimized library for lattice-based homomorphic encryption): restricting the library to the aforementioned polynomial ring allows gaining several orders of magnitude in efficiency.
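
    To make the ring arithmetic concrete, here is a hedged O(n^2) reference for multiplication in Z_p[x]/(x^n + 1) (negacyclic convolution); NFLlib computes the same product in O(n log n) via the CRT and NTT optimizations mentioned above. The function name and the use of __uint128_t are my own illustrative choices, not NFLlib's API.

    // Schoolbook multiplication in Z_p[x]/(x^n + 1); a and b have equal length n.
    #include <cstdint>
    #include <vector>

    std::vector<uint64_t> negacyclic_mul(const std::vector<uint64_t>& a,
                                         const std::vector<uint64_t>& b,
                                         uint64_t p) {
        const size_t n = a.size();  // n a power of 2, coefficients reduced mod p
        std::vector<uint64_t> c(n, 0);
        for (size_t i = 0; i < n; ++i) {
            for (size_t j = 0; j < n; ++j) {
                uint64_t prod = static_cast<uint64_t>(
                    static_cast<__uint128_t>(a[i]) * b[j] % p);
                size_t k = i + j;
                if (k < n)                  // ordinary coefficient
                    c[k] = (c[k] + prod) % p;
                else                        // x^n = -1: wrap around with a sign flip
                    c[k - n] = (c[k - n] + p - prod) % p;
            }
        }
        return c;
    }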

    Improving Additive and Multiplicative Homomorphic Encryption Schemes Based on Worst-Case Hardness Assumptions

    In CRYPTO 2010, Aguilar et al. proposed a somewhat homomorphic encryption scheme, i.e. an encryption scheme allowing the computation of a limited number of sums and products over encrypted data, with a security reduction from LWE over general lattices. General lattices (as opposed to ideal lattices) do not have an inherent multiplicative structure but, using a tensor product, Aguilar et al. managed to obtain a scheme allowing products with a polylogarithmic number of operands. In this paper we present an alternative construction allowing products with polynomially many operands while preserving the security reductions of the initial scheme. Unfortunately, despite this improvement, our construction seems to be incompatible with Gentry's seminal transformation for obtaining fully homomorphic encryption schemes. Recently, Brakerski et al. used the tensor product approach introduced by Aguilar et al. in a new way which radically improves the performance of the resulting scheme. Based on this approach, and using two nice optimizations, their scheme is able to evaluate products with exponentially many operands and can be transformed into an efficient fully homomorphic encryption scheme, while being based on general lattice problems. Even if these results outperform the construction presented here, we believe the modifications we suggest for Aguilar et al.'s schemes are of independent interest.
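
    As a hedged sketch of the tensor-product idea this abstract refers to, in generic LWE-style notation of my choosing rather than either paper's exact scheme: if two ciphertexts decrypt as noisy inner products with a secret s, their tensor decrypts to the product of the plaintexts under s ⊗ s.

    % Schematic tensor-product multiplication (illustrative notation).
    \[
    \langle c_1, s\rangle \equiv m_1 + 2e_1, \qquad
    \langle c_2, s\rangle \equiv m_2 + 2e_2 \pmod{q}
    \]
    \[
    \langle c_1 \otimes c_2,\, s \otimes s\rangle
      = \langle c_1, s\rangle \cdot \langle c_2, s\rangle
      \equiv m_1 m_2 + 2\,(m_1 e_2 + m_2 e_1 + 2 e_1 e_2) \pmod{q}
    \]

    The tensored ciphertext thus encrypts m_1 m_2 at the cost of squaring the dimension and growing the noise, which is what bounds the number of operands a product can take.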

    The Return of the SDitH

    This paper presents a code-based signature scheme based on the well-known syndrome decoding (SD) problem. The scheme builds upon a recent line of research which uses the Multi-Party-Computation-in-the-Head (MPCitH) approach to construct efficient zero-knowledge proofs, such as Syndrome Decoding in the Head (SDitH), and builds signature schemes from them using the Fiat-Shamir transform. At the heart of our proposal is a new approach, Hypercube-MPCitH, to amplify the soundness of any MPC protocol that uses additive secret sharing. An MPCitH protocol with N parties can be repeated D times using parallel composition to reach the same soundness as a protocol run with N^D parties. However, the former comes with D times higher communication costs, often dominated by the D 'auxiliary' states (which in general have a significantly bigger impact on size than random states). Instead, we begin by generating N^D shares, arranged into a D-dimensional hypercube of side N containing only one 'auxiliary' state. We derive from this hypercube D sharings of size N which are used to run D instances of an N-party MPC protocol. Hypercube-MPCitH leads to a protocol with 1/N^D soundness error, requiring N^D offline computation, but only N·D online computation, and only one 'auxiliary' state. As the (potentially offline) share generation phase is generally inexpensive, this leads to trade-offs that are superior to just using parallel composition. Our novel method of share generation and aggregation not only improves certain MPCitH protocols in general but also yields concrete improvements for signature schemes. Specifically, we apply it to the work of Feneuil, Joux, and Rivain (CRYPTO'22) on code-based signatures, and obtain a new signature scheme that achieves an 8.1x improvement in global runtime and a 30x improvement in online runtime for their shortest signature size (8,481 bytes). It is also possible to leverage the fact that most computations are offline to define parameter sets leading to smaller signatures: 6,784 bytes for 26 ms offline and 5,689 bytes for 320 ms offline. For NIST security level 1, the online signature cost is around 3 million cycles (<1 ms on commodity processors), regardless of signature size.
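
    A hedged sketch of the aggregation step, in notation of my own choosing: index the N^D leaf shares by coordinates in {0, ..., N-1}^D; for each dimension d, summing the leaves along the other coordinates yields an N-party additive sharing of the same secret x.

    \[
    S^{(d)}_i \;=\; \sum_{\substack{(i_1,\dots,i_D)\in\{0,\dots,N-1\}^D \\ i_d = i}}
        s_{i_1,\dots,i_D},
    \qquad
    \sum_{i=0}^{N-1} S^{(d)}_i
      \;=\; \sum_{(i_1,\dots,i_D)} s_{i_1,\dots,i_D} \;=\; x
      \quad \text{for every } d
    \]

    Each of the D derived sharings therefore opens to the same secret, so a cheating prover must survive all D instances at once (soundness error 1/N^D), while the online phase emulates only N·D parties.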

    Constant time algorithms for ROLLO-I-128

    In this work, we propose different techniques that can be used to implement ROLLO, and partially RQC, in a standalone, efficient and constant-time library. For simplicity, we focus our attention on one specific instance of this family, ROLLO-I-128. For each of these techniques, we present explicit code (with intrinsics when required) or pseudo-code, together with performance measures showing their impact. More precisely, we use a combination of original and known results and describe procedures for Gaussian reduction of binary matrices, generation of vectors of given rank, multiplication with lazy reduction, and inversion of polynomials in a composite Galois field. We also carry out a global performance analysis to show the impact of these improvements on ROLLO-I-128. Through the SUPERCOP framework, we compare it to other 128-bit-secure KEMs in the NIST competition. To our knowledge, this is the first optimized, fully constant-time implementation of ROLLO-I-128.
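
    As a hedged illustration of the branch-free style such a constant-time library depends on (my own minimal example, not code from the paper): secret-dependent choices are made with masks instead of branches, as needed e.g. when pivoting during constant-time Gaussian reduction of binary matrices.

    #include <cstdint>

    // Expand a bit (0 or 1) into an all-zeros or all-ones mask.
    static inline uint64_t ct_mask(uint64_t bit) { return 0 - bit; }

    // Return a if bit == 1, b if bit == 0, without branching on the secret bit.
    static inline uint64_t ct_select(uint64_t a, uint64_t b, uint64_t bit) {
        uint64_t m = ct_mask(bit);
        return (a & m) | (b & ~m);
    }

    // Conditionally swap two row words when bit == 1, in constant time.
    static inline void ct_cswap(uint64_t& x, uint64_t& y, uint64_t bit) {
        uint64_t t = ct_mask(bit) & (x ^ y);
        x ^= t;
        y ^= t;
    }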

    Batch Signatures, Revisited

    We revisit batch signatures (previously considered in a draft RFC, and used in multiple recent works), where a single, potentially expensive, inner digital signature authenticates a Merkle tree constructed from many messages. We formalise a construction and prove its unforgeability and privacy properties. We also show that batch signing allows us to scale slow signing algorithms, such as those recently selected for standardisation as part of NIST's post-quantum project, to high throughput, with a mild increase in latency. For the example of Falcon-512 in TLS, we can increase the number of connections per second by a factor of 3.2, at the cost of a ~14% increase in signature size and a ~25% increase in median latency, where both variants are run on the same 30-core server. We also discuss applications where batch signatures allow us to increase throughput and save bandwidth. For example, again for Falcon-512, once one batch signature is available, the additional bandwidth for each of the remaining N-1 messages is only 82 bytes.
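
    To make the construction concrete, here is a minimal, hedged C++ sketch of the Merkle-tree half of batch signing: hash the queued messages into leaves, combine them pairwise up to a root, and let the expensive inner scheme (e.g. Falcon-512) sign only that root. std::hash is a non-cryptographic stand-in for a real hash such as SHA-256, and all names are illustrative rather than taken from the paper.

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Toy two-to-one compression; a real batch signer would use SHA-256 here.
    static uint64_t H(uint64_t l, uint64_t r) {
        return std::hash<std::string>{}(std::to_string(l) + "|" + std::to_string(r));
    }

    // Merkle root over the batch's leaf hashes; one inner signature on this
    // root authenticates every queued message.
    static uint64_t merkle_root(std::vector<uint64_t> level) {
        if (level.empty()) return 0;
        while (level.size() > 1) {
            if (level.size() % 2 == 1)      // duplicate the last node on odd levels
                level.push_back(level.back());
            std::vector<uint64_t> next;
            for (size_t i = 0; i + 1 < level.size(); i += 2)
                next.push_back(H(level[i], level[i + 1]));
            level = std::move(next);
        }
        return level[0];
    }

    Each verifier then recomputes the root from its message's leaf hash and a logarithmic-length authentication path, which is why the per-message bandwidth overhead reported above stays small.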