61 research outputs found

    NewHope: A Mobile Implementation of a Post-Quantum Cryptographic Key Encapsulation Mechanism

    Get PDF
    Anticipating the appearance of large-scale quantum computers by 2036 [34], which will threaten widely used asymmetric algorithms, the National Institute of Standards and Technology (NIST) launched a Post-Quantum Cryptography Standardization Project to find quantum-secure alternatives. The NewHope post-quantum cryptography (PQC) key encapsulation mechanism (KEM) is the only Round 2 candidate to simultaneously achieve small key values through the use of a security problem with sufficient confidence in its security, while mitigating any known vulnerabilities. This research contributes to the NIST project's overall goal by assessing the platform flexibility and resource requirements of the NewHope KEMs on an Android mobile device. The resource requirements analyzed are transmission size as well as scheme runtime, central processing unit (CPU), memory, and energy usage. Results from each NewHope KEM instantiation are compared amongst each other, to a baseline application, and to results from previous work. The NewHope PQC KEM was demonstrated to have sufficient flexibility for mobile implementation, competitive performance with other PQC KEMs, and competitive scheme runtime with current key exchange algorithms.
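
    The resource profiling described above is platform-specific (Android tooling for CPU and energy), but the shape of a per-operation measurement is simple to illustrate. The Python sketch below times a stand-in operation and records its peak memory allocation; the placeholder function and run count are illustrative and are not NewHope or the paper's actual harness.

        # Minimal sketch of per-operation profiling (runtime and peak memory).
        # The operation below is a stand-in for a KEM step (keygen, encapsulation,
        # or decapsulation), not NewHope itself.
        import hashlib
        import os
        import time
        import tracemalloc

        def placeholder_kem_operation():
            # Hypothetical stand-in; a real benchmark would call the KEM implementation.
            return hashlib.sha256(os.urandom(1024)).digest()

        def profile(operation, runs=1000):
            tracemalloc.start()
            start = time.perf_counter()
            for _ in range(runs):
                operation()
            avg_seconds = (time.perf_counter() - start) / runs
            _, peak_bytes = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            return avg_seconds, peak_bytes

        avg, peak = profile(placeholder_kem_operation)
        print(f"avg runtime: {avg * 1e6:.1f} us, peak allocation: {peak} bytes")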

    Information- and Coding-Theoretic Analysis of the RLWE Channel

    Full text link
    Several cryptosystems based on the Ring Learning with Errors (RLWE) problem have been proposed within the NIST post-quantum cryptography standardization process, e.g. NewHope. Furthermore, there are systems like Kyber which are based on the closely related MLWE assumption. Both previously mentioned schemes feature a non-zero decryption failure rate (DFR). The combination of encryption and decryption for these kinds of algorithms can be interpreted as data transmission over noisy channels. To the best of our knowledge this paper is the first work that analyzes the capacity of this channel. We show how to modify the encryption schemes such that the input alphabets of the corresponding channels are increased. In particular, we present lower bounds on their capacities which show that the transmission rate can be significantly increased compared to standard proposals in the literature. Furthermore, under the common assumption of stochastically independent coefficient failures, we give lower bounds on achievable rates based on both the Gilbert-Varshamov bound and concrete code constructions using BCH codes. By means of our constructions, we can increase the total bitrate (by a factor of 1.84 for Kyber and by a factor of 7 for NewHope) while guaranteeing the same DFR. Moreover, for the same bitrate, we can significantly reduce the DFR for all schemes considered in this work (e.g., for NewHope from $2^{-216}$ to $2^{-12769}$).
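
    To make the coding argument concrete, the sketch below works through the independence assumption mentioned above: if each coefficient fails independently with probability p, an uncoded block fails as soon as one coefficient flips, while a t-error-correcting block code of length n fails only when more than t coefficients flip. The value of p and the code parameters are illustrative, not figures from the paper.

        # Toy decryption-failure calculation under stochastically independent
        # coefficient failures; p, k, n, t below are illustrative only.
        from math import comb

        def uncoded_dfr(p, k):
            # P(at least one of k independent coefficients fails)
            return 1 - (1 - p) ** k

        def coded_dfr(p, n, t):
            # P(more than t of n independent coefficients fail): binomial tail
            return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(t + 1, n + 1))

        p = 1e-5                       # illustrative per-coefficient failure probability
        print(uncoded_dfr(p, 256))     # about 2.6e-3
        print(coded_dfr(p, 255, 8))    # with a t=8 code of length 255: many orders of magnitude smaller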

    When Bad News Become Good News: Towards Usable Instances of Learning with Physical Errors

    Get PDF
    Hard physical learning problems have been introduced as an alternative option to implement cryptosystems based on hard learning problems. Their high-level idea is to use inexact computing to generate erroneous computations directly, rather than to first compute correctly and add errors afterwards. Previous works focused on the applicability of this idea to the Learning Parity with Noise (LPN) problem as a first step, and formalized it as Learning Parity with Physical Noise (LPPN). In this work, we generalize it to the Learning With Errors (LWE) problem, formalized as Learning With Physical Errors (LWPE). We first show that the direct application of the design ideas used for LPPN prototypes leads to a new source of (mathematical) data dependencies in the error distributions that can reduce the security of the underlying problem. We then show that design tweaks can be used to avoid this issue, making LWPE samples natively robust against such data dependencies. We additionally put forward that these ideas open a quite wide design space that could make hard physical learning problems relevant in various applications. We conclude by presenting a first prototype FPGA design confirming our claims.
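
    For readers unfamiliar with the baseline construction, the sketch below shows a single classical LWE sample, where the error is sampled explicitly and added after an exact inner product; the LWPE idea replaces that explicit sampling with errors produced directly by inexact hardware. The parameters are illustrative only.

        # One classical LWE sample: b = <a, s> + e (mod q), with the error e
        # sampled explicitly. In LWPE the explicit sampling of e disappears:
        # the inner product is evaluated by inexact circuitry whose physical
        # faults play the role of the error term. Parameters are illustrative.
        import random

        q, n = 3329, 16
        s = [random.randrange(q) for _ in range(n)]        # secret vector
        a = [random.randrange(q) for _ in range(n)]        # public vector
        e = random.choice([-2, -1, -1, 0, 0, 0, 1, 1, 2])  # small explicit error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        print(a, b)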

    Tight bound on NewHope failure probability

    Get PDF
    The NewHope Key Encapsulation Mechanism (KEM) was presented at USENIX 2016 by Alkim et al. and is one of the remaining lattice-based candidates in the post-quantum standardization process initiated by NIST. However, despite the relative simplicity of the protocol, the bound on the decapsulation failure probability resulting from the original analysis is not tight. In this work we refine this analysis to get a tight upper bound on this probability, which happens to be much lower than what was originally evaluated. As a consequence we propose a set of alternative parameters, increasing the security and the compactness of the scheme. However, using a smaller modulus prevents the use of a full NTT algorithm to perform multiplications of elements in dimension 512 or 1024. Nonetheless, similarly to previous works, we combine different multiplication algorithms and show that our new parameters are competitive on a constant-time vectorized implementation. Our most compact parameters bring a speed-up of 17% (resp. 11%) in performance, while saving more than 19% in bandwidth and increasing the security by 10% (resp. 7%) in dimension 512 (resp. 1024).
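
    The constraint on the modulus mentioned above can be stated concretely: a full negacyclic NTT in dimension n requires q ≡ 1 (mod 2n), which the original NewHope modulus q = 12289 satisfies for both n = 512 and n = 1024. The snippet below only checks this condition; it does not reproduce the paper's alternative parameters.

        # Check whether a modulus supports a full negacyclic NTT of dimension n,
        # i.e. whether q ≡ 1 (mod 2n). The third modulus is shown for contrast
        # and is not one of the paper's proposed parameters.
        def supports_full_negacyclic_ntt(q, n):
            return (q - 1) % (2 * n) == 0

        print(supports_full_negacyclic_ntt(12289, 1024))  # True  (original NewHope)
        print(supports_full_negacyclic_ntt(12289, 512))   # True
        print(supports_full_negacyclic_ntt(3329, 1024))   # False (a smaller modulus failing the condition)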

    Domain-specific Accelerators for Ideal Lattice-based Public Key Protocols

    Get PDF
    Post-quantum Lattice-Based Cryptography (LBC) schemes are increasingly gaining attention in traditional and emerging security problems, such as encryption, digital signatures, key exchange, and homomorphic encryption, to address the security needs of both short- and long-lived devices, due to their foundational properties and ease of implementation. However, LBC schemes induce higher computational demand compared to classic schemes (e.g., DSA, ECDSA) for equivalent security guarantees, making domain-specific acceleration a viable option for improving security and favoring early adoption of LBC schemes by the semiconductor industry. In this paper, we present a workflow to explore the design space of domain-specific accelerators for LBC schemes, targeting a diverse set of host devices, from resource-constrained IoT devices to high-performance computing platforms. We present design-space exploration results on workloads executing the NewHope and BLISSB-I schemes accelerated by our domain-specific accelerators, with respect to a baseline without acceleration. We show that the achieved performance with acceleration makes the execution of NewHope and BLISSB-I comparable to classic key exchange and digital signature schemes while retaining some form of general-purpose programmability. In addition to 44% and 67% improvements in energy-delay product (EDP), we enhance the performance (cycles) of the sign and verify steps of BLISSB-I by 24% and 47%, respectively. Performance (EDP) of the server and client sides of the NewHope key exchange is improved by 37% and 33% (52% and 48%), demonstrating the utility of the design space exploration framework.

    Implementation and Benchmarking of Round 2 Candidates in the NIST Post-Quantum Cryptography Standardization Process Using Hardware and Software/Hardware Co-design Approaches

    Get PDF
    Performance in hardware has typically played a major role in differentiating among leading candidates in cryptographic standardization efforts. Winners of two past NIST cryptographic contests (Rijndael in the case of AES and Keccak in the case of SHA-3) were ranked consistently among the two fastest candidates when implemented using FPGAs and ASICs. Hardware implementations of cryptographic operations may quite easily outperform software implementations for at least a subset of major performance metrics, such as speed, power consumption, and energy usage, as well as in terms of security against physical attacks, including side-channel analysis. Using hardware also permits much higher flexibility in trading one subset of these properties for another. A large number of candidates at the early stages of the standardization process makes accurate and fair comparison very challenging. Nevertheless, in all major past cryptographic standardization efforts, future winners were identified quite early in the evaluation process and held their lead until the standard was selected. Additionally, identifying some candidates as either inherently slow or costly in hardware helped to eliminate a subset of candidates, saving countless hours of cryptanalysis. Finally, early implementations provided a baseline for future design space explorations, paving the way to more comprehensive and fairer benchmarking at the later stages of a given cryptographic competition. In this paper, we first summarize, compare, and analyze results reported by other groups until mid-May 2020, i.e., until the end of Round 2 of the NIST PQC process. We then outline our own methodology for implementing and benchmarking PQC candidates using both hardware and software/hardware co-design approaches. We apply our hardware approach to 6 lattice-based CCA-secure Key Encapsulation Mechanisms (KEMs), representing 4 NIST PQC submissions. We then apply a software/hardware co-design approach to 12 lattice-based CCA-secure KEMs, representing 8 Round 2 submissions. We hope that, combined with results reported by other groups, our study will provide NIST with helpful information regarding the relative performance of a significant subset of Round 2 PQC candidates, assuming that at least their major operations, and possibly the entire algorithms, are off-loaded to hardware.

    Polar Coding for Ring-LWE-Based Public Key Encryption

    Get PDF
    Cryptographic constructions based on ring learning with errors (RLWE) have emerged as one of the front runners for the standardization of post-quantum public key cryptography. As the standardization process continues, optimizing specific parts of proposed schemes becomes a worthwhile endeavor. In this work we focus on using error-correcting codes to alleviate a natural trade-off present in most schemes; namely, we would like a wider error distribution to increase security, but a wider error distribution comes at the cost of an increased probability of decryption error. The motivation of this work is to improve the security level of RLWE-based public key encryption (PKE) while keeping the target decryption failure rate (DFR) achievable using error-correcting codes. Specifically, we explore how to implement a family of error-correcting codes known as polar codes in RLWE-based PKE schemes in order to effectively lower the DFR. The dependency existing in the additive noise term is handled by mapping every error term (e.g., $e, t, s, e_1, e_2$) under the canonical embedding to the space $H$, where a product in the number field $K$ gives rise to a coordinate-wise product in $H$. An attempt has been made to make the modulation constellation (message basis) fit in with the canonical basis. Furthermore, we exploit the fact that some error terms are known to the decoder to further lower the DFR. Using our method, the DFR is expected to be as low as $2^{-298}$ for code rate 0.25, $n=1024$, $q=12289$, and binomial parameter $k=8$, which is exactly the setting of the post-quantum scheme NewHope; the DFR is $2^{-156}$ for code rate 0.25, $n=1024$, $q=12289$, $k=16$. This new DFR margin enables us to improve the security level by 9.4% compared with NewHope. Moreover, polar encoding and decoding have quasi-linear complexity $O(N \log N)$ and they can be implemented in constant time.
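
    The quasi-linear encoding cost comes from the butterfly structure of the Arıkan transform: log2(N) stages of N/2 XORs each. The sketch below shows only that transform; the frozen-bit positions are an arbitrary illustrative choice, and successive-cancellation decoding as well as the canonical-embedding handling of the noise are omitted.

        # Polar (Arıkan) encoding sketch: apply the kernel F = [[1,0],[1,1]]
        # tensored log2(N) times, i.e. log2(N) butterfly stages of N/2 XORs,
        # giving the O(N log N) complexity mentioned above.
        def polar_encode(u):
            x = list(u)                          # len(u) must be a power of two
            n, step = len(x), 1
            while step < n:
                for i in range(0, n, 2 * step):
                    for j in range(i, i + step):
                        x[j] ^= x[j + step]
                step *= 2
            return x

        # Illustrative use: place message bits in the non-frozen positions.
        # The frozen set below is arbitrary; a real design picks it by channel reliability.
        N, frozen = 8, {0, 1, 2, 3}
        message = [1, 0, 1, 1]
        u = [0] * N
        for pos, bit in zip([i for i in range(N) if i not in frozen], message):
            u[pos] = bit
        print(polar_encode(u))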

    An upper bound on the decryption failure rate of static-key NewHope

    Get PDF
    We give a new proof that the decryption failure rate of NewHope512 is at most $2^{-398.8}$. As in previous work, this failure rate is with respect to random, honestly generated secret key and ciphertext pairs. However, our technique can also be applied to a fixed secret key. We demonstrate our technique on some subsets of the NewHope1024 key space, and we identify a large subset of NewHope1024 keys with failure rates of no more than $2^{-439.5}$.
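
    Bounds of this kind ultimately rest on tail probabilities of sums of small, centered noise terms. The sketch below computes such a tail exactly by convolution for a toy combination of centered-binomial variables; the number of summands and the threshold are illustrative and far smaller than in NewHope, and the sketch does not reproduce the paper's treatment of fixed keys.

        # Toy tail computation behind decryption-failure bounds: build the exact
        # distribution of a sum of independent centered-binomial terms by
        # convolution, then sum the probability mass beyond a threshold.
        # k, m and the threshold are illustrative, not NewHope's actual noise term.
        from fractions import Fraction
        from math import comb, log2

        def centered_binomial(k):
            # P[X = i] for X = (sum of k fair bits) - (sum of k fair bits)
            return {i: Fraction(comb(2 * k, k + i), 2 ** (2 * k)) for i in range(-k, k + 1)}

        def convolve(dist_a, dist_b):
            out = {}
            for a, pa in dist_a.items():
                for b, pb in dist_b.items():
                    out[a + b] = out.get(a + b, Fraction(0)) + pa * pb
            return out

        k, m, threshold = 8, 16, 96
        single = centered_binomial(k)
        total = {0: Fraction(1)}
        for _ in range(m):
            total = convolve(total, single)

        tail = sum(p for value, p in total.items() if abs(value) > threshold)
        print(f"P(|sum| > {threshold}) = 2^{log2(tail):.1f}")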