
    Lattice Reduction Meets Key-Mismatch: New Misuse Attack on Lattice-Based NIST Candidate KEMs

    Resistance to key misuse attacks is a vital property for key encapsulation mechanisms (KEMs) in the NIST PQC standardization process. In a key mismatch attack, the adversary recovers a reused secret key with the help of an oracle O that indicates whether the shared keys match. A key mismatch attack is more powerful when fewer oracle queries are required. A series of works has tried to reduce the number of queries; Qin et al. [ASIACRYPT 2021] gave a systematic approach to finding a lower bound on the number of oracle queries for a category of KEMs, including NIST's third-round candidates Kyber and Saber. In this paper, we show that this bound can be bypassed by combining Qin et al.'s (ASIACRYPT 2021) key mismatch attack with a standard lattice attack. In particular, we explicitly establish the relationship between the number of oracle queries and the bit security of lattice-based KEMs. Our attack is inspired by the fact that each oracle query reveals partial information about the reused secret and affects the mean and covariance of the secret distribution, making the lattice attack easier. In addition, we quantify this effect in theory and estimate the security loss for all NIST second-round candidate KEMs. Specifically, our improved attack reduces the number of queries for Kyber512 by 34%, from 1312 queries at bit security 107 to 865 queries at bit security 32. For Kyber768 and Kyber1024, our improved attack reduces the number of queries by 29% and 27%, respectively, at bit security 32.
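
    As a rough illustration of the core intuition (each mismatch-oracle answer removes part of the secret's entropy), the sketch below models one secret coefficient and a hypothetical threshold-style oracle; it is a schematic toy under assumed parameters, not Qin et al.'s actual attack or Kyber's real ciphertext crafting.

```python
import random

# Toy model: a single secret coefficient from a Kyber-like small range [-2, 2].
# In a real key-mismatch attack the adversary crafts ciphertexts so that the
# "shared keys match / don't match" answer encodes a threshold test on s.
SECRET_RANGE = range(-2, 3)
s = random.choice(list(SECRET_RANGE))

def mismatch_oracle(threshold: int) -> bool:
    """Abstracts one oracle call: returns True iff the reused secret
    coefficient is >= threshold (the partial information one query leaks)."""
    return s >= threshold

# Binary search: each query roughly halves the candidate set, so about
# log2(5) ~ 2.3 queries recover one coefficient, matching the intuition that
# every answer removes roughly one bit of entropy from the secret.
lo, hi = min(SECRET_RANGE), max(SECRET_RANGE)
queries = 0
while lo < hi:
    mid = (lo + hi + 1) // 2
    queries += 1
    if mismatch_oracle(mid):
        lo = mid
    else:
        hi = mid - 1

print(f"recovered s = {lo} (true s = {s}) with {queries} oracle queries")
```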

    Light the Signal: Optimization of Signal Leakage Attacks against LWE-Based Key Exchange

    Key exchange protocols based on the learning with errors (LWE) problem share many similarities with the Diffie–Hellman–Merkle (DHM) protocol, which plays a central role in securing the Internet. Therefore, there has been a long-standing effort to design authenticated key exchange directly from LWE to mirror the advantages of DHM-based protocols. In this paper, we revisit signal leakage attacks and show that the severity of these attacks against LWE-based (authenticated) key exchange is still underestimated. In particular, by converting the problem of launching a signal leakage attack into a coding problem, we can significantly reduce the number of queries needed to reveal the secret key. Specifically, for DXL-KE we reduce the queries from 1,266 to only 29, while for DBS-KE we need only 748 queries, a great improvement over the previous 1,074,434 queries. Moreover, our new view of signals as binary codes makes it easier to recognize vulnerable schemes. As such, we completely recover the secret key of a password-based authenticated key exchange scheme by Dabra et al. with only 757 queries, and partially reveal the secret used in a two-factor authentication scheme by Wang et al. with only one query. The experimental evaluation supports our theoretical analysis and demonstrates the efficiency and effectiveness of our attacks. Our results caution against underestimating the power of signal leakage attacks, as they are applicable even in settings with a very restricted number of interactions between adversary and victim.
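
    A minimal sketch of the "signals as binary codes" viewpoint, under heavily simplified assumptions (no error terms, a toy modulus, one secret coefficient, and a hypothetical query pattern): each candidate secret maps to a codeword of signal bits, and the attacker decodes by matching the observed bits against that codebook. This is an illustration of the idea only, not the attack from the paper.

```python
# Toy signal-leakage view: the signal function sig(x) is 0 when x lies in the
# "inner" part of Z_q around 0 and 1 otherwise. Querying the victim with
# inputs that scale the secret coefficient s by k leaks sig(k * s mod q);
# collecting these bits over a few multipliers k gives a binary codeword
# indexed by s, which the attacker decodes by table lookup.
q = 97                        # toy modulus (real schemes use e.g. q = 12289)
SECRETS = range(-8, 9)        # toy range of one secret coefficient

def sig(x: int) -> int:
    x = x % q
    return 0 if (x <= q // 4 or x >= q - q // 4) else 1

def codeword(s: int, multipliers) -> tuple:
    return tuple(sig(k * s) for k in multipliers)

multipliers = [1, 2, 3, 5, 7, 13]        # hypothetical query pattern
true_s = -5                              # the victim's (reused) secret

# Attacker side: observe the victim's signal bit for each query ...
observed = codeword(true_s, multipliers)
# ... and decode by matching against the precomputed codebook.
candidates = [s for s in SECRETS if codeword(s, multipliers) == observed]
print(f"observed bits {observed}; remaining candidates: {candidates}")
# Note: in this toy sig(x) = sig(-x), so only the sign of s stays ambiguous;
# real attacks resolve it with additional, differently shaped queries.
```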

    Do Not Bound to a Single Position: Near-Optimal Multi-Positional Mismatch Attacks Against Kyber and Saber

    Misuse resilience is an important security criterion in the evaluation of the NIST Post-Quantum Cryptography standardization process. In this paper, we propose new key mismatch attacks against Kyber and Saber, NIST's selected scheme for encryption and one of the finalists in the third round of the NIST competition, respectively. Our novel idea is to recover partial information about multiple secret entries in each mismatch oracle call. These multi-positional attacks greatly reduce the expected number of oracle calls needed to fully recover the secret key. They are also significant for side-channel analysis. From the perspective of lower bounds, our new attacks falsify the Huffman bounds proposed by Qin et al. [ASIACRYPT 2021], where a one-positional mismatch adversary is assumed. Our new attacks can instead be bounded by the Shannon lower bounds, i.e., the entropy of the distribution generating each secret coefficient times the number of secret entries. We call the new attacks near-optimal since their query complexities are close to the Shannon lower bounds.
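
    To make the Shannon lower bound concrete, the rough calculation below assumes Kyber-style secrets drawn coefficient-wise from a centered binomial distribution B_eta and multiplies the per-coefficient entropy by the number of secret entries; the parameter sets are labeled "-like" because they are illustrative assumptions, not figures taken from the paper.

```python
from math import comb, log2

def entropy_centered_binomial(eta: int) -> float:
    """Shannon entropy (in bits) of the centered binomial distribution B_eta,
    i.e. the difference of two sums of eta uniform bits, taking values in
    [-eta, eta] with Pr[i] = C(2*eta, eta + i) / 2^(2*eta)."""
    total = 2 ** (2 * eta)
    probs = [comb(2 * eta, eta + i) / total for i in range(-eta, eta + 1)]
    return -sum(p * log2(p) for p in probs)

# Illustrative parameter sets: (eta for the secret, number of secret coefficients).
for name, eta, n_coeffs in [("Kyber512-like", 3, 512),
                            ("Kyber768-like", 2, 768),
                            ("Kyber1024-like", 2, 1024)]:
    h = entropy_centered_binomial(eta)
    print(f"{name}: H(B_{eta}) = {h:.3f} bits/coefficient, "
          f"Shannon bound ~ {h * n_coeffs:.0f} oracle queries")
```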

    Sécurité étendue de la cryptographie fondée sur les réseaux euclidiens (Extended Security of Lattice-Based Cryptography)

    Lattice-based cryptography is considered a quantum-safe alternative for the replacement of currently deployed schemes based on RSA and the discrete logarithm on prime fields or elliptic curves. It offers strong theoretical security guarantees, a large array of achievable primitives, and a competitive level of efficiency. Nowadays, in the context of the NIST post-quantum standardization process, future standards may ultimately be chosen, and several new lattice-based schemes are high-profile candidates. The cryptographic research community has been encouraged to analyze lattice-based cryptosystems, with a particular focus on practical aspects. This thesis is rooted in this effort. In addition to black-box cryptanalysis with classical computing resources, we investigate the extended security of these new lattice-based cryptosystems under a broad spectrum of attack models, e.g. quantum, misuse, timing or physical attacks. Since these models have already been applied to a large variety of pre-quantum asymmetric and symmetric schemes, we concentrate our efforts on leveraging and addressing the new features introduced by lattice structures. Our contribution is twofold: defensive, i.e. countermeasures for implementations of lattice-based schemes, and offensive, i.e. cryptanalysis. On the defensive side, in view of the numerous recent timing and physical attacks, we put on our designer's hat and investigate algorithmic protections. We introduce new algorithmic and mathematical tools to construct provable algorithmic countermeasures in order to systematically prevent timing and physical attacks. We thus contribute to the provable protection of the GLP, BLISS, qTesla and Falcon lattice-based signature schemes. On the offensive side, we estimate the applicability and complexity of novel attacks leveraging the lack of perfect correctness introduced in certain lattice-based encryption schemes to improve their performance. We show that such a compromise may enable decryption failure attacks in a misuse or quantum model. We finally introduce an algorithmic cryptanalysis tool that assesses the security of the mathematical problem underlying lattice-based schemes when partial knowledge of the secret is available. The usefulness of this new framework is demonstrated with the improvement and automation of several known classical, decryption-failure, and side-channel attacks.

    Towards Post-Quantum Security for Signal's X3DH Handshake

    Modern key exchange protocols are usually based on the Diffie–Hellman (DH) primitive. The beauty of this primitive, among other things, is its potential reuse of key shares: DH shares can be used either a single time or across multiple runs. Since DH-based protocols are insecure against quantum adversaries, alternative solutions have to be found when moving to the post-quantum setting. However, most post-quantum candidates, including schemes based on lattices and even supersingular-isogeny DH, are not known to be secure under key reuse. In particular, this means that they cannot necessarily be deployed as an immediate DH substitute in protocols. In this paper, we introduce the notion of a split key encapsulation mechanism (split KEM) to translate the desired key reusability of a DH-based protocol to a KEM-based flow. We provide the relevant security notions of split KEMs and show how the formalism lends itself to lifting Signal's X3DH handshake to the post-quantum KEM setting without additional message flows. Although the proposed framework conceptually solves the raised issues, instantiating it securely from post-quantum assumptions proved to be non-trivial. We give passively secure instantiations from (R)LWE, yet overcoming the above-mentioned insecurities under key reuse in the presence of active adversaries remains an open problem. Approaching one-sided key reuse, we provide a split KEM instantiation that allows such reuse based on the KEM introduced by Kiltz (PKC 2007), which may serve as a post-quantum blueprint if the underlying hardness assumption (gap hashed Diffie–Hellman) holds for the commutative group action of CSIDH (Asiacrypt 2018). The intention of this paper is hence to raise awareness of the challenges arising when moving to KEM-based key exchange protocols with key reusability, and to propose split KEMs as a specific target for instantiation in future research.
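
    To make the split-KEM interface shape concrete, here is a minimal, deliberately insecure toy (tiny DH-style parameters, hypothetical function names, not the paper's formal definition or construction): encapsulation takes the sender's long-term secret key together with the receiver's public key, and decapsulation takes the receiver's secret key together with the sender's public key, mirroring how both parties' reusable key shares enter the derived key in a DH handshake.

```python
import hashlib
import random

# Toy multiplicative group mod a small prime (for API shape only; insecure).
p, g = 2039, 7

def keygen():
    sk = random.randrange(2, p - 1)
    return sk, pow(g, sk, p)          # (secret key, public key)

def encaps(sk_sender: int, pk_receiver: int):
    """Split-KEM-style encapsulation: uses the sender's long-term secret and
    the receiver's public key, plus a fresh ephemeral value in the ciphertext."""
    r = random.randrange(2, p - 1)
    ct = pow(g, r, p)                                   # ephemeral share
    static = pow(pk_receiver, sk_sender, p)             # g^(skS * skR)
    ephemeral = pow(pk_receiver, r, p)                  # g^(r * skR)
    key = hashlib.sha256(f"{static}|{ephemeral}".encode()).hexdigest()
    return ct, key

def decaps(sk_receiver: int, pk_sender: int, ct: int):
    static = pow(pk_sender, sk_receiver, p)             # g^(skS * skR)
    ephemeral = pow(ct, sk_receiver, p)                 # g^(r * skR)
    return hashlib.sha256(f"{static}|{ephemeral}".encode()).hexdigest()

skS, pkS = keygen()
skR, pkR = keygen()
ct, k_sender = encaps(skS, pkR)
k_receiver = decaps(skR, pkS, ct)
assert k_sender == k_receiver
print("both sides derived", k_sender[:16], "...")
```

    The point of the sketch is only the API: key reuse on both sides is expressed by feeding long-term keys into both encapsulation and decapsulation, which is precisely the property that is hard to instantiate securely from post-quantum assumptions against active adversaries.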

    Belief Propagation Meets Lattice Reduction: Security Estimates for Error-Tolerant Key Recovery from Decryption Errors

    In LWE-based KEMs, observed decryption errors leak information about the secret key in the form of equations or inequalities. Several practical fault attacks have already exploited such leakage, either by directly applying a fault or by enabling a chosen-ciphertext attack using a fault. When the leaked information is in the form of inequalities, recovering the secret key is not trivial. Recent methods use either statistical or algebraic techniques (but not both), with some being able to handle incorrect information. Given that the integration of side-channel information is a crucial part of several classes of implementation attacks on LWE-based schemes, it is an important question whether statistically processed information can be successfully integrated into lattice reduction algorithms. We answer this question positively by proposing an error-tolerant combination of statistical and algebraic methods that makes use of the advantages of both approaches. The combination enables us to improve upon existing methods: we both use fewer inequalities and are more resistant to errors. We further provide precise security estimates based on the number of available inequalities. Our recovery method applies to several types of implementation attacks in which decryption errors are used in a chosen-ciphertext attack. We practically demonstrate the improved performance of our approach in a key-recovery attack against Kyber with fault-induced decryption errors.
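
    Schematically, and with illustrative rather than the paper's notation (compression and rounding errors are ignored), the inequalities arise because decryption in an LWE/Kyber-like KEM only succeeds while the accumulated noise stays below a rounding threshold:

```latex
% Illustrative decryption condition for an LWE-based KEM with modulus q,
% secret (s, e) and attacker-known ciphertext randomness (r, e_1, e_2):
\[
  \bigl| \langle \mathbf{e}, \mathbf{r} \rangle
       - \langle \mathbf{s}, \mathbf{e}_1 \rangle
       + e_2 \bigr| \;<\; \tfrac{q}{4}
  \quad \Longleftrightarrow \quad \text{decryption succeeds,}
\]
% so each observed decryption failure contributes one linear inequality
\[
  \langle \mathbf{e}, \mathbf{r} \rangle
  - \langle \mathbf{s}, \mathbf{e}_1 \rangle
  + e_2 \;\ge\; \tfrac{q}{4}
  \qquad \text{(or } \le -\tfrac{q}{4}\text{)}
\]
% in the unknown secret, which statistical or lattice-based recovery
% methods can then exploit.
```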

    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation mainly discusses new cryptanalytic results related to issues of securely implementing the next generation of asymmetric cryptography, or Public-Key Cryptography (PKC). PKC, as it has been deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved by Peter Shor's algorithm for quantum computers, which obtains the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards standardization of Post-quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software while being resistant to attacks from quantum computers. PQC is well suited for replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely heavily on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen ciphertext attack (IND-CCA). They must also withstand serious attack attempts by well-renowned cryptographers, both concerning theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are named side-channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions in general: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to the practice of software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.

    Post-quantum WireGuard

    In this paper we present PQ-WireGuard, a post-quantum variant of the handshake in the WireGuard VPN protocol (NDSS 2017). Unlike most previous work on post-quantum security for real-world protocols, this variant considers not only post-quantum confidentiality (or forward secrecy) but also post-quantum authentication. To achieve this, we replace the Diffie-Hellman-based handshake with a more generic approach using only key-encapsulation mechanisms (KEMs). We establish the security of PQ-WireGuard by adapting the security proofs for WireGuard in the symbolic model and in the standard model to our construction. We then instantiate this generic construction with concrete post-quantum secure KEMs, which we carefully select to achieve high security and speed. We demonstrate the competitiveness of PQ-WireGuard by presenting extensive benchmarking results comparing it to widely deployed VPN solutions.
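
    The generic substitution described above can be sketched as follows: wherever a DH handshake would combine two key shares, a KEM-based flow instead sends a ciphertext encapsulated to the peer's public key. The toy below (an ElGamal-style KEM with tiny, insecure parameters and hypothetical names) only illustrates that message-flow change; it is not PQ-WireGuard's actual KEMs or handshake.

```python
import hashlib
import random

# Toy KEM (ElGamal-style, insecure parameters) standing in for a post-quantum KEM.
p, g = 4093, 2

def kem_keygen():
    sk = random.randrange(2, p - 1)
    return sk, pow(g, sk, p)

def kem_encaps(pk: int):
    r = random.randrange(2, p - 1)
    ct = pow(g, r, p)
    key = hashlib.sha256(str(pow(pk, r, p)).encode()).hexdigest()
    return ct, key

def kem_decaps(sk: int, ct: int):
    return hashlib.sha256(str(pow(ct, sk, p)).encode()).hexdigest()

# DH-style "exchange two shares" becomes: the initiator encapsulates to the
# responder's public key and sends the ciphertext; the responder decapsulates.
sk_B, pk_B = kem_keygen()
ct, k_initiator = kem_encaps(pk_B)        # initiator -> responder: ct
k_responder = kem_decaps(sk_B, ct)        # responder recovers the same key
assert k_initiator == k_responder
print("shared handshake key:", k_initiator[:16], "...")
```

    A full authenticated handshake combines several such encapsulations, to both static and ephemeral keys, to obtain mutual authentication and forward secrecy rather than a single unauthenticated key agreement as in this sketch.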

    Leg Ulcer Outcomes

    Background: Venous disease is the most common cause of leg ulceration. Treatment of superficial venous reflux has been shown to reduce the rate of ulcer recurrence, but the effect of early endovenous ablation of superficial venous reflux on ulcer healing remains unclear. It is generally accepted that there is considerable global variation in the management of leg ulcers. Objectives: To determine the clinical and cost-effectiveness of early endovenous treatment of superficial venous reflux, in addition to standard care, compared with standard care alone in patients with venous ulceration; and the current global standards of management of venous leg ulceration and the impact on these of the results of the randomised controlled trial. Methods: i. The Early Venous Reflux Ablation (EVRA) multi-centre randomised clinical trial of 450 participants compared early versus deferred intervention at 12 months and at 3.5 years. ii. Health professionals treating patients with leg ulcers globally were surveyed before and after the publication of the RCT results to gain insight into the management of venous leg ulceration and the subsequent impact on practice. Results: i. EVRA: time to ulcer healing was shorter in the early group at 12 months; there was no clear difference in time to first ulcer recurrence at 3.5 years; early intervention at 3 years is 91% likely to be cost-effective at £20,000/QALY. ii. Surveys: pre/post-EVRA UK primary care: 90/643 responses received; pre/post-EVRA global clinicians: 799/644 responses received. Conclusions: The EVRA RCT showed that early intervention reduces the time to healing of venous leg ulcers and does not affect the time to recurrent ulceration, but is highly likely to be cost-effective and is therefore beneficial for both patients and healthcare providers. The surveys demonstrated that the management of venous ulceration is disparate globally. It is likely that the EVRA RCT results influenced the timing of intervention worldwide.