
    A new idea in response to fast correlation attacks on small-state stream ciphers

    At the conference Fast Software Encryption 2015, a new line of research was opened by the introduction of the first small-state stream cipher (SSC). The goal was to design lightweight stream ciphers for hardware applications by going beyond the rule that the internal state size must be at least twice the intended security level. Time-memory-data trade-off (TMDTO) attacks and fast correlation attacks (FCA) were successfully applied to all proposed SSCs that can be implemented with fewer than 1000 gate equivalents in hardware. It is possible to increase the security of stream ciphers against FCA by using more complicated functions for the nonlinear feedback shift register and the output function, but we instead use lightweight functions to design the lightest SSC to date while providing more security against FCA. Our proposed cipher provides 80-bit security against TMDTO distinguishing attacks, whereas Lizard and Plantlet provide only 60-bit and 58-bit security against distinguishing attacks, respectively. Our main contribution is a lightweight round key function with a very long period that increases the security of SSCs against FCA.
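
    To make the state-size rule mentioned above concrete, the following minimal Python sketch works out the balanced point of the classic Babbage-Golić time-memory-data trade-off; the state sizes used are illustrative assumptions, not the exact parameters of the ciphers discussed in the abstract.

```python
# Minimal sketch (not any specific cipher's parameters) of the classic
# Babbage-Golic time-memory-data trade-off, which is what traditionally forces
# stream ciphers to use an internal state of at least twice the security level.

def balanced_tmdto_point(state_bits: int) -> str:
    """Babbage-Golic trade-off: with M stored states and D keystream samples the
    attack succeeds when M * D ~ 2^n, with online time T ~ D. The balanced point
    is T = M = D = 2^(n/2), so the generic security is only n/2 bits."""
    return f"T = M = D = 2^{state_bits // 2}"

# A classical design targeting 80-bit security needs n >= 160; a small-state
# cipher with n = 80 would offer only ~2^40 resistance against this generic
# attack unless extra mechanisms (e.g. continuous key involvement) are used.
for n in (160, 80):
    print(f"state = {n} bits -> {balanced_tmdto_point(n)}")
```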

    Links between Division Property and Other Cube Attack Variants

    A theoretically reliable key-recovery attack should evaluate not only the non-randomness for the correct key guess but also the randomness for the wrong ones. The former has always been the main focus, but the absence of the latter can cause self-contradictory results. In fact, the theoretical discussion of wrong key guesses is overlooked in quite a few existing key-recovery attacks, especially the previous cube attack variants based on pure experiments. In this paper, we draw links between the division property and several variants of the cube attack. In addition to the zero-sum property, we further prove that the bias phenomenon, the non-randomness widely utilized in dynamic cube attacks and cube testers, can also be reflected by the division property. Based on such links, we are able to provide several results. Firstly, we give a dynamic cube key-recovery attack on full Grain-128. Compared with Dinur et al.'s original attack, ours is supported by a theoretical analysis of the bias based on a more elaborate assumption. Our attack can recover 3 key bits with complexity 2^(97.86) and an evaluated success probability of 99.83%; the overall complexity for recovering the full 128 key bits is thus 2^(125). Secondly, now that the bias phenomenon can be efficiently and elaborately evaluated, we further derive new secure bounds for Grain-like primitives (namely Grain-128, Grain-128a, Grain-v1, and Plantlet) against both zero-sum and bias cube testers. Our secure bounds indicate that 256 initialization rounds are not sufficient to guarantee that Grain-128 resists bias-based cube testers. This provides an efficient tool for determining the number of initialization rounds of newly designed stream ciphers. Thirdly, we improve Wang et al.'s relaxed term enumeration technique proposed at CRYPTO 2018 and extend their results on Kreyvium and ACORN by 1 and 13 rounds (reaching 892 and 763 rounds) with complexities 2^(121.19) and 2^(125.54), respectively. To our knowledge, our results are the current best key-recovery attacks on these two primitives.
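
    The zero-sum and bias properties discussed above can be illustrated with a generic cube tester. The sketch below uses a toy single-bit oracle standing in for a round-reduced keystream bit (an assumption of this example); it computes cube sums, whose value equals the superpoly of the cube, and estimates how often they are zero.

```python
from itertools import product
import random

def cube_sum(f, key, cube_positions, noncube_iv):
    """XOR the output bit of f over all assignments of the cube IV bits.
    The result equals the superpoly of the cube evaluated at (key, non-cube IV)."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_positions)):
        iv = list(noncube_iv)
        for pos, b in zip(cube_positions, bits):
            iv[pos] = b
        acc ^= f(key, iv)
    return acc

def zero_sum_rate(f, iv_len, key_len, cube_positions, trials=200):
    """Fraction of zero cube sums over random keys and random non-cube IVs.
    A rate near 0.5 looks random; a rate of 1.0 is the zero-sum property, and
    any strong deviation from 0.5 is the bias a cube tester exploits."""
    zeros = 0
    for _ in range(trials):
        key = [random.randint(0, 1) for _ in range(key_len)]
        iv = [random.randint(0, 1) for _ in range(iv_len)]
        zeros += 1 - cube_sum(f, key, cube_positions, iv)
    return zeros / trials

def toy_cipher(key, iv):
    # Stand-in for a round-reduced keystream-bit oracle.
    return (key[0] & iv[0] & iv[1]) ^ (key[1] & iv[2]) ^ iv[3]

# Cube {0,1}: the superpoly is key[0], so the zero rate is ~0.5 over random keys;
# cube {0,1,2}: the superpoly is constant 0, i.e. a perfect zero-sum property.
print(zero_sum_rate(toy_cipher, iv_len=8, key_len=2, cube_positions=[0, 1]))
print(zero_sum_rate(toy_cipher, iv_len=8, key_len=2, cube_positions=[0, 1, 2]))
```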

    An Attack on the LILLE Stream Cipher

    A few small-state stream ciphers (SSCs) have been proposed for constrained environments. All SSCs proposed before the LILLE stream cipher suffered from distinguishing attacks and fast correlation attacks. The designers of LILLE claimed that it is based on the well-studied two-key Even-Mansour scheme and is therefore resistant to various types of attacks. This paper proposes a distinguishing attack on LILLE, the first attack on it since 2018. The data and time complexities of the attack on LILLE-40 are 2^(50.7) and 2^(41.2), respectively. We practically verified our attack on a halved version of LILLE-40. A countermeasure is suggested to strengthen LILLE against the proposed attack. We hope our attack opens the door to further cryptanalysis of LILLE.
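
    As a rough illustration of what a distinguishing attack measures, the sketch below estimates the empirical bias of the XOR of keystream bits at fixed relative positions and compares it with the fluctuation expected from a truly random stream; the tap positions and threshold are illustrative assumptions, not LILLE's actual distinguishing relation.

```python
import random

def keystream_bias_distinguisher(keystream, taps):
    """Toy distinguisher sketch: estimate the bias of the XOR of keystream bits
    at fixed relative positions (taps). For an ideal random stream this XOR is
    balanced; a detectable bias distinguishes the cipher's output from random."""
    count = samples = 0
    max_offset = max(taps)
    for i in range(len(keystream) - max_offset):
        bit = 0
        for t in taps:
            bit ^= keystream[i + t]
        count += bit
        samples += 1
    bias = count / samples - 0.5
    # Roughly, |bias| must exceed a few multiples of ~1/sqrt(samples), the
    # standard deviation of the random case, to give a reliable distinguisher.
    return bias, 1 / samples ** 0.5

random_stream = [random.randint(0, 1) for _ in range(1 << 16)]
print(keystream_bias_distinguisher(random_stream, taps=[0, 3, 7]))
```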

    Lightweight cryptography on ultra-constrained RFID devices

    Devices of extremely small computational power like RFID tags are used in practice to a rapidly growing extent, a trend commonly referred to as ubiquitous computing. Despite their severely constrained resources, the security burden which these devices have to carry is often enormous, as their fields of application range from everyday access control to human-implantable chips providing sensitive medical information about a person. Unfortunately, established cryptographic primitives such as AES are way too 'heavy' (e.g., in terms of circuit size or power consumption) to be used in corresponding RFID systems, calling for new solutions and thus initiating the research area of lightweight cryptography. In this thesis, we focus on the currently most restricted form of such devices and refer to them as ultra-constrained RFIDs. To fill this notion with life and to create a profound basis for our subsequent cryptographic development, we start this work by providing a comprehensive summary of conditions that should be met by lightweight cryptographic schemes targeting ultra-constrained RFID devices. Building on these insights, we then turn towards the two main topics of this thesis: lightweight authentication and lightweight stream ciphers. To this end, we first provide a general introduction to the broad field of authentication and study existing (allegedly) lightweight approaches. Drawing on this, with the (n,k,L)^-protocol, we suggest our own lightweight authentication scheme and, on the basis of corresponding hardware implementations for FPGAs and ASICs, demonstrate its suitability for ultra-constrained RFIDs. Subsequently, we leave the path of searching for dedicated authentication protocols and turn towards stream cipher design, where we first revisit some prominent classical examples and, in particular, analyze their state initialization algorithms. Following this, we investigate the rather young area of small-state stream ciphers, which try to overcome the limit imposed by time-memory-data trade-off (TMD-TO) attacks on the security of classical stream ciphers. Here, we present some new attacks, but also corresponding design ideas on how to counter them. Paving the way for our own small-state stream cipher, we then propose and analyze the LIZARD construction, which combines the explicit use of packet mode with a new type of state initialization algorithm. For corresponding keystream generator-based designs of inner state length n, we prove a tight (2n/3)-bound on the security against TMD-TO key recovery attacks. Building on these theoretical results, we finally present LIZARD, our new lightweight stream cipher for ultra-constrained RFIDs. Its hardware efficiency and security result from combining a Grain-like design with the LIZARD construction. Most notably, besides lower area requirements, the estimated power consumption of LIZARD is about 16 percent below that of Grain v1, making it particularly suitable for passive RFID tags, which obtain their energy exclusively through an electromagnetic field radiated by the reading device. The thesis is concluded by an extensive 'Future Research Directions' chapter, introducing various new ideas and thus showing that the search for lightweight cryptographic solutions is far from complete.
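
    For readers unfamiliar with the Grain-like structure referred to above, the following toy sketch shows the general shape of such a keystream generator: an LFSR and an NFSR whose bits are combined by a nonlinear filter, with the output fed back into both registers during initialization. All tap positions and functions are illustrative assumptions, not the actual Grain v1 or LIZARD specifications.

```python
def toy_grain_like_ksg(key_bits, iv_bits, n=80, init_rounds=160, out_len=32):
    """Toy Grain-like keystream generator: an n-bit LFSR and an n-bit NFSR, the
    NFSR feedback masked by the LFSR output bit, and a nonlinear filter mixing
    both registers. Taps and functions are illustrative assumptions only."""
    lfsr = list(iv_bits[:n]) + [1] * (n - len(iv_bits[:n]))
    nfsr = list(key_bits[:n]) + [0] * (n - len(key_bits[:n]))
    keystream = []
    for r in range(init_rounds + out_len):
        # Toy output/filter function mixing both registers.
        z = lfsr[3] ^ nfsr[5] ^ (lfsr[10] & nfsr[20]) ^ nfsr[0]
        # Toy linear feedback and toy nonlinear feedback.
        l_fb = lfsr[0] ^ lfsr[13] ^ lfsr[23] ^ lfsr[38]
        n_fb = lfsr[0] ^ nfsr[0] ^ (nfsr[9] & nfsr[15]) ^ nfsr[33]
        if r < init_rounds:
            # During key/IV setup the output bit is fed back into both
            # registers, as in Grain-style state initialization.
            l_fb ^= z
            n_fb ^= z
        else:
            keystream.append(z)
        lfsr = lfsr[1:] + [l_fb]
        nfsr = nfsr[1:] + [n_fb]
    return keystream

print(toy_grain_like_ksg([1, 0, 1, 1] * 20, [0, 1] * 40)[:16])
```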

    Correlation Cube Attacks: From Weak-Key Distinguisher to Key Recovery

    In this paper, we describe a new variant of cube attacks called the correlation cube attack. The new attack recovers the secret key of a cryptosystem by exploiting conditional correlation properties between the superpoly of a cube and a specific set of low-degree polynomials that we call a basis, which has the property that the superpoly is the zero constant whenever all the polynomials in the basis are zero. We present a detailed procedure of the correlation cube attack for the general case, including how to find a basis of the superpoly of a given cube. One of the most significant advantages of this new analysis technique over other variants of cube attacks is that it converts a weak-key distinguisher into a key recovery attack. As an illustration, we apply the attack to round-reduced variants of the stream cipher Trivium. Based on the tool of numeric mapping introduced by Liu at CRYPTO 2017, we develop a specific technique to efficiently find a basis of the superpoly of a given cube as well as a large set of potentially good cubes used in the attack on Trivium variants, and further set up deterministic or probabilistic equations on the key bits according to the conditional correlation properties between the superpolys of the cubes and their bases. For a variant in which the number of initialization rounds is reduced from 1152 to 805, we can recover about 7 bits of key information on average with time complexity 2^(44), using 2^(45) keystream bits and preprocessing time 2^(51). For a variant of Trivium reduced to 835 rounds, we can recover about 5 bits of key information on average with the same complexity. All the attacks are practical and fully verified by experiments. To the best of our knowledge, they are thus far the best known key recovery attacks on these variants of Trivium, and this is the first time that a weak-key distinguisher on the Trivium stream cipher has been converted into a key recovery attack.
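
    Since the attacks above target round-reduced Trivium, a compact bit-level Trivium sketch with a configurable number of initialization rounds is given below, following the public specification; the bit-ordering convention used here for loading the key and IV is an assumption of this sketch, so it is intended for experimentation rather than test-vector compliance.

```python
def trivium_keystream(key, iv, init_rounds=1152, out_len=64):
    """Compact bit-level Trivium with a configurable number of initialization
    rounds, so round-reduced variants (e.g. 805 or 835 rounds) can be explored.
    `key` and `iv` are lists of 80 bits; their ordering is an assumption."""
    # State s[0..287]; the specification's s1..s288 map to s[0]..s[287].
    s = [0] * 288
    s[0:80] = key
    s[93:173] = iv
    s[285], s[286], s[287] = 1, 1, 1

    def step(produce):
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        z = (t1 ^ t2 ^ t3) if produce else None
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift the three registers and insert the new feedback bits.
        s[1:93] = s[0:92];       s[0] = t3
        s[94:177] = s[93:176];   s[93] = t1
        s[178:288] = s[177:287]; s[177] = t2
        return z

    for _ in range(init_rounds):
        step(produce=False)
    return [step(produce=True) for _ in range(out_len)]

print(trivium_keystream([0] * 80, [0] * 80, init_rounds=805, out_len=8))
```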

    Systematic Characterization of Power Side Channel Attacks for Residual and Added Vulnerabilities

    Power side-channel attacks have continued to be a major threat to cryptographic devices. Hence, it is useful for designers of cryptographic systems to systematically identify which types of power side-channel attacks their designs remain vulnerable to after implementation. It is also useful to determine which additional vulnerabilities they have exposed their devices to after implementing a countermeasure or a feature. The goal of this research is to develop a characterization of power side-channel attacks on different encryption algorithms' implementations in order to create metrics and methods to evaluate their residual vulnerabilities and added vulnerabilities. This research studies the characteristics that influence power side-channel leakage, classifies them, and identifies both the residual vulnerabilities and the added vulnerabilities. Residual vulnerabilities are defined as the traits that leave the implementation of the algorithm still vulnerable to power side-channel attacks (SCA), sometimes despite the countermeasures implemented by the designers. Added vulnerabilities to power SCA are defined as vulnerabilities created or enhanced by the algorithm implementations and/or modifications. The three buckets in which we categorize the encryption algorithm implementations are: (i) countermeasures against power side-channel attacks, (ii) the impact of the IC power delivery network on power leakage (including voltage regulators), and (iii) lightweight ciphers and applications for the Internet of Things (IoT). One example outcome of the characterization of masking countermeasures is that masking schemes, even when uniformly distributed random masks are used, are still vulnerable to collision power attacks. Another example outcome is that masked AES, when glitches occur, is still vulnerable to differential power analysis (DPA). We have developed a characterization of power side-channel attacks on the hardware implementations of different symmetric encryption algorithms to provide a detailed analysis of the effectiveness of state-of-the-art countermeasures against local and remote power side-channel attacks. The characterization is accomplished by studying the attributes that influence power side-channel leaks, classifying them, and identifying both residual vulnerabilities and added vulnerabilities. The evaluated countermeasures include masking, hiding, and power delivery network scrambling. Vulnerability to DPA, however, depends largely on the quality of the leaked power signal, which is impacted by the characteristics of the device's power delivery network. Countermeasures and deterrents to power side-channel attacks that alter or scramble the power delivery network have been shown to be effective against local attacks, where the malicious agent has physical access to the target system. However, remote attacks that capture the leaked information from within the IC power grid are shown herein to be nonetheless effective at uncovering the secret key in the presence of these countermeasures/deterrents. Theoretical studies and experimental analysis are carried out to define and quantify the impact of integrated voltage regulators, voltage noise injection, and the integration of on-package decoupling capacitors for both remote and local attacks. One outcome yielded by the studies is that the use of an integrated voltage regulator as a countermeasure is effective against a local attack. However, remote attacks remain effective and hence break the integrated voltage regulator countermeasure. From experimental analysis, it is observed that, within the range of practical design values, the adoption of on-package decoupling capacitors provides only a 1.3x increase in the minimum number of traces required to discover the secret key, whereas the injection of noise in the IC power delivery network yields a 37x increase in that number. Thus, increasing the number of on-package decoupling capacitors or the impedance between the local probing site and the IC power grid should not be relied on as countermeasures to power side-channel attacks in remote attack scenarios. Noise injection should be considered instead, as it is more effective at scrambling the leaked signal and eliminating sensitive identifying information. However, the analysis and experiments carried out herein are applied to regular symmetric ciphers, which are not suitable for protecting Internet of Things (IoT) devices. The protection of communications between IoT devices is of great concern because the information exchanged contains vital sensitive data. Malicious agents seek to exploit those data to extract secret information about the owners or the system. Power side-channel attacks are of great concern for these devices because their power consumption unintentionally leaks information correlatable to the device's secret data. Several studies have demonstrated the effectiveness of authenticated encryption with associated data (AEAD) in protecting communications with these devices. In this research, we propose a comprehensive evaluation of the ten algorithm finalists of the National Institute of Standards and Technology (NIST) IoT lightweight cipher competition. The study shows that some of them nonetheless still present residual vulnerabilities to power side-channel attacks (SCA). For five ciphers, we propose an attack methodology as well as the leakage function needed to perform correlation power analysis (CPA). We assert that the security vulnerability of Ascon, Sparkle, and PHOTON-Beetle can generally be assessed under the security assumptions of a chosen-ciphertext attack with leakage in encryption only under a nonce-misuse-resilient adversary (CCAmL1) and a chosen-ciphertext attack with leakage in encryption only under a nonce-respecting adversary (CCAL1), respectively. The security vulnerability of GIFT-COFB, Grain, Romulus, and TinyJambu, however, can be evaluated more straightforwardly with publicly available leakage models and solvers, or simply by increasing the number of traces collected to launch the attack.
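
    The correlation power analysis (CPA) mentioned above can be summarized by a minimal sketch: predict an intermediate value's Hamming-weight leakage for every key-byte guess and correlate the prediction with the measured traces. The XOR intermediate and the simulated traces below are assumptions for illustration; a real attack on AES would target a nonlinear value such as the S-box output.

```python
import numpy as np

def hamming_weight(x):
    return bin(x).count("1")

def cpa_recover_key_byte(traces, plaintexts, intermediate):
    """Minimal CPA sketch: for every key-byte guess, predict the leakage of an
    intermediate value under the Hamming-weight model and correlate it with each
    trace sample; the guess with the strongest positive correlation wins."""
    best = (None, -1.0)
    for guess in range(256):
        hyp = np.array([hamming_weight(intermediate(p, guess)) for p in plaintexts],
                       dtype=float)
        for t in range(traces.shape[1]):
            corr = np.corrcoef(hyp, traces[:, t])[0, 1]
            if corr > best[1]:
                best = (guess, corr)
    return best

# Simulated measurements: Hamming-weight leakage of p XOR key plus Gaussian noise.
# The XOR intermediate is only a stand-in; with a purely linear target the
# complementary guess k ^ 0xFF yields an equally strong negative correlation,
# which is why real attacks prefer a nonlinear target such as sbox[p ^ k].
rng = np.random.default_rng(1)
secret_key = 0x3C
plaintexts = [int(p) for p in rng.integers(0, 256, size=500)]
leak = np.array([hamming_weight(p ^ secret_key) for p in plaintexts], dtype=float)
traces = leak[:, None] + rng.normal(0.0, 1.0, size=(500, 5))
print(cpa_recover_key_byte(traces, plaintexts, intermediate=lambda p, k: p ^ k))
```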

    Observations on the Dynamic Cube Attack of 855-Round TRIVIUM from Crypto'18

    Recently, another kind of dynamic cube attack was proposed by Fu et al. With some key guesses and a transformation of the output bit, they claim that, when the key guesses are correct, the degree of the transformed output bit drops so significantly that high-degree terms vanish, making the output bit vulnerable to a zero-sum cube tester using cubes of slightly higher dimension. They applied their method to 855-round TRIVIUM. In order to verify the correctness of their result, they also proposed a practical attack on 721-round TRIVIUM, claiming that the transformed output bit after 721 rounds of initialization does not contain cubes of dimension 31 and below. However, the degree evaluation algorithm used by Fu et al. is innovative and complicated, and its complexity is not given. Their algorithm can only be implemented on huge clusters and cannot be verified by existing theoretical tools. In this paper, we theoretically analyze the dynamic cube attack method of Fu et al. using the division property and MILP modeling techniques. Firstly, we draw links between the division property and Fu et al.'s dynamic cube attack so that their method can be described as a theoretically well-founded and computationally economic MILP-aided, division-property-based cube attack. With the MILP model drawn according to the division property, we analyze 721-round TRIVIUM in detail and find some interesting results:
    1. The degree evaluation using our MILP method is more accurate than that of Fu et al. Fu et al. prove that the degree of the pure z_721 is 40, while our method gives 29. We practically verified the correctness of our method by trying thousands of random keys, random 30-dimensional cubes, and random assignments to non-cube IVs, finding that the summations are constantly 0.
    2. For the transformed output bit (1 + s^1_290)·z_721, we prove the same degree 31 as Fu et al., and we also find that 32-dimensional cubes have the zero-sum property for correct key guesses. But since the degree of the pure z_721 is only 29, the 721-round practical attack on TRIVIUM violates the principle of Fu et al.'s work: after the transformation of the output bit, when the key guesses are correct, the degree of the transformed output bit has not dropped but risen.
    3. Now that the degree-theoretic foundation of the 721-round attack has been violated, we also find that the key-recovery attack cannot be carried out either. We theoretically prove and practically verify that, whether the key guesses are correct or incorrect, the summations over 32-dimensional cubes are always 0, so no key bit can be recovered at all.
    All these analyses of 721-round TRIVIUM can be verified practically, and we make our C++ implementation source code publicly available. Secondly, we revisit their 855-round result. Our MILP model reveals that the 855-round result suffers from the same problems as its 721-round counterpart. We provide theoretical evidence that, after their transformation, the degree of the output bit is more likely to rise than to drop. Furthermore, since Fu et al.'s degree evaluation is written in an unclear manner and no complexity analysis is given, we rewrite the algorithm according to their main ideas and supplement it with a detailed complexity analysis. Our analysis indicates that a precise evaluation of the degree requires complexities far beyond practical reach. We also demonstrate that further abbreviation of our rewritten algorithm can result in wrong evaluations. This might be the reason why Fu et al. give such a degree evaluation, and it is an additional argument against Fu et al.'s dynamic cube attack method. Thirdly, the selection of Fu et al.'s cube dimension is also questionable. According to our experiments and existing theoretical results, there is a high risk that the correct key guesses and wrong ones share the same zero-sum property under Fu et al.'s cube testers. As a remedy, we suggest that concrete cubes satisfying particular conditions should be identified, rather than relying on the IV-degree-drop hypothesis. To conclude, Fu et al.'s dynamic cube attack on 855-round TRIVIUM is questionable; 855-round TRIVIUM, as well as variants with 840 or more rounds, should still be regarded as open for further convincing cryptanalysis.
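
    The wrong-key check advocated above can be illustrated with a toy experiment: compute cube sums of a gated output bit for both the correct and a wrong guess of the gating key expression and see whether they ever differ. The oracle and the gating transformation below are toy stand-ins chosen for this sketch, not TRIVIUM or Fu et al.'s actual transformation.

```python
from itertools import product
import random

def cube_sum_transformed(f, key, guess_bit, cube_positions, iv_len):
    """XOR the transformed bit (1 ^ guess_bit) & z over all cube assignments.
    Gating by a guessed key expression only mirrors the spirit of the dynamic
    cube transformation; f and the gate are toy stand-ins."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_positions)):
        iv = [0] * iv_len
        for pos, b in zip(cube_positions, bits):
            iv[pos] = b
        acc ^= (1 ^ guess_bit) & f(key, iv)
    return acc

def toy_cipher(key, iv):
    # One high-degree term gated by the key expression key[0] ^ key[1].
    return ((key[0] ^ key[1]) & iv[0] & iv[1] & iv[2]) ^ (key[2] & iv[3]) ^ iv[4]

def wrong_key_check(trials=100):
    """A cube tester only recovers key information if correct and wrong guesses
    lead to different cube-sum behaviour; if both always give zero sums, no key
    bit is actually recovered (the issue raised in the abstract above)."""
    cube = [0, 1, 2]
    distinguishable = 0
    for _ in range(trials):
        key = [random.randint(0, 1) for _ in range(3)]
        correct = key[0] ^ key[1]
        s_ok = cube_sum_transformed(toy_cipher, key, correct, cube, iv_len=8)
        s_bad = cube_sum_transformed(toy_cipher, key, correct ^ 1, cube, iv_len=8)
        distinguishable += int(s_ok != s_bad)
    return distinguishable / trials

print(wrong_key_check())  # ~0.5 for this toy oracle: guesses are distinguishable
```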

    Analysis and Design of Lightweight Encryption Algorithms (Analyse et Conception d'Algorithmes de Chiffrement Légers)

    The work presented in this thesis has been completed as part of the FUI Paclido project, whose aim is to provide new security protocols and algorithms for the Internet of Things, and more specifically for wireless sensor networks. Accordingly, this thesis investigates so-called lightweight authenticated encryption algorithms, which are designed to fit into the limited resources of constrained environments. The first main contribution focuses on the design of a lightweight cipher called Lilliput-AE, which is based on the extended generalized Feistel network (EGFN) structure and was submitted to the Lightweight Cryptography (LWC) standardization project initiated by NIST (National Institute of Standards and Technology). Another part of the work concerns theoretical attacks against existing solutions, including some candidates of the NIST LWC standardization process. It therefore presents dedicated analyses of the Skinny and Spook algorithms, along with a more general study of boomerang attacks against ciphers following a Feistel construction.
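
    As background for the Feistel-type structures discussed above, the sketch below implements one round of a generic 4-branch Type-2 generalized Feistel network with a toy round function; it illustrates only the general principle, not the extended GFN (EGFN) used by Lilliput-AE.

```python
def gfn_round(branches, round_keys, f):
    """One round of a 4-branch Type-2 generalized Feistel network: every other
    branch is XORed with a keyed function of its left neighbour, then the
    branches are cyclically rotated."""
    x0, x1, x2, x3 = branches
    x1 ^= f(x0, round_keys[0])
    x3 ^= f(x2, round_keys[1])
    return [x1, x2, x3, x0]  # cyclic left rotation of the branches

def toy_f(x, k):
    # Toy keyed round function on 16-bit branches (placeholder, not a real S-box layer).
    x = (x + k) & 0xFFFF
    return (((x << 3) | (x >> 13)) & 0xFFFF) ^ 0x5A5A

state = [0x0123, 0x4567, 0x89AB, 0xCDEF]
for r in range(8):
    state = gfn_round(state, [(2 * r + 1) & 0xFFFF, (2 * r + 2) & 0xFFFF], toy_f)
print([hex(b) for b in state])
```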

    Software Implementation of Authenticated Ciphers for ARM Processors (Implementação em software de cifradores autenticados para processadores ARM)

    Advisor: Julio César López Hernández. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação. Authenticated encryption algorithms are tools used to protect data in a way that guarantees its secrecy, authenticity, and integrity. Cryptographic implementations do not have only correctness and efficiency as their main goals: computer systems can leak information about their internal behavior, and a bad implementation can compromise the security of a good algorithm. Therefore, this dissertation aims to study how to implement cryptographic algorithms correctly and efficiently, and the methods for optimizing them without losing their security characteristics. One important aspect to take into account during implementation and optimization is the target architecture. In this dissertation, the focus is on the ARM family of processors. ARM is one of the most widespread architectures in the world, with more than 100 billion chips deployed. This dissertation focuses on studying and implementing two families of authenticated encryption algorithms, NORX and Ascon, targeting 32-bit and 64-bit ARM Cortex-A processors. We describe a pipeline-oriented optimization technique for NORX that performs better than the current state of the art, and we discuss the techniques used in a vectorized implementation of NORX. We also describe and analyze the characteristics of a vectorized implementation of Ascon, as well as a multiple-message vectorized implementation.
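
    The Ascon permutation that such implementations optimize consists of a round-constant addition, a bitsliced 5-bit S-box layer, and a linear diffusion layer over five 64-bit words. The plain scalar Python sketch below follows the public specification of the permutation; word-loading conventions and the surrounding AEAD mode are omitted, and the vectorized/pipelined ARM code discussed above computes the same operations with NEON registers.

```python
MASK64 = (1 << 64) - 1

def rotr(x, n):
    return ((x >> n) | (x << (64 - n))) & MASK64

def ascon_round(x, c):
    """One round of the Ascon permutation on five 64-bit words."""
    x0, x1, x2, x3, x4 = x
    x2 ^= c  # round-constant addition
    # Substitution layer (bitsliced 5-bit S-box).
    x0 ^= x4; x4 ^= x3; x2 ^= x1
    t0 = (~x0 & MASK64) & x1
    t1 = (~x1 & MASK64) & x2
    t2 = (~x2 & MASK64) & x3
    t3 = (~x3 & MASK64) & x4
    t4 = (~x4 & MASK64) & x0
    x0 ^= t1; x1 ^= t2; x2 ^= t3; x3 ^= t4; x4 ^= t0
    x1 ^= x0; x0 ^= x4; x3 ^= x2; x2 = ~x2 & MASK64
    # Linear diffusion layer.
    x0 ^= rotr(x0, 19) ^ rotr(x0, 28)
    x1 ^= rotr(x1, 61) ^ rotr(x1, 39)
    x2 ^= rotr(x2, 1) ^ rotr(x2, 6)
    x3 ^= rotr(x3, 10) ^ rotr(x3, 17)
    x4 ^= rotr(x4, 7) ^ rotr(x4, 41)
    return [x0, x1, x2, x3, x4]

def ascon_permutation(state, rounds=12):
    """Apply `rounds` rounds of p; the constants are those of the 12-round p^12,
    with reduced-round variants using the last `rounds` constants."""
    constants = [0xf0, 0xe1, 0xd2, 0xc3, 0xb4, 0xa5,
                 0x96, 0x87, 0x78, 0x69, 0x5a, 0x4b]
    for c in constants[12 - rounds:]:
        state = ascon_round(state, c)
    return state

print([hex(w) for w in ascon_permutation([0, 0, 0, 0, 0])])
```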

    Security of Ubiquitous Computing Systems

    The chapters in this open access book arise out of the EU COST Action project Cryptacus, the objective of which was to improve and adapt existing cryptanalysis methodologies and tools to the ubiquitous computing framework. The cryptanalysis work is organized along four axes: cryptographic models, cryptanalysis of building blocks, hardware and software security engineering, and security assessment of real-world systems. The authors are top-class researchers in security and cryptography, and the contributions are of value to researchers and practitioners in these domains. This book is open access under a CC BY license.