5 research outputs found

    Spectral characterization of iterating lossy mappings

    In this paper we study what happens to sets when we iteratively apply lossy (round) mappings to them. We describe the information loss as imbalances of parities of intermediate distributions and show that their evolution is governed by the correlation matrices of the mappings. At the macroscopic level we show that iterating lossy mappings results in an increase of a quantity we call total imbalance. We quantify the increase in total imbalance as a function of the number of iterations and of round-mapping characteristics. At the microscopic level we show that the imbalance of a parity located in some round, dubbed final, is the sum of distinct terms. Each of these terms consists of the imbalance of a parity located at the output of a round, multiplied by the sum of the correlation contributions of all linear trails between that parity and the final parity. We illustrate our theory with experimental data. The developed theory can be applied whenever lossy mappings are repeatedly applied to a state, as is the case in many modes of block ciphers and permutations, e.g., for iterated hashing or self-synchronizing stream encryption. The main reason we developed it, however, is to study the security implications of using non-uniform threshold schemes as a countermeasure against differential power and electromagnetic analysis.
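
    A minimal sketch of the imbalance bookkeeping described above (a toy formalization with assumed definitions, not the authors' code): we take the imbalance of a parity u of a distribution p over n-bit states to be the signed sum of p(x)(-1)^(u.x) over all states x, and the total imbalance to be the sum of squared imbalances over all nonzero u. Pushing a distribution through a random lossy mapping then makes the total imbalance grow with the number of rounds.

```python
# Toy illustration (assumed definitions, not the paper's code): the imbalance
# of parity u of a distribution p over n-bit states is sum_x p(x) * (-1)^(u.x);
# the total imbalance is the sum of squared imbalances over all nonzero u.
import random

n = 4
states = list(range(2 ** n))

def parity(u, x):
    """Parity u applied to state x: popcount(u & x) mod 2."""
    return bin(u & x).count("1") & 1

def imbalances(p):
    """Signed parity sums (Walsh-Hadamard spectrum) of a distribution p."""
    return [sum(p[x] * (-1) ** parity(u, x) for x in states)
            for u in states]

def total_imbalance(p):
    return sum(c * c for u, c in enumerate(imbalances(p)) if u != 0)

def apply_mapping(p, f):
    """Push the distribution p through the mapping f (lossy if not bijective)."""
    q = [0.0] * len(states)
    for x in states:
        q[f[x]] += p[x]
    return q

random.seed(1)
f = [random.randrange(2 ** n) for _ in states]   # a random lossy round mapping

p = [1.0 / 2 ** n] * (2 ** n)                    # uniform initial distribution
for r in range(5):
    print(f"after {r} rounds: total imbalance = {total_imbalance(p):.4f}")
    p = apply_mapping(p, f)
```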

    Changing of the Guards: a simple and efficient method for achieving uniformity in threshold sharing

    Since they were first proposed as a countermeasure against differential power analysis (DPA) in 2006, threshold schemes have attracted a lot of attention from the community working on cryptographic implementations. What makes threshold schemes so attractive from an academic point of view is that they come with an information-theoretic proof of resistance against a specific subset of side-channel attacks: first-order DPA. From an industrial point of view they are attractive because a careful threshold implementation forces adversaries to resort to higher-order DPA, with all its problems, such as noise amplification. A threshold scheme that offers the mentioned provable security must exhibit three properties: correctness, incompleteness and uniformity. A threshold scheme becomes more expensive with the number of shares that must be implemented, and the required number of shares is lower-bounded by the algebraic degree of the function being shared plus 1. Defining a correct and incomplete sharing of a function of degree d in d+1 shares is straightforward. However, up to now there is no generic method to achieve uniformity, and finding uniform sharings of degree-d functions with d+1 shares is an active research area. In this paper we present a simple and relatively cheap method to find a correct, incomplete and uniform (d+1)-share threshold scheme for any S-box layer consisting of degree-d invertible S-boxes. The uniformity is not implemented in the sharings of the individual S-boxes but rather at the S-box layer level, by the use of feed-forward and some expansion of shares. When applied to Chi, the nonlinear step of Keccak-p, its cost is very small.
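
    To make the three properties concrete, the sketch below shows the classic correct and incomplete 3-share sharing of the degree-2 function c = a AND b, i.e., d = 2 with d+1 = 3 shares (a standard textbook sharing, not the paper's construction). This sharing is not uniform; closing that gap at the S-box layer level is exactly what the feed-forward method of the paper addresses.

```python
# Classic correct and incomplete 3-share sharing of c = a AND b (degree d = 2,
# d + 1 = 3 shares). It is deliberately NOT uniform: restoring uniformity is
# the gap the Changing of the Guards construction closes at the layer level.
from itertools import product

def share_and(a, b):
    """a = (a1, a2, a3), b = (b1, b2, b3) with a = a1^a2^a3, b = b1^b2^b3."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # does not use share 1 (incomplete)
    c2 = (a3 & b3) ^ (a3 & b1) ^ (a1 & b3)   # does not use share 2
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # does not use share 3
    return c1, c2, c3

# Correctness: for every input sharing, the output shares XOR to a AND b.
for a in product((0, 1), repeat=3):
    for b in product((0, 1), repeat=3):
        c1, c2, c3 = share_and(a, b)
        assert c1 ^ c2 ^ c3 == (a[0] ^ a[1] ^ a[2]) & (b[0] ^ b[1] ^ b[2])
print("correctness and incompleteness hold for all 64 input sharings")
```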

    Shuffle and Mix: On the Diffusion of Randomness in Threshold Implementations of Keccak

    Threshold Implementations are well known as a provably first-order secure Boolean masking scheme, even in the presence of glitches. A precondition for their security proof is a uniform input distribution at each round function, which may require an injection of fresh randomness or an increase in the number of shares. However, it is unclear whether violating the uniformity assumption causes exploitable leakage in practice. Recently, Daemen undertook a theoretical study of lossy mappings to extend the understanding of uniformity violations. We complement his work with entropy simulations and practical measurements of Keccak’s round function. Our findings shed light on the necessity of mixing operations, in addition to bit-permutations, in a cipher’s linear layer to propagate randomness between S-boxes and prevent exploitable leakage. Finally, we argue that this result cannot be obtained by current simulation methods, further stressing the continued need for practical leakage measurements.
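
    As an illustration of what such an entropy simulation might look like (an assumed toy setup, not the authors' experiment), the sketch below iterates a lossy 3-bit S-box layer on a 6-bit state under two linear layers, a pure bit-permutation and a permutation followed by an XOR mixing step, and tracks the Shannon entropy of the state distribution.

```python
# Assumed toy setup (not the authors' experiment): a 6-bit state made of two
# 3-bit S-boxes, iterated with either a pure bit-permutation or a permutation
# plus an XOR mixing step, tracking the Shannon entropy of the distribution.
import math, random

random.seed(2)
SBOX = [random.randrange(8) for _ in range(8)]   # a random, lossy 3-bit S-box

def sbox_layer(x):
    return SBOX[x & 7] | (SBOX[x >> 3] << 3)

def permute_only(x):
    return ((x << 1) | (x >> 5)) & 0x3F          # rotate the 6 bits by one

def permute_and_mix(x):
    x = permute_only(x)
    return x ^ ((x & 7) << 3)                    # XOR low half into high half

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

for name, linear in (("bit-permutation only", permute_only),
                     ("permutation + mixing", permute_and_mix)):
    p = [1 / 64] * 64                            # uniform initial distribution
    for _ in range(6):
        q = [0.0] * 64
        for x in range(64):
            q[linear(sbox_layer(x))] += p[x]
        p = q
    print(f"{name}: entropy after 6 rounds = {entropy(p):.3f} bits")
```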

    Cryptanalysis of Masked Ciphers: A not so Random Idea

    A new approach to the security analysis of hardware-oriented masked ciphers against second-order side-channel attacks is developed. By relying on techniques from symmetric-key cryptanalysis, concrete security bounds are obtained in a variant of the probing model that allows the adversary to make only a bounded, but possibly very large, number of measurements. Specifically, it is formally shown how a bounded-query variant of robust probing security can be reduced to the linear cryptanalysis of masked ciphers. As a result, the compositional issues of higher-order threshold implementations can be overcome without relying on fresh randomness. From a practical point of view, the aforementioned approach makes it possible to transfer many of the desirable properties of first-order threshold implementations, such as their low randomness usage, to the second-order setting. For example, a straightforward application to the block cipher LED results in a masking using fewer than 700 random bits, including the initial sharing. In addition, the cryptanalytic approach introduced in this paper provides additional insight into the design of masked ciphers and allows for a quantifiable trade-off between security and performance.
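
    The quantitative link between trail correlations and the bounded number of measurements can be illustrated with a standard rule of thumb from linear cryptanalysis (a heuristic sketch, not the paper's exact bound): exploiting a linear approximation with correlation c takes on the order of 1/c^2 samples, so an upper bound on the maximum absolute trail correlation of the masked cipher translates into a bound on the number of useful measurements.

```python
# Heuristic sketch (a rule of thumb, not the paper's exact bound): exploiting
# a linear approximation with correlation c takes on the order of 1/c^2
# measurements, so an upper bound on the trail correlation of a masked cipher
# translates into a lower bound on the measurements an adversary needs.
import math

def measurements_needed(c):
    """Rough data complexity of a linear distinguisher with correlation c."""
    return 1.0 / c ** 2

for log2_c in (-8, -16, -32):                    # assumed maximum correlations
    c = 2.0 ** log2_c
    m = measurements_needed(c)
    print(f"|c| = 2^{log2_c}: about 2^{math.log2(m):.0f} measurements")
```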

    Cryptanalysis, Reverse-Engineering and Design of Symmetric Cryptographic Algorithms

    In this thesis, I present the research I did with my co-authors on several aspects of symmetric cryptography from May 2013 to December 2016, that is, when I was a PhD student at the University of Luxembourg under the supervision of Alex Biryukov. My research has spanned three different areas of symmetric cryptography.

    In Part I of this thesis, I present my work on lightweight cryptography. This field of study investigates the cryptographic algorithms that are suitable for very constrained devices with little computing power, such as RFID tags and the small embedded processors used in sensor networks. Many such algorithms have been proposed recently, as evidenced by the survey I co-authored on this topic. I present this survey along with attacks against three of those algorithms, namely GLUON, PRINCE and TWINE. I also introduce a new lightweight block cipher called SPARX, which was designed using a new method to justify its security: the Long Trail Strategy.

    Part II is devoted to S-box reverse-engineering, a field of study investigating methods for recovering the hidden structure or the design criteria used to build an S-box. I co-invented several such methods: a statistical analysis of the differential and linear properties, which was applied successfully to the S-box of the NSA block cipher Skipjack; a structural attack against Feistel networks called the yoyo game; and the TU-decomposition. This last technique allowed us to decompose the S-box of the latest Russian standard block cipher and hash function, as well as the only known solution to the APN problem, a long-standing open question in mathematics.

    Finally, Part III presents a unifying view of several fields of symmetric cryptography by interpreting them as purposefully hard. Indeed, several cryptographic algorithms are designed so as to maximize the code size, RAM consumption or time taken by their implementations. By providing a unique framework describing all such design goals, we could design modes of operation for building any symmetric primitive with any form of hardness, by combining secure cryptographic building blocks with simple functions that have the desired form of hardness, called plugs. Alex Biryukov and I also showed that it is possible to build plugs with asymmetric hardness, whereby knowledge of a secret key allows a privileged user to bypass the hardness of the primitive.