    Decorrelation: a theory for block cipher security

    Pseudorandomness is a classical model for the security of block ciphers. In this paper we propose convenient tools for studying it in connection with Shannon's theory, the Carter-Wegman universal hash function paradigm, and the Luby-Rackoff approach. This enables the construction of new ciphers with security proofs under specific models. We show how to ensure security against basic differential and linear cryptanalysis, and even against more general attacks. We propose practical construction scheme…
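    For context, here is a compressed and informal sketch of the central quantity in this framework, in notation of my own choosing rather than quoted from the paper: the d-wise distribution matrix of a cipher C, and the order-d decorrelation as its distance to that of a perfect cipher C*.

        [C]^d_{(x_1,\dots,x_d),(y_1,\dots,y_d)} = \Pr\bigl[C(x_1)=y_1,\ \dots,\ C(x_d)=y_d\bigr]

        \mathrm{Dec}^d(C) = \bigl\| [C]^d - [C^*]^d \bigr\|

    Under a suitable matrix norm, a small Dec^2(C) bounds the advantage of any adversary limited to two queries, which is how security against basic differential and linear cryptanalysis is derived.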

    Statistical cryptanalysis of block ciphers

    Since cryptology took off in the industrial and academic worlds in the seventies, public knowledge and expertise have grown tremendously, notably because of the increasing, nowadays almost ubiquitous, presence of electronic communication in our lives. Block ciphers are unavoidable building blocks of the security of many electronic systems. Recently, many advances have been published in the field of public-key cryptography, be it in the understanding of the security models involved or in the mathematical security proofs applied to specific cryptosystems. Unfortunately, this is not yet the case in the world of symmetric-key cryptography, where the current state of knowledge is far from such a goal. Block and stream ciphers, however, tend to counterbalance this lack of "provable security" with other advantages, such as high data throughput and ease of implementation. In the first part of this thesis, we add a (small) stone to the wall of provable security of block ciphers with a theoretical and experimental statistical analysis of the mechanisms behind Matsui's linear cryptanalysis, as well as of more abstract models of attacks. For this purpose, we treat the underlying problem as a statistical hypothesis testing problem and make heavy use of the Neyman-Pearson paradigm. We then generalize the concept of a linear distinguisher and discuss the power of such a generalization. Furthermore, we introduce the concepts of sequential distinguishers, based on sequential sampling, and of aggregate distinguishers, which allow the construction of sub-optimal but efficient distinguishers. Finally, we propose new attacks against reduced-round versions of the block cipher IDEA. In the second part, we propose the design of a new family of block ciphers named FOX. We first study the efficiency of optimal diffusive components when implemented on low-cost architectures and present several new constructions of MDS matrices; we then describe FOX precisely and discuss its security with respect to linear and differential cryptanalysis, integral attacks, and algebraic attacks. Finally, various implementation issues are considered.
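    The hypothesis-testing view mentioned in this abstract can be made concrete with a toy sketch (all names and parameters below are illustrative and not taken from the thesis): a single linear approximation that holds with probability 1/2 + eps under the cipher and 1/2 under a random permutation, distinguished with the Neyman-Pearson likelihood-ratio test, which in this binomial setting reduces to a threshold on a counter.

        # Toy sketch in Python, assuming a single biased bit is the only observable.
        import math
        import random

        def sample_count(n, p):
            """Number of samples (out of n) for which the approximation holds."""
            return sum(random.random() < p for _ in range(n))

        def says_cipher(count, n, eps):
            """Neyman-Pearson test: True iff the likelihood ratio favours the biased source."""
            llr = (count * math.log((0.5 + eps) / 0.5)
                   + (n - count) * math.log((0.5 - eps) / 0.5))
            return llr > 0

        if __name__ == "__main__":
            eps, n, trials = 0.01, 40_000, 200  # n on the order of 1/eps^2, as the theory predicts
            tp = sum(says_cipher(sample_count(n, 0.5 + eps), n, eps) for _ in range(trials))
            fp = sum(says_cipher(sample_count(n, 0.5), n, eps) for _ in range(trials))
            print(f"empirical advantage ~ {(tp - fp) / trials:.2f}")

    With a number of samples on the order of 1/eps^2 the empirical advantage is already close to 1, matching the usual data-complexity estimate for Matsui-style linear distinguishers.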

    Quantitative security of block ciphers: designs and cryptanalysis tools

    Block ciphers are probably among the most important cryptographic primitives. Although they are used for many different purposes, their essential goal is to ensure confidentiality. This thesis is concerned with their quantitative security, that is, with measurable attributes that reflect their ability to guarantee this confidentiality. The first part of this thesis deals with well-known results. Starting with Shannon's theory of secrecy, we move to its practical implications for block ciphers, recall the main schemes on which today's block ciphers are based, and introduce the Luby-Rackoff security model. We describe distinguishing attacks and key-recovery attacks against block ciphers and show how to turn the former into the latter. As an illustration, we recall linear cryptanalysis, a classical example of statistical cryptanalysis. In the second part, we consider the (in)security of block ciphers against statistical cryptanalytic attacks and develop tools to mount optimal attacks and quantify their efficiency. We start with a simple setting in which the adversary has to distinguish between two sources of randomness and show how an optimal strategy can be derived in certain cases. We proceed to the practical situation where the cardinality of the sample space is too large for the optimal strategy to be implemented, and show how this naturally leads to the concept of projection-based distinguishers, which reduce the sample space by compressing the samples. Within this setting, we reconsider the particular case of linear distinguishers and generalize them to sets of arbitrary cardinality. We show how these distinguishers between random sources can be turned into distinguishers between random oracles (or block ciphers) and how, in this setting, linear cryptanalysis can be generalized to Abelian groups. As a proof of concept, we show how to break the block cipher TOY100, introduce the block cipher DEAN, which encrypts blocks of decimal digits, and apply the theory to the SAFER block cipher family. In the last part of this thesis, we introduce two new constructions. We start by recalling some essential notions about provable security for block ciphers and about Serge Vaudenay's Decorrelation Theory, and introduce new simple modules for which we prove essential properties that are later used in our designs. We then present the block cipher C and prove that it is immune to a wide range of cryptanalytic attacks. In particular, we compute the exact advantage of the best distinguisher limited to two plaintext/ciphertext samples between C and the perfect cipher, and use it to compute the exact value of the maximum expected linear probability (resp. differential probability) of C, which is known to be inversely proportional to the number of samples required by the best possible linear (resp. differential) attack. We then introduce KFC, a block cipher that builds on the same foundations as C but for which we can prove results against higher-order adversaries. We conclude the discussions of both C and KFC with implementation considerations.
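    As a reminder of the quantities at stake (a standard formulation, not quoted from the thesis): for an adversary A that sees q samples and must tell a source P_1 from a source P_0, the advantage, and its maximum over all computationally unbounded adversaries, can be written as

        \mathrm{Adv}(\mathcal{A}) = \bigl| \Pr_{P_1}[\mathcal{A}=1] - \Pr_{P_0}[\mathcal{A}=1] \bigr|,
        \qquad
        \max_{\mathcal{A}} \mathrm{Adv}(\mathcal{A}) = \tfrac{1}{2} \sum_{x} \bigl| P_1^{\otimes q}(x) - P_0^{\otimes q}(x) \bigr|,

    the maximum being reached by the likelihood-ratio test that outputs 1 exactly when P_1^{\otimes q}(x) \ge P_0^{\otimes q}(x). The projection-based distinguishers mentioned in the abstract trade some of this optimal advantage for a sample space small enough to enumerate.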

    A Salad of Block Ciphers

    This book is a survey of the state of the art in block cipher design and analysis. It is a work in progress, and has been for the better part of the last three years; sadly, for various reasons, no significant change has been made during the last twelve months. However, it is also in a self-contained, usable, and relatively polished state, and for this reason I have decided to release this snapshot to the public as a service to the cryptographic community, both to obtain feedback and to give something back to the community from which I have learned much. At some point I will produce a final version (whatever a "final version" means in the constantly evolving field of block cipher design) and publish it. In the meantime I hope the material contained here will be useful to other people.