6 research outputs found

    Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme

    HCTR, proposed by Wang et al., is one of the most efficient candidates among tweakable enciphering schemes; it turns an n-bit block cipher into a variable-input-length tweakable block cipher. Wang et al. showed that HCTR offers a cubic security bound against all adaptive chosen-plaintext and chosen-ciphertext adversaries. Later, at FSE 2008, Chakraborty and Nandi improved this bound to O(σ²/2ⁿ), where σ is the total number of blocks queried and n is the block size of the block cipher. In this paper, we propose tweakable HCTR, which turns an n-bit tweakable block cipher into a variable-input-length tweakable block cipher by replacing all the block cipher calls of HCTR with tweakable block cipher calls. We show that when there is no repetition of the tweak, tweakable HCTR enjoys optimal security against all adaptive chosen-plaintext and chosen-ciphertext adversaries. If the repetition of the tweak is limited, the security of the construction remains close to the bound of the no-repetition case; the scheme thus degrades gracefully with the maximum number of tweak repetitions.
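As a rough numerical illustration (not taken from the paper itself, and ignoring the constants hidden in the O-notation), the two quoted bounds can be compared directly; the cubic bound is written here as σ³/2ⁿ for concreteness:

```python
from fractions import Fraction

def improved_bound(sigma: int, n: int) -> Fraction:
    """Chakraborty-Nandi bound for HCTR: O(sigma^2 / 2^n), constants ignored."""
    return Fraction(sigma**2, 2**n)

def cubic_bound(sigma: int, n: int) -> Fraction:
    """Wang et al.'s cubic bound, taken as sigma^3 / 2^n for illustration."""
    return Fraction(sigma**3, 2**n)

# n = 128-bit blocks, sigma = 2^32 blocks queried across all queries
print(improved_bound(2**32, 128))  # 1/2^64: distinguishing advantage is negligible
print(cubic_bound(2**32, 128))     # 1/2^32: a much weaker guarantee for the same data
```

The gap grows with σ, which is why the improvement from a cubic to a quadratic (birthday-type) bound, and then to the beyond-birthday-bound (BBB) security claimed in the title, matters in practice.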

    Security in banking

    We examine the security of the Australian card payment system by analysing existing cryptographic protocols. In this analysis, we examine TDES and DES-V key derivation and the use of secure cryptographic devices, and contrast these with alternative mechanisms for enabling secure card payments. We compare current Australian cryptographic methods with their international counterparts, such as the ANSI methods, and then motivate alternative methods for authenticated encryption in card payment systems.

    DEFAULT: cipher level resistance against differential fault attack

    Differential Fault Analysis (DFA) is a well-known cryptanalytic technique that exploits faulty outputs of an encryption device. Despite its popularity and its similarity with classical Differential Analysis (DA), a thorough analysis explaining DFA from a designer's point of view is missing from the literature. To the best of our knowledge, no block cipher immune to DFA at the algorithmic level has been proposed so far. Furthermore, all known DFA countermeasures depend in some way on the device/protocol or on the implementation, such as duplication/comparison. As all of these are outside the scope of the cipher designer, we focus on designing a primitive which can protect against DFA on its own. We present the first concept of cipher-level DFA resistance which does not rely on any device/protocol-related assumption, nor does it depend on any form of duplication. Our construction is simple, software/hardware friendly, and its DFA security scales up with the state size. It can be plugged before and/or after (almost) any symmetric-key cipher and will ensure a non-trivial search complexity against DFA. One key component in our DFA protection layer is an SBox with linear structures. Such SBoxes have never been used in cipher design, as they generally perform poorly against differential attacks. We argue that they in fact represent an interesting trade-off between good cryptographic properties and DFA resistance. As a proof of concept, we construct a DFA-protecting layer, named DEFAULT-LAYER, as well as a full-fledged block cipher, DEFAULT. Our solutions compare favourably to the state of the art, offering advantages over sophisticated duplication-based solutions such as impeccable circuits/CRAFT or infective countermeasures.
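To make the key concept concrete: a linear structure of an S-box S is a direction a such that the derivative S(x ⊕ a) ⊕ S(x) is the same constant for every input x. The sketch below detects linear structures of a toy 4-bit S-box; the S-box is a hypothetical example built to have one nontrivial linear structure and is not the actual DEFAULT S-box.

```python
# Toy 4-bit S-box constructed for illustration (NOT the DEFAULT S-box):
# by design, flipping the top input bit (a = 8) always flips the output by 13.
S = [0, 1, 3, 6, 7, 4, 5, 2, 13, 12, 14, 11, 10, 9, 8, 15]

def linear_structures(sbox):
    """Return all pairs (a, c) with sbox[x ^ a] ^ sbox[x] == c for every x."""
    n = len(sbox)
    found = []
    for a in range(n):
        diffs = {sbox[x ^ a] ^ sbox[x] for x in range(n)}
        if len(diffs) == 1:          # derivative in direction a is constant
            found.append((a, diffs.pop()))
    return found

print(linear_structures(S))  # [(0, 0), (8, 13)]: a = 8 is a nontrivial linear structure
```

A fault injected along such a direction produces a fully predictable output difference, which is what makes these S-boxes useful against DFA, and at the same time it is why a differential attacker finds them attractive; hence the trade-off the abstract describes.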

    Cryptanalysis, Reverse-Engineering and Design of Symmetric Cryptographic Algorithms

    In this thesis, I present the research I did with my co-authors on several aspects of symmetric cryptography from May 2013 to December 2016, that is, while I was a PhD student at the University of Luxembourg under the supervision of Alex Biryukov. My research has spanned three different areas of symmetric cryptography. In Part I of this thesis, I present my work on lightweight cryptography. This field of study investigates the cryptographic algorithms that are suitable for very constrained devices with little computing power, such as RFID tags and the small embedded processors used in sensor networks. Many such algorithms have been proposed recently, as evidenced by the survey I co-authored on this topic. I present this survey along with attacks against three of those algorithms, namely GLUON, PRINCE and TWINE. I also introduce a new lightweight block cipher called SPARX, which was designed using a new method to justify its security: the Long Trail Strategy. Part II is devoted to S-Box reverse-engineering, a field of study investigating methods for recovering the hidden structure or the design criteria used to build an S-Box. I co-invented several such methods: a statistical analysis of the differential and linear properties, applied successfully to the S-Box of the NSA block cipher Skipjack; a structural attack against Feistel networks called the yoyo game; and the TU-decomposition. This last technique allowed us to decompose the S-Box of the latest Russian standard block cipher and hash function, as well as the only known solution to the APN problem, a long-standing open question in mathematics. Finally, Part III presents a unifying view of several fields of symmetric cryptography by interpreting them as purposefully hard. Indeed, several cryptographic algorithms are designed so as to maximize the code size, RAM consumption or time taken by their implementations. By providing a unique framework describing all such design goals, we could design modes of operation for building any symmetric primitive with any form of hardness, by combining secure cryptographic building blocks with simple functions that have the desired form of hardness, called plugs. Alex Biryukov and I also showed that it is possible to build plugs with asymmetric hardness, whereby the knowledge of a secret key allows a privileged user to bypass the hardness of the primitive.

    Biosensors

    A biosensor is defined as a detecting device that combines a transducer with a biologically sensitive and selective component. When a specific target molecule interacts with the biological component, a signal is produced at the transducer level, proportional to the concentration of the substance. Biosensors can therefore measure compounds present in the environment, in chemical processes, in food and in the human body at low cost compared with traditional analytical techniques. This book covers a wide range of aspects and issues related to biosensor technology, bringing together researchers from 11 different countries. The book consists of 16 chapters written by 53 authors. The first four chapters describe several aspects of nanotechnology applied to biosensors. The subsequent section, comprising three chapters, is devoted to biosensor applications in the fields of drug discovery, diagnostics and bacteria detection. The principles behind optical biosensors and some of their applications are discussed in chapters 8 to 11. The last five chapters treat microelectronics, interfacing circuits, signal transmission, biotelemetry and algorithms applied to biosensing.

    Unclonability and quantum cryptanalysis: from foundations to applications

    The impossibility of creating perfect identical copies of unknown quantum systems is a fundamental concept in quantum theory and one of the main non-classical properties of quantum information. This limitation imposed by quantum mechanics, famously known as the no-cloning theorem, has played a central role in quantum cryptography as a key component in the security of quantum protocols. In this thesis, we look at unclonability in a broader context in physics and computer science, and more specifically through the lens of cryptography, learnability and hardware assumptions. We introduce new notions of unclonability in the quantum world, namely quantum physical unclonability, and study its relationship with cryptographic properties and assumptions such as unforgeability, randomness and pseudorandomness. The purpose of this study is to bring new insights into the field of quantum cryptanalysis and into the notion of unclonability itself. We also discuss applications of this new type of unclonability as a cryptographic resource for designing provably secure quantum protocols.

    First, we study the unclonability of quantum processes and unitaries in relation to their learnability and unpredictability. The intuitive idea of unpredictability from a cryptographic perspective is formally captured by the notion of unforgeability: an adversary should not be able to produce the output of an unknown function or process from a limited number of input-output samples of it. Even though this notion is easily formalised in classical cryptography, translating it to the quantum world against a quantum adversary has proven challenging. One of our contributions is to define a new unified framework for analysing the unforgeability property of both classical and quantum schemes in the quantum setting. This framework is designed so that it can be readily related to the novel notions of unclonability defined in the following chapters. Another question that we try to address is: what is the fundamental property that leads to unclonability? In attempting to answer this question, we dig into the relationship between unforgeability and learnability, which motivates us to repurpose some learning tools as a new cryptanalysis toolkit. We introduce a new class of quantum attacks based on the concept of 'emulation' and learning algorithms, breaking new ground for more sophisticated algorithms for quantum cryptanalysis.

    Second, we formally represent, for the first time, the notion of physical unclonability in the quantum world by introducing Quantum Physical Unclonable Functions (qPUFs) as the quantum analogue of Physical Unclonable Functions (PUFs). A PUF is a hardware assumption introduced previously in the hardware-security literature: a physical device with unique behaviour, owing to manufacturing imperfections and natural uncontrollable disturbances, that is essentially hard to reproduce. We deliver the mathematical model for qPUFs, and we formally study their main desired cryptographic property, namely unforgeability, using our previously defined unforgeability framework. In light of these new techniques, we show several possibility and impossibility results regarding the unforgeability of qPUFs. We also discuss how the quantum version of physical unclonability relates to randomness and unknownness in the quantum world, exploring further the extended notion of unclonability.

    Third, we dive deeper into the connection between physical unclonability and related hardware assumptions, on the one hand, and quantum pseudorandomness on the other. Like unclonability in quantum information, pseudorandomness is a fundamental concept in cryptography and complexity. We uncover a deep connection between Pseudorandom Unitaries (PRUs) and quantum physical unclonable functions by proving that qPUFs and PRUs can be constructed from each other. We also provide a novel route towards realising quantum pseudorandomness, distinct from computational assumptions.

    Next, we propose new applications of unclonability in quantum communication, using the notion of physical unclonability as a new resource to achieve provably secure quantum protocols against quantum adversaries. We propose several protocols for mutual entity identification in a client-server or quantum network setting. Authentication and identification are building-block tasks for quantum networks, and our protocols can provide new resource-efficient applications for quantum communications. The proposed protocols use different quantum and hybrid (quantum-classical) PUF constructions and quantum resources, which we compare and attempt to reduce as much as possible throughout the various works we present. In particular, our hybrid construction can provide quantum security using limited quantum communication resources, which makes our protocols implementable and practical in the near term.

    Finally, we present a new practical cryptanalysis technique concerning the problem of approximate cloning of quantum states. We propose variational quantum cloning (VQC), a quantum machine learning-based cryptanalysis algorithm which allows an adversary to obtain optimal (approximate) cloning strategies with short-depth quantum circuits, trained using a hybrid classical-quantum technique. This approach enables the end-to-end discovery of hardware-efficient quantum circuits that clone specific families of quantum states, with applications in the foundations of quantum mechanics and in cryptography. In particular, we use a cloning-based attack on two quantum coin-flipping protocols and show that our algorithm can improve near-term attacks on these protocols, using approximate quantum cloning as a resource. Throughout this work, we demonstrate how the power of quantum learning tools as attacks, on the one hand, and the power of quantum unclonability as a security resource, on the other, fight against each other to break and to ensure security in the near-term quantum era.
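    For context (a well-known benchmark, not a result of the thesis): any approximate cloner, variational or otherwise, is bounded by the optimal fidelity of universal symmetric N-to-M qubit cloning, the Gisin-Massar bound F = (MN + M + N) / (M(N + 2)). A minimal sketch:

```python
from fractions import Fraction

def gisin_massar_fidelity(N: int, M: int) -> Fraction:
    """Optimal single-copy fidelity of universal symmetric N -> M qubit cloning:
    F = (M*N + M + N) / (M * (N + 2))  (Gisin-Massar bound)."""
    return Fraction(M * N + M + N, M * (N + 2))

print(gisin_massar_fidelity(1, 2))  # 5/6: the best any universal 1 -> 2 qubit cloner can achieve
```

As M grows, the 1-to-M fidelity decreases towards 2/3, the fidelity of simply measuring the state and preparing copies of the estimate; a cloning-based attack on a protocol is interesting precisely when it beats this measure-and-prepare baseline.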