
    Ciphertext-Policy Attribute-Based Broadcast Encryption with Small Keys

    Broadcasting is a very efficient way to securely transmit information to a large set of geographically scattered receivers, and in practice it is often the case that these receivers can be grouped into sets sharing common characteristics (or attributes). We describe in this paper an efficient ciphertext-policy attribute-based broadcast encryption scheme (CP-ABBE) supporting negative attributes and able to handle access policies in conjunctive normal form (CNF). Essentially, our scheme is a combination of the Boneh-Gentry-Waters broadcast encryption scheme and the Lewko-Sahai-Waters revocation scheme; the former is used to express attribute-based access policies, while the latter handles the revocation of individual receivers. Our scheme is the first whose public key and private keys have a size independent of the number of receivers registered in the system. Its selective security is proven with respect to the Generalized Diffie-Hellman Exponent (GDHE) problem on bilinear groups.
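
    As a small illustration of the access-control layer, the sketch below evaluates a CNF access policy with positive and negative attributes against a receiver's attribute set. It only models the Boolean logic; the actual CP-ABBE scheme enforces this check cryptographically with pairings, and the policy and attribute names are hypothetical.

```python
# Minimal sketch: checking a CNF access policy with positive and negative
# attributes against a receiver's attribute set. The real CP-ABBE scheme
# enforces this check cryptographically (with pairings); this only models
# the Boolean access-control logic.

def satisfies_cnf(policy, attributes):
    """policy: list of clauses; each clause is a list of (attribute, positive)
    literals. A positive literal holds when the attribute is present, a
    negative literal when it is absent. The CNF policy is satisfied when
    every clause contains at least one literal that holds."""
    return all(
        any((attr in attributes) == positive for attr, positive in clause)
        for clause in policy
    )

# Hypothetical policy: (doctor OR nurse) AND (NOT trainee)
policy = [
    [("doctor", True), ("nurse", True)],
    [("trainee", False)],
]
print(satisfies_cnf(policy, {"doctor"}))            # True
print(satisfies_cnf(policy, {"nurse", "trainee"}))  # False
```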

    Generating graphs packed with paths: Estimation of linear approximations and differentials

    When designing a new symmetric-key primitive, the designer must show resistance to known attacks. Perhaps most prominent amongst these are linear and differential cryptanalysis. However, it is notoriously difficult to accurately demonstrate e.g. a block cipher’s resistance to these attacks, and thus most designers resort to deriving bounds on the linear correlations and differential probabilities of their design. On the other side of the spectrum, the cryptanalyst is interested in accurately assessing the strength of a linear or differential attack. While several tools have been developed to search for optimal linear and differential trails, e.g. MILP- and SAT-based methods, only a few approaches specifically try to find as many trails of a single approximation or differential as possible. This can result in an overestimate of a cipher’s resistance to linear and differential attacks, as was for example the case for PRESENT. In this work, we present a new algorithm for linear and differential trail search. The algorithm represents the problem of estimating approximations and differentials as the problem of finding many long paths through a multistage graph. We demonstrate that this approach allows us to find a very large number of good trails for each approximation or differential. Moreover, we show how the algorithm can be used to efficiently estimate the key-dependent correlation distribution of a linear approximation, facilitating advanced linear attacks. We apply the algorithm to 17 different ciphers, and present new and improved results on several of these.
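
    As a rough illustration of the multistage-graph view described above, the hypothetical toy example below (not the paper's algorithm) treats each round as a stage, intermediate differences or masks as nodes, and one-round transition probabilities as edge weights, and aggregates the contribution of all paths between a fixed input and output node by dynamic programming; the actual algorithm instead enumerates many individual high-quality paths explicitly.

```python
# Toy multistage-graph view: one stage per round, nodes are intermediate
# differences (or masks), and an edge weight is a one-round transition
# probability. The total weight of all paths from the input node to the
# output node aggregates every trail of a single differential; the code
# below computes this sum by dynamic programming, stage by stage.

def aggregate_over_paths(stages, source, sink):
    """stages[r][u] is a list of (v, weight) edges from node u at stage r
    to node v at stage r + 1."""
    acc = {source: 1.0}        # total weight of all paths reaching each node
    for edges in stages:
        nxt = {}
        for u, total in acc.items():
            for v, w in edges.get(u, []):
                nxt[v] = nxt.get(v, 0.0) + total * w
        acc = nxt
    return acc.get(sink, 0.0)

# Hypothetical 2-round example with two trails sharing the same input and
# output difference: 0 -> 1 -> 3 and 0 -> 2 -> 3.
stages = [
    {0: [(1, 2**-2), (2, 2**-3)]},
    {1: [(3, 2**-3)], 2: [(3, 2**-2)]},
]
print(aggregate_over_paths(stages, 0, 3))   # 2^-5 + 2^-5 = 2^-4
```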

    More Accurate Differential Properties of LED64 and Midori64

    In differential cryptanalysis, a differential is generally more valuable than any single trail belonging to it. The traditional way to compute the probability of a differential is to sum the probabilities of all trails within it. Automatic tools for the search of differentials based on Mixed Integer Linear Programming (MILP) have been proposed and can find multiple trails of a given differential. The question is whether this traditional way of evaluating the probability of a differential is reliable. In this paper, we focus on two lightweight block ciphers, LED64 and Midori64, and give a more accurate estimation of differential probabilities that takes the key schedule into account. Firstly, an automated tool based on the Boolean Satisfiability Problem (SAT) is put forward for the automatic search of differentials in ciphers with S-boxes and is applied to LED64 and Midori64. Secondly, we provide an automatic approach to detect the right pairs following a given differential, which can be exploited to calculate the differential property. Applying this technique to the STEP function of LED64, we discover some differentials with enhanced probability. As a result, previous attacks relying upon high-probability differentials can be improved. Thirdly, we present a method to compute an upper bound on the weak-key ratio for a given differential, which is utilised to analyse 4-round differentials of Midori64. We detect two differentials whose weak-key ratios are much lower than the expected 50%: for more than 78% of the keys, these two differentials become impossible differentials. The idea of estimating an upper bound on the weak-key ratio can be employed for other ciphers and allows differential attacks to be launched more reliably. Finally, we show how to compute the enhanced differential probability and evaluate the size of the set of keys achieving the improved probability. Such a property may lead to an efficient weak-key attack. For a 4-round differential of Midori64, we obtain an improved differential property for a portion of the keys.
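
    The following toy sketch illustrates the key dependence that the paper quantifies: for a hypothetical two-round mini-cipher built from the 4-bit PRESENT S-box with a key XORed between the rounds, it counts the right pairs of a chosen differential separately for every key. The real analysis of LED64 and Midori64 of course covers full rounds and the actual key schedules; the differential chosen here is arbitrary.

```python
# Toy illustration of key-dependent differential probability: two S-box
# layers with a key XORed in between, built from the 4-bit PRESENT S-box.
# For every key we count the "right pairs", i.e. the pairs that follow the
# chosen differential; the fraction varies with the key, which is the kind
# of effect the paper quantifies for LED64 and Midori64.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def toy_cipher(x, k):
    return SBOX[SBOX[x] ^ k]        # key added only between the two layers

def right_pair_fraction(din, dout, k):
    hits = sum((toy_cipher(x, k) ^ toy_cipher(x ^ din, k)) == dout
               for x in range(16))
    return hits / 16

din, dout = 0x7, 0x1                # arbitrary input/output difference
probs = [right_pair_fraction(din, dout, k) for k in range(16)]
print(min(probs), sum(probs) / 16, max(probs))
# Keys with probability 0 make the differential impossible; keys with a
# probability above the average are the "weak keys" for this differential.
```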

    Design and analysis of a distributed ECDSA signing service

    We present and analyze a new protocol that provides a distributed ECDSA signing service, with the following properties:
    * it works in an asynchronous communication model;
    * it works with n parties with up to f < n/3 Byzantine corruptions;
    * it provides guaranteed output delivery;
    * it provides a very efficient, non-interactive online signing phase;
    * it supports additive key derivation according to the BIP32 standard.
    While there has been a flurry of recent research on distributed ECDSA signing protocols, none of these newly designed protocols provides guaranteed output delivery over an asynchronous communication network; moreover, the performance of our protocol (in terms of asymptotic communication and computational complexity) meets or beats the performance of any of these other protocols. This service is being implemented and integrated into the architecture of the Internet Computer, enabling smart contracts running on the Internet Computer to securely hold and spend Bitcoin and other cryptocurrencies. Along the way, we present some results of independent interest:
    * a new asynchronous verifiable secret sharing (AVSS) scheme that is simple and efficient;
    * a new scheme for multi-recipient encryption that is simple and efficient.
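
    The sketch below is not the paper's protocol; it only illustrates, with plain (non-verifiable) Shamir sharing over a toy prime field, why additive BIP32-style key derivation composes nicely with threshold signing: each party can add the public tweak to its share locally, and the reconstructed secret becomes the derived key. The paper's AVSS scheme is asynchronous and verifiable, and the real arithmetic would be over the secp256k1 group order.

```python
# Minimal sketch: additive (BIP32-style) key derivation on top of Shamir
# secret sharing. If the secret key s is shared as p(0) = s, each party can
# locally add a public tweak t to its share, and the shares then reconstruct
# s + t. Toy prime field for readability; not the paper's AVSS scheme.

import random

P = 2**127 - 1                         # toy prime modulus (illustrative only)

def share(secret, n, t):
    """Shamir-share `secret` among n parties with threshold t (degree t - 1)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

sk = random.randrange(P)
tweak = random.randrange(P)            # public additive tweak (e.g. from BIP32)
shares = share(sk, n=4, t=3)
adjusted = [(x, (y + tweak) % P) for x, y in shares]   # purely local update
assert reconstruct(adjusted[:3]) == (sk + tweak) % P
print("shares of sk + tweak reconstruct correctly")
```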

    Cryptanalysis of Some Block Cipher Constructions

    When public-key cryptography was introduced in the 1970s, symmetric-key cryptography was believed to soon become outdated. Nevertheless, we still heavily rely on symmetric-key primitives as they give high-speed performance. They are used to secure mobile communication, e-commerce transactions, communication through virtual private networks and sending electronic tax returns, among many other everyday activities. However, the security of symmetric-key primitives does not depend on a well-known hard mathematical problem such as the factoring problem, which is the basis of the RSA public-key cryptosystem. Instead, the security of symmetric-key primitives is evaluated against known cryptanalytic techniques. Accordingly, furthering the state of the art of cryptanalysis of symmetric-key primitives is an ever-evolving research area. Therefore, this thesis is dedicated to the cryptanalysis of symmetric-key cryptographic primitives. Our focus is on block ciphers as well as hash functions that are built using block ciphers. Our contributions can be summarized as follows: First, we tackle the limitation of current Mixed Integer Linear Programming (MILP) approaches in representing the differential propagation through large S-boxes. Indeed, we present a novel approach that can efficiently model the Difference Distribution Table (DDT) of large S-boxes, i.e., 8-bit S-boxes. As a proof of the validity and efficiency of our approach, we apply it to two of the seven AES-round-based constructions that were recently proposed in FSE 2016. Using our approach, we improve the lower bound on the number of active S-boxes of one construction and the upper bound on the best differential characteristic of the other. Then, we propose meet-in-the-middle attacks using the idea of efficient differential enumeration against two Japanese block ciphers, Hierocrypt-L1 and Hierocrypt-3. Both block ciphers were submitted to the New European Schemes for Signatures, Integrity, and Encryption (NESSIE) project, were selected among the Japanese e-Government recommended ciphers in 2003, and were reselected in the candidate recommended ciphers list in 2013. We construct five S-box layer distinguishers that we use to recover the master keys of reduced 8 S-box layer versions of both block ciphers. In addition, we present another meet-in-the-middle attack on Hierocrypt-3 with slightly higher time and memory complexities but with much lower data complexity. Afterwards, we shift focus to another equally important cryptanalytic technique, the impossible differential attack. SPARX-64/128 belongs to the SPARX family, which was recently proposed to provide ARX-based block ciphers whose security against differential and linear cryptanalysis can be proven. We assess the security of SPARX-64/128 against the impossible differential attack and show that it can reach the same number of rounds as the division-based integral attack proposed by the designers. Then, we pick Kiasu-BC as an example of a tweakable block cipher and prove that, contrary to its designers’ claim, the freedom in choosing the publicly known tweak decreases its security margin. Lastly, we study the impossible differential properties of the underlying block cipher of the Russian hash standard Streebog and point out the potential risk of using it as a MAC scheme in the secret-IV mode.
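
    As a small aside on the first contribution, the sketch below computes a Difference Distribution Table, the object whose compact MILP encoding for 8-bit S-boxes is the difficulty addressed in the thesis. A random permutation stands in for a concrete S-box; the (input difference, output difference, probability) transitions extracted at the end are the data a differential MILP or SAT model has to capture.

```python
# Sketch of a Difference Distribution Table (DDT): entry DDT[din][dout]
# counts the inputs x with S(x) ^ S(x ^ din) == dout. For an 8-bit S-box
# the table is 256 x 256, which is what makes a compact MILP description
# non-trivial. A random permutation stands in for a concrete S-box here.

import random

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            table[din][sbox[x] ^ sbox[x ^ din]] += 1
    return table

random.seed(0)
sbox = list(range(256))
random.shuffle(sbox)                 # stand-in 8-bit S-box
table = ddt(sbox)

# Each nonzero entry c = DDT[din][dout] is a possible transition with
# probability c / 256; these (din, dout, probability) triples are the data
# a differential MILP or SAT model has to encode.
transitions = [(din, dout, c / 256)
               for din in range(256)
               for dout, c in enumerate(table[din]) if c]
print(len(transitions), "possible transitions for the stand-in S-box")
```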

    LPN in Cryptography: an Algorithmic Study

    The security of public-key cryptography relies on well-studied hard problems, problems for which we do not have efficient algorithms. Factorization and the discrete logarithm are the two best-known and most widely used hard problems. Unfortunately, they can be easily solved on a quantum computer by Shor's algorithm. Moreover, cryptography demands crypto-diversity: we should offer a range of hard problems for public-key cryptography, so that if one hard problem proves to be easy, we are able to provide alternative solutions. Some of the candidates for post-quantum hard problems, i.e. problems which are believed to be hard even on a quantum computer, are Learning Parity with Noise (LPN), Learning with Errors (LWE) and the Shortest Vector Problem (SVP). A thorough study of these problems is needed in order to assess their hardness. In this thesis we focus on the algorithmic study of LPN. LPN is an attractive hard problem, as it is believed to be post-quantum resistant and suitable for lightweight devices. In practice, it has been employed in several encryption schemes and authentication protocols. At the beginning of this thesis, we review the existing LPN solving algorithms and provide the theoretical analysis that assesses their complexity. We compare the theoretical results with practice by implementing these algorithms. We study the efficiency of all LPN solving algorithms, which allows us to provide secure parameters that can be used in practice. We push the state of the art further by improving the existing algorithms with the help of two new frameworks. In the first framework, we split an LPN solving algorithm into atomic steps. We study their complexity and how they impact the other steps, and we construct an algorithm that optimises their use. Given an LPN instance that is characterized by the noise level and the secret size, our algorithm provides the steps to follow in order to solve the instance with optimal complexity. In this way, we can assess whether an LPN instance provides the security we require, and we show which instances are secure for the applications that rely on LPN. The second framework handles problems that can be decomposed into steps of equal complexity. Here, we assume an adversary that has access to a finite or infinite number of instances of the same problem; the goal of the adversary is to succeed in just one instance as soon as possible. Our framework provides the strategy that achieves this. We characterize an LPN solving algorithm in this framework and show that we can improve its complexity in the scenario where the adversary is restricted. We show that other problems, like password guessing, can be modeled in the same framework.
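
    For readers unfamiliar with the problem, the sketch below generates LPN samples (a, <a, s> + e mod 2) for a toy secret size and noise rate and recovers the secret by exhaustive search over all candidates, keeping the one that disagrees with the fewest samples. This brute force is only feasible for tiny parameters; the dedicated LPN-solving algorithms studied in the thesis are what make realistic instances approachable. All parameter values here are illustrative.

```python
# Minimal sketch of the LPN problem: the secret s is a k-bit vector and each
# sample is (a, <a, s> + e mod 2) with a uniformly random and e a Bernoulli
# noise bit of rate tau. For tiny k, exhaustive search over all candidate
# secrets, keeping the candidate that disagrees with the fewest samples,
# already recovers s; dedicated algorithms are needed for real parameters.

import random

def lpn_samples(secret, tau, m):
    samples = []
    for _ in range(m):
        a = [random.randrange(2) for _ in secret]
        noise = 1 if random.random() < tau else 0
        b = (sum(ai & si for ai, si in zip(a, secret)) % 2) ^ noise
        samples.append((a, b))
    return samples

def brute_force_lpn(samples, k):
    best, best_errors = None, None
    for cand in range(2 ** k):
        s = [(cand >> i) & 1 for i in range(k)]
        errors = sum((sum(ai & si for ai, si in zip(a, s)) % 2) != b
                     for a, b in samples)
        if best_errors is None or errors < best_errors:
            best, best_errors = s, errors
    return best

random.seed(1)
secret = [random.randrange(2) for _ in range(10)]      # toy secret size k = 10
samples = lpn_samples(secret, tau=0.125, m=150)
print(brute_force_lpn(samples, 10) == secret)          # True with high probability
```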

    Design and Analysis of Opaque Signatures

    Digital signatures were introduced to guarantee the authenticity and integrity of the underlying messages. A digital signature scheme comprises the key generation, the signature, and the verification algorithms. The key generation algorithm creates the signing and the verifying keys, also called the signer’s private and public keys respectively. The signature algorithm, which is run by the signer, produces a signature on the input message. Finally, the verification algorithm, run by anyone who knows the signer’s public key, checks whether a purported signature on some message is valid or not. This last property, namely the universal verifiability of digital signatures, is undesirable in situations where the signed data is commercially or personally sensitive. Therefore, mechanisms which share most properties with digital signatures except for universal verifiability were invented to respond to this need; we call such mechanisms “opaque signatures”. In this thesis, we study signatures whose verification cannot be achieved without the cooperation of a specific entity, namely the signer in the case of undeniable signatures, or the confirmer in the case of confirmer signatures; we make three main contributions. We first study the relationship between two security properties important for public key encryption, namely data privacy and key privacy. Our study is motivated by the fact that opaque signatures always involve an encryption layer that ensures their opacity. The properties required for this encryption vary according to whether we want to protect the identity (i.e. the key) of the signer or hide the validity of the signature. Therefore, it would be convenient to use existing work about the encryption scheme in order to derive one notion from the other. Next, we delve into the generic constructions of confirmer signatures from basic cryptographic primitives, e.g. digital signatures, encryption, or commitment schemes. In fact, generic constructions give easy-to-understand and easy-to-prove schemes; however, this convenience is often achieved at the expense of efficiency. In this contribution, which constitutes the core of this thesis, we first analyze the existing constructions; our study concludes that the popular generic constructions of confirmer signatures necessitate strong security assumptions on the building blocks, which negatively impacts the efficiency of the resulting signatures. Next, we show that a small change in these constructions makes these assumptions drop drastically, resulting in constructions whose instantiations compete with the dedicated realizations of these signatures. Finally, we revisit two early undeniable signature schemes which were proposed with only conjectural security. We disprove the claimed security of the first scheme and provide a fix to it in order to achieve strong security properties. Then, we upgrade the second scheme so that it supports a desirable feature, and we provide a formal security treatment of the new scheme: we prove that it is secure under new, reasonable assumptions on the underlying constituents.
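
    The following sketch is a purely structural illustration, with deliberately insecure toy stand-ins, of the generic recipe discussed above: the signer produces an ordinary signature on the message and wraps it in an encryption layer under the confirmer's key, so validity can only be checked with the confirmer's cooperation. Real constructions use public-key encryption and zero-knowledge confirmation/denial protocols, and the security assumptions on these building blocks are precisely what the thesis analyses; nothing in the code is drawn from the thesis itself.

```python
# Structural sketch (deliberately insecure toy stand-ins) of the generic
# "signature wrapped in an encryption layer" recipe: the signer signs the
# message and encrypts the signature under the confirmer's key, so nobody
# can check validity without the confirmer. Real constructions use
# public-key encryption and zero-knowledge confirmation/denial protocols.

import hashlib, os

def H(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def toy_sign(signer_sk, msg):        # stand-in for a digital signature
    return H(signer_sk, msg)

def toy_verify(signer_sk, msg, sig): # real schemes verify with a public key
    return sig == H(signer_sk, msg)

def toy_enc(confirmer_k, data):      # stand-in for public-key encryption
    pad = H(confirmer_k, b"pad")
    return bytes(p ^ q for p, q in zip(data, pad))

toy_dec = toy_enc                    # XOR pad, so decryption = encryption

def confirmer_sign(signer_sk, confirmer_k, msg):
    # The opaque signature is the encrypted ordinary signature.
    return toy_enc(confirmer_k, toy_sign(signer_sk, msg))

def confirm(confirmer_k, signer_sk, msg, opaque_sig):
    # Only the confirmer can strip the encryption layer and check validity;
    # in a real scheme the outcome is then proven in zero knowledge.
    return toy_verify(signer_sk, msg, toy_dec(confirmer_k, opaque_sig))

signer_sk, confirmer_k = os.urandom(32), os.urandom(32)
msg = b"commercially sensitive agreement"
opaque = confirmer_sign(signer_sk, confirmer_k, msg)
print(confirm(confirmer_k, signer_sk, msg, opaque))                 # True
print(confirm(confirmer_k, signer_sk, b"another message", opaque))  # False
```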