18 research outputs found

    Curves, codes, and cryptography

    This thesis deals with two topics: elliptic-curve cryptography and code-based cryptography. In 2007 elliptic-curve cryptography received a boost from the introduction of a new way of representing elliptic curves. Edwards, generalizing an example from Euler and Gauss, presented an addition law for the curves x^2 + y^2 = c^2(1 + x^2y^2) over non-binary fields. Edwards showed that every elliptic curve can be expressed in this form as long as the underlying field is algebraically closed. Bernstein and Lange found fast explicit formulas for addition and doubling in coordinates (X : Y : Z) representing (x, y) = (X/Z, Y/Z) on these curves, and showed that these explicit formulas save time in elliptic-curve cryptography. It is easy to see that all of these curves are isomorphic to curves x^2 + y^2 = 1 + dx^2y^2, which now are called "Edwards curves" and whose shape covers considerably more elliptic curves over a finite field than x^2 + y^2 = c^2(1 + x^2y^2). In this thesis the Edwards addition law is generalized to cover all curves ax^2 + y^2 = 1 + dx^2y^2, which now are called "twisted Edwards curves." The fast explicit formulas for addition and doubling presented here are almost as fast in the general case as they are for the special case a = 1. This generalization brings the speed of the Edwards addition law to every Montgomery curve.
    Tripling formulas for Edwards curves can be used for double-base scalar multiplication, where a multiple of a point is computed using a series of additions, doublings, and triplings. The use of double-base chains for scalar multiplication on elliptic curves in various shapes is investigated in this thesis. It turns out that not only are Edwards curves among the fastest curve shapes, but also that the speed of doublings on Edwards curves renders double bases obsolete for this curve shape. Elliptic curves in Edwards form and twisted Edwards form can also be used to speed up the Elliptic-Curve Method (ECM) for integer factorization. We show how to construct elliptic curves in Edwards form and twisted Edwards form with large torsion groups, which are used by the EECM-MPFQ implementation of ECM.
    Code-based cryptography was invented by McEliece in 1978. The McEliece public-key cryptosystem uses as public key a hidden Goppa code over a finite field. Encryption in McEliece's system is remarkably fast (a matrix-vector multiplication), yet the system is rarely used in practice; the main complaint is that the public key is too large. The McEliece cryptosystem recently regained attention with the advent of post-quantum cryptography, a new field in cryptography which deals with public-key systems without (known) vulnerabilities to attacks by quantum computers; the McEliece cryptosystem is one of them. In this thesis we underline the strength of the McEliece cryptosystem by improving attacks against it and by proposing smaller-key variants.
    McEliece proposed to use binary Goppa codes. For these codes the most effective attacks rely on information-set decoding. In this thesis we present an attack, developed together with Daniel J. Bernstein and Tanja Lange, which uses and improves Stern's idea of collision decoding. This attack is more than 150 times faster than previous attacks, bringing it within reach of a moderate computer cluster. We were able to extract a plaintext from a ciphertext by decoding 50 errors in a [1024, 524] binary code. The attack should not be interpreted as destroying the McEliece cryptosystem; however, it demonstrates that the original parameters were chosen too small. Building on this work, the collision-decoding algorithm is generalized in two directions. First, we generalize the improved collision-decoding algorithm to codes over arbitrary fields and give a precise analysis of the running time. We use the analysis to propose parameters for the McEliece cryptosystem with Goppa codes over fields such as F_31. Second, collision decoding is generalized to ball-collision decoding in the case of binary linear codes. Ball-collision decoding is asymptotically faster than any previous attack against the McEliece cryptosystem.
    Another way to strengthen the system is to use codes with a larger error-correction capability. This thesis presents "wild Goppa codes", which contain the classical binary Goppa codes as a special case. We explain how to encrypt and decrypt messages in the McEliece cryptosystem when using wild Goppa codes. The size of the public key can be reduced by using wild Goppa codes over moderate fields; this is explained by evaluating the security of the "Wild McEliece" cryptosystem against our generalized collision attack for codes over finite fields.
    Code-based cryptography deals not only with public-key cryptography: a code-based hash function "FSB" was submitted to NIST's SHA-3 competition, a competition to establish a new standard for cryptographic hashing. Wagner's generalized birthday attack is a generic attack which can be used to find collisions in the compression function of FSB. However, applying Wagner's algorithm is a challenge in storage-restricted environments. The FSBday project showed how to successfully mount the generalized birthday attack on 8 nodes of the Coding and Cryptography Computer Cluster (CCCC) at Technische Universiteit Eindhoven to find collisions in the toy version FSB48, which is contained in the submission to NIST.
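    As an illustration of the addition law discussed above, the following Python sketch implements affine point addition on a twisted Edwards curve ax^2 + y^2 = 1 + dx^2y^2 over a prime field, together with double-and-add scalar multiplication. The prime and the curve coefficients are made-up toy values chosen for illustration, not parameters from the thesis.

```python
# Minimal sketch: affine point addition on a twisted Edwards curve
#   a*x^2 + y^2 = 1 + d*x^2*y^2  over GF(p).
# Toy parameters; real deployments use carefully chosen curves and
# projective (X : Y : Z) coordinates to avoid the field inversions below.

p = 2**255 - 19            # assumption: any odd prime works; this one is merely familiar
a = (-1) % p               # hypothetical coefficients, for illustration only
d = 37095 % p

def inv(x):
    return pow(x, p - 2, p)          # modular inverse via Fermat's little theorem

def edwards_add(P, Q):
    """Unified addition law: the same formula also doubles a point and
    handles the neutral element (0, 1)."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * inv((1 + t) % p) % p
    y3 = (y1 * y2 - a * x1 * x2) * inv((1 - t) % p) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add built on the unified addition law."""
    R = (0, 1)                        # neutral element of the addition law
    while k:
        if k & 1:
            R = edwards_add(R, P)
        P = edwards_add(P, P)
        k >>= 1
    return R
```

    As the abstract notes, the fast formulas of Bernstein and Lange work in projective coordinates (X : Y : Z) with (x, y) = (X/Z, Y/Z), which trades the inversions above for a few extra multiplications.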

    Brute-Force Cryptanalysis with Aging Hardware: Controlling Half the Output of SHA-256

    This paper describes a "three-way collision" on SHA-256 truncated to 128 bits. More precisely, it gives three random-looking bit strings whose hashes by SHA-256 maintain a non-trivial relation: their XOR starts with 128 zero bits. They have been found by brute force, without exploiting any cryptographic weakness in the hash function itself. This shows that birthday-like computations on 128 bits are becoming increasingly feasible, even for academic teams without substantial means. These bit strings have been obtained by solving a large instance of the three-list generalized birthday problem, a difficult case known as the 3XOR problem. The whole computation consisted of two equally challenging phases: assembling the 3XOR instance and solving it. It was made possible by the combination of: 1) recent progress on algorithms for the 3XOR problem, 2) creative use of "dedicated" hardware accelerators, 3) adapted implementations of 3XOR algorithms that could run on massively parallel machines. Building the three lists required 2^67.6 evaluations of the compression function of SHA-256. They were performed in 7 calendar months by two obsolete secondhand bitcoin mining devices, which can now be acquired on eBay for about 80€. The actual instance of the 3XOR problem was solved in 300 CPU-years on a 7-year-old IBM BlueGene/Q computer, a few weeks before it was scrapped. To the best of our knowledge, this is the first explicit 128-bit collision-like result for SHA-256. It is the first bitcoin-accelerated cryptanalytic computation and it is also one of the largest public ones.
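    For context, the 3XOR problem mentioned above asks for one element from each of three lists whose XOR is zero. The sketch below shows the naive quadratic baseline (hash one list, scan pairs from the other two); it is only a reference point, not the improved algorithms used in the paper, and the random lists are toy stand-ins for the SHA-256-derived lists.

```python
# Naive baseline for the 3XOR problem: find (x, y, z) in A x B x C with x ^ y ^ z == 0.
# Quadratic in the list size; the paper relies on much better 3XOR algorithms.
import os

def random_list(size, nbits=128):
    return [int.from_bytes(os.urandom(nbits // 8), "big") for _ in range(size)]

def three_xor(A, B, C):
    lookup = set(C)                    # constant-time membership test for the third list
    for x in A:
        for y in B:
            if (x ^ y) in lookup:      # x ^ y ^ z == 0  <=>  z == x ^ y
                return x, y, x ^ y
    return None

# Tiny demonstration; at this size a solution is very unlikely to exist.
A, B, C = (random_list(1 << 10) for _ in range(3))
print(three_xor(A, B, C))
```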

    Another Look at the Cost of Cryptographic Attacks

    This paper makes the case for considering the cost of cryptographic attacks as the main measure of their efficiency, instead of their time complexity. This allows, in our opinion, a more realistic assessment of the "risk" these attacks represent. This is half a position paper and half a technical paper. Cryptographic attacks described in the literature are rarely implemented. Most exist only "on paper", and their main characteristic is that their estimated time complexity is small enough to break a given security property. However, when a cryptanalyst actually considers implementing an attack, she soon realizes that there is more to the story than time complexity. For instance, Wiener has shown that breaking double-DES costs 2^{6n/5}, asymptotically more than exhaustive search on n bits. We put forward the asymptotic cost of cryptographic attacks as a measure of their practicality. We discuss the shortcomings of the usual computational model and propose a simple abstract cryptographic machine on which it is easy to estimate the cost. We then study the asymptotic cost of several relevant algorithms: collision search, the three-list birthday problem (3XOR) and solving multivariate quadratic polynomial equations. We find that some smart algorithms cost much more than their time complexity suggests, while naive and simple algorithms may cost less. Some algorithms can be tuned to reduce their cost (this increases their time complexity).
    Foreword: A celebrated High Performance Computing paper entitled "Hitting the Memory Wall: Implications of the Obvious" [47] opens with these words: "This brief note points out something obvious - something the authors 'knew' without really understanding. With apologies to those who did understand, we offer it to those others who, like us, missed the point." We would like to do the same, but this note is not so short.
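    To make the double-DES example concrete, here is a small worked comparison (the n = 56 instantiation is our own illustration, not taken from the paper): the cost bound quoted from Wiener exceeds the 2^n cost of exhaustive search on n bits.

```latex
% Wiener's cost bound for breaking double-DES, versus exhaustive search on n bits.
\[
  2^{6n/5} \;>\; 2^{n},
  \qquad\text{e.g. } n = 56:\quad 2^{6 \cdot 56 / 5} = 2^{67.2} \;\gg\; 2^{56}.
\]
```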

    Refinements of the k-tree Algorithm for the Generalized Birthday Problem

    We study two open problems proposed by Wagner in his seminal work on the generalized birthday problem. First, with the use of multicollisions, we improve Wagner's 3-tree algorithm. The new 3-tree only slightly outperforms Wagner's 3-tree; however, in some applications this suffices, and as a proof of concept we apply the new algorithm to slightly reduce the security of two CAESAR proposals. Next, with the use of multiple collisions based on Hellman's table, we give improvements to the best known time-memory tradeoffs for the k-tree algorithm. As a result, we obtain a new tradeoff curve T^2 · M^{lg k - 1} = k · N. For instance, when k = 4, the tradeoff has the form T^2 · M = 4 · N.
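    A small worked reading of the tradeoff curve above (the numeric instantiations are ours, for illustration only): solving the k = 4 curve for T shows how shrinking the available memory M raises the required time T, and recovers, up to the constant factor, the classical k-tree point T = M ≈ N^{1/3}.

```latex
% Reading the tradeoff curve T^2 M^{\lg k - 1} = k N for k = 4 (so T^2 M = 4N), with N = 2^n.
\[
  T = 2\sqrt{N/M}:
  \qquad M = 2^{n/3} \;\Rightarrow\; T \approx 2^{n/3}
  \;(\text{the classical }k\text{-tree point}),
  \qquad M = 2^{n/4} \;\Rightarrow\; T \approx 2^{3n/8}.
\]
```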

    Parallel cryptanalysis

    Most of today's cryptographic primitives are based on computations that are hard to perform for a potential attacker but easy to perform for somebody who is in possession of some secret information, the key, which opens a back door in these hard computations and allows them to be solved in a small amount of time. To estimate the strength of a cryptographic primitive it is important to know how hard it is to perform the computation without knowledge of the secret back door and to get an understanding of how much money or time the attacker has to spend. Usually a cryptographic primitive allows the cryptographer to choose parameters that make an attack harder at the cost of making the computations using the secret key harder as well. Therefore designing a cryptographic primitive poses the dilemma of choosing the parameters strong enough to resist an attack up to a certain cost while keeping them small enough to allow usage of the primitive in the real world, e.g. on small computing devices such as smartphones.
    This thesis investigates three different attacks on particular cryptographic systems: Wagner's generalized birthday attack is applied to the compression function of the hash function FSB; Pollard's rho algorithm is used for attacking Certicom's ECC Challenge ECC2K-130; and the implementation of the XL algorithm has not been specialized for an attack on a specific cryptographic primitive but can be used for attacking some cryptographic primitives by solving multivariate quadratic systems. All three attacks are general attacks, i.e. they apply to various cryptographic systems; the implementations of Wagner's generalized birthday attack and Pollard's rho algorithm can be adapted for attacking primitives other than those given in this thesis.
    The three attacks have been implemented on different parallel architectures. XL has been parallelized using the Block Wiedemann algorithm on a NUMA system using OpenMP and on an InfiniBand cluster using MPI. Wagner's attack was performed on a distributed system of 8 multi-core nodes connected by an Ethernet network. The work on Pollard's rho algorithm is part of a large research collaboration with several research groups; the computations are embarrassingly parallel and are executed in a distributed fashion in several facilities with almost negligible communication cost. This dissertation presents implementations of the iteration function of Pollard's rho algorithm on Graphics Processing Units and on the Cell Broadband Engine.
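    Since the iteration function of Pollard's rho algorithm is central to the abstract above, here is a minimal Python sketch of the rho method for discrete logarithms, using a classic three-way pseudorandom walk and Floyd cycle finding. The toy multiplicative group below is a stand-in of our own choosing; the thesis targets ECC2K-130, an elliptic-curve group, with a tuned iteration function and a parallel collision search rather than Floyd's sequential cycle finding.

```python
# Minimal sketch of Pollard's rho for discrete logarithms in a toy group:
# given g of prime order q modulo p and h = g^x, recover x.
import random

p = 10007                 # toy safe prime: p = 2q + 1
q = 5003                  # prime order of the subgroup of squares mod p
g = 4                     # 2^2, hence an element of order q
x_secret = 1234           # only used to build a test instance
h = pow(g, x_secret, p)

def step(y, a, b):
    """One iteration of the pseudorandom walk; preserves y = g^a * h^b (mod p)."""
    c = y % 3
    if c == 0:
        return y * g % p, (a + 1) % q, b
    if c == 1:
        return y * h % p, a, (b + 1) % q
    return y * y % p, 2 * a % q, 2 * b % q

def rho():
    while True:
        a, b = random.randrange(q), random.randrange(q)
        y = pow(g, a, p) * pow(h, b, p) % p
        y1, a1, b1 = y, a, b                     # tortoise
        y2, a2, b2 = y, a, b                     # hare
        while True:
            y1, a1, b1 = step(y1, a1, b1)        # one step
            y2, a2, b2 = step(*step(y2, a2, b2)) # two steps
            if y1 == y2:
                break
        den = (b2 - b1) % q
        if den:                                  # useless collision otherwise: restart
            return (a1 - a2) * pow(den, q - 2, q) % q

print("recovered:", rho(), "expected:", x_secret)
```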

    Computational Records with Aging Hardware: Controlling Half the Output of SHA-256

    SHA-256 is a secure cryptographic hash function. As such, its output should not have any detectable property. This paper describes three bit strings whose hashes by SHA-256 are nevertheless correlated in a non-trivial way: the first half of their hashes XORs to zero. They were found by "brute force", without exploiting any cryptographic weakness in the hash function itself. This does not threaten the security of the hash function and does not have any cryptographic implication. This is an example of a large "combinatorial" computation in which at least 8.7 × 10^22 integer operations have been performed. This was made possible by the combination of: 1) recent progress on algorithms for the underlying problem, 2) creative use of dedicated hardware accelerators, 3) adapted implementations of the relevant algorithms that could run on massively parallel machines. The actual computation was done on aging hardware. It required seven calendar months using two obsolete second-hand bitcoin mining devices converted into useful computational devices. A second step required 570 CPU-years on an 8-year-old IBM BlueGene/Q computer, a few weeks before it was scrapped. To the best of our knowledge, this is the first practical 128-bit collision-like result obtained by brute force, and it is the first bitcoin miner-accelerated computation.
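    The stated property (the first 128 bits of the three SHA-256 digests XOR to zero) is straightforward to verify once the three bit strings are in hand. The checker below is a minimal sketch; the inputs are placeholders of ours, since the actual strings are not reproduced in this abstract.

```python
# Check the property claimed above: the first halves (128 bits) of the three
# SHA-256 digests XOR to zero. The inputs below are placeholders only; the real
# bit strings from the paper would need to be substituted.
import hashlib

def first_half_xor_is_zero(msgs):
    halves = [int.from_bytes(hashlib.sha256(m).digest()[:16], "big") for m in msgs]
    acc = 0
    for h in halves:
        acc ^= h
    return acc == 0

placeholder = [b"string-1", b"string-2", b"string-3"]   # hypothetical inputs
print(first_half_xor_is_zero(placeholder))              # False for these placeholders
```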

    Quantum Algorithms for the k-xor Problem

    The k-xor (or generalized birthday) problem is a widely studied question with many applications in cryptography. It aims at finding k elements of n bits, drawn at random, such that the xor of all of them is 0. The algorithms proposed by Wagner more than fifteen years ago remain the best known classical algorithms for solving them, when disregarding logarithmic factors. In this paper we study these problems in the quantum setting, considering that the elements are created by querying a random function (or k random functions) H : {0,1}^n → {0,1}^n. We consider two scenarios: in one we are able to use only a limited amount of quantum memory (i.e. O(n) qubits, the same amount needed by Grover's search algorithm), and in the other we consider that the algorithm can use an exponential amount of qubits. Our newly proposed algorithms are of general interest. In both settings, they provide the best known quantum time complexities. In particular, we are able to considerably improve the 3-xor algorithm: with limited qubits, we reach a complexity considerably better than what is currently possible for quantum collision search. Furthermore, when having access to exponential amounts of quantum memory, we can take this complexity below O(2^{n/3}), the well-known lower bound of quantum collision search, clearly improving the best known quantum time complexity in this setting as well. We illustrate the importance of these results with some cryptographic applications.
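    For reference, a compact statement of the k-xor problem together with the complexity benchmarks mentioned in this abstract and the next (logarithmic factors ignored):

```latex
% The k-xor (generalized birthday) problem and the benchmarks quoted in the abstracts.
\[
  \text{Given a random } H:\{0,1\}^n \to \{0,1\}^n,\ \text{find } x_1,\dots,x_k
  \ \text{with}\ H(x_1)\oplus\cdots\oplus H(x_k)=0.
\]
\[
  \text{Wagner's classical $k$-tree algorithm: time and memory}\ \approx 2^{\,n/(\lfloor\log_2 k\rfloor+1)};
  \qquad \text{quantum collision search bound: } 2^{n/3}.
\]
```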

    An Algorithmic Framework for the Generalized Birthday Problem

    The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function H : {0,1}^ℓ → {0,1}^n (whose specification depends on the underlying problem) and an integer K > 0. The goal is to find K distinct inputs to H (denoted by {x_i}_{i=1}^K) such that ∑_{i=1}^K H(x_i) = 0. Wagner's K-tree algorithm solves the problem in time and memory complexities of about N^{1/(⌊log K⌋+1)} (where N = 2^n). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the K-tree algorithm for K which is not a power of 2. In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikolić and Sasaki and by Biryukov and Khovratovich) for all K ≥ 8 from T^2 M^{⌊log K⌋-1} = N to T^{⌈(log K)/2⌉+1} M^{⌊(log K)/2⌋} = N, applicable for a large range of parameters. For example, for K = 8 we improve the best previous tradeoff from T^2 M^2 = N to T^3 M = N, and for K = 32 the improvement is from T^2 M^4 = N to T^4 M^2 = N. Next, we consider values of K which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for K ∈ {6, 7, 14, 15} we present algorithms with the same time complexities as the K-tree algorithm, but with significantly reduced memory complexities. In particular, for K = 6 the K-tree algorithm achieves T = M = N^{1/3}, whereas we obtain T = N^{1/3} and M = N^{1/6}. For K = 14, Wagner's algorithm achieves T = M = N^{1/4}, while we obtain T = N^{1/4} and M = N^{1/8}. This gives the first significant improvement over the K-tree algorithm for small K. Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state of the art. Our results are obtained using a framework that combines several algorithmic techniques such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
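    To make the K-tree algorithm concrete, the sketch below implements Wagner's algorithm for K = 4 with XOR as the group operation: merge two pairs of lists on their low-order bits, then look for a full collision between the two merged lists. The list sizes, the n = 24 output length, and the SHA-256-based stand-in for H are toy choices of ours, not taken from the paper.

```python
# Minimal sketch of Wagner's K-tree algorithm for K = 4 over n-bit values with XOR:
# find x1..x4 (one per list) with H(x1) ^ H(x2) ^ H(x3) ^ H(x4) == 0.
import hashlib

n = 24                                   # toy output size in bits
mask_low = (1 << (n // 3)) - 1           # level 1 merges on the low n/3 bits

def H(x):
    """Stand-in for the function H: n-bit truncation of SHA-256."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") & ((1 << n) - 1)

def build_list(tag, size):
    return [((tag, i), H((tag, i))) for i in range(size)]

def merge_low(L1, L2):
    """Join two lists on equal low bits; keep (inputs, xor of hashes)."""
    index = {}
    for x, hx in L1:
        index.setdefault(hx & mask_low, []).append((x, hx))
    out = []
    for y, hy in L2:
        for x, hx in index.get(hy & mask_low, []):
            out.append(((x, y), hx ^ hy))        # low n/3 bits are now zero
    return out

size = 1 << (n // 3)                     # ~2^(n/3) elements per list, as in the K = 4 analysis
L1, L2, L3, L4 = (build_list(t, size) for t in range(4))
A = merge_low(L1, L2)                    # level 1: partial XORs with low bits cancelled
B = merge_low(L3, L4)
index = {}
for xs, v in A:
    index.setdefault(v, []).append(xs)
# Level 2: a full match between A and B gives a four-way XOR collision.
solutions = [(xs, ys) for ys, v in B for xs in index.get(v, [])]
print(len(solutions), "four-way XOR collisions found")   # about one expected at these sizes
```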