
    Quadratic compact knapsack public-key cryptosystem

    Knapsack-type cryptosystems were among the first public-key cryptographic schemes to be invented. Their NP-complete nature and their high encryption/decryption speed made them very attractive. However, these cryptosystems were shown to be vulnerable to low-density subset-sum attacks or to key-recovery attacks. In this paper, additive knapsack-type public-key cryptography is reconsidered. We propose a knapsack-type public-key cryptosystem by introducing an easy quadratic compact knapsack problem. The system uses the Chinese remainder theorem to disguise the easy knapsack sequence. The encryption function of the system is nonlinear in the message vector. Under the relinearization attack model, the system enjoys a high density. We show that the knapsack cryptosystem is secure against low-density subset-sum attacks by observing that the underlying compact knapsack problem has exponentially many solutions. The proposed cryptosystem is also shown to be secure against brute-force attacks and known key-recovery attacks, including the simultaneous Diophantine approximation attack and the orthogonal lattice attack.
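    The quadratic compact scheme itself is not reproduced in this abstract, but the trapdoor idea behind additive knapsack cryptosystems can be sketched. Below is a minimal Python toy of the classic Merkle–Hellman construction (an easy superincreasing sequence disguised by modular multiplication); the paper instead builds on a quadratic compact knapsack and uses the Chinese remainder theorem for the disguise. All parameters here are illustrative, and this classic variant is itself broken in practice.

```python
# Toy additive knapsack (Merkle-Hellman style), for illustration only.
# The paper's scheme differs: it uses a *quadratic compact* knapsack and
# a CRT-based disguise; this classic variant merely conveys the trapdoor idea.

import random
from math import gcd

def keygen(n=8):
    # Easy (superincreasing) private sequence: each term exceeds the sum
    # of all previous terms, so its subset-sum problem is trivially solvable.
    b, total = [], 0
    for _ in range(n):
        term = total + random.randint(1, 10)
        b.append(term)
        total += term
    m = total + random.randint(1, 10)            # modulus > sum of the sequence
    w = random.choice([x for x in range(2, m) if gcd(x, m) == 1])
    a = [(w * bi) % m for bi in b]               # disguised public sequence
    return a, (b, w, m)

def encrypt(bits, a):
    # Ciphertext is the subset sum selected by the message bits.
    return sum(ai for ai, bit in zip(a, bits) if bit)

def decrypt(c, priv):
    b, w, m = priv
    s = (c * pow(w, -1, m)) % m                  # undo the modular disguise
    bits = []
    for bi in reversed(b):                       # greedy solve on the easy sequence
        bits.append(1 if s >= bi else 0)
        if bits[-1]:
            s -= bi
    return list(reversed(bits))

bits = [1, 0, 1, 1, 0, 0, 1, 0]
a, priv = keygen()
assert decrypt(encrypt(bits, a), priv) == bits
```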

    Safer parameters for the Chor–Rivest cryptosystem

    Vaudenay’s cryptanalysis of the Chor–Rivest cryptosystem is applicable when the parameters p and h originally proposed by the authors are used. Nevertheless, if p and h are both prime integers, then Vaudenay’s attack does not apply. In this work, a choice of these parameters that resists the existing cryptanalytic attacks is presented. The parameters are determined in a suitable range guaranteeing both security and the computational feasibility of implementation. Regrettably, the parameters so obtained are scarce in practice.
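    As a rough illustration of the constraint, here is a hedged Python sketch that enumerates pairs (p, h) with both entries prime, the condition the abstract identifies as blocking Vaudenay’s attack. The p_max bound and the 100-bit message-space filter (Chor–Rivest encodes messages as weight-h binary vectors of length p, giving C(p, h) plaintexts) are illustrative assumptions, not the precise range derived in the paper; sympy is assumed available.

```python
# Hedged sketch: enumerate Chor-Rivest parameter pairs (p, h) with both
# prime. The p_max bound and the >=100-bit message-space filter are
# illustrative assumptions, not the exact range derived in the paper.

from math import comb, log2
from sympy import primerange

def candidate_parameters(p_max=300, min_msg_bits=100):
    pairs = []
    for p in primerange(3, p_max):
        for h in primerange(3, p):                  # h < p, both prime
            if log2(comb(p, h)) >= min_msg_bits:    # message space C(p, h)
                pairs.append((p, h))
                break                               # smallest suitable h for this p
    return pairs

print(candidate_parameters()[:5])
```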

    Tensor-based trapdoors for CVP and their application to public key cryptography

    We propose two trapdoors for the Closest Vector Problem (CVP) in lattices, related to the lattice tensor product. Using these trapdoors, we set up a lattice-based cryptosystem which resembles the McEliece scheme.
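    The trapdoor construction is not detailed in the abstract, but the underlying object is easy to exhibit: if lattices L1 and L2 have row bases B1 and B2, their tensor product has as basis the Kronecker product of B1 and B2. A minimal NumPy sketch, with illustrative bases:

```python
# Minimal sketch of the lattice tensor product underlying the trapdoors:
# if L1 and L2 have bases B1 and B2 (rows = basis vectors), then L1 (x) L2
# has basis B1 (x) B2 (Kronecker product). The trapdoor construction
# itself is not reproduced here.

import numpy as np

B1 = np.array([[2, 1],
               [0, 3]])            # basis of a rank-2 lattice
B2 = np.array([[1, 1],
               [0, 2]])            # basis of another rank-2 lattice

B_tensor = np.kron(B1, B2)         # basis of the rank-4 tensor lattice
print(B_tensor)
print("det:", round(np.linalg.det(B_tensor)))   # = det(B1)^2 * det(B2)^2 here
```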

    Cryptographic Hash Functions in Groups and Provable Properties

    We consider several "provably secure" hash functions that compute simple sums in a well-chosen group (G, *). Security properties of such functions provably translate in a natural way into computational problems in G that are simple to define and possibly also hard to solve. Given k disjoint lists Li of group elements, the k-sum problem asks for gi ∊ Li such that g1 * g2 * ... * gk = 1G. Hardness of the problem in the respective groups follows from "standard" assumptions used in public-key cryptology, such as the hardness of integer factoring, discrete logarithms, lattice reduction and syndrome decoding. We point out evidence that the k-sum problem may even be harder than the above problems.

    Two hash functions based on the group k-sum problem, SWIFFTX and FSB, were submitted to NIST as candidates for the future SHA-3 standard. Both submissions were supported by some sort of security proof. We show that the assessment of security levels provided in the proposals is not related to the proofs included; the main claims on security are supported exclusively by considerations about available attacks. By introducing "second-order" bounds on bounds on security, we expose the limits of such an approach to provable security. A problem with the way security is quantified does not necessarily mean a problem with security itself. Although FSB does have a history of failures, recent versions of the two above functions have resisted cryptanalytic efforts well. This evidence, as well as the several connections to more standard problems, suggests that the k-sum problem in some groups may be considered hard on its own, and may possibly lead to provable bounds on security. The complexity of the non-trivial tree algorithm is becoming a standard tool for measuring the associated hardness.

    We propose modifications to the multiplicative Very Smooth Hash and derive security from multiplicative k-sums, in contrast to the original reductions that related to factoring or discrete logarithms. Although the original reductions remain valid, we measure security in a new, more aggressive way. This allows us to relax the parameters and hash faster. We obtain a function that is only three times slower than SHA-256 and is estimated to offer at least equivalent collision resistance. The speed can be doubled by the use of a special modulus; such a modified function is supported exclusively by the hardness of multiplicative k-sums modulo a power of two.

    Our efforts culminate in a new multiplicative k-sum function in finite fields that further generalizes the design of Very Smooth Hash. In contrast to the previous variants, the memory requirements of the new function are negligible. The fastest instance of the function expected to offer 128-bit collision resistance runs at 24 cycles per byte on an Intel Core i7 processor and approaches the 17.4 figure of SHA-256. The new functions proposed in this thesis do not provably achieve a usual security property, such as preimage or collision resistance, from a well-established assumption. They do, however, enjoy unconditional provable separation of inputs that collide: changes in input that are small with respect to a well-defined measure never lead to identical output in the compression function.
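    To make the design concrete, here is a toy group k-sum hash in Python: chunk i of the message selects an element of an implicit list L_i, and the digest is the product of the selections in the multiplicative group modulo N, so a collision directly yields a k-sum relation g1 * ... * gk = 1. The modulus and the SHA-256-based list derivation are illustrative assumptions, not any of SWIFFTX, FSB or VSH.

```python
# Toy "group k-sum" hash over the multiplicative group mod N, only to
# illustrate the design: chunk i of the message selects an element of a
# fixed list L_i, and the digest is the product of the selections.
# Colliding messages give g_i, g_i' with equal products, i.e. a k-sum
# relation. Parameters are tiny and purely illustrative.

import hashlib

N = (2**61 - 1) * (2**31 - 1)       # toy modulus; real designs choose it carefully

def element(i, chunk):
    # Deterministically derive the list element L_i[chunk] from i and chunk.
    d = hashlib.sha256(b"L%d:%s" % (i, chunk)).digest()
    return (int.from_bytes(d, "big") % (N - 2)) + 2   # in [2, N-1]

def ksum_hash(msg, chunk_size=4):
    chunks = [msg[j:j + chunk_size] for j in range(0, len(msg), chunk_size)]
    digest = 1
    for i, c in enumerate(chunks):
        digest = (digest * element(i, c)) % N   # group operation: mult mod N
    return digest

print(hex(ksum_hash(b"the k-sum problem in a nutshell")))
```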

    Quantum Lattice Sieving

    Lattices are very important objects in the effort to construct cryptographic primitives that are secure against quantum attacks. A central problem in the study of lattices is that of finding the shortest non-zero vector in the lattice. Asymptotically, sieving is the best known technique for solving the shortest vector problem; however, sieving requires memory exponential in the dimension of the lattice. As a consequence, enumeration algorithms are often used in place of sieving due to their linear memory complexity, despite their super-exponential runtime. In this work, we present a heuristic quantum sieving algorithm whose memory complexity is polynomial in the length of the vectors sampled at the initial step of the sieve. In other words, unlike most sieving algorithms, the memory complexity of our algorithm does not depend on the number of vectors sampled at the initial step of the sieve.
    Comment: A reviewer pointed out an error in the amplitude amplification step in the analysis of Theorem 6. While we believe this error can be resolved, we are not sure how to do it at the moment and are taking down this submission.
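    The quantum algorithm cannot be reconstructed from the abstract, but the classical sieve step it builds on is simple to sketch: repeatedly replace vectors by shorter pairwise differences or sums. The toy below (Python/NumPy, an NV-style 2-sieve with an illustrative basis and parameters) also makes the memory issue visible, since the stored list of vectors is the dominant cost.

```python
# Classical toy of sieve iterations (NV-style 2-sieve): replace a vector
# by its difference/sum with a stored one whenever that is shorter. This
# only illustrates why sieving stores many vectors; the paper's quantum
# algorithm is not shown, and the basis/parameters are illustrative.

import numpy as np

def sieve_step(vs):
    out = []
    for v in vs:
        v = v.astype(float)
        for u in out:
            for cand in (v - u, v + u):
                if 1e-9 < np.dot(cand, cand) < np.dot(v, v):
                    v = cand                 # shorter representative found
        if np.dot(v, v) > 1e-9:              # drop (near-)zero vectors
            out.append(v)
    return out

rng = np.random.default_rng(0)
B = np.array([[7, 1, 0], [3, 8, 1], [2, 2, 9]])        # toy lattice basis
vs = [rng.integers(-5, 6, 3) @ B for _ in range(200)]  # random lattice points
for _ in range(10):
    vs = sieve_step(vs)
print("shortest found:", min(vs, key=lambda v: v @ v))
```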

    Cryptosystems and LLL

    Since the late 70s, several public-key cryptographic algorithms have been proposed. Diffie and Hellman first came up with this concept in 1976. Since that time, several other public-key cryptosystems were invented, such as the well-known RSA, ElGamal and Rabin cryptosystems. Roughly, the scope of these algorithms is to allow the secure exchange of a secret key that will later be used to encrypt a larger amount of data. All these algorithms share one particularity, namely that their security relies on some mathematical problem which is supposed to be hard (computationally speaking) to solve. For example, it is well known that the ability to easily factorize the product of two large primes, without any indication about the primes, would lead to breaking the RSA cryptosystem.

    Since those early days, most of the asymmetric encryption algorithms that have been proposed relied on the same hard mathematical problems. Yet, it is well accepted that we should not put all the cryptographic eggs in one basket. This is the reason why Goldreich, Goldwasser and Halevi proposed in 1997 (exactly 20 years after RSA) a new public-key cryptosystem whose security was based on lattice reduction problems. This cryptographic scheme is known as the GGH cryptosystem. Unfortunately, only two years later, Nguyen proposed an attack against GGH which proved that, in practice, GGH would not provide the security it originally claimed to have. The attack involved a so-called lattice reduction algorithm known as LLL.

    The aim of this work is to provide the necessary background to understand the GGH cryptosystem and the attack that goes with it. The basis reduction algorithm LLL has many other applications in cryptography. As an example, we will review a very recent attack against the GNU Privacy Guard software, which is a widely used free implementation of Zimmermann’s famous software. The necessary background about lattices, LLL, etc. will be recalled in Sections 2 and 3. Section 4 will review the GGH cryptosystem and Nguyen’s attack against it. Finally, Section 5 will show how lattice reduction techniques can break the ElGamal digital signature scheme implementation in GPGv1.2.3.
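    For readers who want to see LLL itself, here is a compact, textbook implementation over exact rationals (delta = 3/4). It is a didactic sketch on a small illustrative basis, deliberately recomputing Gram–Schmidt data instead of updating it incrementally; practical attacks use optimized libraries such as fplll.

```python
# Textbook LLL reduction with exact rational arithmetic (delta = 3/4).
# Didactic sketch: correctness over speed (Gram-Schmidt data is simply
# recomputed whenever the basis changes).

from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    # Orthogonalized vectors B* and coefficients mu[i][j] = <b_i,b*_j>/<b*_j,b*_j>.
    n = len(B)
    Bs = [list(b) for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(basis, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in b] for b in basis]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)         # refresh (simple, not fast)
        # Lovasz condition: swap if b*_k is too short relative to b*_{k-1}
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in B]

print(lll([[201, 37], [1648, 297]]))             # small 2D example
```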

    Accelerating lattice reduction with FPGAs

    We describe an FPGA accelerator for the Kannan–Fincke–Pohst enumeration algorithm (KFP), which solves the Shortest Lattice Vector Problem (SVP). This is the first FPGA implementation of KFP specifically targeting cryptographically relevant dimensions. In order to optimize this implementation, we theoretically and experimentally study several facets of KFP, including its efficient parallelization and its underlying arithmetic. Our FPGA accelerator can be used either for solving stand-alone instances of SVP (within a hybrid CPU–FPGA compound) or for the myriad smaller-dimensional SVP instances arising in a BKZ-type algorithm. For devices of comparable cost, our FPGA implementation is faster than a multi-core CPU implementation by a factor of around 2.12.
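    KFP's pruned search tree is beyond a short sketch, but the task being accelerated is easy to show in naive form: enumerate small coefficient vectors x and keep the shortest nonzero xB. The coefficient bound and basis below are illustrative; KFP instead bounds each coefficient adaptively using Gram–Schmidt lengths rather than a fixed box.

```python
# Naive SVP enumeration for intuition only: try all small coefficient
# vectors x and keep the shortest nonzero xB. KFP prunes this search
# tree using Gram-Schmidt lengths, but the accelerated task is this
# same search, at cryptographic dimensions.

import itertools
import numpy as np

def shortest_vector_naive(B, bound=3):
    best, best_norm = None, float("inf")
    n = B.shape[0]
    for x in itertools.product(range(-bound, bound + 1), repeat=n):
        if not any(x):
            continue                              # skip the zero vector
        v = np.array(x) @ B
        norm = int(v @ v)
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

B = np.array([[7, 1, 0], [3, 8, 1], [2, 2, 9]])   # toy basis
print(shortest_vector_naive(B))                   # (2*bound+1)^n candidates
```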