
    Preimage Selective Trapdoor Function: How to Repair an Easy Problem

    Public key cryptosystems are constructed by embedding a trapdoor into a one-way function, so one-wayness and trapdoorness are vital to public key cryptography. In this paper, we propose a novel public key cryptographic primitive called a preimage selective trapdoor function, which allows exponentially many preimages to be used to hide a plaintext even if the underlying function is not one-way. The compact knapsack problem is used to construct a probabilistic public key cryptosystem whose underlying encryption function is proven to be a preimage selective trapdoor one-way function under certain linearization attack models. The construction guarantees both the non-injectivity of the underlying encryption function and the unique decipherability of ciphertexts. It is heuristically argued that the security of the proposal cannot be compromised by a polynomial-time adversary even if the compact knapsack problem is easy to solve. We do not provide provable security results for the proposal; however, heuristic arguments show that it resists several known attacks, including brute-force attacks, linearization attacks, and key-recovery attacks. The proposal turns out to have acceptable key sizes and to perform efficiently, and hence is practical.
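    To make the non-injectivity concrete: a compact knapsack maps a vector of bounded (not merely binary) coefficients to a weighted sum, so many inputs can share a single image. The following minimal Python sketch, with hypothetical toy weights and bound (not taken from the paper), enumerates the preimages of one target sum.

        from collections import defaultdict
        from itertools import product

        a = [3, 5, 7, 11]   # hypothetical public weights
        B = 4               # coefficients range over {0, ..., B} ("compact" knapsack)

        def knap(x):
            """The compact knapsack map x -> <a, x>."""
            return sum(ai * xi for ai, xi in zip(a, x))

        preimages = defaultdict(list)
        for x in product(range(B + 1), repeat=len(a)):
            preimages[knap(x)].append(x)

        s = 26
        print(len(preimages[s]), "preimages of", s, "e.g.", preimages[s][:3])

    Here 625 inputs land on roughly a hundred possible sums, so collisions are unavoidable; the paper's point is that this ambiguity can hide a plaintext even when inverting the map is computationally easy.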

    TOPICS IN COMPUTATIONAL NUMBER THEORY AND CRYPTANALYSIS - On Simultaneous Chinese Remaindering, Primes, the MiNTRU Assumption, and Functional Encryption

    This thesis reports on four independent projects that lie at the intersection of mathematics, computer science, and cryptology.

    Simultaneous Chinese Remaindering: The classical Chinese Remainder Problem asks to find all integer solutions to a given system of congruences, where each congruence is defined by one modulus and one remainder. The Simultaneous Chinese Remainder Problem is a direct generalization of its classical counterpart in which, for each modulus, the single remainder is replaced by a non-empty set of remainders. The solutions of a Simultaneous Chinese Remainder Problem instance are completely defined by a set of minimal positive solutions, called primitive solutions, which are upper bounded by the least common multiple of the considered moduli. However, contrary to its classical counterpart, which has at most one primitive solution, the Simultaneous Chinese Remainder Problem may have an exponential number of primitive solutions, so that any general-purpose solving algorithm requires exponential time. Furthermore, through a direct reduction from the 3-SAT problem, we prove first that deciding whether a solution exists is NP-complete, and second that if the existence of solutions is guaranteed, then deciding whether a solution of a particular size exists is also NP-complete. Despite these discouraging results, we study methods to find the minimal solution of Simultaneous Chinese Remainder Problem instances, and we discover some interesting statistical properties.

    A Conjecture on Primes in Arithmetic Progressions and Geometric Intervals: Dirichlet's theorem on primes in arithmetic progressions states that for any positive integer q and any coprime integer a, there are infinitely many primes in the arithmetic progression a + nq (n ∈ N); however, it does not indicate where those primes can be found. Linnik's theorem predicts that the first such prime p0 can be found in the interval [0; q^L], where L denotes an absolute and explicitly computable constant. Although only L = 5 has been proven, it is widely believed that L ≤ 2. We generalize Linnik's theorem by conjecturing that for any integers q ≥ 2, 1 ≤ a ≤ q − 1 with gcd(q, a) = 1, and t ≥ 1, there exists a prime p such that p ∈ [q^t; q^(t+1)] and p ≡ a mod q. Subsequently, we prove the conjecture for all sufficiently large exponents t, we computationally verify it for all sufficiently small moduli q, and we investigate its relation to other mathematical results such as Carmichael's totient function conjecture.

    On the (M)iNTRU Assumption over Finite Rings: The inhomogeneous NTRU (iNTRU) assumption is a recent computational hardness assumption which claims that first adding a random low-norm error vector to a known gadget vector and then multiplying the result by a secret vector is sufficient to obfuscate the secret vector. The matrix inhomogeneous NTRU (MiNTRU) assumption essentially replaces vectors with matrices. Although these assumptions are strongly reminiscent of the well-known learning-with-errors (LWE) assumption, their hardness has not yet been studied in full detail. We provide an elementary analysis of the corresponding decision assumptions and break them in their base case using an elementary q-ary lattice reduction attack. Concretely, we restrict our study to vectors over finite integer rings, which leads to a problem that we call (M)iNTRU. Starting from a challenge vector, we construct a particular q-ary lattice that contains an unusually short vector whenever the challenge vector follows the (M)iNTRU distribution. Elementary lattice reduction thereby allows us to distinguish a random challenge vector from a synthetically constructed one.

    A Conditional Attack against Functional Encryption Schemes: Functional encryption emerged as an ambitious cryptographic paradigm supporting function evaluations over encrypted data that reveal the result in the clear. The result consists of either a valid output or a special error symbol. We develop a conditional selective chosen-plaintext attack against the indistinguishability security notion of functional encryption. Intuitively, indistinguishability in the public-key setting rests on the premise that no adversary can distinguish between the encryptions of two known plaintext messages. As functional encryption allows functions to be evaluated over encrypted messages, the adversary is restricted to evaluations resulting in the same output. To ensure consistency with other primitives, the decryption procedure of a functional encryption scheme is allowed to fail and output an error. We observe that an adversary may exploit the special role of these errors to craft challenge messages that can be used to win the indistinguishability game. Indeed, the adversary can choose the messages such that their functional evaluation leads to the common error symbol while their intermediate computation values differ. A formal decomposition of the underlying functionality into a mathematical function and an error trigger reveals this dichotomy. Finally, we outline the impact of this observation on multiple DDH-based inner-product functional encryption schemes when restricted to bounded-norm evaluations.
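    For illustration, the growth in the number of primitive solutions is easy to see on a small instance: with pairwise coprime moduli, each choice of one remainder per modulus yields exactly one classical CRT solution, so the counts multiply. A brute-force Python sketch over hypothetical moduli and remainder sets (math.lcm needs Python 3.9+):

        from math import lcm

        moduli = [4, 9, 5]                          # hypothetical pairwise coprime moduli
        remainder_sets = [{1, 3}, {2, 7}, {0, 4}]   # one non-empty remainder set per modulus

        L = lcm(*moduli)
        primitive = [x for x in range(L)
                     if all(x % m in R for m, R in zip(moduli, remainder_sets))]
        print(len(primitive), "primitive solutions:", primitive)
        # 2 * 2 * 2 = 8 primitive solutions; every solution is primitive + k * L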

    Some Notes on Code-Based Cryptography

    This thesis presents new cryptanalytic results in several areas of code-based cryptography. In addition, we investigate the possibility of using convolutional codes in code-based public-key cryptography. The first algorithm that we present is an information-set decoding algorithm, targeting the problem of decoding random linear codes. We apply the generalized birthday technique to information-set decoding, improving the computational complexity over previous approaches. Next, we present a new version of the McEliece public-key cryptosystem based on convolutional codes. The original construction uses Goppa codes, an algebraic code family with a well-defined structure. In the two constructions we propose, large parts of the parity checks are randomly generated; by increasing the entropy of the generator matrix, this presumably makes structural attacks more difficult. Following this, we analyze a McEliece variant based on quasi-cyclic MDPC codes. We show that when the underlying code construction has an even dimension, the system is susceptible to what we call a squaring attack. Our results show that the new squaring attack allows for large complexity improvements over previous attacks on this particular McEliece construction. We then introduce two new techniques for finding low-weight polynomial multiples. First, we propose a general technique based on a reduction to the minimum-distance problem in coding theory, which increases the multiplicity of the low-weight codeword by extending the code; we use this algorithm to break some of the instances used by the TCHo cryptosystem. Second, we propose an algorithm for finding weight-4 polynomial multiples. By using the generalized birthday technique in conjunction with increasing the multiplicity of the low-weight polynomial multiple, we obtain a much better complexity than previously known algorithms. Lastly, two new algorithms for the learning parity with noise (LPN) problem are proposed. The first is a general algorithm, applicable to any instance of LPN; it performs favorably compared to previously known algorithms, breaking the 80-bit security of the widely used (512, 1/8) instance. The second focuses on LPN instances over a polynomial ring where the generator polynomial is reducible; using it, we break an 80-bit security instance of the Lapin cryptosystem.
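    As background for the information-set decoding results, here is a bare-bones Prange-style syndrome decoder in Python with numpy, using hypothetical toy parameters; the thesis improves on this baseline with the generalized birthday technique, which is not shown here.

        import random
        import numpy as np

        def gf2_solve(A, b):
            """Solve A x = b over GF(2) by Gauss-Jordan elimination; None if singular."""
            A, b = A.copy() % 2, b.copy() % 2
            r = A.shape[0]
            for col in range(r):
                piv = next((i for i in range(col, r) if A[i, col]), None)
                if piv is None:
                    return None
                A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
                for i in range(r):
                    if i != col and A[i, col]:
                        A[i] ^= A[col]
                        b[i] ^= b[col]
            return b

        def prange(H, s, w, trials=5000):
            """Look for e of weight <= w with H e = s: guess r columns that
            (hopefully) cover the error support and solve for e restricted to them."""
            r, n = H.shape
            for _ in range(trials):
                cols = random.sample(range(n), r)
                x = gf2_solve(H[:, cols], s)
                if x is not None and x.sum() <= w:
                    e = np.zeros(n, dtype=int)
                    e[cols] = x
                    return e
            return None

        rng = np.random.default_rng(0)
        n, r, w = 20, 10, 2                          # toy length, redundancy, weight
        H = rng.integers(0, 2, size=(r, n))
        e = np.zeros(n, dtype=int); e[[3, 17]] = 1   # planted weight-2 error
        s = H @ e % 2
        found = prange(H, s, w)
        assert found is not None and (H @ found % 2 == s).all()
        print("recovered a weight-<=2 error:", found)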

    Efficient provable-secure NTRUEncrypt over any cyclotomic field

    NTRUEncrypt is a fast lattice-based cryptosystem and a promising alternative to existing public key schemes. The existing provably secure variants of NTRUEncrypt are limited to the cyclotomic fields they work over, namely prime-power cyclotomic fields. This is a concern, given the subfield attacks proposed in 2016. Moreover, the modulus used in computation and the security parameters depend heavily on the choice of plaintext space. These disadvantages restrict the applications of NTRUEncrypt. In this paper, we give a new provably secure NTRUEncrypt in the standard model, under the canonical embedding, over any cyclotomic field. We give a reduction to our NTRUEncrypt from a simple variant of RLWE, namely a version of RLWE with discretized error distribution, and hence from worst-case ideal lattice problems. In particular, we obtain a unified bound on the reduction parameters and the modulus for all choices of plaintext space, so that our NTRUEncrypt can encrypt more bits in one encryption operation with higher efficiency and stronger security. Furthermore, our scheme's decryption algorithm succeeds with probability 1 − n^(−ω(√(n log n))), compared with 1 − n^(−ω(1)) in previous works, making our scheme more practical in theory.
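    For orientation, the sketch below implements textbook NTRUEncrypt over the convolution ring Z[x]/(x^N − 1) in plain Python. This is the classical heuristic scheme, not the paper's provably secure variant over cyclotomic fields; the parameters are hypothetical toy values chosen so that decryption always succeeds (pow(b, -1, m) needs Python 3.8+).

        import random

        N, p, q = 11, 3, 67          # toy parameters; deployed schemes use far larger N

        def deg(a):
            """Degree of a coefficient list; -1 for the zero polynomial."""
            for d in range(len(a) - 1, -1, -1):
                if a[d]:
                    return d
            return -1

        def conv(a, b, m):
            """Multiply a * b in Z_m[x]/(x^N - 1) (cyclic convolution)."""
            c = [0] * N
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % m
            return c

        def center(a, m):
            """Lift coefficients to the symmetric range around 0."""
            return [(x + m // 2) % m - m // 2 for x in a]

        def poly_divmod(a, b, m):
            """Polynomial long division over the field Z_m (m prime)."""
            a, quo = a[:], [0] * len(a)
            inv_lead = pow(b[deg(b)], -1, m)
            while deg(a) >= deg(b):
                d, c = deg(a) - deg(b), a[deg(a)] * inv_lead % m
                quo[d] = c
                for i in range(deg(b) + 1):
                    a[d + i] = (a[d + i] - c * b[i]) % m
            return quo, a

        def poly_inv(f, m):
            """Invert f in Z_m[x]/(x^N - 1) via the extended Euclidean algorithm."""
            r0, r1 = [(-1) % m] + [0] * (N - 1) + [1], [x % m for x in f] + [0]
            s0, s1 = [0] * (N + 1), [1] + [0] * N
            while deg(r1) >= 0:
                quo, rem = poly_divmod(r0, r1, m)
                prod = [sum(quo[j] * s1[i - j] for j in range(i + 1)) % m
                        for i in range(N + 1)]
                r0, r1 = r1, rem
                s0, s1 = s1, [(s0[i] - prod[i]) % m for i in range(N + 1)]
            if deg(r0) != 0:
                raise ValueError("not invertible")
            c = pow(r0[0], -1, m)
            return [c * x % m for x in s0[:N]]

        def ternary(ones, neg_ones):
            """Random polynomial with the given numbers of +1 and -1 coefficients."""
            v = [1] * ones + [-1] * neg_ones + [0] * (N - ones - neg_ones)
            random.shuffle(v)
            return v

        def keygen():
            while True:
                f, g = ternary(4, 3), ternary(3, 3)
                try:
                    h = conv(poly_inv(f, q), [p * x for x in g], q)   # h = p*g/f mod q
                    return (f, poly_inv(f, p)), h
                except ValueError:                                    # f not invertible; retry
                    continue

        def encrypt(h, msg):                    # msg coefficients in {-1, 0, 1}
            r = ternary(3, 3)
            return [(u + v) % q for u, v in zip(conv(r, h, q), msg)]

        def decrypt(sk, e):
            f, f_inv_p = sk
            a = center(conv(f, e, q), q)        # equals p*r*g + f*msg over Z (coeffs < q/2)
            return center(conv(f_inv_p, a, p), p)

        msg = ternary(3, 2)
        sk, h = keygen()
        assert decrypt(sk, encrypt(h, msg)) == msg
        print("decrypted correctly:", msg)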

    Efficient implementation of the Chinese Remainder Theorem

    This work studies, designs, and implements new parallel software approaches to the Chinese Remainder Theorem on different parallel platforms and architectures. Some of the typical designs suffer from well-known storage and memory-management problems at runtime when handling big integers. This problem leads to the design of new, more efficient and refined methods that solve these issues in an original and scalable way. The results obtained not only substantially improve on the sequential implementation, but also represent an advance in efficiency, performance, and scalability over other currently existing alternatives.
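    To ground the discussion, here is the standard sequential CRT reconstruction that such work starts from, as a short Python sketch (the thesis's parallel, memory-efficient big-integer variants are not reproduced here; pow(x, -1, m) needs Python 3.8+):

        from math import prod

        def crt(remainders, moduli):
            """Return the unique x modulo prod(moduli) with x = r_i (mod m_i),
            for pairwise coprime moduli."""
            M = prod(moduli)
            x = 0
            for r, m in zip(remainders, moduli):
                M_i = M // m                      # product of the other moduli
                x = (x + r * M_i * pow(M_i, -1, m)) % M
            return x

        print(crt([2, 3, 2], [3, 5, 7]))   # 23, the classical Sun Tzu example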

    Design and Analysis of Opaque Signatures

    Digital signatures were introduced to guarantee the authenticity and integrity of the underlying messages. A digital signature scheme comprises the key generation, the signature, and the verification algorithms. The key generation algorithm creates the signing and the verifying keys, also called the signer's private and public keys, respectively. The signature algorithm, which is run by the signer, produces a signature on the input message. Finally, the verification algorithm, run by anyone who knows the signer's public key, checks whether a purported signature on some message is valid. The last property, namely the universal verifiability of digital signatures, is undesirable in situations where the signed data is commercially or personally sensitive. Therefore, mechanisms which share most properties with digital signatures except for universal verifiability were invented to respond to this need; we call such mechanisms "opaque signatures". In this thesis, we study signatures whose verification cannot be achieved without the cooperation of a specific entity, namely the signer in the case of undeniable signatures or the confirmer in the case of confirmer signatures; we make three main contributions.

    We first study the relationship between two security properties important for public key encryption, namely data privacy and key privacy. Our study is motivated by the fact that opaque signatures always involve an encryption layer that ensures their opacity. The properties required of this encryption vary according to whether we want to protect the identity (i.e. the key) of the signer or hide the validity of the signature. Therefore, it would be convenient to use existing work on the encryption scheme in order to derive one notion from the other.

    Next, we delve into generic constructions of confirmer signatures from basic cryptographic primitives, e.g. digital signatures, encryption, or commitment schemes. Generic constructions give easy-to-understand and easy-to-prove schemes; however, this convenience often comes at the expense of efficiency. In this contribution, which constitutes the core of this thesis, we first analyze the existing constructions; our study concludes that the popular generic constructions of confirmer signatures require strong security assumptions on the building blocks, which negatively impacts the efficiency of the resulting signatures. We then show that a small change in these constructions weakens these assumptions drastically, as a result allowing constructions whose instantiations compete with the dedicated realizations of these signatures.

    Finally, we revisit two early undeniable signature schemes which were proposed with only conjectural security. We disprove the claimed security of the first scheme and provide a fix that achieves strong security properties. We then upgrade the second scheme so that it supports a desirable feature, and we provide a formal security treatment of the new scheme: we prove that it is secure under new, reasonable assumptions on the underlying constituents.
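    As a concrete reference point for the three algorithms just described, here is a toy Schnorr signature in Python over a small prime-order subgroup. The group parameters are hypothetical teaching values, far too small for real use, and the scheme shown is ordinary, i.e. universally verifiable, which is exactly the property opaque signatures are designed to avoid.

        import hashlib
        import random

        p, q, g = 607, 101, 64      # toy group: g = 2^6 has prime order q in Z_p*

        def H(r, msg):
            """Hash to Z_q (Fiat-Shamir challenge)."""
            return int.from_bytes(hashlib.sha256(f"{r}|{msg}".encode()).digest(), "big") % q

        def keygen():
            x = random.randrange(1, q)            # signing (private) key
            return x, pow(g, x, p)                # verifying (public) key

        def sign(x, msg):
            k = random.randrange(1, q)            # fresh nonce
            r = pow(g, k, p)
            e = H(r, msg)
            return e, (k + x * e) % q

        def verify(y, msg, sig):
            e, s = sig
            r = pow(g, s, p) * pow(y, (q - e) % q, p) % p   # g^s * y^-e = g^k
            return e == H(r, msg)

        x, y = keygen()
        sig = sign(x, "hello")
        print(verify(y, "hello", sig), verify(y, "tampered", sig))   # True False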

    Shorter Lattice-Based Zero-Knowledge Proofs via One-Time Commitments

    There has been a lot of recent progress in constructing efficient zero-knowledge proofs for showing knowledge of a vector s with small coefficients satisfying As = t. For typical parameters, the proof sizes have gone down from several megabytes to a bit under 50 KB (Esgin et al., Asiacrypt 2020). These are now within an order of magnitude of the sizes of lattice-based signatures, which themselves constitute proof systems demonstrating knowledge of something weaker than the aforementioned equation. One can therefore see that this line of research is approaching optimality. In this paper, we modify a key component of these proofs, and apply several other tweaks, to achieve a further reduction of around 30% in the proof output size. We also show that this saving propagates when these proofs are used in a general framework to construct more complex protocols.
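    To fix ideas about the relation being proved, the following numpy sketch runs one round of the bare Schnorr-like Σ-protocol that underlies such lattice proofs: the prover shows knowledge of a short s with As = t (mod q) by answering a challenge c with z = y + c·s. Everything here is a hypothetical toy; real schemes, including the paper's, add rejection sampling and commitments to obtain zero-knowledge, which are omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        q, n, m = 97, 4, 8                      # toy modulus and dimensions

        A = rng.integers(0, q, size=(n, m))     # public matrix
        s = rng.integers(-1, 2, size=m)         # short secret with coefficients in {-1,0,1}
        t = A @ s % q                           # public image

        y = rng.integers(-10, 11, size=m)       # prover's small masking vector
        w = A @ y % q                           # first message (commitment)
        c = int(rng.integers(0, 2))             # verifier's challenge (toy: one bit)
        z = y + c * s                           # response; leaks without rejection sampling

        assert ((A @ z - (w + c * t)) % q == 0).all()   # check A z = w + c t (mod q)
        assert np.abs(z).max() <= 11                    # check z is short
        print("accepted: z is short and A z = w + c t (mod q)")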

    Theory and Practice of Cryptography and Network Security Protocols and Technologies

    In an age of explosive worldwide growth of electronic data storage and communications, effective protection of information has become a critical requirement. When used in coordination with other tools for ensuring information security, cryptography in all of its applications, including data confidentiality, data integrity, and user authentication, is a most powerful tool for protecting information. This book presents a collection of research work in the field of cryptography. It discusses some of the critical challenges faced by the current computing world and describes mechanisms to defend against them. It is a valuable source of knowledge for researchers, engineers, and graduate and doctoral students working in the field of cryptography, and will also be useful for faculty members of graduate schools and universities.

    Analysis of the DES and the design of the LOKI encryption scheme


    CRPSF and NTRU Signatures over cyclotomic fields

    Classical NTRUEncrypt is one of the fastest known lattice-based encryption schemes. Its counterpart, NTRUSign, also has many advantages, such as moderate key sizes, high efficiency, and the potential to resist attacks from quantum computers. However, like classical NTRUEncrypt, the security of NTRUSign is heuristic: whether its security can be related to worst-case lattice problems, as has been done for NTRUEncrypt, is still an open problem. Our main contribution is a detailed construction of collision-resistant preimage sampleable functions (CRPSF) over any cyclotomic field based on NTRU. Using GPV's construction, we obtain a provably secure NTRU signature scheme (NTRUSign) which is strongly existentially unforgeable under adaptive chosen-message attacks in the (quantum) random oracle model. The security of CRPSF (and of NTRUSign) is reduced to the corresponding ring small integer solution problem (Ring-SIS); more precisely, it is based on the worst-case approximate shortest independent vectors problem (SIVP_γ) over ideal lattices. For any fixed cyclotomic field, we give a probabilistic polynomial-time (PPT) key generation algorithm which shows how to extend the secret key of NTRUEncrypt to a secret key of NTRUSign. This algorithm is important for constructing many cryptographic primitives based on NTRU, for example CRPSF, NTRUSign, identity-based encryption, and identity-based signatures. We also revisit a former construction of NTRUEncrypt, give a much tighter reduction from the decision dual-Ring-LWE problem (where the secret is chosen from the codifferent ideal) to the decision primal-Ring-LWE problem (where the secret is chosen from the ring of integers), and give a provably secure NTRUEncrypt over any cyclotomic ring. Some useful results about q-ary lattices and the regularity and uniformity of the distribution of NTRUEncrypt public keys are also extended to more general algebraic fields.
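    As a pointer to the stated security target, the Ring-SIS problem asks for short ring elements z_i, not all zero, with a_1·z_1 + ... + a_k·z_k ≡ 0 in Z_q[x]/(x^n + 1); finding such elements is assumed hard, while checking a candidate is easy. A minimal numpy sketch with hypothetical toy parameters:

        import numpy as np

        n, q, k = 8, 97, 3                 # toy ring degree, modulus, number of samples
        rng = np.random.default_rng(0)

        def ring_mul(a, b):
            """Multiply in Z_q[x]/(x^n + 1): full product, then negacyclic wrap."""
            c = np.zeros(2 * n - 1, dtype=int)
            for i in range(n):
                c[i:i + n] += int(a[i]) * b
            return (c[:n] - np.concatenate([c[n:], [0]])) % q

        a = [rng.integers(0, q, n) for _ in range(k)]    # public random ring elements
        z = [rng.integers(-1, 2, n) for _ in range(k)]   # candidate short solution
        acc = sum(ring_mul(ai, zi) for ai, zi in zip(a, z)) % q
        print("valid Ring-SIS solution:", bool((acc == 0).all()))   # almost surely False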