6 research outputs found

    A Greedy Global Framework for LLL

    LLL-style lattice reduction algorithms iteratively apply size reduction and reordering to an ordered list of basis vectors in order to find progressively shorter, more orthogonal vectors. These algorithms work with a designated measure of basis quality and perform reordering by inserting a vector at an earlier position, depending on the basis quality before and after the reordering. DeepLLL was introduced alongside the BKZ reduction algorithm; the latter has since emerged as the state of the art and has therefore received greater attention. We first show that LLL-style algorithms iteratively improve a basis quality measure; specifically, that DeepLLL improves a sublattice measure based on the generalised Lovász condition. We then introduce a new generic framework for lattice reduction algorithms that works with some quality measure X. We instantiate our framework with two quality measures, basis potential (Pot) and squared sum (SS), both of which have corresponding DeepLLL algorithms. We prove polynomial runtimes for our X-GGLLL algorithms and guarantee their output quality. We run two types of experiments (implementations provided publicly) to compare the performance of LLL, X-DeepLLL, and X-GGLLL: a standalone comparison with no preprocessing, using multi-precision arithmetic with overestimated floating-point precision, and a comparison on LLL-preprocessed inputs using standard datatypes. In the preprocessed comparison we also compare with BKZ. In the standalone comparison, our GGLLL algorithms produce better-quality bases whilst being much faster than the corresponding DeepLLL versions, and the runtime of SS-GGLLL is second only to LLL. SS-GGLLL is significantly faster than the FPLLL implementation of BKZ-12 at all dimensions and outputs better-quality bases from dimension 100 onward.
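    For readers less familiar with the quality measures named above, the short Python sketch below (an illustration written for this summary, not the authors' published implementation) computes the Gram-Schmidt norms of a basis and derives basis potential (Pot) and squared sum (SS) under one common convention; exact normalisations vary across papers.

```python
import numpy as np

def gso_norms(B):
    """Squared Gram-Schmidt norms ||b_i*||^2 of the rows of B."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    norms = np.zeros(n)
    for i in range(n):
        v = B[i].copy()
        for j in range(i):
            # subtract the projection of b_i onto the j-th GSO vector
            v -= (np.dot(B[i], Bstar[j]) / norms[j]) * Bstar[j]
        Bstar[i] = v
        norms[i] = float(np.dot(v, v))
    return norms

def potential(B):
    """Pot(B) = prod_{i=1}^{n} d_i with d_i = prod_{j<=i} ||b_j*||^2
    (one common convention; in practice one works with log Pot to avoid overflow)."""
    return float(np.prod(np.cumprod(gso_norms(B))))

def squared_sum(B):
    """SS(B) = sum_i ||b_i*||^2."""
    return float(np.sum(gso_norms(B)))

# Both measures shrink as a basis of the same lattice becomes "more reduced".
B_bad  = [[10000, 1], [1, 0]]   # a poorly ordered basis of Z^2
B_good = [[1, 0], [0, 1]]       # the reduced basis of Z^2
print(potential(B_bad),  squared_sum(B_bad))
print(potential(B_good), squared_sum(B_good))
```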

    TOPICS IN COMPUTATIONAL NUMBER THEORY AND CRYPTANALYSIS - On Simultaneous Chinese Remaindering, Primes, the MiNTRU Assumption, and Functional Encryption

    This thesis reports on four independent projects that lie at the intersection of mathematics, computer science, and cryptology.

    Simultaneous Chinese Remaindering: The classical Chinese Remainder Problem asks for all integer solutions to a given system of congruences in which each congruence is defined by one modulus and one remainder. The Simultaneous Chinese Remainder Problem is a direct generalization of its classical counterpart in which, for each modulus, the single remainder is replaced by a non-empty set of remainders. The solutions of a Simultaneous Chinese Remainder Problem instance are completely determined by a set of minimal positive solutions, called primitive solutions, which are bounded above by the least common multiple of the considered moduli. However, contrary to its classical counterpart, which has at most one primitive solution, the Simultaneous Chinese Remainder Problem may have an exponential number of primitive solutions, so that any general-purpose solving algorithm requires exponential time. Furthermore, through a direct reduction from the 3-SAT problem, we prove first that deciding whether a solution exists is NP-complete, and second that, even if the existence of solutions is guaranteed, deciding whether a solution of a particular size exists is also NP-complete. Despite these discouraging results, we study methods to find the minimal solution of Simultaneous Chinese Remainder Problem instances and we discover some interesting statistical properties.

    A Conjecture On Primes In Arithmetic Progressions And Geometric Intervals: Dirichlet's theorem on primes in arithmetic progressions states that for any positive integer q and any coprime integer a there are infinitely many primes in the arithmetic progression a + nq (n ∈ N); however, it does not indicate where those primes can be found. Linnik's theorem predicts that the first such prime p0 can be found in the interval [0; q^L], where L denotes an absolute and explicitly computable constant. Although only L = 5 has been proven, it is widely believed that L ≤ 2. We generalize Linnik's theorem by conjecturing that for any integers q ≥ 2, 1 ≤ a ≤ q − 1 with gcd(q, a) = 1, and t ≥ 1, there exists a prime p such that p ∈ [q^t; q^(t+1)] and p ≡ a mod q. Subsequently, we prove the conjecture for all sufficiently large exponents t, we computationally verify it for all sufficiently small moduli q, and we investigate its relation to other mathematical results such as Carmichael's totient function conjecture.

    On The (M)iNTRU Assumption Over Finite Rings: The inhomogeneous NTRU (iNTRU) assumption is a recent computational hardness assumption which claims that first adding a random low-norm error vector to a known gadget vector and then multiplying the result with a secret vector is sufficient to obfuscate the secret vector. The matrix inhomogeneous NTRU (MiNTRU) assumption essentially replaces vectors with matrices. Although these assumptions are strongly reminiscent of the well-known learning-with-errors (LWE) assumption, their hardness has not yet been studied in full detail. We provide an elementary analysis of the corresponding decision assumptions and break them in their base case using an elementary q-ary lattice reduction attack. Concretely, we restrict our study to vectors over finite integer rings, which leads to a problem that we call (M)iNTRU. Starting from a challenge vector, we construct a particular q-ary lattice that contains an unusually short vector whenever the challenge vector follows the (M)iNTRU distribution. Thereby, elementary lattice reduction allows us to distinguish a random challenge vector from a synthetically constructed one.

    A Conditional Attack Against Functional Encryption Schemes: Functional encryption has emerged as an ambitious cryptographic paradigm that supports function evaluations over encrypted data while revealing the result in the clear. The result consists either of a valid output or of a special error symbol. We develop a conditional selective chosen-plaintext attack against the indistinguishability security notion of functional encryption. Intuitively, indistinguishability in the public-key setting is based on the premise that no adversary can distinguish between the encryptions of two known plaintext messages. As functional encryption allows the evaluation of functions over encrypted messages, the adversary is restricted to evaluations that yield the same output. To ensure consistency with other primitives, the decryption procedure of a functional encryption scheme is allowed to fail and output an error. We observe that an adversary may exploit the special role of these errors to craft challenge messages that can be used to win the indistinguishability game. Indeed, the adversary can choose the messages such that their functional evaluation leads to the common error symbol, while their intermediate computation values differ. A formal decomposition of the underlying functionality into a mathematical function and an error trigger reveals this dichotomy. Finally, we outline the impact of this observation on multiple DDH-based inner-product functional encryption schemes when they are restricted to bounded-norm evaluations only.
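    As a concrete illustration of the Simultaneous Chinese Remainder Problem described above, the following Python sketch (an assumed example, not taken from the thesis) enumerates the primitive solutions by running the ordinary CRT on every choice of one remainder per modulus; the combinatorial loop over these choices is exactly why the number of primitive solutions, and hence any general-purpose solver, can blow up exponentially.

```python
from itertools import product
from math import gcd

def crt_pair(r1, m1, r2, m2):
    """Combine x ≡ r1 (mod m1) and x ≡ r2 (mod m2); return (r, lcm) or None."""
    g = gcd(m1, m2)
    if (r2 - r1) % g != 0:
        return None                      # the two congruences are incompatible
    l = m1 // g * m2                     # lcm(m1, m2)
    # solve (m1/g) * t ≡ (r2 - r1)/g (mod m2/g), then x = r1 + m1 * t
    t = ((r2 - r1) // g * pow(m1 // g, -1, m2 // g)) % (m2 // g)
    return (r1 + m1 * t) % l, l

def simultaneous_crt(moduli, remainder_sets):
    """All primitive solutions: x in [0, lcm) with x mod m_i in R_i for every i."""
    solutions = set()
    for choice in product(*remainder_sets):          # one remainder per modulus
        acc = (choice[0] % moduli[0], moduli[0])
        for r, m in zip(choice[1:], moduli[1:]):
            acc = crt_pair(acc[0], acc[1], r % m, m)
            if acc is None:
                break
        if acc is not None:
            solutions.add(acc[0])
    return sorted(solutions)

# x ≡ 1 or 2 (mod 3) and x ≡ 0 or 4 (mod 5): four primitive solutions below 15.
print(simultaneous_crt([3, 5], [{1, 2}, {0, 4}]))    # [4, 5, 10, 14]
```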

    On the Design and Improvement of Lattice-based Cryptosystems

    Digital signatures and encryption schemes arguably constitute an integral part of cryptography, with the goal of meeting the security needs of present and future private and business applications. However, almost all public-key cryptosystems used in practice are put at risk by their vulnerability to quantum attacks as a result of Shor's quantum algorithm. The potential economic and social impact is tremendous, calling for alternatives that can replace classical schemes in case large-scale quantum computers are built. Lattice-based cryptography has emerged as a powerful candidate, attracting a great deal of attention not only because of its conjectured resistance to quantum attacks, but also because of its unique security guarantee of worst-case hardness for average-case instances. Hence, the requirement of imposing further assumptions on the hardness of randomly chosen instances disappears, resulting in more efficient instantiations of cryptographic schemes. The best known lattice attack algorithms run in exponential time. In this thesis we contribute to a smooth transition into a world with practically efficient lattice-based cryptographic schemes, by designing new algorithms and cryptographic schemes as well as improving existing ones. Our contributions are threefold.

    First, we construct new encryption schemes that fully exploit the error term in LWE instances. To this end, we introduce a novel computational problem that we call Augmented LWE (A-LWE), which differs from the original LWE problem only in the way the error term is produced: we embed arbitrary data into the error term without changing the target distributions. We then prove that A-LWE instances are indistinguishable from LWE samples. This allows us to build powerful encryption schemes on top of the A-LWE problem that are simple in their representation and efficient in practice while encrypting large amounts of data, realizing message expansion factors close to 1. To our knowledge, this improves upon all existing encryption schemes. Owing to the versatility of the error term, we further add various security features such as CCA and RCCA security, or even plug lattice-based signatures into parts of the error term, thus providing an additional mechanism to authenticate encrypted data. Based on the methodology of embedding arbitrary data into the error term while keeping the target distributions, we realize a novel CDT-like discrete Gaussian sampler that beats the best known samplers, such as Knuth-Yao or the standard CDT sampler, in terms of running time. At run time, the table size of 44 elements is constant for every discrete Gaussian parameter, and the total space requirement is exactly as large as for the standard CDT sampler. Further results include a very efficient inversion algorithm for ring elements in special classes of cyclotomic rings: using the NTT, it is possible to efficiently check for invertibility and deduce a representation of the corresponding unit group. Moreover, we generalize the LWE inversion algorithm for the trapdoor candidate of Micciancio and Peikert from power-of-two moduli to arbitrary composite integers using a different approach.

    In the second part of this thesis, we present an efficient trapdoor construction for ideal lattices and an associated description of the GPV signature scheme. Furthermore, we improve the signing step using a different representation of the involved perturbation matrix, leading to better memory usage and running times. Subsequently, we introduce an advanced compression algorithm for GPV signatures, which previously suffered from large signature sizes as a result of the construction or of the requirements of the security proof. We circumvent this problem by introducing the notion of public and secret randomness for signatures. In particular, we generate the public portion of a signature from a short uniform random seed without violating the previous conditions. This concept is then transferred to the multi-signer setting, which increases the efficiency of the compression scheme in the presence of multiple signers. Finally in this part, we propose the first lattice-based sequential aggregate signature scheme, which enables a group of signers to sequentially generate an aggregate signature of reduced storage size such that the verifier is still able to check that each signer indeed signed a message. This approach is realized using lattice-based trapdoor functions and has many application areas, such as wireless sensor networks.

    In the final part of this thesis, we extend the theoretical foundations of lattices and propose new representations of lattice problems by means of Cauchy integrals. Viewing lattice points as simple poles of suitable complex functions allows us to operate on lattice points via Cauchy integrals and their generalizations. For instance, in the one- and two-dimensional cases we can derive simple expressions for the number of lattice points inside a domain using trigonometric or elliptic functions.
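    To make the sampler discussion concrete, here is a minimal Python sketch of a standard CDT (cumulative distribution table) discrete Gaussian sampler: build a table of cumulative probabilities once, then draw a sample with one uniform value and a binary search. It is illustrative only and does not reproduce the thesis's optimised CDT-like construction with its constant 44-entry table.

```python
import bisect
import math
import random

def build_cdt(sigma, tail=12):
    """Cumulative table of Pr[|X| <= k] for a discrete Gaussian of width sigma,
    with Pr[X = k] proportional to exp(-k^2 / (2 sigma^2)), truncated at tail*sigma."""
    bound = int(math.ceil(tail * sigma))
    weights = [math.exp(-k * k / (2 * sigma * sigma)) for k in range(bound + 1)]
    # weight 0 counts once, every k > 0 counts twice (for +k and -k)
    total = weights[0] + 2 * sum(weights[1:])
    cdt, acc = [], 0.0
    for k, w in enumerate(weights):
        acc += w if k == 0 else 2 * w
        cdt.append(acc / total)
    return cdt

def sample(cdt):
    """Draw |X| by binary search in the table, then attach a random sign."""
    u = random.random()
    k = bisect.bisect_left(cdt, u)
    if k == 0:
        return 0
    return k if random.random() < 0.5 else -k

cdt = build_cdt(sigma=3.2)
draws = [sample(cdt) for _ in range(100000)]
# empirical mean should be near 0 and empirical RMS near sigma
print(sum(draws) / len(draws), (sum(d * d for d in draws) / len(draws)) ** 0.5)
```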

    Decryption Failure Attacks on Post-Quantum Cryptography

    This dissertation discusses mainly new cryptanalytic results related to the secure implementation of the next generation of asymmetric cryptography, or public-key cryptography (PKC). PKC, as it has been deployed until today, depends heavily on the integer factorization and discrete logarithm problems. Unfortunately, it has been well known since the mid-90s that these mathematical problems can be solved using Peter Shor's algorithm for quantum computers, which finds the answers in polynomial time. The recently accelerated pace of R&D towards quantum computers, eventually of sufficient size and power to threaten cryptography, has led the crypto research community towards a major shift of focus. A project towards the standardization of Post-quantum Cryptography (PQC) was launched by the US-based standardization organization NIST. PQC is the name given to algorithms designed to run on classical hardware/software whilst being resistant to attacks from quantum computers, making it well suited for replacing the current asymmetric schemes. A primary motivation for the project is to guide publicly available research toward the singular goal of finding weaknesses in the proposed next generation of PKC. For public-key encryption (PKE) or digital signature (DS) schemes to be considered secure, they must be shown to rely on well-known mathematical problems, with theoretical proofs of security under established models such as indistinguishability under chosen-ciphertext attack (IND-CCA). They must also withstand serious attack attempts by renowned cryptographers, concerning both theoretical security and the actual software/hardware instantiations. It is well known that security models such as IND-CCA are not designed to capture the intricacies of inner-state leakages. Such leakages are known as side channels, currently a major topic of interest in the NIST PQC project. This dissertation focuses on two questions: 1) how does the low but non-zero probability of decryption failures affect the cryptanalysis of these new PQC candidates? And 2) how might side-channel vulnerabilities inadvertently be introduced when going from theory to practical software/hardware implementations? Of main concern are PQC algorithms based on lattice theory and coding theory. The primary contributions are the discovery of novel decryption-failure side-channel attacks, improvements on existing attacks, an alternative implementation of part of a PQC scheme, and some more theoretical cryptanalytic results.
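    As a toy illustration of where decryption failures come from in lattice-based encryption (a hypothetical example, not any NIST candidate), the Python sketch below models a single ciphertext coefficient: the message bit is scaled to q/2, noise is added, and decryption fails exactly when the noise pushes the coefficient past the q/4 decision threshold. Real schemes pick parameters so this probability is very small but non-zero, which is the property the attacks in the dissertation exploit; the noise here is deliberately inflated so failures are visible.

```python
import random

q = 257                       # toy modulus

def mod_centered(x):
    """Representative of x mod q in (-q/2, q/2]."""
    x %= q
    return x - q if x > q // 2 else x

def noise(sigma=40):
    """Deliberately large noise so failures actually occur in this toy."""
    return round(random.gauss(0, sigma))

def encrypt(bit):
    # toy "ciphertext": message scaled to q/2 plus noise; the key-dependent
    # part (e.g. <a, s>) cancels at decryption and is omitted here
    return (bit * (q // 2) + noise()) % q

def decrypt(c):
    # decide the bit by whether the centered value is closer to 0 or to q/2;
    # fails exactly when |noise| exceeds roughly q/4
    return 1 if abs(mod_centered(c)) > q // 4 else 0

trials = 100000
failures = sum(decrypt(encrypt(b)) != b
               for b in (random.randint(0, 1) for _ in range(trials)))
print("empirical decryption-failure rate:", failures / trials)
```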

    Theoretical and practical efficiency aspects in cryptography

    EThOS - Electronic Theses Online Service, United Kingdom.

    Digital Filtering and Processing by Transform Techniques, Volume 1 Final Report

    Digital filtering and processing by transform techniques.