17 research outputs found

    Prime and Prejudice: Primality Testing Under Adversarial Conditions

    This work provides a systematic analysis of primality testing under adversarial conditions, where the numbers being tested for primality are not generated randomly, but instead provided by a possibly malicious party. Such a situation can arise in secure messaging protocols where a server supplies Diffie-Hellman parameters to the peers, or in a secure communications protocol like TLS where a developer can insert such a number to be able to later passively spy on client-server data. We study a broad range of cryptographic libraries and assess their performance in this adversarial setting. As examples of our findings, we are able to construct 2048-bit composites that are declared prime with probability 1/16 by OpenSSL's primality testing in its default configuration; the advertised failure rate is 2^{-80}. We can also construct 1024-bit composites that always pass the primality testing routine in GNU GMP when configured with the recommended minimum number of rounds. And, for a number of libraries (Cryptlib, LibTomCrypt, JavaScript Big Number, WolfSSL), we can construct composites that always pass the supplied primality tests. We explore the implications of these security failures in applications, focusing on the construction of malicious Diffie-Hellman parameters. We show that, unless careful primality testing is performed, an adversary can supply parameters (p, q, g) which on the surface look secure, but where the discrete logarithm problem in the subgroup of order q generated by g is easy. We close by making recommendations for users and developers. In particular, we promote the Baillie-PSW primality test, which is both efficient and conjectured to be robust even in the adversarial setting for numbers up to a few thousand bits.
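
    The failure probabilities above refer to random-base Miller-Rabin testing with a fixed number of rounds, which is what most of the surveyed libraries implement. As a hedged illustration, not code from the paper or from any library named above, the Python sketch below shows such a test; the function name, the default of 16 rounds, and the small trial-division list are assumptions made for the example.

        import random

        def miller_rabin(n: int, rounds: int = 16) -> bool:
            """Random-base Miller-Rabin: True means 'probably prime'.

            Against adversarially chosen n the worst-case error per round is 1/4,
            so a small fixed round count gives far weaker guarantees than the
            2^{-80} figure quoted for randomly generated inputs.
            """
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 11, 13):        # quick trial division
                if n % p == 0:
                    return n == p
            d, s = n - 1, 0                        # write n - 1 = d * 2^s with d odd
            while d % 2 == 0:
                d //= 2
                s += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)     # fresh random base each round
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False                   # composite witness found
            return True                            # probably prime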

    Primality Tests on Commutator Curves

    This thesis is about efficient primality tests. First, the commutator curve, which is described by one scalar parameter in the two-dimensional special linear group, is introduced. After fundamental research of this curve, it is incorporated into different compositeness tests (e.g. Fermat's test, the Solovay-Strassen test). The most important of these pseudoprimality tests is the Commutator Curve Test. It is proved that, after a fixed number of trial divisions (by all primes below 80), this test returns the result 'true' for a composite number with probability less than 1/16. Moreover, it is shown that, under the assumption that the Extended Riemann Hypothesis is true, Miller's test to check a number n only has to be carried out for all prime bases less than 3/2*ln(n)^2. In the proof of G. L. Miller's primality test, the necessity of the Extended Riemann Hypothesis can thereby be reduced to a single key lemma.
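
    The ERH-conditional result above turns Miller's test into a deterministic procedure: run the strong test for every prime base below 3/2*ln(n)^2. The Python sketch below illustrates that procedure; it is not code from the thesis, the function names and the constant c are assumptions for illustration, and the bound usually cited in the literature is 2*ln(n)^2 over all integer bases.

        import math

        def strong_test(n: int, a: int) -> bool:
            """One Miller (strong probable prime) test of odd n > 2 to base a."""
            d, s = n - 1, 0
            while d % 2 == 0:
                d //= 2
                s += 1
            x = pow(a, d, n)
            if x in (1, n - 1):
                return True
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    return True
            return False

        def miller_erh(n: int, c: float = 1.5) -> bool:
            """Deterministic Miller test over all prime bases below c*ln(n)^2.

            c = 1.5 follows the bound claimed in the thesis under the Extended
            Riemann Hypothesis; correctness of the result depends on that bound.
            """
            if n < 2:
                return False
            if n in (2, 3):
                return True
            if n % 2 == 0:
                return False
            bound = int(c * math.log(n) ** 2)
            sieve = bytearray([1]) * (bound + 1)   # sieve of Eratosthenes for the bases
            sieve[0:2] = b"\x00\x00"
            for p in range(2, int(bound ** 0.5) + 1):
                if sieve[p]:
                    sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
            for a in range(2, bound + 1):
                if sieve[a] and a < n and not strong_test(n, a):
                    return False
            return True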

    Fooling primality tests on smartcards

    We analyse whether the smartcards of the JavaCard platform correctly validate primality of domain parameters. The work is inspired by the paper Prime and prejudice: primality testing under adversarial conditions, where the authors analysed many open-source libraries and constructed pseudoprimes fooling the primality testing functions. However, in the case of smartcards, often there is no way to invoke the primality test directly, so we trigger it by replacing (EC)DSA and (EC)DH prime domain parameters by adversarial composites. Such a replacement results in vulnerability to Pohlig-Hellman style attacks, leading to private key recovery. Out of nine smartcards (produced by five major manufacturers) we tested, all but one have no primality test in parameter validation. As the JavaCard platform provides no public primality testing API, the problem cannot be fixed by an extra parameter check, making it difficult to mitigate in already deployed smartcards.
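
    Because deployed cards cannot easily be patched to re-check the parameters, the natural mitigation is to validate (EC)DSA and (EC)DH domain parameters on the host before they are ever sent to the card. The sketch below is a minimal, hypothetical host-side check for finite-field DH parameters in Python; it assumes sympy is available for primality testing, and the function name and the exact set of checks are illustrative assumptions, not a JavaCard or standards-mandated API.

        from sympy import isprime   # BPSW-style primality test (assumes sympy is installed)

        def validate_dh_params(p: int, q: int, g: int) -> bool:
            """Hypothetical host-side sanity check of DH domain parameters (p, q, g)."""
            if not (isprime(p) and isprime(q)):
                return False               # a composite p or q enables Pohlig-Hellman attacks
            if (p - 1) % q != 0:
                return False               # q must divide the order p - 1 of the full group
            if not (1 < g < p):
                return False
            if pow(g, q, p) != 1:
                return False               # g must have order exactly q (q prime, g != 1)
            return True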

    Safety in Numbers: On the Need for Robust Diffie-Hellman Parameter Validation

    We consider the problem of constructing Diffie-Hellman (DH) parameters which pass standard approaches to parameter validation but for which the Discrete Logarithm Problem (DLP) is relatively easy to solve. We consider both the finite field setting and the elliptic curve setting. For finite fields, we show how to construct DH parameters (p,q,g) for the safe prime setting in which p=2q+1 is prime, q is relatively smooth but fools random-base Miller-Rabin primality testing with some reasonable probability, and g is of order q mod p. The construction involves modifying and combining known methods for obtaining Carmichael numbers. Concretely, we provide an example with 1024-bit p which passes OpenSSL's Diffie-Hellman validation procedure with probability 2^{-24} (for versions of OpenSSL prior to 1.1.0i). Here, the largest factor of q has 121 bits, meaning that the DLP can be solved with about 2^{64} effort using the Pohlig-Hellman algorithm. We go on to explain how this parameter set can be used to mount offline dictionary attacks against PAKE protocols. In the elliptic curve case, we use an algorithm of Bröker and Stevenhagen to construct an elliptic curve E over a finite field F_p having a specified number of points n. We are able to select n of the form hq such that h is a small co-factor, q is relatively smooth but fools random-base Miller-Rabin primality testing with some reasonable probability, and E has a point of order q. Concretely, we provide example curves at the 128-bit security level with h=1, where q passes a single random-base Miller-Rabin primality test with probability 1/4 and where the elliptic curve DLP can be solved with about 2^{44} effort. Alternatively, we can pass the test with probability 1/8 and solve the elliptic curve DLP with about 2^{35.5} effort. These ECDH parameter sets lead to similar attacks on PAKE protocols relying on elliptic curves. Our work shows the importance of performing proper (EC)DH parameter validation in cryptographic implementations and/or the wisdom of relying on standardised parameter sets of known provenance.
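
    The attack efforts quoted above come from the Pohlig-Hellman reduction: a discrete logarithm in a group whose order is smooth splits into small discrete logarithms in prime-order subgroups, which are recombined with the Chinese Remainder Theorem. The Python sketch below illustrates the idea for a square-free smooth order; it is an illustration under those assumptions, not the authors' code, and a real attack would use baby-step giant-step or Pollard rho in each subgroup instead of brute force.

        from math import prod

        def dlog_small(g: int, h: int, p: int, ell: int) -> int:
            """Brute-force discrete log of h to base g in a subgroup of small order ell."""
            cur = 1
            for x in range(ell):
                if cur == h:
                    return x
                cur = (cur * g) % p
            raise ValueError("h is not in the subgroup generated by g")

        def pohlig_hellman(g, h, p, factors):
            """Solve g^x = h (mod p) when g has square-free smooth order prod(factors)."""
            n = prod(factors)                     # order of g
            x, mod = 0, 1
            for ell in factors:
                g_i = pow(g, n // ell, p)         # generator of the order-ell subgroup
                h_i = pow(h, n // ell, p)         # projection of h into that subgroup
                r = dlog_small(g_i, h_i, p, ell)  # x mod ell
                t = ((r - x) * pow(mod, -1, ell)) % ell
                x += mod * t                      # CRT: lift x to a solution mod mod*ell
                mod *= ell
            return x % n

        # Example: pohlig_hellman(5, 21, 23, [2, 11]) == 13, since pow(5, 13, 23) == 21.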

    A one-parameter quadratic-base version of the Baillie–PSW probable prime test

    Abstract. The well-known Baillie-PSW probable prime test is a combination of a Rabin-Miller test and a “true” (i.e., with (D/n) = −1) Lucas test. Arnault mentioned in a recent paper that no precise result is known about its probability of error. Grantham recently provided a probable prime test (RQFT) with probability of error less than 1/7710, and pointed out that the lack of counter-examples to the Baillie-PSW test indicates that the true probability of error may be much lower. In this paper we first define pseudoprimes and strong pseudoprimes to quadratic bases with one parameter: T_u = T mod (T^2 − uT + 1), and define the base-counting functions B(n) = #{u : 0 ≤ u < n, n is a psp(T_u)} and SB(n) = #{u : 0 ≤ u < n, n is an spsp(T_u)}. Then we give explicit formulas to compute B(n) and SB(n), and prove that, for odd composites n, B(n) < n/2 and SB(n) < n/8, and point out that these bounds are best possible. Finally, based on one-parameter quadratic-base pseudoprimes, we provide a probable prime test, called the One-Parameter Quadratic-Base Test (OPQBT), which is passed by all primes ≥ 5 and is passed by an odd composite n = p_1^{r_1} p_2^{r_2} ··· p_s^{r_s} (p_1 < p_2 < ··· < p_s odd primes) with probability of error τ(n). We give explicit formulas to compute τ(n), and prove that …
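
    To make the quadratic-base notation concrete, the Python sketch below works in the ring Z[T]/(n, T^2 − uT + 1) and checks the Fermat-type congruence T^(n+1) = 1, which every prime n satisfies when the Jacobi symbol (u^2 − 4 | n) equals −1; composites satisfying it are pseudoprimes to the base T_u in the spirit of the paper's definitions. This is an illustrative sketch under those assumptions, not the paper's OPQBT, and the helper names are assumptions.

        def jacobi(a: int, n: int) -> int:
            """Jacobi symbol (a | n) for odd n > 0."""
            a %= n
            result = 1
            while a:
                while a % 2 == 0:
                    a //= 2
                    if n % 8 in (3, 5):
                        result = -result
                a, n = n, a
                if a % 4 == 3 and n % 4 == 3:
                    result = -result
                a %= n
            return result if n == 1 else 0

        def quad_mul(x, y, u, n):
            """Multiply a + b*T and c + d*T in Z[T]/(n, T^2 - u*T + 1)."""
            a, b = x
            c, d = y
            # reduce T^2 to u*T - 1
            return ((a * c - b * d) % n, (a * d + b * c + b * d * u) % n)

        def is_quadratic_base_prp(n: int, u: int) -> bool:
            """Fermat-type probable prime test of odd n > 3 to the quadratic base T_u.

            Requires (u^2 - 4 | n) = -1; every prime n of that form satisfies
            T^(n+1) = 1 in Z[T]/(n, T^2 - u*T + 1), so returning False proves n composite.
            """
            if jacobi(u * u - 4, n) != -1:
                raise ValueError("choose u with Jacobi(u^2 - 4, n) = -1")
            result, base = (1, 0), (0, 1)        # the elements 1 and T, as (const, coeff of T)
            e = n + 1
            while e:                             # square-and-multiply exponentiation
                if e & 1:
                    result = quad_mul(result, base, u, n)
                base = quad_mul(base, base, u, n)
                e >>= 1
            return result == (1, 0)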