
    The complexity of class polynomial computation via floating point approximations

    We analyse the complexity of computing class polynomials, which are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. It runs in time $O(|D| \log^5 |D| \log \log |D|) = O(|D|^{1 + \epsilon}) = O(h^{2 + \epsilon})$ for any $\epsilon > 0$, where $D$ is the CM discriminant and $h$ is the degree of the class polynomial. Another fast algorithm uses multipoint evaluation techniques known from symbolic computation; its asymptotic complexity is worse by a factor of $\log |D|$. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary-quadratic order and on a rigorously proven upper bound for the height of class polynomials.
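    As a concrete but much simplified illustration of the floating point approach (not the paper's fast algorithm; it uses neither the Dupont/AGM evaluation nor multipoint evaluation, and the fixed precision `dps` is a placeholder rather than the rigorous height bound), the following sketch enumerates the reduced quadratic forms of a discriminant $D$, evaluates the $j$-invariant at the corresponding CM points with mpmath, and rounds the product over the roots to integer coefficients:

        import math
        from mpmath import mp, mpc, sqrt, kleinj, nint

        def reduced_forms(D):
            """All reduced forms (a, b, c) with b^2 - 4ac = D, for D < 0."""
            assert D < 0 and D % 4 in (0, 1)
            forms = []
            for a in range(1, math.isqrt(abs(D) // 3) + 1):
                for b in range(-a + 1, a + 1):
                    if (b * b - D) % (4 * a):
                        continue
                    c = (b * b - D) // (4 * a)
                    if c < a or ((abs(b) == a or a == c) and b < 0):
                        continue
                    forms.append((a, b, c))
            return forms

        def hilbert_class_polynomial(D, dps=60):
            """Coefficients of H_D(X), lowest degree first, via floating point roots."""
            mp.dps = dps
            poly = [mpc(1)]                          # the constant polynomial 1
            for a, b, _ in reduced_forms(D):
                tau = (-b + sqrt(mpc(D))) / (2 * a)  # CM point in the upper half-plane
                j = 1728 * kleinj(tau)               # mpmath normalises kleinj(i) = 1
                new = [mpc(0)] * (len(poly) + 1)     # multiply poly by (X - j)
                for k, ck in enumerate(poly):
                    new[k] -= j * ck
                    new[k + 1] += ck
                poly = new
            return [int(nint(ck.real)) for ck in poly]

        # Example: D = -23 has class number h = 3, so the polynomial has degree 3.
        # print(hilbert_class_polynomial(-23))

    The cost of this naive variant is dominated by the $h$ separate high-precision evaluations of $j$; the algorithms analysed above reduce exactly that step.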

    A New Algorithm for Computing the Actions of Trigonometric and Hyperbolic Matrix Functions

    A new algorithm is derived for computing the actions $f(tA)B$ and $f(tA^{1/2})B$, where $f$ is the cosine, sinc, sine, hyperbolic cosine, hyperbolic sinc, or hyperbolic sine function, $A$ is an $n\times n$ matrix, and $B$ is $n\times n_0$ with $n_0 \ll n$. $A^{1/2}$ denotes any matrix square root of $A$, and it is never required to be computed. The algorithm offers six independent output options given $t$, $A$, $B$, and a tolerance. For each option, the actions of a pair of trigonometric or hyperbolic matrix functions are computed simultaneously. The algorithm scales the matrix $A$ down by a positive integer $s$, approximates $f(s^{-1}tA)B$ by a truncated Taylor series, and finally uses the recurrences of the Chebyshev polynomials of the first and second kind to recover $f(tA)B$. The scaling parameter and the degree of the Taylor polynomial are selected based on a forward error analysis and a sequence of the form $\|A^k\|^{1/k}$, in such a way that the overall computational cost of the algorithm is optimized. Shifting is used where applicable as a preprocessing step to reduce the scaling parameter. The algorithm works for any matrix $A$, and its computational cost is dominated by the formation of products of $A$ with $n\times n_0$ matrices, which can take advantage of level-3 BLAS implementations. Our numerical experiments show that the new algorithm behaves in a forward stable fashion and in most problems outperforms the existing algorithms in terms of CPU time, computational cost, and accuracy. Comment: 4 figures, 16 pages
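    A rough sketch of the scale-and-recover idea for the cosine case (assuming NumPy; the fixed scaling $s$ and Taylor degree $m$ stand in for the paper's adaptive choices, and shifting and the error analysis are omitted). It uses the first-kind Chebyshev recurrence $\cos((k+1)\theta) = 2\cos\theta\cos(k\theta) - \cos((k-1)\theta)$, applied to actions on $B$:

        import numpy as np

        def cos_action(A, B, t=1.0, s=8, m=12):
            """Approximate cos(t*A) @ B without ever forming cos(t*A)."""
            def cos_small(X):
                # Truncated Taylor series of cos((t/s) A) applied to X.
                term, total = X, X.copy()
                for k in range(1, m + 1):
                    term = -((t / s) ** 2) * (A @ (A @ term)) / ((2 * k - 1) * (2 * k))
                    total = total + term
                return total
            # Chebyshev recurrence of the first kind with Y = (t/s) A:
            # cos((k+1)Y)B = 2 cos(Y) (cos(kY)B) - cos((k-1)Y)B.
            C_prev, C_curr = B, cos_small(B)
            for _ in range(s - 1):
                C_prev, C_curr = C_curr, 2 * cos_small(C_curr) - C_prev
            return C_curr

    For $s = 1$ this reduces to the plain truncated Taylor approximation; larger $s$ trades extra recurrence steps for a smaller argument and hence a shorter, better-behaved series. Every operation is a product of $A$ with an $n\times n_0$ block, matching the level-3 BLAS structure described above.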

    Generalised Mersenne Numbers Revisited

    Generalised Mersenne Numbers (GMNs) were defined by Solinas in 1999 and feature in the NIST (FIPS 186-2) and SECG standards for use in elliptic curve cryptography. Their form is such that modular reduction is extremely efficient, thus making them an attractive choice for modular multiplication implementation. However, the issue of residue multiplication efficiency seems to have been overlooked. Asymptotically, using a cyclic rather than a linear convolution, residue multiplication modulo a Mersenne number is twice as fast as integer multiplication; this property does not hold for prime GMNs, unless they are of Mersenne's form. In this work we exploit an alternative generalisation of Mersenne numbers for which an analogue of the above property --- and hence the same efficiency ratio --- holds, even at bitlengths for which schoolbook multiplication is optimal, while also maintaining very efficient reduction. Moreover, our proposed primes are abundant at any bitlength, whereas GMNs are extremely rare. Our multiplication and reduction algorithms can also be easily parallelised, making our arithmetic particularly suitable for hardware implementation. Furthermore, the field representation we propose also naturally protects against side-channel attacks, including timing attacks, simple power analysis and differential power analysis, which is essential in many cryptographic scenarios, in contrast to GMNs. Comment: 32 pages. Accepted to Mathematics of Computation
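    A toy illustration of the cyclic convolution property for a Mersenne modulus (the limb representation is illustrative only, and nothing here is constant-time or parallel): writing residues modulo $2^{wk} - 1$ in $k$ limbs of $w$ bits, the relation $2^{wk} \equiv 1$ makes high limb products wrap around, so the product is a cyclic rather than linear convolution of the limb vectors.

        def mul_mod_mersenne(a_limbs, b_limbs, w):
            """Multiply residues mod 2**(w*k) - 1 given as k limbs of w bits each."""
            k, mask = len(a_limbs), (1 << w) - 1
            # Cyclic convolution: 2**(w*k) == 1 mod the modulus, so indices wrap.
            c = [0] * k
            for i in range(k):
                for j in range(k):
                    c[(i + j) % k] += a_limbs[i] * b_limbs[j]
            # Carry propagation; a carry out of the top limb also wraps to limb 0.
            carry = 0
            while True:
                for i in range(k):
                    total = c[i] + carry
                    c[i], carry = total & mask, total >> w
                if not carry:
                    return c  # may be the non-canonical all-ones (== 0) form

    For example, with $w = 2$ and $k = 2$ (modulus 15), mul_mod_mersenne([3, 1], [3, 1], 2) returns [0, 1], i.e. $7 \cdot 7 \equiv 4 \pmod{15}$. A linear convolution would need $2k - 1$ output limbs plus a separate reduction; the wrap-around is what halves the work.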

    Random Matrices and the Convergence of Partition Function Zeros in Finite Density QCD

    We apply the Glasgow method for lattice QCD at finite chemical potential to a schematic random matrix model (RMM). In this method the zeros of the partition function are obtained by averaging the coefficients of its expansion in powers of the chemical potential. In this paper we investigate the phase structure by means of Glasgow averaging and demonstrate that the method converges to the correct analytically known result. We conclude that the statistics needed for complete convergence grow exponentially with the size of the system, in our case the dimension of the Dirac matrix. The use of an unquenched ensemble at $\mu=0$ does not give an improvement over a quenched ensemble. We elucidate the phenomenon of a faster convergence of certain zeros of the partition function. The imprecision affecting the coefficients of the polynomial in the chemical potential can be interpreted as the appearance of a spurious phase. This phase dominates in the regions where the exact partition function is exponentially small, introducing additional phase boundaries, and hiding part of the true ones. The zeros along the surviving parts of the true boundaries remain unaffected. Comment: 17 pages, 14 figures, typos corrected
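    Schematically, the averaging step looks as follows (assuming NumPy; the names and shapes are illustrative, and the expensive part, extracting the per-configuration expansion coefficients from the Dirac matrix, is omitted):

        import numpy as np

        def glasgow_zeros(coeff_samples):
            """Partition function zeros from ensemble-averaged coefficients.

            coeff_samples: complex array of shape (n_configs, n_coeffs) holding,
            per configuration, the coefficients of Z in ascending powers of the
            fugacity exp(mu). Statistical noise in the averaged coefficients is
            what produces the spurious phase described above.
            """
            c_bar = coeff_samples.mean(axis=0)  # Glasgow averaging over the ensemble
            return np.roots(c_bar[::-1])        # np.roots expects highest power first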