138 research outputs found
The complexity of class polynomial computation via floating point approximations
We analyse the complexity of computing class polynomials, which are an
important ingredient for CM constructions of elliptic curves, via complex
floating point approximations of their roots. The heart of the algorithm is the
evaluation of modular functions in several arguments. The fastest of the
presented approaches uses a technique devised by Dupont to evaluate modular
functions by Newton iterations on an expression involving the
arithmetic-geometric mean. It runs in time O(|D|^(1 + eps)) for any eps > 0, where
D is the CM discriminant and h is the degree of the class polynomial.
Another fast algorithm uses multipoint evaluation techniques known from
symbolic computation; its asymptotic complexity is worse by a logarithmic
factor. Up to logarithmic factors, this running time matches the size of the
constructed polynomials. The estimate also relies on a new result concerning
the complexity of enumerating the class group of an imaginary-quadratic order
and on a rigorously proven upper bound for the height of class polynomials.
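For very small discriminants the basic floating point approach can be sketched in a few lines: enumerate the reduced quadratic forms of discriminant D, evaluate the modular j-function at the corresponding points of the upper half-plane, expand the product of the factors (x - j), and round the coefficients to integers. The sketch below is our own illustration (plain double precision, naive q-expansions, function names ours); the paper's algorithms instead use certified multiprecision arithmetic and much faster modular function evaluation.

```python
import cmath
import math

def reduced_forms(D):
    # Reduced primitive quadratic forms (a, b, c) with b^2 - 4ac = D,
    # -a < b <= a <= c, and b >= 0 when a == c; one form per class.
    forms = []
    for a in range(1, math.isqrt(-D // 3) + 2):
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and math.gcd(math.gcd(a, abs(b)), c) == 1:
                    if not (a == c and b < 0):
                        forms.append((a, b, c))
    return forms

def j_invariant(tau, terms=60):
    # Klein j-function via q-expansions: j = E4^3 / Delta.
    q = cmath.exp(2j * math.pi * tau)
    e4 = 1 + 240 * sum(
        sum(d ** 3 for d in range(1, n + 1) if n % d == 0) * q ** n
        for n in range(1, terms))
    delta = q
    for n in range(1, terms):
        delta *= (1 - q ** n) ** 24
    return e4 ** 3 / delta

def hilbert_class_poly(D):
    # Monic polynomial (descending coefficients) whose roots are the
    # j-values of the reduced forms; coefficients rounded to integers.
    poly = [1 + 0j]
    for a, b, _ in reduced_forms(D):
        jv = j_invariant((-b + 1j * math.sqrt(-D)) / (2 * a))
        poly = poly + [0j]
        for i in range(len(poly) - 1, 0, -1):
            poly[i] -= jv * poly[i - 1]
    return [round(c.real) for c in poly]
```

For D = -7 this yields x + 3375, and for D = -15 the quadratic x^2 + 191025x - 121287375; beyond tiny |D| the coefficient heights quickly exceed double precision, which is why the paper's height bound and multiprecision analysis matter.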
A New Algorithm for Computing the Actions of Trigonometric and Hyperbolic Matrix Functions
A new algorithm is derived for computing the actions f(t*A)B and
f(t*sqrt(A))B, where f is the cosine, sinc, sine, hyperbolic cosine,
hyperbolic sinc, or hyperbolic sine function, A is an n x n matrix, and B is
n x n0 with n0 << n. Here sqrt(A) denotes any matrix square root of A, and it
is never required to be computed. The algorithm offers six independent output
options given t, A, B, and a tolerance. For each option, actions of a pair of
trigonometric or hyperbolic matrix functions are simultaneously computed. The
algorithm scales the matrix down by a positive integer s, approximates the
scaled action by a truncated Taylor series, and finally uses the recurrences
of the Chebyshev polynomials of the first and second kind to recover the
unscaled action. The selection of the scaling parameter and the degree of the
Taylor polynomial are based on a forward error analysis and a sequence of the
form ||A^k||^(1/k), in such a way that the overall computational cost of the
algorithm is optimized. Shifting is used where applicable as a preprocessing
step to reduce the scaling parameter. The algorithm works for any matrix A,
and its computational cost is dominated by the formation of products of A
with n x n0 matrices, which can take advantage of level-3 BLAS
implementations. Our numerical experiments show that the new algorithm behaves
in a forward stable fashion and in most problems outperforms the existing
algorithms in terms of CPU time, computational cost, and accuracy.
Comment: 4 figures, 16 pages
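The central observation, namely that only even powers of sqrt(A) appear in these functions so that products with A alone suffice, can be illustrated with a bare-bones Taylor action. This is our own minimal sketch (function names ours, fixed truncation degree, none of the paper's scaling, Chebyshev recovery, or error analysis):

```python
import math

def mat_block_product(A, B):
    # Product of an n x n matrix A with an n x n0 block of vectors B.
    n, n0 = len(A), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n0)]
            for i in range(n)]

def cos_sqrt_action(A, B, t=1.0, m=20):
    # Approximate cos(t*sqrt(A))*B by the truncated Taylor series
    #   sum_{k=0..m} (-1)^k t^(2k) A^k B / (2k)!
    # Only even powers of sqrt(A) occur, so sqrt(A) is never formed.
    n, n0 = len(A), len(B[0])
    C = [row[:] for row in B]   # k = 0 term
    P = [row[:] for row in B]   # running value of A^k B
    for k in range(1, m + 1):
        P = mat_block_product(A, P)   # one n x n by n x n0 product
        coeff = (-1) ** k * t ** (2 * k) / math.factorial(2 * k)
        for i in range(n):
            for j in range(n0):
                C[i][j] += coeff * P[i][j]
    return C
```

With A = [[4.0]] this returns cos(2), and with A = [[-1.0]] it returns cosh(1): the same series covers the trigonometric and hyperbolic cases, and A need not have a real square root at all.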
Generalised Mersenne Numbers Revisited
Generalised Mersenne Numbers (GMNs) were defined by Solinas in 1999 and
feature in the NIST (FIPS 186-2) and SECG standards for use in elliptic curve
cryptography. Their form is such that modular reduction is extremely efficient,
thus making them an attractive choice for modular multiplication
implementation. However, the issue of residue multiplication efficiency seems
to have been overlooked. Asymptotically, using a cyclic rather than a linear
convolution, residue multiplication modulo a Mersenne number is twice as fast
as integer multiplication; this property does not hold for prime GMNs, unless
they are of Mersenne's form. In this work we exploit an alternative
generalisation of Mersenne numbers for which an analogue of the above property
--- and hence the same efficiency ratio --- holds, even at bitlengths for which
schoolbook multiplication is optimal, while also maintaining very efficient
reduction. Moreover, our proposed primes are abundant at any bitlength, whereas
GMNs are extremely rare. Our multiplication and reduction algorithms can also
be easily parallelised, making our arithmetic particularly suitable for
hardware implementation. Furthermore, the field representation we propose also
naturally protects against side-channel attacks, including timing attacks,
simple power analysis and differential power analysis, which is essential in
many cryptographic scenarios, in contrast to GMNs.
Comment: 32 pages. Accepted to Mathematics of Computation
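To see why Mersenne-form moduli make reduction so cheap, note that 2^s is congruent to 1 modulo p = 2^s - 1, so the high bits of a product can simply be folded back onto the low bits: reduction needs only shifts, masks, and adds, never a division. A minimal sketch (our own illustration of classical Mersenne reduction, not the paper's arithmetic, which targets an alternative generalisation with cyclic-convolution multiplication):

```python
def mersenne_reduce(x, s):
    # Reduce x modulo p = 2^s - 1. Since 2^s = 1 (mod p), the high
    # bits of x fold back onto the low s bits until x fits in s bits.
    p = (1 << s) - 1
    while x > p:
        x = (x & p) + (x >> s)
    return 0 if x == p else x
```

For example, with s = 13 (p = 8191 is prime), a double-length product a * b is reduced by at most a couple of fold steps, which is the efficiency property the abstract contrasts with residue multiplication.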
Random Matrices and the Convergence of Partition Function Zeros in Finite Density QCD
We apply the Glasgow method for lattice QCD at finite chemical potential to a
schematic random matrix model (RMM). In this method the zeros of the partition
function are obtained by averaging the coefficients of its expansion in powers
of the chemical potential. In this paper we investigate the phase structure by
means of Glasgow averaging and demonstrate that the method converges to the
correct analytically known result. We conclude that the statistics needed for
complete convergence grows exponentially with the size of the system, in our
case, the dimension of the Dirac matrix. The use of an unquenched ensemble at
mu = 0 does not give an improvement over a quenched ensemble.
We elucidate the phenomenon of a faster convergence of certain zeros of the
partition function. The imprecision affecting the coefficients of the
polynomial in the chemical potential can be interpreted as the appearance of a
spurious phase. This phase dominates in the regions where the exact partition
function is exponentially small, introducing additional phase boundaries, and
hiding part of the true ones. The zeros along the surviving parts of the true
boundaries remain unaffected.
Comment: 17 pages, 14 figures, typos corrected
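The core of the Glasgow method, averaging the expansion coefficients of the partition function over configurations and then locating the zeros of the averaged polynomial, can be sketched as follows. This is a toy illustration with our own function names, using Durand-Kerner iteration as a generic stand-in root finder; the paper of course works with coefficients measured on the actual random matrix ensemble:

```python
def poly_eval(coeffs, z):
    # Horner evaluation; coefficients in descending order.
    v = 0j
    for c in coeffs:
        v = v * z + c
    return v

def polynomial_zeros(coeffs, iters=500):
    # All complex roots of the polynomial via Durand-Kerner iteration.
    lead = coeffs[0]
    monic = [c / lead for c in coeffs]
    n = len(monic) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # distinct seeds
    for _ in range(iters):
        new_roots = []
        for i, z in enumerate(roots):
            denom = 1 + 0j
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w
            new_roots.append(z - poly_eval(monic, z) / denom)
        roots = new_roots
    return roots

def glasgow_zeros(coeff_samples):
    # Average each expansion coefficient over the ensemble of
    # configurations, then find the zeros of the averaged polynomial
    # in the chemical potential (fugacity) variable.
    n_samples = len(coeff_samples)
    mean = [sum(sample[k] for sample in coeff_samples) / n_samples
            for k in range(len(coeff_samples[0]))]
    return polynomial_zeros(mean)
```

The convergence issue discussed in the abstract appears here as well: when the averaged coefficients carry statistical noise, zeros in regions where the exact polynomial is exponentially small are the first to be displaced, while zeros on well-resolved parts of the phase boundary are stable.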