Computing the endomorphism ring of an ordinary elliptic curve over a finite field
We present two algorithms to compute the endomorphism ring of an ordinary
elliptic curve E defined over a finite field F_q. Under suitable heuristic
assumptions, both have subexponential complexity. We bound the complexity of
the first algorithm in terms of log q, while our bound for the second algorithm
depends primarily on log |D_E|, where D_E is the discriminant of the order
isomorphic to End(E). As a byproduct, our method yields a short certificate
that may be used to verify that the endomorphism ring is as claimed.
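The search space here is concrete: for an ordinary curve, End(E) is an order between Z[pi] and the maximal order O_K, so its discriminant is f^2 * d_K for some divisor f of the conductor of Z[pi]. The sketch below (illustrative only, not either of the paper's algorithms; the Frobenius trace t is assumed known, e.g. from point counting) enumerates the candidate discriminants:

```python
def factor(n):
    """Naive factorization of n > 0 into a {prime: exponent} dict."""
    fs, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fs[p] = fs.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def candidate_discriminants(q, t):
    """Discriminants of orders O with Z[pi] <= O <= O_K; End(E) is one of them."""
    D_pi = t * t - 4 * q                  # discriminant of Z[pi]; < 0 for ordinary E
    c = 1                                 # conductor of Z[pi] in O_K
    for p, e in factor(-D_pi).items():
        c *= p ** (e // 2)
    while (D_pi // (c * c)) % 4 not in (0, 1):
        c //= 2                           # keep D_pi / c^2 a valid discriminant
    d_K = D_pi // (c * c)                 # fundamental discriminant of Q(pi)
    return sorted(f * f * d_K for f in divisors(c))
```

For example, q = 13 with trace t = 4 gives D_pi = -36 and candidates [-36, -4]; deciding which candidate equals disc(End(E)) is exactly what the paper's algorithms do in subexponential time.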
The complexity of class polynomial computation via floating point approximations
We analyse the complexity of computing class polynomials, that are an
important ingredient for CM constructions of elliptic curves, via complex
floating point approximations of their roots. The heart of the algorithm is the
evaluation of modular functions in several arguments. The fastest one of the
presented approaches uses a technique devised by Dupont to evaluate modular
functions by Newton iterations on an expression involving the
arithmetic-geometric mean. It runs in time O(|D| log^5 |D| log log |D|) = O(|D|^(1+eps)) = O(h^(2+eps)) for any eps > 0, where D
is the CM discriminant and h is the degree of the class polynomial.
Another fast algorithm uses multipoint evaluation techniques known from
symbolic computation; its asymptotic complexity is worse by a factor of
log |D|. Up to logarithmic factors, this running time matches the size of the
constructed polynomials. The estimate also relies on a new result concerning
the complexity of enumerating the class group of an imaginary-quadratic order
and on a rigorously proven upper bound for the height of class polynomials.
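The floating-point approach can be seen end to end on a tiny example. The sketch below (illustrative only: hardcoded double precision, naive q-expansions, and none of the paper's fast evaluation techniques) enumerates the reduced quadratic forms of a discriminant D, evaluates j at the corresponding CM points, and rounds the expanded product to recover the class polynomial, e.g. H_{-15}(X) = X^2 + 191025 X - 121287375:

```python
import cmath
import math

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def j_invariant(tau, terms=40):
    """Klein j-function via the q-expansions of E4 and Delta (double precision)."""
    q = cmath.exp(2j * math.pi * tau)
    E4 = 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, terms))
    eta24 = q                              # q * prod (1 - q^n)^24
    for n in range(1, terms):
        eta24 *= (1 - q ** n) ** 24
    return E4 ** 3 / eta24

def reduced_forms(D):
    """Reduced binary quadratic forms (a, b, c) of discriminant D < 0."""
    forms = []
    for a in range(1, math.isqrt(-D // 3) + 1):
        for b in range(-a + 1, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (a == c and b < 0):
                    forms.append((a, b, c))
    return forms

def class_polynomial(D):
    roots = [j_invariant((-b + cmath.sqrt(complex(D))) / (2 * a))
             for a, b, _ in reduced_forms(D)]
    coeffs = [1 + 0j]                      # descending degree, leading term X^h
    for r in roots:                        # multiply running product by (X - r)
        nxt = coeffs + [0j]
        for i, co in enumerate(coeffs):
            nxt[i + 1] -= r * co
        coeffs = nxt
    return [round(co.real) for co in coeffs]
```

Double precision suffices only for tiny |D|; the whole point of the paper's analysis is the precision actually needed as |D| grows, which is governed by the height bound for class polynomials.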
A low-memory algorithm for finding short product representations in finite groups
We describe a space-efficient algorithm for solving a generalization of the
subset sum problem in a finite group G, using a Pollard-rho approach. Given an
element z and a sequence of elements S, our algorithm attempts to find a
subsequence of S whose product in G is equal to z. For a random sequence S of
length d log_2 n, where n=#G and d >= 2 is a constant, we find that its
expected running time is O(sqrt(n) log n) group operations (we give a rigorous
proof for d > 4), and it only needs to store O(1) group elements. We consider
applications to class groups of imaginary quadratic fields, and to finding
isogenies between elliptic curves over a finite field.
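The Pollard-rho flavour is easy to demonstrate in miniature. The toy walk below (illustrative only: it runs in the additive group Z_n, ignores the target z, and does not enforce the subsequence constraint that is the paper's actual contribution) steps by an element of S selected by the current state; Floyd's cycle-finding then recovers, storing only O(1) group elements, a multiset of elements of S whose sum is 0 mod n:

```python
def walk_relation(S, n, x0=1):
    """Find a multiset of elements of S summing to 0 (mod n) in O(1) memory."""
    d = len(S)
    f = lambda x: (x + S[x % d]) % n      # deterministic pseudo-random walk
    tort, hare = f(x0), f(f(x0))
    while tort != hare:                   # Floyd: detect that the walk has cycled
        tort, hare = f(tort), f(f(hare))
    tort = x0
    while tort != hare:                   # locate the entry point of the cycle
        tort, hare = f(tort), f(hare)
    relation, x = [], tort
    while True:                           # read off the steps around the cycle;
        relation.append(S[x % d])         # they return to the start, so they sum
        x = f(x)                          # to 0 modulo n
        if x == tort:
            break
    return relation
```

Replacing Z_n by a class group and keeping track of which sequence positions were used (at most once each) is where the real algorithm departs from this toy.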
Accelerating the CM method
Given a prime q and a negative discriminant D, the CM method constructs an
elliptic curve E/F_q by obtaining a root of the Hilbert class polynomial H_D(X)
modulo q. We consider an approach based on a decomposition of the ring class
field defined by H_D, which we adapt to a CRT setting. This yields two
algorithms, each of which obtains a root of H_D mod q without necessarily
computing any of its coefficients. Heuristically, our approach uses
asymptotically less time and space than the standard CM method for almost all
D. Under the GRH, and reasonable assumptions about the size of log q relative
to |D|, we achieve a space complexity of O((m+n)log q) bits, where mn=h(D),
which may be as small as O(|D|^(1/4)log q). The practical efficiency of the
algorithms is demonstrated using |D| > 10^16 and q ~ 2^256, and also |D| >
10^15 and q ~ 2^33220. These examples are both an order of magnitude larger
than the best previous results obtained with the CM method. (To appear in the
LMS Journal of Computation and Mathematics.)
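The final step of the CM method is easy to illustrate once H_D is in hand. The sketch below (toy parameters; none of the paper's class-field decomposition or CRT machinery) takes H_{-15}(X) = X^2 + 191025 X - 121287375, finds its roots modulo q = 31 by brute force, turns each root j into a curve y^2 = x^3 + 3kx + 2k with k = j/(1728 - j), and checks by exhaustive point counting that the resulting orders correspond to a Frobenius trace t with t^2 - 4q = -15 f^2:

```python
q = 31
H = [1, 191025, -121287375]              # Hilbert class polynomial H_{-15}

def H_mod(x):
    return (x * x + H[1] * x + H[2]) % q

roots = [j for j in range(q) if H_mod(j) == 0]   # brute force: q is tiny

def curve_order(a, b):
    """#E(F_q) for y^2 = x^3 + a x + b, by summing Legendre symbols."""
    n = q + 1                            # point at infinity + one point per x with f(x)=0
    for x in range(q):
        v = (x * x * x + a * x + b) % q
        if v:
            n += 1 if pow(v, (q - 1) // 2, q) == 1 else -1
    return n

orders = []
for j in roots:
    k = j * pow(1728 - j, -1, q) % q     # a curve with j-invariant j
    orders.append(curve_order(3 * k % q, 2 * k % q))

for N in orders:
    t = q + 1 - N                        # trace of Frobenius
    assert (4 * q - t * t) % 15 == 0     # t^2 - 4q = -15 f^2, as CM theory predicts
```

Everything here except the brute-force root search scales; replacing that search (and avoiding the coefficients of H_D altogether) is the subject of the paper.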
A Faster Algorithm for Two-Variable Integer Programming
We show that a 2-variable integer program, defined by m constraints involving coefficients with at most s bits, can be solved with O(m + s) arithmetic operations on rational numbers of size O(s). This result closes the gap between the running time of two-variable integer programming and the sum of the running times of the Euclidean algorithm on s-bit integers and of checking feasibility of an integer point for m constraints.
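For contrast with the fast algorithm, the problem itself fits in a few lines. The naive solver below (a hypothetical helper, exponential in the bit size: it scans a caller-supplied box instead of exploiting the Euclidean-algorithm structure the paper builds on) maximises a linear objective over the integer points satisfying all constraints:

```python
def naive_2var_ip(constraints, objective, box):
    """Maximise objective over integer points of a box satisfying all constraints.

    constraints: list of (a1, a2, rhs) meaning a1*x + a2*y <= rhs.
    objective:   (c1, c2) meaning maximise c1*x + c2*y.
    box:         ((xlo, xhi), (ylo, yhi)), assumed to contain the optimum.
    Returns ((x, y), value) or None if the box holds no feasible point.
    """
    (xlo, xhi), (ylo, yhi) = box
    c1, c2 = objective
    best = None
    for x in range(xlo, xhi + 1):
        for y in range(ylo, yhi + 1):
            if all(a1 * x + a2 * y <= r for a1, a2, r in constraints):
                if best is None or c1 * x + c2 * y > best[1]:
                    best = ((x, y), c1 * x + c2 * y)
    return best
```

For instance, maximising x + y subject to 2x + 3y <= 12, x >= 0, y >= 0 over the box [0,10]^2 returns ((6, 0), 6).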
Linearly Homomorphic Encryption from DDH
We design a linearly homomorphic encryption scheme whose security relies on the hardness of the decisional Diffie-Hellman problem. Our approach requires some special features of the underlying group: in particular, its order is unknown and it contains a subgroup in which the discrete logarithm problem is tractable. Therefore, our instantiation holds in the class group of a non-maximal order of an imaginary quadratic field. Its algebraic structure makes it possible to obtain a linearly homomorphic scheme whose message space is the whole set of integers modulo a prime p and which supports an unbounded number of additions modulo p on ciphertexts. A notable difference with previous works is that, for the first time, the security does not depend on the hardness of integer factorization. As a consequence, under some conditions, the prime p can be scaled to fit the application's needs.
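The key combination, linear homomorphism plus a subgroup with easy discrete logarithms, can be mimicked with textbook "exponential ElGamal", where the message sits in the exponent. The toy below (tiny hardcoded safe-prime group, not the class-group construction; here the final discrete log is only easy because messages are small, which is precisely the limitation the paper removes) demonstrates adding ciphertexts:

```python
import random

p, q, g = 23, 11, 2                      # toy group: g has order q in Z_p^*
x = 5                                    # secret key
h = pow(g, x, p)                         # public key

def enc(m):
    """Enc(m) = (g^r, h^r * g^m): ElGamal with the message in the exponent."""
    r = random.randrange(1, q)
    return pow(g, r, p), pow(h, r, p) * pow(g, m, p) % p

def add(ct1, ct2):
    """Homomorphic addition: componentwise product encrypts m1 + m2 mod q."""
    return ct1[0] * ct2[0] % p, ct1[1] * ct2[1] % p

def dec(ct):
    c1, c2 = ct
    gm = c2 * pow(c1, p - 1 - x, p) % p  # strip the mask, leaving g^m
    for m in range(q):                   # brute-force DL: only viable for small m
        if pow(g, m, p) == gm:
            return m
```

In the paper's scheme the easy-DL subgroup makes this last step efficient for the full message space Z_p, with no small-message restriction.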
Terminating BKZ
Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT'91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt'08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension.
In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as inputs a basis (b_i) of a lattice L and a block-size beta, and if terminated after O((n^3/beta^2)(log n + log log max_i ||b_i||)) calls to a beta-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm at most 2 nu_beta^((n-1)/(2(beta-1)) + 3/2) (vol L)^(1/n), where nu_beta is the maximum of Hermite's constants in dimensions up to beta. To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.
On the hardness of the shortest vector problem
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. By Daniele Micciancio. Includes bibliographical references (p. 77-84).
An n-dimensional lattice is the set of all integral linear combinations of n linearly independent vectors in R^m. One of the most studied algorithmic problems on lattices is the shortest vector problem (SVP): given a lattice, find the shortest non-zero vector in it. We prove that the shortest vector problem is NP-hard (for randomized reductions) to approximate within some constant factor greater than 1 in any l_p norm (p >= 1). In particular, we prove the NP-hardness of approximating SVP in the Euclidean norm l_2 within any factor less than sqrt(2). The same NP-hardness results hold for deterministic non-uniform reductions; a deterministic uniform reduction is also given under a reasonable number-theoretic conjecture concerning the distribution of smooth numbers. In proving the NP-hardness of SVP we develop a number of technical tools that might be of independent interest. In particular, a lattice packing is constructed with the property that the number of unit spheres contained in an n-dimensional ball of radius greater than 1 + sqrt(2) grows exponentially in n, and a new constructive version of Sauer's lemma (a combinatorial result related to the notion of VC-dimension) is presented, considerably simplifying all previously known constructions.
Evaluating Large Degree Isogenies between Elliptic Curves
An isogeny between elliptic curves is an algebraic morphism which is a group homomorphism. Many applications in cryptography require evaluating large degree isogenies between elliptic curves efficiently. For ordinary curves with the same endomorphism ring, the previous fastest algorithm known has a worst-case running time which is exponential in the length of the input. In this thesis we solve this problem in subexponential time under reasonable heuristics. We give two versions of our algorithm: a slower version assuming GRH, and a faster version assuming stronger heuristics. Our approach is based on factoring the ideal corresponding to the kernel of the isogeny, modulo principal ideals, into a product of smaller prime ideals for which the isogenies can be computed directly. Combined with previous work of Bostan et al., our algorithm yields equations for large degree isogenies in quasi-optimal time given only the starting curve and the kernel.