Certified lattice reduction
Quadratic form reduction and lattice reduction are fundamental tools in
computational number theory and in computer science, especially in
cryptography. The celebrated Lenstra-Lenstra-Lovász reduction algorithm
(LLL) has been improved in many ways over the past decades and
remains one of the central methods for reducing integral lattice bases. In
particular, its floating-point variants, where the rational arithmetic required
by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic, are
now the fastest known. However, the systematic study of the reduction theory of
real quadratic forms or, more generally, of real lattices is not widely
represented in the literature. When the problem arises, the lattice is usually
replaced by an integral approximation of (a multiple of) the original lattice,
which is then reduced. While practically useful and proven in some special
cases, this method does not offer any guarantee of success in general. In this
work, we present an adaptive-precision version of a generalized LLL algorithm
that covers this case in full generality. In particular, we replace
floating-point arithmetic by interval arithmetic to certify the behavior of the
algorithm. We conclude by giving a typical application of the result in
algebraic number theory: the reduction of ideal lattices in number fields.
Comment: 23 pages.
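To illustrate the certification idea in the abstract above, here is a minimal sketch (not the paper's algorithm) of how interval arithmetic can make a comparison *certified*: a comparison either returns a guaranteed answer or signals that the enclosures must be refined at higher precision. A real implementation would additionally use directed rounding, which this toy class omits.

```python
# Toy interval type illustrating certified comparisons.
# Simplification: no directed rounding, so this is only a sketch
# of the idea, not a rigorous interval-arithmetic library.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: take min/max over all endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

def certified_less(a: Interval, b: Interval):
    """Return True/False when the order is certain, or None when the
    intervals overlap and the precision must be increased."""
    if a.hi < b.lo:
        return True
    if b.hi < a.lo:
        return False
    return None  # undecided: refine the enclosures and retry
```

The `None` outcome is what drives an adaptive-precision loop: whenever a size condition inside a reduction algorithm cannot be decided from the current enclosures, the working precision is increased and the enclosures recomputed.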
Pairing the Volcano
Isogeny volcanoes are graphs whose vertices are elliptic curves and whose
edges are ℓ-isogenies. Algorithms for traveling on these graphs were
developed by Kohel in his thesis (1996) and later on, by Fouquet and Morain
(2001). However, until now no method was known to predict, before taking a
step on the volcano, the direction of that step. Hence, in Kohel's and
Fouquet-Morain algorithms, many steps are taken before choosing the right
direction. In particular, ascending or horizontal isogenies are usually found
using a trial-and-error approach. In this paper, we propose an alternative
method that efficiently finds all points P of order ℓ such that the
subgroup generated by P is the kernel of a horizontal or an ascending
isogeny. In many cases, our method is faster than previous methods. This is an
extended version of a paper published in the proceedings of ANTS 2010. In
addition, we treat the case of 2-isogeny volcanoes and we derive from the group
structure of the curve and the pairing a new invariant of the endomorphism
class of an elliptic curve. Our benchmarks show that the resulting algorithm
for endomorphism ring computation is faster than Kohel's method for computing
the ℓ-adic valuation of the conductor of the endomorphism ring for small ℓ.
Fully homomorphic encryption modulo Fermat numbers
In this paper, we recast state-of-the-art constructions for fully
homomorphic encryption in the simple language of arithmetic modulo
large Fermat numbers. The techniques used to construct our scheme
are quite standard in the realm of (R)LWE-based
cryptosystems. However, the use of arithmetic in such a simple ring
greatly simplifies exposition of the scheme and makes its
implementation much easier.
In terms of performance, our test implementation of the proposed
scheme is slower than the current speed records but remains within a
comparable range. We hope that the detailed study of our simplified
scheme by the community can make it competitive and provide new
insights into FHE constructions at large.
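The ring in question can be made concrete with a short sketch. This is not the paper's scheme, only a demonstration of the structural fact that makes arithmetic modulo a Fermat number F_k = 2^(2^k) + 1 convenient: since 2^(2^k) ≡ -1 (mod F_k), multiplication by a power of 2 amounts to a shift with a sign flip, much like negacyclic polynomial rings used in (R)LWE cryptosystems.

```python
# Basic arithmetic in the ring Z/F_k, for a Fermat number F_k = 2^(2^k) + 1.
# Illustrative only; no cryptographic scheme is implemented here.

def fermat(k: int) -> int:
    """Return the k-th Fermat number F_k = 2^(2^k) + 1."""
    return 2 ** (2 ** k) + 1

def mul_mod(a: int, b: int, F: int) -> int:
    """Multiplication in Z/F."""
    return (a * b) % F

F4 = fermat(4)  # 65537, the largest known prime Fermat number
# Key structural fact: 2^(2^k) = F_k - 1, i.e. 2^(2^k) ≡ -1 (mod F_k),
# so multiplying by 2^(2^k) is a sign flip and any power of 2 acts as
# a (signed) shift.
assert pow(2, 16, F4) == F4 - 1
```

In particular, 2 has multiplicative order 2^(k+1) modulo F_k, which is why this ring behaves like the "arithmetic modulo x^n + 1" setting familiar from lattice-based constructions.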
MPC in the head for isomorphisms and group actions
In this paper, we take inspiration from an invited talk presented at CBCrypto'23 to design identification protocols and signature schemes from group actions using the MPC-in-the-head paradigm. We prove the security of the given identification schemes and rely on the Fiat-Shamir transformation to turn them into signatures.
We also draw a parallel between the technique used in the MPC-in-the-head approach and the seed tree method recently used in several signature and ring signature schemes based on group action problems.
Security ranking among assumptions within the Uber assumption framework
Over the past decade, bilinear maps have been used to build a large variety of cryptosystems. In parallel to these new functionalities, we have also seen the emergence of many security assumptions. This leads to the general question of comparing two such assumptions. Boneh, Boyen and Goh introduced the Uber assumption as an attempt to offer a general framework for security assessment. Their idea is to propose a generic security assumption that can be specialized to suit the needs of any proof of a protocol involving bilinear pairings. Even though the Uber assumption has only been stated in the bilinear setting, it can easily be restated to deal with ordinary Diffie-Hellman groups and to assess other types of protocols.
In this article, we explore some particular instances of the Uber assumption, namely the n-CDH-assumption, the nth-CDH-assumption and the Q-CDH-assumption. We analyse the relationships between these assumptions, specifically from a security point of view. Our analysis does not rely on any special property of the considered group(s) and does not use the generic group model.
Classical and Quantum Algorithms for Variants of Subset-Sum via Dynamic Programming
Subset-Sum is an NP-complete problem where one must decide if a multiset of n integers contains a subset whose elements sum to a target value m. The best known classical and quantum algorithms run in time Õ(2^{n/2}) and Õ(2^{n/3}), respectively, based on the well-known meet-in-the-middle technique. Here we introduce a novel classical dynamic-programming-based data structure with applications to Subset-Sum and a number of variants, including Equal-Sums (where one seeks two disjoint subsets with the same sum), 2-Subset-Sum (a relaxed version of Subset-Sum where each item in the input set can be used twice in the summation), and Shifted-Sums, a generalization of both of these variants, where one seeks two disjoint subsets whose sums differ by some specified value.
Given any modulus p, our data structure can be constructed in time O(np), after which queries can be made in time O(n) to the lists of subsets summing to any value modulo p. We use this data structure in combination with variable-time amplitude amplification and a new quantum pair finding algorithm, extending the quantum claw finding algorithm to the multiple-solutions case, to give an O(2^{0.504n}) quantum algorithm for Shifted-Sums. This provides a notable improvement on the best known O(2^{0.773n}) classical running time established by Mucha et al. [Mucha et al., 2019]. We also study Pigeonhole Equal-Sums, a variant of Equal-Sums where the existence of a solution is guaranteed by the pigeonhole principle. For this problem we give faster classical and quantum algorithms with running time Õ(2^{n/2}) and Õ(2^{2n/5}), respectively.
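The core idea of a sums-modulo-p table can be sketched with a simplified classical version (this is an illustration, not the paper's data structure): an O(np) dynamic-programming construction recording which residues are reachable, followed by an O(n) backtracking walk that recovers one subset hitting a target residue.

```python
# Simplified reachability table for subset sums modulo p.
# build_table touches n*p cells (O(np) construction); query walks
# back through n rows (O(n)) to recover one witness subset.

def build_table(items, p):
    n = len(items)
    # reach[i][s]: can some subset of the first i items sum to s mod p?
    reach = [[False] * p for _ in range(n + 1)]
    reach[0][0] = True
    for i, x in enumerate(items):
        xi = x % p
        for s in range(p):
            if reach[i][s]:
                reach[i + 1][s] = True              # skip item i
                reach[i + 1][(s + xi) % p] = True   # take item i
    return reach

def query(items, reach, p, target):
    """Return one subset (as a list of indices) whose sum is congruent
    to target mod p, or None if no such subset exists."""
    t = target % p
    if not reach[len(items)][t]:
        return None
    subset = []
    for i in range(len(items), 0, -1):
        if reach[i - 1][t]:
            continue                    # residue t reachable without item i-1
        subset.append(i - 1)            # otherwise item i-1 must be taken
        t = (t - items[i - 1]) % p
    return subset[::-1]
```

For example, with items [3, 5, 8] and p = 7, querying the residue 1 returns a subset such as {3, 5} (sum 8 ≡ 1 mod 7). Listing *all* subsets for a residue, as in the abstract, requires the richer structure of the paper.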
Reducing number field defining polynomials: An application to class group computations
In this paper, we describe how to compute smallest monic polynomials that define a given number field K. We make use of the one-to-one correspondence between monic defining polynomials of K and algebraic integers that generate K. Thus, a smallest polynomial corresponds to a vector in the lattice of integers of K, and this vector is short in some sense. The main idea is to consider weighted coordinates for the vectors of the lattice of integers of K. This allows us to find the desired polynomial by enumerating short vectors in these weighted lattices. In the context of the subexponential algorithm of Biasse and Fieker for computing class groups, this algorithm can be used as a precomputation step that speeds up the rest of the computation. It also widens the applicability of their faster conditional method -- which requires a defining polynomial of small height -- to a much larger set of number field descriptions.
Algorithmic aspects of elliptic bases in finite field discrete logarithm algorithms
Elliptic bases, introduced by Couveignes and Lercier in 2009, give an
elegant way of representing finite field extensions. A natural
question which seems to have been considered independently by several
groups is to use this representation as a starting point for discrete
logarithm algorithms in small characteristic finite fields.
This idea has recently been proposed by two groups working on it, in
order to achieve provable quasi-polynomial time for discrete
logarithms in small characteristic finite fields.
In this paper, we do not try to achieve a provable algorithm but,
instead, investigate the practicality of heuristic algorithms based
on elliptic bases. Our key idea is to use a different model of the
elliptic curve used for the elliptic basis that allows for a
relatively simple adaptation of the techniques used with former
Frobenius representation algorithms.
We have not performed any record computation with this new method but
our experiments with the field F_{3^{1345}} indicate that
switching to elliptic representations might be possible with
performance comparable to the current best practical methods.