
    Polynomial-Time Amoeba Neighborhood Membership and Faster Localized Solving

    We derive efficient algorithms for coarse approximation of algebraic hypersurfaces, useful for estimating the distance between an input polynomial zero set and a given query point. Our methods work best on sparse polynomials of high degree (in any number of variables) but are nevertheless completely general. The underlying ideas, which we take the time to describe in an elementary way, come from tropical geometry. We thus reduce a hard algebraic problem to high-precision linear optimization, proving new upper and lower complexity estimates along the way. Comment: 15 pages, 9 figures; submitted to conference proceedings.
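
    The following is an illustrative sketch (not the paper's algorithm) of the basic tropical idea in the univariate case: the root magnitudes of p(x) = Σ c_i x^i are approximated by exponentials of the negated slopes of the upper convex hull of the points (i, log|c_i|), a piecewise-linear computation in the spirit of the linear optimization mentioned in the abstract. All names and the example are ours.

```python
import math

def upper_hull(points):
    """Upper convex hull of points already sorted by x (monotone chain)."""
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop hull[-1] if it lies on or below the chord hull[-2] -> p.
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def tropical_root_magnitudes(coeffs):
    """coeffs[i] is the coefficient of x**i (zeros are skipped).
    Returns (estimated magnitude, multiplicity) pairs, ascending."""
    pts = [(i, math.log(abs(c))) for i, c in enumerate(coeffs) if c != 0]
    hull = upper_hull(pts)
    out = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        slope = (y2 - y1) / (x2 - x1)
        out.append((math.exp(-slope), x2 - x1))
    return out

# (x - 0.01)(x - 100) = x**2 - 100.01*x + 1: estimates ~0.01 and ~100.01
print(tropical_root_magnitudes([1.0, -100.01, 1.0]))
```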

    On the complexity of computing Gröbner bases for weighted homogeneous systems

    Solving polynomial systems arising from applications is frequently made easier by the structure of the systems. Weighted homogeneity (or quasi-homogeneity) is one example of such a structure: given a system of weights $W=(w_1,\dots,w_n)$, $W$-homogeneous polynomials are polynomials which are homogeneous w.r.t. the weighted degree $\deg_W(X_1^{\alpha_1} \cdots X_n^{\alpha_n}) = \sum w_i \alpha_i$. Gröbner bases for weighted homogeneous systems can be computed by adapting existing algorithms for homogeneous systems to the weighted homogeneous case. We show that in this case, the complexity estimate for Algorithm F5, $\binom{n+d_{\max}-1}{d_{\max}}^{\omega}$, can be divided by a factor $\left(\prod w_i\right)^{\omega}$. For zero-dimensional systems, the complexity $nD^{\omega}$ of Algorithm FGLM (where $D$ is the number of solutions of the system) can be divided by the same factor $\left(\prod w_i\right)^{\omega}$. Under genericity assumptions, for zero-dimensional weighted homogeneous systems of $W$-degree $(d_1,\dots,d_n)$, these complexity estimates are polynomial in the weighted Bézout bound $\prod_{i=1}^{n} d_i / \prod_{i=1}^{n} w_i$. Furthermore, the maximum degree reached in a run of Algorithm F5 is bounded by the weighted Macaulay bound $\sum (d_i - w_i) + w_n$, and this bound is sharp if we can order the weights so that $w_n = 1$. For overdetermined semi-regular systems, estimates from the homogeneous case can be adapted to the weighted case. We provide some experimental results based on systems arising from a cryptography problem and from polynomial inversion problems. They show that taking advantage of the weighted homogeneous structure yields substantial speed-ups, and allows us to solve systems which were otherwise out of reach.
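
    A minimal sketch of the definitions above, using sympy; the example system, weights, and helper names are ours. It checks $W$-homogeneity and evaluates the weighted Bézout and Macaulay bounds for a toy choice of weights and $W$-degrees.

```python
from math import prod
from sympy import symbols, Poly

def weighted_degree(exponents, weights):
    """deg_W(X1^a1 * ... * Xn^an) = sum of w_i * a_i."""
    return sum(w * a for w, a in zip(weights, exponents))

def is_w_homogeneous(poly, weights):
    """W-homogeneous iff all monomials share one weighted degree."""
    return len({weighted_degree(m, weights) for m in poly.monoms()}) == 1

x, y, z = symbols('x y z')
W = (1, 2, 3)
p = Poly(x**6 + y**3 + z**2 + x**2*y*z, x, y, z)  # every term has deg_W = 6
print(is_w_homogeneous(p, W))                     # True

# For a W-homogeneous system of W-degrees d = (d_1, ..., d_n):
d = (6, 6, 6)
bezout_w = prod(d) // prod(W)                              # weighted Bezout bound: 36
macaulay_w = sum(di - wi for di, wi in zip(d, W)) + W[-1]  # weighted Macaulay bound: 15
print(bezout_w, macaulay_w)
```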

    Sub-cubic Change of Ordering for Gröbner Basis: A Probabilistic Approach

    The usual algorithm to solve polynomial systems using Gröbner bases consists of two steps: first computing the DRL Gröbner basis using the F5 algorithm, then computing the LEX Gröbner basis using a change of ordering algorithm. When the Bézout bound is reached, the bottleneck of the total solving process is the change of ordering step. For 20 years, thanks to the FGLM algorithm, the complexity of change of ordering has been known to be cubic in the number of solutions of the system to solve. We show that, in the generic case or up to a generic linear change of variables, the multiplicative structure of the quotient ring can be computed with no arithmetic operation. Moreover, given this multiplicative structure, we propose a change of ordering algorithm for Shape Position ideals whose complexity is polynomial in the number of solutions with exponent ω, where 2 ≤ ω < 2.3727 is the exponent in the complexity of multiplying two dense matrices. As a consequence, we propose a new Las Vegas algorithm for solving polynomial systems with a finite number of solutions by using Gröbner bases, for which the change of ordering step has a sub-cubic (i.e. with exponent ω) complexity and whose total complexity is dominated by the complexity of the F5 algorithm. In practice we obtain significant speedups for various polynomial systems, by a factor of up to 1500 in specific cases, and we are now able to tackle some instances that were previously intractable.
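
    For orientation, here is a toy run of the classical two-step strategy with sympy (which implements plain FGLM, not the paper's faster probabilistic variant); the example system is ours. For an ideal in shape position, the LEX basis ends with a univariate polynomial, so solving reduces to univariate root finding plus back-substitution.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# A zero-dimensional system: a circle intersected with a hyperbola.
F = [x**2 + y**2 - 5, x*y - 2]

# Step 1: Groebner basis for the fast degree order (DRL / grevlex).
G_drl = groebner(F, x, y, order='grevlex')

# Step 2: change of ordering to LEX via FGLM.
G_lex = G_drl.fglm('lex')
print(G_lex)
# The LEX basis ends with y**4 - 5*y**2 + 4 = (y**2 - 1)*(y**2 - 4):
# solve this univariate polynomial for y, then back-substitute for x.
```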

    The Point Decomposition Problem over Hyperelliptic Curves: toward efficient computations of Discrete Logarithms in even characteristic

    Computing discrete logarithms is generically a difficult problem. For divisor class groups of curves defined over extension fields, a variant of the Index Calculus called the Decomposition attack is used, and it can be faster than generic approaches. In this situation, collecting the relations is done by solving multiple instances of the Point $m$-Decomposition Problem (PDP$_m$). An instance of this problem can be modelled as a zero-dimensional polynomial system. Solving is done with Gröbner bases algorithms, where the number of solutions of the system is a good indicator for the time complexity of the solving process. For systems arising from a PDP$_m$ context, this number grows exponentially fast with the extension degree. To achieve efficient harvesting, this number must be reduced as much as possible. Extending the elliptic case, we introduce a notion of Summation Ideals to describe PDP$_m$ instances over higher genus curves, and compare with Nagao's general approach to PDP$_m$ solving. In even characteristic we obtain reductions of the number of solutions for both approaches, depending on the curve's equation. In the best cases, for a hyperelliptic curve of genus $g$, we can divide the number of solutions by $2^{(n-1)(g+1)}$. For instance, for a type II genus 2 curve defined over $\mathbb{F}_{2^{93}}$ whose divisor class group has cardinality a near-prime 184-bit integer, the number of solutions is reduced from 4096 to 64. This is enough to build the matrix of relations in around 7 days with 8000 cores using a dedicated implementation.
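
    The genus-1 prototype of the summation ideals mentioned above is Semaev's summation polynomials. As a hedged illustration (in odd characteristic for simplicity, unlike the paper's even-characteristic setting), the sketch below verifies the classical third summation polynomial $S_3$, which vanishes exactly on $x$-coordinates of points summing to the identity on $y^2 = x^3 + ax + b$; curve parameters and helper names are ours.

```python
# Genus-1 prototype: Semaev's third summation polynomial S_3 vanishes
# on x-coordinates of points with P1 + P2 + P3 = O on y^2 = x^3 + a*x + b.
p, a, b = 10007, 3, 7            # small toy parameters (p = 3 mod 4)

def ec_add(P, Q):
    """Affine addition on y^2 = x^3 + a*x + b over GF(p); None is O."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def semaev3(x1, x2, x3):
    """Third summation polynomial for y^2 = x^3 + a*x + b."""
    return ((x1 - x2) ** 2 * x3 ** 2
            - 2 * ((x1 + x2) * (x1 * x2 + a) + 2 * b) * x3
            + (x1 * x2 - a) ** 2 - 4 * b * (x1 + x2)) % p

def lift_x(x):
    """Some point with x-coordinate x, if one exists (uses p = 3 mod 4)."""
    rhs = (x ** 3 + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)
    return (x, y) if y * y % p == rhs else None

P1 = next(pt for x in range(1, p) if (pt := lift_x(x)))  # a generic point
P2 = ec_add(P1, P1)
S = ec_add(P1, P2)
P3 = (S[0], -S[1] % p)               # P3 = -(P1 + P2), so P1 + P2 + P3 = O
print(semaev3(P1[0], P2[0], P3[0]))  # 0
```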

    Using symmetries in the index calculus for elliptic curves discrete logarithm

    In 2004, an algorithm was introduced to solve the DLP for elliptic curves defined over a non-prime finite field $\mathbb{F}_{q^n}$. One of the main steps of this algorithm requires decomposing points of the curve $E(\mathbb{F}_{q^n})$ with respect to a factor base; this problem is denoted PDP. In this paper, we apply this algorithm to the case of Edwards curves, the well-known family of elliptic curves that allow faster arithmetic as shown by Bernstein and Lange. More precisely, we show how to take advantage of some symmetries of twisted Edwards and twisted Jacobi intersections curves to gain an exponential factor $2^{\omega(n-1)}$ to solve the corresponding PDP, where ω is the exponent in the complexity of multiplying two dense matrices. Practical experiments supporting the theoretical result are also given. For instance, the complexity of solving the ECDLP for twisted Edwards curves defined over $\mathbb{F}_{q^5}$, with $q \approx 2^{64}$, is supposed to be $\sim 2^{160}$ operations in $E(\mathbb{F}_{q^5})$ using generic algorithms, compared to $2^{130}$ operations (multiplications of two 32-bit words) with our method. For these parameters the PDP is intractable with the original algorithm. The main tool to achieve these results relies on the use of the symmetries and the quasi-homogeneous structure induced by these symmetries during the polynomial system solving step. We also use a recent work on a new algorithm for the change of ordering of a Gröbner basis which provides a better heuristic complexity of the total solving process.
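
    A generic illustration of how a symmetry shrinks the algebraic solving step (not the paper's Edwards-specific construction): a system invariant under all sign changes $x_i \mapsto -x_i$ can be rewritten in the invariants $u_i = x_i^2$, and the rewritten system has $2^n$ times fewer solutions, which is where a matrix-exponent gain of the form $2^{\omega \cdot (\dots)}$ can enter. The toy system is ours.

```python
from sympy import symbols, groebner

x1, x2, u1, u2 = symbols('x1 x2 u1 u2')

# A system invariant under x1 -> -x1 and x2 -> -x2: only even powers occur,
# so it can be rewritten in the invariants u1 = x1**2, u2 = x2**2.
F = [x1**4 + x2**2 - 4, x1**2 * x2**2 - 1]
G = [f.subs({x1**2: u1, x2**2: u2}) for f in F]   # [u1**2 + u2 - 4, u1*u2 - 1]

# The original ideal has 4 times as many solutions as the rewritten one:
# each solution (u1, u2) lifts to up to 2**2 sign patterns (x1, x2).
print(groebner(F, x1, x2, order='lex'))
print(groebner(G, u1, u2, order='lex'))
```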

    Optimization and Guess-then-Solve Attacks in Cryptanalysis

    In this thesis we study two major topics in cryptanalysis and optimization: software algebraic cryptanalysis and elliptic curve optimizations in cryptanalysis. The idea of algebraic cryptanalysis is to model a cipher by a Multivariate Quadratic (MQ) equation system. Solving MQ is an NP-hard problem. However, NP-hard problems have a point of phase transition where the problems become easy to solve. This thesis explores different optimizations to make solving algebraic cryptanalysis problems easier. We first worked on guessing a well-chosen number of key bits, a specific optimization problem leading to guess-then-solve attacks on the GOST cipher. In addition to attacks, we propose two new security metrics, contradiction immunity and SAT immunity, applicable to any cipher. These optimizations play a pivotal role in recent highly competitive results on full GOST. Both GOST and another cipher we cryptanalyzed, Simon, were submitted to ISO to become global encryption standards, which is why we study the security of these ciphers in great detail. Another optimization direction is to use well-selected data: plaintext/ciphertext pairs following a truncated differential property. These allow us to supplement an algebraic attack with extra equations and reduce solving time. This was a key innovation in our algebraic cryptanalysis work on the NSA block cipher Simon, and we could break up to 10 rounds of Simon64/128. The second major direction in our work is to inspect, analyse and predict the behaviour of the ElimLin attack, whose complexity is very poorly understood, at a level of detail never seen before. Our aim is to extrapolate and discover the limits of such attacks, and to go beyond them with several types of concrete improvement. Finally, we have studied some optimization problems on elliptic curves which also deal with polynomial arithmetic over finite fields. We studied existing implementations of the secp256k1 elliptic curve, which is used in many popular cryptocurrency systems such as Bitcoin, and we introduce an optimized attack on Bitcoin brain wallets that improves on the state-of-the-art attack by a factor of 2.5.
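
    A toy sketch of the guess-then-solve idea on a planted MQ instance over GF(2); all parameters and names are ours. Real attacks replace the inner exhaustive search with a SAT solver or a Gröbner-basis computation, which is exactly where a well-chosen set of guessed bits pays off.

```python
import random
from itertools import combinations, product

random.seed(1)
n, g, m = 8, 3, 10               # variables, guessed bits, equations
planted = tuple(random.randrange(2) for _ in range(n))

def evaluate(f, x):
    """A GF(2) polynomial is a set of monomials (tuples of variable
    indices; the empty tuple is the constant 1)."""
    return sum(all(x[i] for i in mono) for mono in f) % 2

def random_planted_equation():
    """Random quadratic polynomial adjusted to vanish at the planted point."""
    monos = {q for q in combinations(range(n), 2) if random.random() < 0.3}
    monos |= {(i,) for i in range(n) if random.random() < 0.3}
    if evaluate(monos, planted):
        monos.add(())            # flip the constant term
    return monos

system = [random_planted_equation() for _ in range(m)]

# Guess g key bits, then solve the reduced (n - g)-variable system.
for guess in product((0, 1), repeat=g):
    for rest in product((0, 1), repeat=n - g):
        x = guess + rest
        if all(evaluate(f, x) == 0 for f in system):
            print('recovered:', x, 'planted:', planted)
```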