
    Factoring Polynomials and Groebner Bases

    Factoring polynomials is a central problem in computational algebra and number theory and is a basic routine in most computer algebra systems (e.g. Maple, Mathematica, Magma). It has been extensively studied over the last few decades by many mathematicians and computer scientists. The main approaches include Berlekamp's method (1967) based on the kernel of the Frobenius map, Niederreiter's method (1993) via an ordinary differential equation, Zassenhaus's modular approach (1969), Lenstra, Lenstra and Lovász's lattice reduction (1982), and Gao's method via a partial differential equation (2003). These methods and their recent improvements due to van Hoeij (2002) and Lecerf et al. (2006--2007) provide efficient algorithms that are widely used in practice today. This thesis studies two issues in polynomial factorization. One is to improve the efficiency of the modular approach for factoring bivariate polynomials over finite fields. The usual modular approach first solves a modular linear equation (from Berlekamp's equation or Niederreiter's differential equation), then performs Hensel lifting of modular factors, and finally finds the right combinations. An alternative method is presented in this thesis that performs Hensel lifting at the linear algebra stage instead of lifting modular factors. In this way, there is no need to find the right combinations of modular factors; instead, the method finds the right linear space from which the irreducible factors can be computed via gcds. The main advantage of this method is that extra solutions can be eliminated at an early stage of the computation, thus improving on previous Hensel lifting methods. The other issue is whether random numbers are essential in designing efficient algorithms for factoring polynomials. Although polynomials can be quickly factored by randomized polynomial-time algorithms in practice, it is still an open problem whether there exists any deterministic polynomial-time algorithm, even if the generalized Riemann hypothesis (GRH) is assumed. The deterministic complexity of factoring polynomials is studied here from a different point of view that is more geometric and combinatorial in nature. Tools used include Gröbner basis structure theory and graphs, with connections to combinatorial designs. It is shown how to deterministically compute new Gröbner bases from given Gröbner bases when new polynomials are added, with running time polynomial in the degree of the original ideals. Also, a new upper bound is given on the number of ring extensions needed for finding proper factors, improving on previous results of Evdokimov (1994) and Ivanyos, Karpinski and Saxena (2008).
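    The classical ingredient behind the modular approach can be illustrated in a few lines. The sketch below (Python with SymPy, not the thesis's own algorithm) brute-forces an element g of the Berlekamp subalgebra of a small squarefree polynomial over GF(5) and splits it by gcds; the example polynomial and the brute-force search are illustrative stand-ins for the linear-algebra step.

```python
# A hedged sketch (not the thesis algorithm): classical Berlekamp-style gcd
# splitting over GF(p).  For a squarefree f in GF(p)[x], any g with
# g^p = g (mod f) splits f as  f = prod_{a in GF(p)} gcd(f, g - a).
# The example polynomial and the brute-force search for g are illustrative only.
from itertools import product
from sympy import GF, Poly, gcd, symbols

x = symbols('x')
p = 5
f = Poly(x**4 + 3*x**2 + 2, x, domain=GF(p))   # = (x^2+1)(x^2+2) mod 5, squarefree
n = f.degree()

# Brute-force a non-constant element of the Berlekamp subalgebra {g : g^p = g mod f}.
g = None
for coeffs in product(range(p), repeat=n):
    cand = Poly(list(coeffs), x, domain=GF(p))
    if cand.degree() >= 1 and ((cand**p - cand) % f).is_zero:
        g = cand
        break

# Split f by gcds with g - a; the non-trivial gcds (here x^2+2 and x^2+1)
# multiply back to f.  A different g would refine the split further.
factors = [gcd(f, g - a) for a in range(p)]
print([h for h in factors if h.degree() >= 1])
```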

    Symbolic-Numeric Tools for Analytic Combinatorics in Several Variables

    Analytic combinatorics studies the asymptotic behaviour of sequences through the analytic properties of their generating functions. This article provides effective algorithms required for the study of analytic combinatorics in several variables, together with their complexity analyses. Given a multivariate rational function, we show how to compute its smooth isolated critical points, with respect to a polynomial map encoding asymptotic behaviour, in complexity singly exponential in the degree of its denominator. We introduce a numerical Kronecker representation for solutions of polynomial systems with rational coefficients and show that it can be used to decide several properties (zero coordinates, equal coordinates, sign conditions for real solutions, and vanishing of a polynomial) in good bit complexity. Among the critical points, those that are minimal (a property governed by inequalities on the moduli of the coordinates) typically determine the dominant asymptotics of the diagonal coefficient sequence. When the Taylor expansion at the origin has all non-negative coefficients (known as the "combinatorial case") and under regularity conditions, we use this Kronecker representation to probabilistically determine the minimal critical points in complexity singly exponential in the degree of the denominator, with good control over the exponent in the bit complexity estimate. Generically in the combinatorial case, this allows one to automatically and rigorously determine asymptotics for the diagonal coefficient sequence. Examples obtained with a preliminary implementation show the wide applicability of this approach. Comment: As accepted to proceedings of ISSAC 201
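    As a small illustration of the critical-point systems involved (not the paper's Kronecker-based algorithm), the following SymPy sketch sets up and solves the smooth critical-point equations for the main diagonal of a simple generating function; the example F = 1/(1 - x - y) and the use of an exact solver are assumptions made for readability.

```python
# A hedged sketch: the smooth critical-point system for diagonal asymptotics of
# a rational generating function F = 1/H.  The example (central binomial
# coefficients) and sympy's exact solver replace the numerical Kronecker
# representation used in the paper; this is illustration only.
from sympy import symbols, solve, diff

x, y = symbols('x y')
H = 1 - x - y                      # denominator of F = 1/H

# Smooth critical points for the main diagonal: H = 0 and x*H_x = y*H_y.
system = [H, x*diff(H, x) - y*diff(H, y)]
crit = solve(system, [x, y], dict=True)
print(crit)                        # [{x: 1/2, y: 1/2}]

# For this combinatorial example the minimal critical point (1/2, 1/2) gives
# growth rate 1/(x*y) = 4, matching binomial(2n, n) ~ 4^n / sqrt(pi*n).
```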

    Tropical algebraic geometry in Maple: A preprocessing algorithm for finding common factors for multivariate polynomials with approximate coefficients

    Finding a common factor of two multivariate polynomials with approximate coefficients is a problem in symbolic–numeric computing. Taking a tropical view of this problem leads to efficient preprocessing techniques, applying polyhedral methods to the exact exponents and numerical techniques to the approximate coefficients. With Maple we illustrate our use of tropical algebraic geometry.
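    To give a flavour of the polyhedral side of such preprocessing (an illustration in Python/SymPy, not the authors' Maple code), the sketch below compares primitive edge directions of Newton polygons: since the Newton polygon of a factor is a Minkowski summand of that of the product, two polynomials without a common edge direction cannot share a non-monomial common factor. The example polynomials are hypothetical.

```python
# A hedged sketch of the polyhedral idea only, not the paper's algorithm.
# It assumes both exponent sets are genuinely two-dimensional, so that
# convex_hull returns a Polygon.
from math import gcd
from sympy import symbols, Poly, convex_hull, Point, expand

x, y = symbols('x y')

def edge_directions(expr):
    """Primitive edge directions of the Newton polygon of a bivariate polynomial."""
    pts = [Point(i, j) for (i, j) in Poly(expr, x, y).monoms()]
    hull = convex_hull(*pts)
    dirs = set()
    for side in hull.sides:
        p1, p2 = side.points
        u, v = int(p2.x - p1.x), int(p2.y - p1.y)
        d = gcd(abs(u), abs(v))
        u, v = u // d, v // d
        dirs.add((u, v) if (u, v) > (-u, -v) else (-u, -v))  # identify opposite directions
    return dirs

f = expand((x + y + 1) * (x*y + 1))    # shares the factor x + y + 1 ...
g = expand((x + y + 1) * (x**2 + y))   # ... so the polygons share edge directions
print(edge_directions(f) & edge_directions(g))   # non-empty intersection
```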

    A computer algebra user interface manifesto

    Many computer algebra systems have more than 1000 built-in functions, making it difficult to master them all. Using mock dialog boxes, this article describes a proposed interactive general-purpose wizard for organizing optional transformations and allowing easy fine-grained control over the form of the result, even by amateurs. This wizard integrates ideas including:
    * flexible subexpression selection;
    * complete control over the ordering of variables and commutative operands, with well-chosen defaults;
    * interleaving the choice of successively less main variables with applicable function choices, to provide detailed control without incurring a combinatorial number of applicable alternatives at any one level;
    * quick applicability tests to reduce the listing of inapplicable transformations;
    * using an organizing principle to order the alternatives in a helpful manner;
    * labeling quickly-computed alternatives in dialog boxes with a preview of their results, using ellipsis elisions if necessary or helpful;
    * allowing the user to retreat from a sequence of choices to explore other branches of the tree of alternatives, or to return quickly to branches already visited;
    * allowing the user to accumulate more than one of the alternative forms;
    * integrating direct manipulation into the wizard; and
    * supporting not only the usual input-result pair mode, but also the useful alternative derivational and in situ replacement modes in a unified window.
    Comment: 38 pages, 12 figures, to be published in Communications in Computer Algebra.

    Solving the "Isomorphism of Polynomials with Two Secrets" Problem for all Pairs of Quadratic Forms

    We study the Isomorphism of Polynomials with Two Secrets (IP2S) problem with m = 2 homogeneous quadratic polynomials in n variables over a finite field of odd characteristic: given two quadratic polynomials (a, b) in n variables, we find two bijective linear maps (s, t) such that b = t . a . s. We give an algorithm computing s and t in time complexity O~(n^4) for all instances, and O~(n^3) for a dominant set of instances. The IP2S problem was introduced in cryptography by Patarin back in 1996. The special case of this problem when t is the identity is called the isomorphism with one secret (IP1S) problem. Generic algebraic equation solvers (for example using Gröbner bases) solve random instances of the IP1S problem quite well. For the particular cyclic instances of IP1S, a cubic-time algorithm was later given and explained in terms of pencils of quadratic forms over all finite fields; in particular, the cyclic IP1S problem in odd characteristic reduces to the computation of the square root of a matrix. We give here an algorithm solving all cases of the IP1S problem in odd characteristic using two new tools, the Kronecker form for a singular quadratic pencil and the reduction of bilinear forms over a non-commutative algebra. Finally, we show that the second secret in the IP2S problem may be recovered in cubic time.
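    The correspondence between quadratic polynomials and symmetric matrices that underlies these reductions can be illustrated briefly. The SymPy sketch below shows the problem setup only (not the paper's algorithm for finding the secrets): a homogeneous quadratic is encoded by its Hessian matrix, and a linear change of variables x -> S x maps a to b exactly when S^T H_a S = H_b; the modulus, the form a and the matrix S are arbitrary illustrative choices.

```python
# A hedged sketch of the IP1S setup for quadratic forms in odd characteristic.
# The quantities below are illustrative; the algorithm for *finding* s and t is
# not reproduced here.
from sympy import symbols, Matrix, hessian, expand, zeros

p = 7
xs = symbols('x1 x2 x3')
x1, x2, x3 = xs

a = x1**2 + 3*x1*x2 + 2*x2*x3 + x3**2       # a homogeneous quadratic form
Ha = hessian(a, xs)                         # symmetric matrix encoding a

S = Matrix([[1, 2, 0],
            [0, 1, 3],
            [1, 0, 2]])                     # a change of variables, invertible mod 7
assert S.det() % p != 0

# b(x) = a(S x): apply the change of variables, then read off its Hessian.
b = expand(a.subs(dict(zip(xs, S * Matrix(xs))), simultaneous=True))
Hb = hessian(b, xs)

# The congruence identity characterizing the isomorphism (it even holds over Z here).
print((S.T * Ha * S - Hb).applyfunc(lambda c: c % p) == zeros(3, 3))   # True
```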

    The Design and Implementation of a High-Performance Polynomial System Solver

    This thesis examines the algorithmic and practical challenges of solving systems of polynomial equations. We discuss the design and implementation of triangular decomposition to solve polynomial systems exactly by means of symbolic computation. Incremental triangular decomposition solves one equation from the input list of polynomials at a time. Each step may produce several different components (points, curves, surfaces, etc.) of the solution set. Independent components imply that the solving process may proceed on each component concurrently. This so-called component-level parallelism is a theoretical and practical challenge characterized by irregular parallelism. Parallelism is not an algorithmic property but rather a geometrical property of the particular input system's solution set. Despite these challenges, we have effectively applied parallel computing to triangular decomposition through the layering and cooperation of many parallel code regions. This parallel computing is supported by our generic object-oriented framework based on the dynamic multithreading paradigm. Meanwhile, the required polynomial algebra is supported by an object-oriented framework for algebraic types which allows type safety and mathematical correctness to be determined at compile time. Our software is implemented in C/C++, and we have extensively tested the implementation for correctness and performance on over 3000 polynomial systems that have arisen in practice. The parallel framework has been re-used in the implementation of Hensel factorization as a parallel pipeline to compute roots of a polynomial with multivariate power series coefficients. Hensel factorization is one step toward computing the non-trivial limit points of quasi-components.
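    As a toy illustration of component-level parallelism (not the thesis's C/C++ framework), the Python sketch below processes two hand-picked independent components of a solution set concurrently; the components and the thread pool are illustrative stand-ins for the dynamic-multithreading machinery described above.

```python
# A hedged toy illustration only: once a solver has split a system into
# independent components, each component can be processed concurrently.
from concurrent.futures import ThreadPoolExecutor
from sympy import symbols, solve

x, y = symbols('x y')

# Two independent "components" of a toy solution set.
components = [
    [x - 1, y**2 - 2],           # a zero-dimensional component
    [x**2 + y**2 - 1, y - x],    # another independent subsystem
]

def process(component):
    # Stand-in for the per-component work (here: just solve the subsystem exactly).
    return solve(component, [x, y], dict=True)

with ThreadPoolExecutor() as pool:
    for comp, sols in zip(components, pool.map(process, components)):
        print(comp, '->', sols)
```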

    Aspects of p-adic computation


    Solving Degree Bounds for Iterated Polynomial Systems

    For arithmetization-oriented ciphers and hash functions, Gröbner basis attacks are generally considered the most competitive attack vector. Unfortunately, the complexity of Gröbner basis algorithms is only understood in special cases, and these cases do not apply to most cryptographic polynomial systems. Therefore, cryptographers have to resort to experiments, extrapolations and hypotheses to assess the security of their designs. One established measure to quantify the complexity of linear-algebra-based Gröbner basis algorithms is the so-called solving degree. Caminata & Gorla showed that, under a certain genericity condition on a polynomial system, the solving degree is always upper bounded by the Castelnuovo-Mumford regularity and hence by the Macaulay bound, which takes only the degrees and the number of variables of the input polynomials into account. In this paper we extend their framework to iterated polynomial systems, the standard polynomial model for symmetric ciphers and hash functions. In particular, we prove solving degree bounds for various attacks on MiMC, Feistel-MiMC, Feistel-MiMC-Hash, Hades and GMiMC. Our bounds fall in line with the hypothesized complexity of Gröbner basis attacks on these designs, and to the best of our knowledge this is the first time that a mathematical proof for these complexities is provided. Moreover, by studying polynomials with degree falls we can prove lower bounds on the Castelnuovo-Mumford regularity for attacks on MiMC, Feistel-MiMC and Feistel-MiMC-Hash, provided that only a few solutions of the corresponding iterated polynomial system originate from the base field. Hence, regularity-based solving degree estimations can never surpass a certain threshold, a desirable property for cryptographic polynomial systems.
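    For readers unfamiliar with the Macaulay bound mentioned above, the sketch below computes it in one common formulation, depending only on the degrees and the number of variables; the toy "iterated" degree sequence is an assumption for illustration and is not one of the paper's cipher-specific bounds.

```python
# A hedged sketch of one common formulation of the Macaulay bound: for
# polynomials of degrees d_1 >= ... >= d_m in n variables, the bound is
#     d_1 + ... + d_l - l + 1,   with  l = min(n + 1, m).
def macaulay_bound(degrees, n):
    ds = sorted(degrees, reverse=True)
    l = min(n + 1, len(ds))
    return sum(ds[:l]) - l + 1

# E.g. a toy iterated system: r round equations of degree 3 in r intermediate variables.
r = 10
print(macaulay_bound([3] * r, n=r))   # 2*r + 1 = 21
```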

    Changing representation of curves and surfaces: exact and approximate methods

    The main object of study in this dissertation is the change of representation of geometric objects from parametric to implicit form. We compute the implicit equation by interpolating the unknown coefficients of the implicit polynomial, given a superset of its monomials. The latter is derived from the Newton polytope of the implicit equation, obtained by a recently developed method for support prediction. The support prediction method relies on sparse (or toric) elimination: the implicit polytope is obtained from the Newton polytope of the sparse resultant of the polynomial system defined by the parametrization. The monomials corresponding to the lattice points of this Newton polytope are suitably evaluated to build a numeric matrix, ideally of corank 1. Its kernel contains the coefficients of these monomials in the implicit equation. We compute the kernel of the matrix either symbolically or numerically, by applying singular value decomposition (SVD). We propose techniques for handling the case of a multidimensional kernel, which arises when the predicted support is a superset of the actual one. This yields an efficient, output-sensitive algorithm for computing the implicit equation. We compare different approaches for constructing the matrix in the Maple and SAGE software systems.
    In our experiments we have used classical algebraic curves and surfaces as well as NURBS. Our method can be applied to polynomial or rational parametrizations of planar curves or (hyper)surfaces of any dimension, including cases of parametrizations with base points, which raise important issues for other implicitization methods. The method has its limits: the geometric objects have to be represented in a monomial basis; in the case of trigonometric parametrizations, they have to be convertible to rational functions. Moreover, the proposed technique can be applied to non-geometric problems such as the computation of the discriminant of a multivariate polynomial or the resultant of a system of multivariate polynomials.
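    The interpolation step described above can be illustrated on a textbook example. The SymPy sketch below (a simplified stand-in for the thesis's support-prediction pipeline) implicitizes the rational parametrization of the unit circle by evaluating a dense set of candidate monomials at sample parameter values and reading the implicit equation off the kernel of the resulting matrix; the parametrization, sample points and monomial set are illustrative choices, and SVD would replace the exact kernel computation in the numerical setting.

```python
# A hedged sketch of the interpolation step only; the thesis predicts the
# support from the Newton polytope of the sparse resultant instead of using
# the dense degree-2 monomial set below.
from sympy import symbols, Matrix, Rational, S

t, x, y = symbols('t x y')

# Rational parametrization of the unit circle.
X = (1 - t**2) / (1 + t**2)
Y = 2*t / (1 + t**2)

# Candidate monomials (a superset of the true support of the implicit equation).
monomials = [S.One, x, y, x**2, x*y, y**2]

# One row per sample parameter value: the candidate monomials evaluated at (X(t), Y(t)).
samples = [Rational(k, 7) for k in range(1, 9)]
rows = [[m.subs({x: X.subs(t, s), y: Y.subs(t, s)}) for m in monomials] for s in samples]
M = Matrix(rows)

# A corank-1 kernel gives the coefficients of the implicit equation (exactly here;
# SVD plays this role when the data are approximate).
coeffs = M.nullspace()[0]
implicit = sum(c * m for c, m in zip(coeffs, monomials))
print(implicit)        # proportional to x**2 + y**2 - 1
```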