6 research outputs found

    Note on Integer Factoring Methods IV

    This note continues the theoretical development of deterministic integer factorization algorithms based on systems of polynomial equations. The main result establishes a new deterministic time-complexity benchmark for integer factorization. (Comment: 20 pages, new version)

    Groebner Bases and Monomial Orders

    This thesis is intended as a detailed study of what Groebner bases are, how they are computed, and in which cases they are useful and used. A range of definitions, theorems, lemmas, and propositions are stated and proved so that interested readers have the resources needed to genuinely understand what Groebner bases are. The work also proposes a precise definition of monomial orders and develops a clear formulation of their classification. In addition, the thesis describes the algorithms as procedures, programmed in Maple 10 and collected in an appendix. All the algorithms, described in pseudocode, were programmed naively, that is, without programming tricks for reducing execution time or memory usage, so that interested readers can see how the computations are carried out. AUTHOR KEYWORDS: Ring, Ideal, Groebner basis, Module, Monomial order, Monoidal order
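
    The thesis's own procedures are written in Maple 10; as a rough illustration of the same idea, and not of the thesis's code, the short SymPy sketch below computes a Groebner basis of a small polynomial system under two monomial orders, showing how the choice of order changes the resulting basis. The example system and the use of SymPy are assumptions made purely for illustration.

        # Hypothetical illustration using SymPy (not the thesis's Maple 10 procedures).
        from sympy import symbols, groebner

        x, y, z = symbols('x y z')
        polys = [x**2 + y + z - 1, x + y**2 + z - 1, x + y + z**2 - 1]

        # Lexicographic order: well suited to elimination, but often yields larger bases.
        G_lex = groebner(polys, x, y, z, order='lex')

        # Graded reverse lexicographic order: usually the cheapest order to compute with.
        G_grevlex = groebner(polys, x, y, z, order='grevlex')

        print(G_lex)
        print(G_grevlex)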

    Factoring Polynomials and Groebner Bases

    Factoring polynomials is a central problem in computational algebra and number theory and is a basic routine in most computer algebra systems (e.g. Maple, Mathematica, Magma). It has been studied extensively over the last few decades by many mathematicians and computer scientists. The main approaches include Berlekamp's method (1967) based on the kernel of the Frobenius map, Niederreiter's method (1993) via an ordinary differential equation, Zassenhaus's modular approach (1969), Lenstra, Lenstra, and Lovász's lattice reduction (1982), and Gao's method via a partial differential equation (2003). These methods and their more recent improvements due to van Hoeij (2002) and Lecerf et al. (2006-2007) provide efficient algorithms that are widely used in practice today.

    This thesis studies two issues in polynomial factorization. The first is improving the efficiency of the modular approach for factoring bivariate polynomials over finite fields. The usual modular approach first solves a modular linear equation (from Berlekamp's equation or Niederreiter's differential equation), then performs Hensel lifting of the modular factors, and finally finds the right combinations. This thesis presents an alternative method that performs Hensel lifting at the linear algebra stage instead of lifting modular factors. In this way there is no need to find the right combinations of modular factors; instead the method finds the right linear space from which the irreducible factors can be computed via gcd. The main advantage is that extra solutions can be eliminated at an early stage of the computation, improving on previous Hensel lifting methods.

    The second issue is whether random numbers are essential for designing efficient polynomial factorization algorithms. Although polynomials can be factored quickly in practice by randomized polynomial-time algorithms, it remains an open problem whether a deterministic polynomial-time algorithm exists, even assuming the generalized Riemann hypothesis (GRH). The deterministic complexity of factoring polynomials is studied here from a different point of view that is more geometric and combinatorial in nature. Tools used include Gröbner basis structure theory and graphs, with connections to combinatorial designs. It is shown how to deterministically compute new Gröbner bases from given Gröbner bases when new polynomials are added, with running time polynomial in the degree of the original ideals. A new upper bound is also given on the number of ring extensions needed to find proper factors, improving on previous results of Evdokimov (1994) and Ivanyos, Karpinski, and Saxena (2008).
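
    As a small illustration of the kind of finite-field factorization that Berlekamp-style methods provide, and using SymPy rather than any algorithm from the thesis, one can factor a univariate polynomial over GF(2); the polynomial and library choice below are assumptions for demonstration only.

        # Hypothetical illustration with SymPy; not the thesis's algorithms.
        from sympy import symbols, Poly

        x = symbols('x')

        # Factor x^15 - 1 over GF(2); its irreducible factors correspond to the
        # binary cyclotomic cosets modulo 15.
        f = Poly(x**15 - 1, x, modulus=2)
        print(f.factor_list())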

    Construction of ordinary irreducible representations of finite groups


    High Performance Sparse Multivariate Polynomials: Fundamental Data Structures and Algorithms

    Polynomials may be represented sparsely in an effort to conserve memory and to provide a succinct, natural representation. Moreover, polynomials which are themselves sparse, that is, have very few non-zero terms, waste memory and computation time if they are represented, and operated on, densely; this waste is exacerbated as the number of variables increases. We provide practical implementations of sparse multivariate data structures focused on data locality and cache complexity. Using these sparse data structures, we develop high-performance algorithms and implementations of fundamental polynomial operations such as arithmetic (addition, subtraction, multiplication, and division) and interpolation. We revisit a sparse arithmetic scheme introduced by Johnson in 1974, adapting and optimizing these algorithms for modern computer architectures; our implementations over the integers and rational numbers vastly outperform the current widespread implementations. We develop a new algorithm for sparse pseudo-division based on the sparse polynomial division algorithm, with very encouraging results. Polynomial interpolation is explored through univariate, dense multivariate, and sparse multivariate methods. Together, arithmetic and interpolation form a solid high-performance foundation from which many higher-level and more interesting algorithms can be built.
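
    A minimal, naive sketch of the sparse representation discussed here (an exponent-vector-to-coefficient map) is given below; it is not the thesis's locality-aware data structure and does not attempt Johnson's heap-based multiplication, only the basic term-by-term operations.

        # Naive sparse multivariate polynomials: {exponent tuple: coefficient}.
        # Illustrative sketch only; the thesis's implementations are cache-aware
        # and use Johnson's heap-based multiplication, which this does not.
        from collections import defaultdict

        def sparse_add(p, q):
            """Add two sparse polynomials, dropping terms that cancel."""
            r = defaultdict(int, p)
            for mono, c in q.items():
                r[mono] += c
            return {m: c for m, c in r.items() if c != 0}

        def sparse_mul(p, q):
            """Multiply term by term; cost scales with #terms(p) * #terms(q)."""
            r = defaultdict(int)
            for m1, c1 in p.items():
                for m2, c2 in q.items():
                    mono = tuple(a + b for a, b in zip(m1, m2))
                    r[mono] += c1 * c2
            return {m: c for m, c in r.items() if c != 0}

        # Example in variables (x, y): (x*y + 3) * (x - y).
        p = {(1, 1): 1, (0, 0): 3}
        q = {(1, 0): 1, (0, 1): -1}
        print(sparse_mul(p, q))  # {(2, 1): 1, (1, 2): -1, (1, 0): 3, (0, 1): -3}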

    Some Algorithms for Learning with Errors
