Groebner.jl: A package for Gr\"obner bases computations in Julia
We introduce the Julia package Groebner.jl for computing Gr\"obner bases with
the F4 algorithm. Groebner.jl is efficient, lightweight, portable,
thoroughly tested, and documented open-source software. The package works over
integers modulo a prime and over the rationals and supports various monomial
orderings. The implementation incorporates modern symbolic computation
techniques and leverages the Julia type system and tooling, which allows
Groebner.jl to be on par in performance with the leading computer algebra
systems. Our package is freely available at
https://github.com/sumiya11/Groebner.jl. Comment: 10 pages
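Groebner.jl itself is a Julia package; purely as an illustration of the kind of computation it performs, the sketch below uses Python's sympy (an assumption of this note, not software used by the authors) to compute a lexicographic Gr\"obner basis over the rationals and modulo a prime.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Toy ideal: the circle x^2 + y^2 - 1 together with the line x - y.
gb = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# Ideal membership is decided by reduction against the basis.
in_ideal = gb.contains(x**2 + y**2 - 1)

# The same ideal over the integers modulo the prime 7.
gb_mod7 = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex', modulus=7)
```

A dedicated F4 implementation such as Groebner.jl replaces the Buchberger-style pairwise reductions behind this call with batched linear algebra, which is where the performance cited above comes from.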
Digital Collections of Examples in Mathematical Sciences
Some areas of Computer Algebra (notably Computational Group Theory and
Computational Number Theory) have good databases of examples, typically of
the form "all the X up to size n". But most of the others, especially on the
polynomial side, lack such databases, despite the utility these have demonstrated
in the related fields of SAT and SMT solving. We claim that the field would be
enhanced by such community-maintained databases, rather than each author
hand-selecting a few examples, which are often too large or error-prone to print, and
therefore difficult for subsequent authors to reproduce. Comment: Presented at the 8th European Congress of Mathematics
Ideals modulo p
The main focus of this paper is on the problem of relating an ideal I in the
polynomial ring Q[x_1,..., x_n] to a corresponding ideal in F_p[x_1, ..., x_n]
where p is a prime number; in other words, the reduction modulo p of I. We
define a new notion of sigma-good prime for I which depends on the term
ordering sigma, and show that all but finitely many primes are good for all
term orderings. We relate our notion of sigma-good primes to some other similar
notions already in the literature. One characteristic of our approach is that it
enables us to detect some bad primes, a distinct advantage when using modular
methods.
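A minimal illustration of a bad prime, sketched here with Python's sympy (our choice for illustration, not the paper's own tools): reducing the generators modulo 2 makes the generator 2x - y degenerate, the leading term ideal changes, and ideal membership tests give different answers over Q and over F_2.

```python
from sympy import groebner, symbols

x, y = symbols('x y')
F = [x**2 + 1, 2*x - y]

gb_Q = groebner(F, x, y, order='lex')             # basis over the rationals
gb_2 = groebner(F, x, y, order='lex', modulus=2)  # reduction modulo p = 2

# Modulo 2 the generator 2*x - y collapses to y, so y enters the ideal,
# while over Q it does not: 2 is a bad prime for this ideal and ordering.
is_bad = gb_2.contains(y) and not gb_Q.contains(y)
```

Detecting such primes before (or during) a modular computation is exactly the advantage the abstract claims for sigma-good primes.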
Parallel Arbitrary-precision Integer Arithmetic
Arbitrary-precision integer arithmetic computations are driven by applications in solving systems of polynomial equations and public-key cryptography. Such computations arise when high precision is required (with large input values that fit into multiple machine words), or to avoid coefficient overflow due to intermediate expression swell. Meanwhile, the growing demand for faster computation, alongside recent advances in hardware technology, has led to the development of a vast array of many-core and multi-core processors, accelerators, programming models, and language extensions (e.g. CUDA, OpenCL, and OpenACC for GPUs, and OpenMP and Cilk for multi-core CPUs). The massive computational power of parallel processors makes them attractive targets for carrying out arbitrary-precision integer arithmetic. At the same time, developing parallel algorithms, and then implementing and optimizing them as multi-threaded parallel programs, imposes a set of challenges. This work explains the current state of research on parallel arbitrary-precision integer arithmetic on GPUs and CPUs, and proposes a number of solutions for some of the challenging problems related to this subject.
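The basic kernel of such arithmetic is limb-by-limb addition with carry propagation; the following Python sketch (the function name and the 64-bit limb size are illustrative assumptions, not the survey's code) mirrors what a machine-word implementation does.

```python
BASE = 2**64  # one 64-bit machine word per limb (illustrative choice)

def add_limbs(a, b):
    """Add two little-endian limb vectors, propagating the carry."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)   # low word stays in this limb
        carry = s // BASE      # overflow moves to the next limb
    if carry:
        out.append(carry)
    return out
```

The sequential carry chain in this loop is precisely what makes a naive GPU port slow; parallel implementations typically restructure carry propagation (e.g. with carry-lookahead or prefix-sum style techniques).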
Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms
This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call "repeated computation with a possibility of premature termination". We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach.
The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms.
Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the "parallel penalty". The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations, while using drastically fewer measurement points than other methods. We have applied both our evaluation and our estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods.
We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen's matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen's algorithm, we designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform, we not only used new divide and conquer skeletons but also developed a map-and-transpose skeleton, which enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs.
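As a point of reference for the divide and conquer structure that those skeletons capture, here is a sequential Python sketch of Karatsuba's three-multiplication recursion; the Eden skeletons in the thesis parallelise exactly this kind of call tree (the cutoff value is an illustrative choice).

```python
def karatsuba(a, b, cutoff=32):
    """Multiply nonnegative integers with 3 half-size products instead of 4."""
    if a < cutoff or b < cutoff:
        return a * b                      # base case: machine multiply
    n = max(a.bit_length(), b.bit_length()) // 2
    a_hi, a_lo = a >> n, a & ((1 << n) - 1)
    b_hi, b_lo = b >> n, b & ((1 << n) - 1)
    hi = karatsuba(a_hi, b_hi)
    lo = karatsuba(a_lo, b_lo)
    # The middle term reuses hi and lo, saving the fourth multiplication.
    mid = karatsuba(a_hi + a_lo, b_hi + b_lo) - hi - lo
    return (hi << (2 * n)) + (mid << n) + lo
```

In a divide and conquer skeleton, the three recursive calls are the independent tasks handed to worker processes.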
This thesis presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, such as parMap, farm and workpool, with a premature termination property. We use this to implement the so-called "parallel repeated computation", a special form of a speculative parallel loop. We have implemented two probabilistic primality tests: the Rabin–Miller test and the Jacobi sum test, and parallelised both with our approach. We analysed the task distribution and determined suitable configurations for the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel; subsequently we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements. The parallelisation of the Jacobi sum test and our generic parallelisation scheme for repeated computation are our original contributions.
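The "repeated computation with premature termination" pattern can be sketched in Python with a thread pool standing in for the Eden skeleton: Rabin–Miller witness rounds run speculatively in parallel, and pending rounds are cancelled as soon as one round proves compositeness. Function names and the pool choice are illustrative assumptions, not the thesis's code.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def is_witness(n, a):
    """Rabin-Miller round: True if base a proves n composite."""
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False
    for _ in range(r - 1):
        x = x * x % n
        if x == n - 1:
            return False
    return True

def probably_prime(n, rounds=16):
    """Speculative parallel loop: stop all rounds at the first witness."""
    if n < 4:
        return n in (2, 3)
    bases = [random.randrange(2, n - 1) for _ in range(rounds)]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(is_witness, n, a) for a in bases]
        for fut in as_completed(futures):
            if fut.result():          # a witness was found: n is composite
                for f in futures:
                    f.cancel()        # premature termination of pending rounds
                return False
    return True
```

For composite inputs most rounds find a witness, so the early cancellation is what makes the speculative loop pay off.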
The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handled the common factors of the numerator or denominator of a fraction with the modulus in a novel manner. This is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised determinant computation using Gaussian elimination. As always, we performed a task distribution analysis and an estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables the parallelisation of entire classes of computer algebra algorithms.
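The idea behind residue arithmetic for rationals can be sketched as follows: a fraction a/b is mapped to a * b^(-1) modulo m, arithmetic happens on residues, and the result is recovered by fraction reconstruction. The Python below is a single-modulus toy using Wang-style rational reconstruction, not the thesis's Eden implementation; in practice m would be a product of word-sized primes combined by Chinese remaindering, and the thesis's novel handling of common factors with the modulus is not attempted here.

```python
from fractions import Fraction
from math import isqrt

def to_residue(q, m):
    """Map a/b to a * b^{-1} mod m; b must be coprime to m."""
    return q.numerator * pow(q.denominator, -1, m) % m

def from_residue(c, m):
    """Recover a/b with |a|, b <= sqrt(m/2) from its residue c mod m."""
    bound = isqrt(m // 2)
    r0, r1 = m, c % m
    s0, s1 = 0, 1
    while r1 > bound:                 # half-extended Euclid, stopped early
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if s1 < 0:
        r1, s1 = -r1, -s1
    return Fraction(r1, s1)

m = 101 * 103                         # toy modulus: a product of two primes
c = (to_residue(Fraction(1, 3), m) + to_residue(Fraction(1, 7), m)) % m
```

Here `from_residue(c, m)` recovers 1/3 + 1/7 = 10/21 from the residue sum, which is the mechanism that lets each worker compute with plain modular integers.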
Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
Algorithms in Intersection Theory in the Plane
This thesis presents an algorithm to find the local structure of intersections of plane curves. More precisely, we address the question of describing the scheme of the quotient ring $k[x,y]/I$ of a bivariate zero-dimensional ideal $I \subseteq k[x,y]$, \textit{i.e.} finding the points (maximal ideals of $k[x,y]/I$) and describing the regular functions on those points. A natural way to address this problem is via Gr\"obner bases, as they reduce the problem of finding the points to a problem of factorisation, and the sheaf of rings of regular functions can be studied with those bases through the division algorithm and localisation.
Let $I \subseteq k[x,y]$ be an ideal generated by a finite set of polynomials, with $k$ a field. We present an algorithm featuring quadratic convergence to find a Gr\"obner basis of $I$ or of its primary component at the origin.
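The quadratic convergence mentioned above is that of a Newton iteration: each lifting step doubles the precision. The scalar analogue, Hensel lifting of a simple root of a univariate polynomial, can be sketched in a few lines of Python (the thesis lifts whole Gr\"obner bases, which this toy does not attempt).

```python
def newton_lift(f, df, root, p, steps):
    """Lift a simple root of f modulo p to a root modulo p^(2^steps).

    Each step squares the modulus and applies x <- x - f(x)/f'(x);
    f'(root) must be invertible mod p, i.e. the root must be simple."""
    modulus, x = p, root % p
    for _ in range(steps):
        modulus *= modulus            # precision doubles: quadratic convergence
        x = (x - f(x) * pow(df(x), -1, modulus)) % modulus
    return x, modulus

# Lift the square root of 2 modulo 7 (3^2 = 9, congruent to 2) up to 7^4.
x, M = newton_lift(lambda t: t * t - 2, lambda t: 2 * t, 3, 7, 2)
```

After two steps the residue x satisfies x^2 = 2 modulo 7^4; the m-adic iteration in the thesis plays the same role with a maximal ideal m in place of the prime p.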
We introduce an $\mathfrak{m}$-adic Newton iteration to lift the lexicographic Gr\"obner basis of any finite intersection of zero-dimensional primary components of $I$ if $\mathfrak{m}$ is a \textit{good} maximal ideal. It relies on a structural result about the syzygies in such a basis due to Conca \textit{\&} Valla (2008), from which arises an explicit map between ideals in a stratum (or Gr\"obner cell) and points in the associated moduli space. We also qualify what makes a maximal ideal suitable for our filtration.
When the field $k$ is \textit{large enough}, endowed with an Archimedean or ultrametric valuation, and admits a fraction reconstruction algorithm, we use this result to give a complete $\mathfrak{m}$-adic algorithm to recover the Gr\"obner basis of $I$. We observe that previous results of Lazard, which use Hermite normal forms to compute Gr\"obner bases for ideals with two generators, can be generalised to an arbitrary set of generators. We use this result to obtain a bound on the height of the coefficients of the basis and to control the probability of choosing a \textit{good} maximal ideal for its $\mathfrak{m}$-adic expansion.
Inspired by Pardue (1994), we also give a constructive proof to characterise a Zariski open set of linear changes of coordinates (acting on $k[x,y]$) that ensure the initial term ideal of a zero-dimensional ideal becomes Borel-fixed when the characteristic of $k$ is sufficiently large. This sharpens our analysis to obtain a complexity less than cubic in the dimension of $k[x,y]/I$ as a $k$-vector space and softly linear in the height of the coefficients of the basis.
We adapt the resulting method and present the analysis to find the $\mathfrak{m}$-primary component of $I$. We also discuss the transition towards other primary components via linear mappings, called \emph{untangling} and \emph{tangling}, introduced by van der Hoeven and Lecerf (2017). The two maps form one isomorphism to find points with an isomorphic local structure and, at the origin, bind them. We give a slightly faster tangling algorithm and discuss new applications of these techniques. We show how to extend these ideas to bivariate settings and give a bound on the arithmetic complexity for certain algebras.