Computational linear algebra over finite fields
We present here algorithms for the efficient solution of linear algebra problems over finite fields.
Essentially Optimal Sparse Polynomial Multiplication
We present a probabilistic algorithm to compute the product of two univariate
sparse polynomials over a field with a number of bit operations that is
quasi-linear in the size of the input and the output. Our algorithm works for
any field of characteristic zero or larger than the degree. We mainly rely on
sparse interpolation and on a new algorithm for verifying a sparse product that also has quasi-linear time complexity. Using Kronecker substitution techniques, we extend our result to the multivariate case.
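As a toy illustration of the verification idea (not the paper's quasi-linear algorithms; the dict representation and the Mersenne prime are illustrative choices), one can multiply sparse polynomials naively and check a claimed product by evaluating all three polynomials at random points:

```python
import random

def sparse_mul(f, g):
    """Product of two sparse polynomials given as {exponent: coefficient}
    dicts.  This is the naive method, quadratic in the number of terms,
    shown only to fix the data layout; the paper's algorithm is
    quasi-linear in the combined input and output size."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}

def probably_equal_product(f, g, h, p=2**61 - 1, trials=5):
    """Probabilistic check that f * g == h: evaluate all three at random
    points modulo a prime and compare.  A mismatch is conclusive; for
    integer coefficients small relative to p, a wrong product survives a
    trial with probability at most (deg h)/(p - 1)."""
    def ev(poly, r):
        return sum(c * pow(r, e, p) for e, c in poly.items()) % p
    return all(ev(f, r) * ev(g, r) % p == ev(h, r)
               for r in (random.randrange(1, p) for _ in range(trials)))
```

With f = 1 + 2x^5 and g = 3x, sparse_mul returns {1: 3, 6: 6}, and an incorrect product is rejected with high probability at the first mismatching evaluation.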
A history of Galois fields
This paper stresses a specific line of development of the notion of finite field, from Évariste Galois’s 1830 “Note sur la théorie des nombres,” and Camille Jordan’s 1870 Traité des substitutions et des équations algébriques, to Leonard Dickson’s 1901 Linear groups with an exposition of the Galois theory. This line of development highlights the key role played by some specific algebraic procedures. These intrinsically interlaced the indexations provided by Galois’s number-theoretic imaginaries with decompositions of the analytic representations of linear substitutions. Moreover, these procedures shed light on a key aspect of Galois’s works that had received little attention until now. The methodology of the present paper is based on investigations of intertextual references for identifying some specific collective dimensions of mathematics. We shall take as a starting point a coherent network of texts that were published mostly in France and in the U.S.A. from 1893 to 1907 (the “Galois fields network,” for short). The main shared references in this corpus were some texts published in France over the course of the 19th century, especially by Galois, Hermite, Mathieu, Serret, and Jordan. The issue of the collective dimensions underlying this network is thus especially intriguing. Indeed, the historiography of algebra has often put to the fore some specific approaches developed in Germany, with little attention to works published in France. Moreover, the “German abstract algebra” has been considered to have strongly influenced the development of the American mathematical community. Actually, this influence has precisely been illustrated by the example of Eliakim Hastings Moore’s lecture on “abstract Galois fields” at the Chicago congress in 1893. To be sure, this intriguing situation raises some issues of circulations of knowledge from Paris to Chicago.
It also calls for reflection on the articulations between the individual and the collective dimensions of mathematics. Such articulations have often been analysed by appealing to categories such as nations, disciplines, or institutions (e.g., the “German algebra,” the “Chicago algebraic research school”). Yet, we shall see that these categories fail to characterize an important specific approach to Galois fields. Underlying the coherence of the Galois fields network was a collective interest in “linear groups in Galois fields.” Yet, this designation pointed less to a theory, or a discipline, revolving around a specific object, i.e. GL_n(F_(p^n)) (p a prime number), than to some specific procedures. In modern parlance, general linear groups in Galois fields were introduced in this context as the maximal group in which an elementary abelian group (i.e., the additive group of a Galois field) is a normal subgroup. The Galois fields network was actually rooted in a specific algebraic culture that had developed over the course of the 19th century. We shall see that this shared culture resulted from the circulation of some specific algebraic procedures for decomposing polynomial representations of substitutions.
Change of basis for m-primary ideals in one and two variables
Following recent work by van der Hoeven and Lecerf (ISSAC 2017), we discuss
the complexity of linear mappings, called untangling and tangling by those
authors, that arise in the context of computations with univariate polynomials.
We give a slightly faster tangling algorithm and discuss new applications of
these techniques. We show how to extend these ideas to bivariate settings, and
use them to give bounds on the arithmetic complexity of certain algebras.
Comment: In Proceedings ISSAC'19, ACM, New York, USA. See proceedings version for final formatting.
Fast Computation of Special Resultants
We propose fast algorithms for computing composed products and composed sums, as well as diamond products of univariate polynomials. These operations correspond to special multivariate resultants, which we compute using the power sums of the roots of the polynomials, by means of their generating series.
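As a hedged sketch of the power-sum idea (a direct quadratic conversion over the rationals via Newton's identities, valid in characteristic zero, not the paper's fast generating-series method), the composed product can be computed from the fact that the k-th power sum of the pairwise products of roots is the product of the k-th power sums:

```python
from fractions import Fraction

def power_sums(e, m):
    """Power sums p_1..p_m of the roots of a monic polynomial, from its
    elementary symmetric functions e = [e_1, ..., e_n] (Newton's identities)."""
    n = len(e)
    p = []
    for k in range(1, m + 1):
        s = Fraction(0)
        for i in range(1, min(k - 1, n) + 1):
            s += (-1) ** (i - 1) * e[i - 1] * p[k - i - 1]
        if k <= n:
            s += (-1) ** (k - 1) * k * e[k - 1]
        p.append(s)
    return p

def elem_from_power_sums(p, n):
    """Inverse direction: recover e_1..e_n from the power sums p_1..p_n."""
    e = []
    for k in range(1, n + 1):
        s = Fraction(0)
        for i in range(1, k + 1):
            prev = e[k - i - 1] if k - i >= 1 else Fraction(1)  # e_0 = 1
            s += (-1) ** (i - 1) * prev * p[i - 1]
        e.append(s / k)
    return e

def composed_product(f, g):
    """Monic polynomial whose roots are the pairwise products of the roots
    of monic f and g (coefficient lists, highest degree first): the k-th
    power sum of the products equals the product of the k-th power sums."""
    def elems(c):
        return [Fraction((-1) ** i * c[i]) for i in range(1, len(c))]
    N = (len(f) - 1) * (len(g) - 1)
    ph = [a * b for a, b in
          zip(power_sums(elems(f), N), power_sums(elems(g), N))]
    e = elem_from_power_sums(ph, N)
    return [Fraction(1)] + [(-1) ** k * e[k - 1] for k in range(1, N + 1)]
```

For f = (x-1)(x-2) and g = (x-2)(x-3), this returns the coefficients of (x-2)(x-3)(x-4)(x-6) = x^4 - 15x^3 + 80x^2 - 180x + 144.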
Sparse Polynomial Interpolation and Testing
Interpolation is the process of learning an unknown polynomial f from some set of its evaluations. We consider the interpolation of a sparse polynomial, i.e., one where f consists of a small, bounded number of terms. Sparse interpolation dates back to work in the late 18th century by the French mathematician Gaspard de Prony, and was revitalized in the 1980s through advances by Ben-Or and Tiwari, Blahut, and Zippel, amongst others. Sparse interpolation has applications to learning theory, signal processing, error-correcting codes, and symbolic computation. Closely related to sparse interpolation are two decision problems. Sparse polynomial identity testing is the problem of testing whether a sparse polynomial f is zero from its evaluations. Sparsity testing is the problem of testing whether f is in fact sparse.
We present effective probabilistic algebraic algorithms for the interpolation and testing of sparse polynomials. These algorithms assume black-box evaluation access, whereby the algorithm may specify the evaluation points. We measure algorithmic costs with respect to the number and types of queries to a black-box oracle.
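The Prony-style approach mentioned above can be sketched concretely. The following toy version is an illustration only (the function names and the evaluation points 2^i are assumptions, the sparsity t is taken as known exactly, and everything runs in rational arithmetic, whereas practical algorithms such as Ben-Or–Tiwari work over finite fields and use Berlekamp–Massey):

```python
from fractions import Fraction

def solve(A, b):
    """Exact Gaussian elimination over the rationals for small dense systems."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                fac = M[r][col]
                M[r] = [x - fac * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def prony_interpolate(evals, t, deg_bound):
    """Recover a t-sparse integer polynomial f from evals[i] = f(2**i),
    i = 0, ..., 2t - 1 (a Prony / Ben-Or--Tiwari style sketch)."""
    a = [Fraction(v) for v in evals]
    # The sequence a satisfies a linear recurrence whose generator
    # polynomial Lambda has roots 2**e_j, one per exponent e_j of f.
    H = [[a[i + j] for j in range(t)] for i in range(t)]
    lam = solve(H, [-a[t + i] for i in range(t)])
    Lam = lambda z: z ** t + sum(lam[j] * z ** j for j in range(t))
    exps = [e for e in range(deg_bound + 1) if Lam(Fraction(2) ** e) == 0]
    # Coefficients come from a transposed Vandermonde system in the
    # recovered bases 2**e_j.
    V = [[Fraction(2) ** (e * i) for e in exps] for i in range(t)]
    coeffs = solve(V, a[:t])
    return dict(zip(exps, coeffs))
```

For f = 7x^2 + 3x^5, the four black-box values f(1), f(2), f(4), f(8) suffice to recover both exponents and coefficients.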
Building on previous work by Garg–Schost and Giesbrecht–Roche, we present two methods for the interpolation of a sparse polynomial modelled by a straight-line program (SLP): a sequence of arithmetic instructions. We present probabilistic algorithms for the sparse interpolation of an SLP, with cost softly-linear in the sparsity of the interpolant: its number of nonzero terms. As an application of these techniques, we give a multiplication algorithm for sparse polynomials, with cost that is sensitive to the size of the output.
Multivariate interpolation reduces to univariate interpolation by way of Kronecker substitution, which maps an n-variate polynomial f to a univariate image with degree exponential in n. We present an alternative method of randomized Kronecker substitutions, whereby one can more efficiently reconstruct a sparse interpolant f from multiple univariate images of considerably reduced degree.
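The standard deterministic substitution can be sketched as follows, assuming a dict-of-exponent-tuples representation and a per-variable degree bound D (the randomized variant instead uses several substitutions with smaller, randomly chosen exponent multipliers):

```python
def kronecker(poly, D):
    """Kronecker substitution x_i -> x**(D**i): map a sparse multivariate
    polynomial {exponent tuple: coeff} to a univariate {int: coeff},
    assuming every partial degree is < D so the map is invertible."""
    uni = {}
    for exps, c in poly.items():
        e = sum(ei * D ** i for i, ei in enumerate(exps))
        uni[e] = uni.get(e, 0) + c
    return uni

def inverse_kronecker(uni, n, D):
    """Recover the n-variate polynomial by writing each exponent in base D."""
    multi = {}
    for e, c in uni.items():
        exps = []
        for _ in range(n):
            e, r = divmod(e, D)
            exps.append(r)
        multi[tuple(exps)] = c
    return multi
```

Here the image of 3x^2y + 5y^3 with D = 4 is 3x^6 + 5x^12; in general the univariate degree can reach D^n - 1, exponential in n as noted above.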
In error-correcting interpolation, we suppose that some bounded number of evaluations may be erroneous. We present an algorithm for error-correcting interpolation of polynomials that are sparse under the Chebyshev basis. In addition we give a method which reduces sparse Chebyshev-basis interpolation to monomial-basis interpolation.
Lastly, we study the class of Boolean functions that admit a sparse Fourier representation. We give an analysis of Levin’s Sparse Fourier Transform algorithm for such functions. Moreover, we give a new algorithm for testing whether a Boolean function is Fourier-sparse. This method reduces sparsity testing to homomorphism testing, which in turn may be solved by the Blum–Luby–Rubinfeld linearity test.
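For concreteness, the Blum–Luby–Rubinfeld test itself is simple to state. A sketch over GF(2)^n follows, with inputs encoded as n-bit integers and XOR as vector addition (the encoding and the trial count are illustrative choices):

```python
import random

def blr_linearity_test(f, n, trials=100):
    """Blum--Luby--Rubinfeld linearity test for f : GF(2)^n -> GF(2).
    Linear (parity) functions satisfy f(x) ^ f(y) == f(x ^ y) identically
    and always pass; a function far from every linear function fails a
    single trial with constant probability, hence is rejected with high
    probability over many trials."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True
```

A parity function such as x ↦ popcount(x) mod 2 always passes, while, e.g., the constant-one function fails on the very first trial.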