Nearly Optimal Sparse Polynomial Multiplication
In the sparse polynomial multiplication problem, one is asked to multiply two
sparse polynomials f and g in time that is proportional to the size of the
input plus the size of the output. The polynomials are given via lists of their
coefficients F and G, respectively. Cole and Hariharan (STOC 02) have given a
nearly optimal algorithm when the coefficients are positive, and Arnold and
Roche (ISSAC 15) devised an algorithm running in time proportional to the
"structural sparsity" of the product, i.e. the set supp(F)+supp(G). The latter
algorithm is particularly efficient when there are not "too many cancellations" of
coefficients in the product. In this work we give a clean, nearly optimal
algorithm for the sparse polynomial multiplication problem.
Comment: Accepted to IEEE Transactions on Information Theory.
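As a baseline for the problem statement above, here is a minimal sketch (with polynomials as exponent-to-coefficient dicts) of the naive #f·#g sparse product. This is not the paper's nearly optimal algorithm; it only illustrates the representation and how cancellations make the true output smaller than the structural sparsity supp(F)+supp(G).

```python
def sparse_mul(f, g):
    """Multiply sparse polynomials given as {exponent: coefficient} dicts."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = ef + eg
            c = h.get(e, 0) + cf * cg
            if c:
                h[e] = c
            elif e in h:
                del h[e]  # cancellation: the term drops out of the support
    return h

# (x^100 + 1) * (x^100 - 1) = x^200 - 1:
# structural sparsity is 3 (exponents 200, 100, 0), true sparsity is 2
print(sparse_mul({100: 1, 0: 1}, {100: 1, 0: -1}))  # {200: 1, 0: -1}
```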
Essentially Optimal Sparse Polynomial Multiplication
We present a probabilistic algorithm to compute the product of two univariate
sparse polynomials over a field with a number of bit operations that is
quasi-linear in the size of the input and the output. Our algorithm works for
any field of characteristic zero or larger than the degree. We mainly rely on
sparse interpolation and on a new algorithm for verifying a sparse product that
also has a quasi-linear time complexity. Using Kronecker substitution
techniques we extend our result to the multivariate case.
Comment: 12 pages.
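The verification step mentioned above can be illustrated by the classical random-evaluation (Schwartz-Zippel style) check; this sketch is not the paper's quasi-linear verifier, and the modulus and trial count are illustrative choices.

```python
import random

P = (1 << 61) - 1  # a large prime modulus (Mersenne prime 2^61 - 1)

def eval_sparse(f, a, p=P):
    """Evaluate an {exponent: coefficient} dict at the point a, modulo p."""
    return sum(c * pow(a, e, p) for e, c in f.items()) % p

def probably_equal_product(f, g, h, trials=20, p=P):
    """Monte Carlo test of f*g == h over the integers, via evaluation mod p.
    If f*g != h, a random point exposes the difference with probability
    at least 1 - deg(f*g)/p per trial."""
    for _ in range(trials):
        a = random.randrange(1, p)
        if eval_sparse(f, a, p) * eval_sparse(g, a, p) % p != eval_sparse(h, a, p):
            return False
    return True
```

For example, `probably_equal_product({1: 1, 0: 1}, {1: 1, 0: -1}, {2: 1, 0: -1})` confirms (x+1)(x−1) = x²−1.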
On exact division and divisibility testing for sparse polynomials
No polynomial-time algorithm is known to test whether a sparse polynomial G
divides another sparse polynomial F. While computing the quotient Q = F quo G
can be done in polynomial time with respect to the sparsities of F, G and Q,
this is not yet sufficient to get a polynomial-time divisibility test in
general. Indeed, the sparsity of the quotient Q can be exponentially larger
than those of F and G. In the favorable case where the sparsity #Q of the
quotient is polynomial, the best known algorithm to compute Q has a non-linear
factor #G·#Q in its complexity, which is not optimal.
In this work, we are interested in two aspects of this problem. First, we
propose a new randomized algorithm that computes the quotient of two sparse
polynomials when the division is exact. Its complexity is quasi-linear in the
sparsities of F, G and Q. Our approach relies on sparse interpolation and it
works over any finite field or the ring of integers. Then, as a step toward
faster divisibility testing, we provide a new polynomial-time algorithm when
the divisor has a specific shape. More precisely, we reduce the problem to
finding a polynomial S such that QS is sparse and testing divisibility by S can
be done in polynomial time. We identify some structure patterns in the divisor
G for which we can efficiently compute such a polynomial S.
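For concreteness, here is a sketch of the classical leading-term-elimination baseline whose #G·#Q cost the abstract refers to; the paper's quasi-linear algorithm is different. Working over the rationals with `fractions.Fraction` is an implementation choice for this sketch.

```python
from fractions import Fraction

def sparse_exact_div(f, g):
    """Return q with f == q * g, for {exponent: coefficient} dicts;
    raise ValueError if the division is not exact."""
    f = {e: Fraction(c) for e, c in f.items()}
    q = {}
    eg = max(g)           # leading exponent of g
    cg = Fraction(g[eg])  # leading coefficient of g
    while f:
        ef = max(f)
        if ef < eg:
            raise ValueError("division is not exact")
        e, c = ef - eg, f[ef] / cg
        q[e] = c
        for e2, c2 in g.items():  # f -= c * x^e * g
            r = f.get(e + e2, 0) - c * c2
            if r:
                f[e + e2] = r
            else:
                f.pop(e + e2, None)
    return q

# (x^2 - 1) / (x - 1) = x + 1
assert sparse_exact_div({2: 1, 0: -1}, {1: 1, 0: -1}) == {1: 1, 0: 1}
```

Each iteration eliminates the current leading term of the running remainder, hence the #G·#Q term-operation count.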
On the Bit Complexity of Solving Bilinear Polynomial Systems
We bound the Boolean complexity of computing isolating hyperboxes for all complex roots of systems of bilinear polynomials. The resultant of such systems admits a family of determinantal Sylvester-type formulas, which we make explicit by means of homological complexes. The computation of the determinant of the resultant matrix is a bottleneck for the overall complexity. We exploit the quasi-Toeplitz structure to reduce the problem to efficient matrix-vector products, corresponding to multivariate polynomial multiplication. For zero-dimensional systems, we arrive at a primitive element and a rational univariate representation of the roots. The overall bit complexity of our probabilistic algorithm is O_B(n^4 D^4 + n^2 D^4 τ), where n is the number of variables, D equals the bilinear Bézout bound, and τ is the maximum coefficient bitsize. Finally, a careful infinitesimal symbolic perturbation of the system allows us to treat degenerate and positive-dimensional systems, thus making our algorithms and complexity analysis applicable to the general case.
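The central reduction above, from a Toeplitz matrix-vector product to a polynomial multiplication, can be illustrated in its simplest univariate form. Plain O(n^2) convolution is used here for clarity; an FFT-based product would make the step softly linear.

```python
def convolve(a, b):
    """Coefficient-wise polynomial product of coefficient lists a and b."""
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] += x * y
    return c

def toeplitz_matvec(diags, v):
    """Multiply the n x n Toeplitz matrix T with T[i][j] = diags[i-j+n-1]
    by the vector v, via a single convolution; len(diags) == 2*n - 1."""
    n = len(v)
    c = convolve(diags, v)
    return c[n - 1:2 * n - 1]  # the middle n coefficients are T @ v

# T = [[1, 2], [3, 1]] is Toeplitz with diagonals [2, 1, 3]
print(toeplitz_matvec([2, 1, 3], [1, 1]))  # [3, 4]
```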
Multilinear Polynomial Systems: Root Isolation and Bit Complexity
Special Issue of the Journal of Symbolic Computation on Milestones in Computer Algebra (MICA 2016).
We exploit structure in polynomial system solving by considering polynomials that are linear in subsets of the variables. We focus on algorithms and their Boolean complexity for computing isolating hyperboxes for all the isolated complex roots of well-constrained, unmixed systems of multilinear polynomials based on resultant methods. We enumerate all expressions of the multihomogeneous (or multigraded) resultant of such systems as a determinant of Sylvester-like matrices, aka generalized Sylvester matrices. We construct these matrices by means of Weyman homological complexes, which generalize the Cayley-Koszul complex. The computation of the determinant of the resultant matrix is the bottleneck for the overall complexity. We exploit the quasi-Toeplitz structure to reduce the problem to efficient matrix-vector multiplication, which corresponds to multivariate polynomial multiplication, by extending the seminal work on Macaulay matrices of Canny, Kaltofen, and Yagati [9] to the multihomogeneous case. We compute a rational univariate representation of the roots, based on the primitive element method. In the case of 0-dimensional systems we present a Monte Carlo algorithm with probability of success 1 − 1/2^r, for a given r ≥ 1, and bit complexity O_B(n^2 D^(4+ε) (n^(N+1) + τ) + n D^(2+ε) r (D + r)) for any ε > 0, where n is the number of variables, D equals the multilinear Bézout bound, N is the number of variable subsets, and τ is the maximum coefficient bitsize. We present an algorithmic variant to compute the isolated roots of overdetermined and positive-dimensional systems. Thus our algorithms and complexity analysis apply in general, with no assumptions on the input.
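The parameter D above can be made concrete: for a square unmixed multilinear system of n = n_1 + ... + n_N equations with variable groups of sizes n_k, the multihomogeneous Bézout bound is the multinomial coefficient n!/(n_1! ... n_N!), a standard formula sketched here for illustration.

```python
from math import factorial

def multilinear_bezout(group_sizes):
    """Multihomogeneous Bezout bound n! / (n_1! * ... * n_N!) for a square
    unmixed multilinear system with the given variable-group sizes."""
    d = factorial(sum(group_sizes))
    for nk in group_sizes:
        d //= factorial(nk)
    return d

print(multilinear_bezout([2, 2]))  # 6: a bilinear system, two groups of 2
```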
Sparse Polynomial Interpolation and Testing
Interpolation is the process of learning an unknown polynomial f from some set of its evaluations. We consider the interpolation of a sparse polynomial, i.e., where f is comprised of a small, bounded number of terms. Sparse interpolation dates back to work in the late 18th century by the French mathematician Gaspard de Prony, and was revitalized in the 1980s due to advancements by Ben-Or and Tiwari, Blahut, and Zippel, amongst others. Sparse interpolation has applications to learning theory, signal processing, error-correcting codes, and symbolic computation. Closely related to sparse interpolation are two decision problems. Sparse polynomial identity testing is the problem of testing whether a sparse polynomial f is zero from its evaluations. Sparsity testing is the problem of testing whether f is in fact sparse.
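A toy version of the Prony / Ben-Or-Tiwari approach described above, for a univariate polynomial with integer coefficients: query the black box at powers of 2, solve a Hankel system for the annihilating polynomial, read the exponents off its roots, then solve a (transposed) Vandermonde system for the coefficients. The exact term count t, the degree bound d, and the dense O(t^3) linear algebra are simplifying assumptions of this sketch; the fast algorithms in the thesis avoid them.

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination over the rationals: solve A x = b."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        for r in range(n):
            if r != k and M[r][k] != 0:
                factor = M[r][k] / M[k][k]
                M[r] = [a - factor * c for a, c in zip(M[r], M[k])]
    return [M[i][n] / M[i][i] for i in range(n)]

def prony_interpolate(blackbox, t, d):
    """Recover {exponent: coefficient} for f with exactly t terms, deg <= d,
    and integer coefficients, from 2t black-box evaluations at powers of 2."""
    s = [blackbox(2 ** i) for i in range(2 * t)]
    # The annihilator L(z) = z^t + lam[t-1] z^(t-1) + ... + lam[0] has the
    # term values 2^e (one per exponent e of f) as its roots.
    H = [[s[i + j] for j in range(t)] for i in range(t)]
    lam = solve(H, [-s[t + i] for i in range(t)])
    roots = [2 ** e for e in range(d + 1)
             if sum(c * (2 ** e) ** k for k, c in enumerate(lam)) + (2 ** e) ** t == 0]
    # A transposed Vandermonde system recovers the coefficients.
    V = [[r ** i for r in roots] for i in range(len(roots))]
    coeffs = solve(V, s[:len(roots)])
    return {r.bit_length() - 1: int(c) for r, c in zip(roots, coeffs)}
```

For f = 3x^5 + 2x^2 + 1 with t = 3 and d = 5, the sketch returns {5: 3, 2: 2, 0: 1}.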
We present effective probabilistic algebraic algorithms for the interpolation and testing of sparse polynomials. These algorithms assume black-box evaluation access, whereby the algorithm may specify the evaluation points. We measure algorithmic costs with respect to the number and types of queries to a black-box oracle.
Building on previous work by Garg–Schost and Giesbrecht–Roche, we present two methods for the interpolation of a sparse polynomial modelled by a straight-line program (SLP): a sequence of arithmetic instructions. We present probabilistic algorithms for the sparse interpolation of an SLP, with cost softly-linear in the sparsity of the interpolant: its number of nonzero terms. As an application of these techniques, we give a multiplication algorithm for sparse polynomials, with cost that is sensitive to the size of the output.
Multivariate interpolation reduces to univariate interpolation by way of Kronecker substitution, which maps an n-variate polynomial f to a univariate image with degree exponential in n. We present an alternative method of randomized Kronecker substitutions, whereby one can more efficiently reconstruct a sparse interpolant f from multiple univariate images of considerably reduced degree.
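Plain Kronecker substitution, as described above, sketched on exponent-tuple dicts: with all per-variable degrees below d, the map x_i → y^(d^i) is invertible on exponent vectors, but the univariate degree grows like d^n, which is what the randomized variants aim to avoid.

```python
def kronecker(f, d):
    """Map {(e_0, ..., e_(n-1)): c} with all e_i < d to a univariate
    {e: c} dict via the substitution x_i -> y^(d^i)."""
    return {sum(e * d ** i for i, e in enumerate(es)): c for es, c in f.items()}

def kronecker_inverse(g, d, n):
    """Invert the substitution: split each exponent into its base-d digits."""
    return {tuple(e // d ** i % d for i in range(n)): c for e, c in g.items()}

# f = 3*x0*x1^2 + 5*x0^3, per-variable degrees < 4
f = {(1, 2): 3, (3, 0): 5}
g = kronecker(f, 4)                     # {9: 3, 3: 5}
assert kronecker_inverse(g, 4, 2) == f  # the exponent map is invertible
```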
In error-correcting interpolation, we suppose that some bounded number of evaluations may be erroneous. We present an algorithm for error-correcting interpolation of polynomials that are sparse under the Chebyshev basis. In addition we give a method which reduces sparse Chebyshev-basis interpolation to monomial-basis interpolation.
Lastly, we study the class of Boolean functions that admit a sparse Fourier representation. We give an analysis of Levin's Sparse Fourier Transform algorithm for such functions. Moreover, we give a new algorithm for testing whether a Boolean function is Fourier-sparse. This method reduces sparsity testing to homomorphism testing, which in turn may be solved by the Blum–Luby–Rubinfeld linearity test.
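The Blum-Luby-Rubinfeld test mentioned above, in its Boolean form: f on n-bit inputs is linear over GF(2) exactly when f(x) XOR f(y) = f(x XOR y), and the test checks this at random pairs. A minimal sketch; the trial count is an illustrative choice.

```python
import random

def blr_test(f, n, trials=100):
    """Monte Carlo test that f (on n-bit integers) is linear over GF(2)."""
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

parity = lambda x: bin(x).count("1") % 2          # linear: always passes
majority = lambda x: int(bin(x).count("1") >= 2)  # not linear on 3 bits

assert blr_test(parity, 3)        # parity is a GF(2) homomorphism
assert not blr_test(majority, 3)  # rejected with overwhelming probability
```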