On the Boolean complexity of real root refinement
We assume that a real square-free polynomial has a degree $d$, a maximum coefficient bitsize $\tau$, and a real root lying in an isolating interval and having no nonreal roots nearby (we quantify this assumption). Then we combine the {\em Double Exponential Sieve} algorithm (also called the {\em Bisection of the Exponents}), bisection, and Newton iteration to decrease the width of this inclusion interval by a factor of $2^L$. The algorithm has Boolean complexity $\widetilde{O}_B(d^2\tau + dL)$. Our algorithms support the same complexity bound for the refinement of $r$ roots, for any $r \le d$.
New Acceleration of Nearly Optimal Univariate Polynomial Root-finders
Univariate polynomial root-finding has been studied for four millennia and is
still the subject of intensive research. Hundreds of efficient algorithms for
this task have been proposed. Two of them are nearly optimal. The first one,
proposed in 1995, relies on recursive factorization of a polynomial, is quite
involved, and has never been implemented. The second one, proposed in 2016,
relies on subdivision iterations, was implemented in 2018, and promises to be
practically competitive, although the users' current choice for univariate
polynomial root-finding is the package MPSolve, proposed in 2000, revised in
2014, and based on Ehrlich's functional iterations. By proposing and
incorporating some novel techniques we significantly accelerate both
subdivision and Ehrlich's iterations. Moreover our acceleration of the known
subdivision root-finders is dramatic in the case of sparse input polynomials.
Our techniques can be of some independent interest for the design and analysis
of polynomial root-finders.
Comment: 89 pages, 5 figures, 2 tables
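Ehrlich's functional iteration, which underlies MPSolve as noted above, can be sketched in a few lines. This is a toy illustration under simple assumptions (naive initialization on a circle via Cauchy's root bound, a fixed iteration cap), not the accelerated algorithm of the paper; all names and parameters are ours.

```python
import cmath

def ehrlich_roots(coeffs, iters=200, tol=1e-10):
    """Toy Ehrlich (Aberth-Ehrlich) iteration: simultaneously refine
    approximations to all roots of p(x) = coeffs[0]*x^n + ... + coeffs[n]."""
    n = len(coeffs) - 1

    def p(z):   # Horner evaluation of p
        v = 0j
        for c in coeffs:
            v = v * z + c
        return v

    def dp(z):  # Horner evaluation of p'
        v = 0j
        for k, c in enumerate(coeffs[:-1]):
            v = v * z + (n - k) * c
        return v

    # Start on a circle whose radius is Cauchy's root bound, with a phase
    # offset so the start is not symmetric about the real axis.
    r = 1 + max(abs(c / coeffs[0]) for c in coeffs[1:])
    z = [r * cmath.exp(2j * cmath.pi * k / n + 0.7j) for k in range(n)]
    for _ in range(iters):
        if all(abs(p(zi)) < tol for zi in z):
            break
        for i in range(n):
            w = p(z[i]) / dp(z[i])  # Newton correction
            # Ehrlich term: repel z[i] from the other approximations.
            s = sum(1 / (z[i] - z[j]) for j in range(n) if j != i)
            z[i] -= w / (1 - w * s)
    return z
```

The repulsion term is what lets all roots be approximated simultaneously without two approximations collapsing onto the same root.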
Nearly Optimal Refinement of Real Roots of a Univariate Polynomial
We assume that a real square-free polynomial has a degree $d$, a maximum coefficient bitsize $\tau$, and a real root lying in an isolating interval and having no nonreal roots nearby (we quantify this assumption). Then we combine the {\em Double Exponential Sieve} algorithm (also called the {\em Bisection of the Exponents}), bisection, and Newton iteration to decrease the width of this inclusion interval by a factor of $2^L$. The algorithm has Boolean complexity $\widetilde{O}_B(d^2\tau + dL)$. This substantially decreases the known bound and is optimal up to a polylogarithmic factor. Furthermore, we readily extend our algorithm to support the same upper bound on the complexity of the refinement of $r$ real roots, for any $r \le d$, by incorporating the known efficient algorithms for multipoint polynomial evaluation. The main ingredient for the latter is an efficient algorithm for (approximate) polynomial division; we present a variation based on structured matrix computation with quasi-optimal Boolean complexity.
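A minimal sketch of the bisection/Newton hybrid these abstracts describe (bisect while Newton is unreliable, let Newton roughly double the correct bits once it is safe) might look as follows. This is an illustrative toy, not the paper's algorithm: the Double Exponential Sieve step is omitted, arithmetic is floating-point rather than exact dyadic, and all names are ours.

```python
def refine_root(p, dp, a, b, target_width):
    """Shrink an isolating interval [a, b] of a simple real root of p.

    Combines bisection (always safe, one bit per step) with Newton steps
    from the midpoint (fast once close), in the spirit of the hybrid
    refinement scheme sketched in the abstract above.
    """
    assert p(a) * p(b) < 0, "interval must isolate a sign change"
    while b - a > target_width:
        x = 0.5 * (a + b)
        d = dp(x)
        if d != 0:
            y = x - p(x) / d  # tentative Newton step from the midpoint
            if a < y < b:
                # Accept the Newton iterate as the subdivision point only
                # if it stays inside the isolating interval.
                x = y
        px = p(x)
        if px == 0:
            return x, x  # landed exactly on the root
        # One bisection-style step on the (possibly Newton-chosen) point.
        if p(a) * px < 0:
            b = x
        else:
            a = x
    return a, b
```

When the Newton iterate falls outside the current interval the step degenerates to plain bisection, so the loop never loses the sign-change invariant and always terminates.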
Simple and Nearly Optimal Polynomial Root-finding by Means of Root Radii Approximation
We propose a new simple but nearly optimal algorithm for the approximation of
all sufficiently well isolated complex roots and root clusters of a univariate
polynomial. Quite typically the known root-finders at first compute some crude
but reasonably good approximations to well-conditioned roots (that is, those
isolated from the other roots) and then refine the approximations very fast, by
using Boolean time which is nearly optimal, up to a polylogarithmic factor. By
combining and extending some old root-finding techniques, the geometry of the
complex plane, and randomized parametrization, we accelerate the initial stage
of obtaining crude approximations to all well-conditioned simple and multiple roots as well as
isolated root clusters. Our algorithm performs this stage at a Boolean cost
dominated by the nearly optimal cost of subsequent refinement of these
approximations, which we can perform concurrently, with minimum processor
communication and synchronization. Our techniques are quite simple and
elementary; their power and application range may increase in their combination
with the known efficient root-finding methods.
Comment: 12 pages, 1 figure
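The initial stage above starts from coefficient-based estimates of root radii. As one classical ingredient of that kind (Fujiwara's bound, a standard textbook result, not the paper's radii algorithm), every root of the polynomial lies in a disc whose radius is computable directly from the coefficients:

```python
def fujiwara_bound(coeffs):
    """Fujiwara's upper bound on the moduli of all roots of
    p(x) = coeffs[0]*x^n + ... + coeffs[n], with coeffs[0] != 0:
    |z| <= 2 * max_k |a_{n-k}/a_n|^{1/k}, with the constant-term
    ratio halved before taking the n-th root."""
    n = len(coeffs) - 1
    an = abs(coeffs[0])
    terms = []
    for k in range(1, n + 1):
        c = abs(coeffs[k]) / an
        if k == n:
            c /= 2  # Fujiwara halves the constant-term ratio
        terms.append(c ** (1.0 / k))
    return 2 * max(terms)
```

Such a bound gives a disc guaranteed to contain all roots, a natural starting point for any radii-based initialization; the paper's algorithms refine far beyond this crude estimate.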
Nearly Optimal Computations with Structured Matrices
We estimate the Boolean complexity of multiplication of structured matrices
by a vector and the solution of nonsingular linear systems of equations with
these matrices. We study the four basic and most popular classes, namely Toeplitz,
Hankel, Cauchy, and Vandermonde matrices, for which the cited computational
problems are equivalent to the task of polynomial multiplication and division
and polynomial and rational multipoint evaluation and interpolation. The
Boolean cost estimates for the latter problems have been obtained by Kirrinnis
in \cite{kirrinnis-joc-1998}, except for rational interpolation, which we
supply now. All known Boolean cost estimates for these problems rely on using
Kronecker product. This implies the $d$-fold precision increase for the $d$-th
degree output, but we avoid such an increase by relying on distinct techniques
based on employing FFT. Furthermore we simplify the analysis and make it more
transparent by combining the representation of our tasks and algorithms in
terms of both structured matrices and polynomials and rational functions. This
also enables further extensions of our estimates to cover Trummer's important
problem and computations with the popular classes of structured matrices that
generalize the four cited basic matrix classes.
Comment: (2014-04-10)
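The equivalence the abstract mentions between Toeplitz matrix-by-vector multiplication and polynomial multiplication can be made concrete in a few lines. The sketch below uses a naive quadratic-time convolution for clarity; replacing it with FFT-based multiplication is what yields the near-linear cost discussed above. Function and variable names are illustrative.

```python
def toeplitz_matvec(col, row, v):
    """Multiply the n-by-n Toeplitz matrix T (first column `col`, first
    row `row`, with col[0] == row[0]) by vector v, via the reduction to
    polynomial multiplication (convolution).

    T[i][j] = t[i - j], so T*v is a slice of the convolution of the
    diagonal-value sequence t with v.
    """
    n = len(v)
    # Diagonal values t_k for k = -(n-1) .. (n-1), stored with shift:
    # t[s] holds t_{s - (n - 1)}.
    t = list(reversed(row)) + col[1:]
    # Naive convolution: conv[m + j] accumulates t[m] * v[j].
    conv = [0] * (len(t) + n - 1)
    for m in range(len(t)):
        for j in range(n):
            conv[m + j] += t[m] * v[j]
    # Entry i of T*v is sum_j t_{i-j} v_j = conv[i + n - 1].
    return [conv[i + n - 1] for i in range(n)]
```

Reading the matvec off the middle coefficients of the product polynomial is exactly the reduction the Boolean cost estimates rest on; the Hankel case follows by reversing the row order.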