Numerical computation of the roots of Mandelbrot polynomials: an experimental analysis
This paper deals with the problem of numerically computing the roots of the
Mandelbrot polynomials p_k(x), k = 0, 1, 2, ..., of degree d_k = 2^k - 1,
recursively defined by p_0(x) = 1, p_{k+1}(x) = x p_k(x)^2 + 1. An algorithm
based on the Ehrlich-Aberth simultaneous iterations, complemented by the Fast
Multipole Method and by a fast search of near neighbors of a set of complex
numbers, is provided. The algorithm has a cost of O(d_k log d_k) arithmetic
operations per step. A Fortran 95 implementation is given and numerical
experiments are carried out. Experimentally, it turns out that the number of
iterations needed to arrive at numerical convergence grows only slowly with
the degree. This allows us to compute the roots of p_k up to very large degree
in about 16 minutes on a laptop with 16 GB RAM, and up to still larger degree
in about one hour on a machine with 256 GB RAM. The next degree in the family
would require more memory and higher precision to separate the roots. With a
suitable adaptation of FMM to the limit of 256 GB RAM, and by performing the
computation in extended precision (i.e., with a 10-byte floating-point
representation), we were able to compute all the roots of that polynomial in
about two weeks of CPU time. From the experimental analysis, explicit
asymptotic expressions for the real roots of p_k are deduced, together with an
explicit expression associated with the full set of roots of p_k. The approach
is extended to classes of polynomials defined by a doubling recurrence.
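As a concrete illustration of the main building block, the sketch below runs the Ehrlich-Aberth iteration on a small member of the family, built from the standard Mandelbrot-polynomial recurrence p_0(x) = 1, p_{k+1}(x) = x p_k(x)^2 + 1. It is plain Python with O(n^2) pairwise sums in place of the paper's Fortran 95/FMM machinery; the starting circle and tolerances are illustrative choices.

```python
import numpy as np

def mandelbrot_poly(k):
    """Coefficients (low to high) of p_k, with p_0 = 1, p_{k+1} = x*p_k^2 + 1."""
    p = np.array([1.0])
    for _ in range(k):
        sq = np.convolve(p, p)                 # p_k(x)^2
        p = np.concatenate(([0.0], sq))        # multiply by x
        p[0] += 1.0                            # ... and add 1
    return p

def ehrlich_aberth(coeffs, tol=1e-12, maxit=200):
    """Approximate all roots simultaneously with the Ehrlich-Aberth iteration.
    Pairwise sums are computed densely here; the paper uses FMM instead."""
    n = len(coeffs) - 1
    poly = np.polynomial.Polynomial(coeffs)
    dpoly = poly.deriv()
    # initial guesses on a circle of radius 2 enclosing all the roots
    z = 2.0 * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(maxit):
        newton = poly(z) / dpoly(z)
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, np.inf)         # 1/inf = 0 removes the i = j term
        s = np.sum(1.0 / diff, axis=1)         # Aberth coupling sums
        corr = newton / (1.0 - newton * s)
        z = z - corr
        if np.max(np.abs(corr)) < tol:
            break
    return z

roots = ehrlich_aberth(mandelbrot_poly(3))     # p_3 has degree 7
```

The dense pairwise sum makes each sweep O(n^2); replacing it with the Fast Multipole Method, as the paper does, brings the cost per step down to near-linear.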
A family of simultaneous zero-finding methods
Applying Hansen-Patrick's formula for solving the single equation f(z) = 0 to a suitable function appearing in the classical Weierstrass method, two one-parameter families of iteration functions for the simultaneous approximation of all simple and multiple zeros of a polynomial are derived. It is shown that all the methods of these families have fourth-order convergence. Some computational aspects of the proposed methods and numerical examples are given.
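The Weierstrass correction that such families build on can be sketched as a plain Durand-Kerner (total-step Weierstrass) iteration; the starting points (0.4 + 0.9i)^k are a conventional choice, not taken from the paper:

```python
import numpy as np

def weierstrass(coeffs, maxit=100, tol=1e-12):
    """Durand-Kerner: apply the Weierstrass correction W_i to every
    approximation simultaneously.  coeffs are low-to-high."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[-1]                              # normalize to a monic polynomial
    n = len(c) - 1
    poly = np.polynomial.Polynomial(c)
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)    # conventional starting points
    for _ in range(maxit):
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, 1.0)            # neutral factor for i = j
        w = poly(z) / np.prod(diff, axis=1)    # Weierstrass corrections W_i
        z = z - w                              # total-step: update all at once
        if np.max(np.abs(w)) < tol:
            break
    return z
```

The paper's families arise by feeding a function built from this W_i into Hansen-Patrick's formula; the plain iteration above is only quadratically convergent.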
A family of root-finding methods with accelerated convergence
A parametric family of iterative methods for the simultaneous determination of simple complex zeros of a polynomial is considered. The convergence of the basic method of the fourth order is accelerated using Newton's and Halley's corrections, thus generating total-step methods of orders five and six. Further improvements are obtained by applying the Gauss-Seidel approach. Accelerated convergence of all proposed methods is attained at the cost of a negligible number of additional operations. Detailed convergence analysis and two numerical examples are given.
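The Gauss-Seidel idea, reusing approximations updated earlier in the same sweep, can be shown on the basic Weierstrass iteration (a sketch of the single-step principle only, not the paper's fourth-order family):

```python
import numpy as np

def weierstrass_gauss_seidel(coeffs, sweeps=100, tol=1e-12):
    """Single-step (Gauss-Seidel) Weierstrass: z[i] is overwritten as soon as
    its correction is known, so later corrections in the same sweep already
    see the improved values."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[-1]                              # monic
    n = len(c) - 1
    poly = np.polynomial.Polynomial(c)
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)    # conventional starting points
    for _ in range(sweeps):
        biggest = 0.0
        for i in range(n):
            w = poly(z[i]) / np.prod(z[i] - np.delete(z, i))
            z[i] -= w                          # in place: used immediately below
            biggest = max(biggest, abs(w))
        if biggest < tol:
            break
    return z
```

As the abstract notes, the serial update costs essentially nothing extra per sweep, yet typically reduces the number of sweeps compared with the total-step variant.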
New Acceleration of Nearly Optimal Univariate Polynomial Root-Finders
Univariate polynomial root-finding has been studied for four millennia and is
still the subject of intensive research. Hundreds of efficient algorithms for
this task have been proposed. Two of them are nearly optimal. The first one,
proposed in 1995, relies on recursive factorization of a polynomial, is quite
involved, and has never been implemented. The second one, proposed in 2016,
relies on subdivision iterations, was implemented in 2018, and promises to be
practically competitive, although users' current choice for univariate
polynomial root-finding is the package MPSolve, proposed in 2000, revised in
2014, and based on Ehrlich's functional iterations. By proposing and
incorporating some novel techniques we significantly accelerate both
subdivision and Ehrlich's iterations. Moreover our acceleration of the known
subdivision root-finders is dramatic in the case of sparse input polynomials.
Our techniques can be of some independent interest for the design and analysis
of polynomial root-finders. Comment: 89 pages, 5 figures, 2 tables.
Geometry of Polynomials and Root-Finding via Path-Lifting
Using the interplay between topological, combinatorial, and geometric
properties of polynomials and analytic results (primarily the covering
structure and distortion estimates), we analyze a path-lifting method for
finding approximate zeros, similar to those studied by Smale, Shub, Kim, and
others. Given any polynomial, this simple algorithm always converges to a root,
except on a finite set of initial points lying on a circle of a given radius.
Specifically, the algorithm we analyze consists of iterating
z_{n+1} = z_n - (1 - t_{n+1}/t_n) f(z_n)/f'(z_n), where the t_n form a
decreasing sequence of real numbers and z_0 is chosen on a circle containing
all the roots. We show that the number of iterates required to locate an
approximate zero of a polynomial f depends only on log(1/R_ζ) (where R_ζ is
the radius of convergence of the branch of f^{-1} taking 0 to a root ζ) and
the logarithm of the angle between f(z_0) and certain critical values.
Previous complexity results for related algorithms depend linearly on the
reciprocals of these angles. Note that the complexity of the algorithm does
not depend directly on the degree of f, but only on the geometry of the
critical values.
Furthermore, for any polynomial f with distinct roots, the average number of
steps required over all starting points taken on a circle containing all the
roots is bounded by a constant times the average of log(1/R_ζ) over the roots.
An explicit bound on the average of this quantity over all polynomials with
roots in the unit disk is also obtained. This algorithm readily generalizes to
finding all roots of a polynomial (without deflation); doing so increases the
complexity by only a bounded factor. Comment: 44 pages, 12 figures.
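A minimal sketch of a path-lifting iteration of this kind, with a geometric schedule t_{n+1} = ratio * t_n so that each update is a damped Newton step (the schedule, step count, and final Newton polish are illustrative assumptions, not the paper's choices):

```python
def path_lifting_root(f, df, z0, n_steps=300, ratio=0.9):
    """Follow the lifted path f^{-1}(t_n * f(z0)) as t_n shrinks geometrically
    (t_{n+1} = ratio * t_n): each update is a damped Newton step with
    damping factor 1 - t_{n+1}/t_n = 1 - ratio."""
    z = z0
    for _ in range(n_steps):
        z = z - (1 - ratio) * f(z) / df(z)
    for _ in range(10):                  # plain Newton polish near the root
        z = z - f(z) / df(z)
    return z

# starting point on a circle of radius 2, which contains all roots of z^3 - 1
root = path_lifting_root(lambda z: z**3 - 1, lambda z: 3 * z**2, 2.0 + 0.0j)
```

Each damped step multiplies |f(z)| by roughly the factor `ratio`, so the iterate tracks the preimage path down toward a root instead of jumping, which is what makes the method converge from all but finitely many starting points on the circle.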
Computing Real Roots of Real Polynomials
Computing the roots of a univariate polynomial is a fundamental and
long-studied problem of computational algebra with applications in mathematics,
engineering, computer science, and the natural sciences. For isolating as well
as for approximating all complex roots, the best algorithm known is based on an
almost optimal method for approximate polynomial factorization, introduced by
Pan in 2002. Pan's factorization algorithm goes back to the splitting-circle
method of Schönhage from 1982. The main drawbacks of Pan's method are that it
is quite involved and that all roots have to be computed at the same time. For
the important special case, where only the real roots have to be computed, much
simpler methods are used in practice; however, they considerably lag behind
Pan's method with respect to complexity.
In this paper, we resolve this discrepancy by introducing a hybrid of the
Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than
Pan's method, but achieves a run-time comparable to it. Our algorithm computes
isolating intervals for the real roots of any real square-free polynomial,
given by an oracle that provides arbitrarily good approximations of the
polynomial's coefficients. ANEWDSC can also be used to isolate only the roots
in a given interval and to refine the isolating intervals to an arbitrarily
small size; it achieves near-optimal complexity for the latter task. Comment:
to appear in the Journal of Symbolic Computation.
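The Descartes half of such a hybrid can be sketched in exact rational arithmetic: transform the polynomial so that the roots in (a, b) map to (0, ∞), count coefficient sign variations, and bisect until each piece is certified to hold zero or one root. This is a textbook Descartes/bisection isolator, not ANEWDSC itself; the Newton acceleration is omitted.

```python
from fractions import Fraction
from math import comb

def taylor_shift(c, s):
    """Coefficients (low to high) of p(x + s)."""
    n = len(c)
    return [sum(c[j] * comb(j, i) * s ** (j - i) for j in range(i, n))
            for i in range(n)]

def sign_variations(c):
    signs = [x for x in c if x != 0]
    return sum((u > 0) != (v > 0) for u, v in zip(signs, signs[1:]))

def descartes_bound(c, a, b):
    """Descartes bound on the number of roots of p in (a, b): map (a, b)
    onto (0, oo) by x -> (a + b*x)/(1 + x), then count sign variations."""
    w = b - a
    q = [ci * w ** i for i, ci in enumerate(taylor_shift(c, a))]  # roots -> (0,1)
    q = q[::-1]                                                   # roots -> (1,oo)
    return sign_variations(taylor_shift(q, 1))                    # roots -> (0,oo)

def isolate(c, a, b):
    """Bisect (a, b) until the bound certifies 0 or 1 root in each piece."""
    v = descartes_bound(c, a, b)
    if v == 0:
        return []
    if v == 1:
        return [(a, b)]
    m = Fraction(a + b, 2)
    exact = [(m, m)] if sum(ci * m ** i for i, ci in enumerate(c)) == 0 else []
    return isolate(c, a, m) + exact + isolate(c, m, b)
```

The sign-variation count always bounds the number of roots in the interval from above with the same parity, so a count of 0 or 1 is a certificate; ANEWDSC's contribution is to interleave such subdivision with Newton steps so that clusters are approached quadratically instead of one bisection at a time.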
Some applications of computer algebra and interval mathematics
This thesis contains some applications of Computer Algebra to unconstrained optimization and some applications of Interval Mathematics to the problem of simultaneously bounding the simple zeros of polynomials. Chapter 1 contains a brief introduction to Computer Algebra and Interval Mathematics, together with several of the fundamental results from Interval Mathematics which are used in Chapters 4 and 5. Chapter 2 contains a survey of those features of the symbol manipulation package ALgLIB [Shew-85] which it is necessary to understand in order to use ALgLIB as explained in Chapter 3. Chapter 3 contains a description of Sisser's method [Sis-82a] for unconstrained minimization and several modifications thereof, which are implemented using the pseudo-code of Dennis and Schnabel [DenS-83] and ALgLIB. Chapter 3 also contains numerical results corresponding to Sisser's method and its modifications for 7 examples. Chapter 4 contains a new algorithm PRSS for the simultaneous estimation of polynomial zeros and the corresponding interval form IRSS for simultaneously bounding real polynomial zeros. Comparisons are made with some related existing algorithms, and numerical results of the comparisons are also given in this chapter. Chapter 5 contains an application of an idea due to Neumaier [Neu-85] to the problem of constructing interval versions of point iterative procedures for the estimation of simple zeros of analytic functions. In particular, interval versions of some point iterative procedures for the simultaneous estimation of simple (complex) polynomial zeros are described. Finally, numerical results are given to show the efficiency of the new algorithm.
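As a taste of the interval techniques involved, here is a sketch of the one-dimensional interval Newton step for bounding a simple zero (plain floating point, so only illustrative; a rigorous interval version would use outward rounding, and the function names here are assumptions):

```python
def interval_newton(f, dF, lo, hi, steps=6):
    """One-dimensional interval Newton: intersect [lo, hi] with
    m - f(m)/F'(X), where dF(lo, hi) returns an enclosure of f' on [lo, hi].
    If the zero lies in [lo, hi], it stays inside every contracted interval."""
    for _ in range(steps):
        m = 0.5 * (lo + hi)
        dlo, dhi = dF(lo, hi)
        assert dlo > 0 or dhi < 0      # sketch requires 0 not in F'(X)
        c1 = m - f(m) / dlo            # endpoints of m - f(m)/F'(X)
        c2 = m - f(m) / dhi
        lo = max(lo, min(c1, c2))      # intersect with the previous interval
        hi = min(hi, max(c1, c2))
    return lo, hi

# bound the zero of x^2 - 2 in [1, 2]; f' = 2x is increasing on [1, 2]
lo, hi = interval_newton(lambda x: x * x - 2, lambda a, b: (2 * a, 2 * b), 1.0, 2.0)
```

By the mean value theorem the zero lies in m - f(m)/F'(X) whenever it lies in X, so the intersection never loses it, and for a simple zero the width contracts quadratically.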
Computing Real Roots of Real Polynomials -- An Efficient Method Based on Descartes' Rule of Signs and Newton Iteration
Computing the real roots of a polynomial is a fundamental problem of computational algebra. We describe a variant of the Descartes method that isolates the real roots of any real square-free polynomial given through coefficient oracles. A coefficient oracle provides arbitrarily good approximations of the coefficients. The bit complexity of the algorithm matches that of the best algorithm known, while the algorithm itself is simpler. The algorithm derives its speed from the combination of the Descartes method with Newton iteration. Our algorithm can also be used to further refine the isolating intervals to an arbitrarily small size; the complexity of this root refinement is nearly optimal.
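The refinement task can be sketched as a safeguarded bisection/Newton hybrid; this is an illustrative stand-in for the paper's refinement procedure, not its actual algorithm:

```python
def refine(f, df, a, b, tol=1e-12):
    """Shrink an isolating interval (f changes sign on [a, b]).  Odd rounds
    try a Newton step from the midpoint; even rounds bisect plainly, which
    guarantees the interval at least halves every two rounds."""
    fa = f(a)
    use_newton = False
    while b - a > tol:
        x = 0.5 * (a + b)
        if use_newton:
            d = df(x)
            if d != 0:
                xn = x - f(x) / d
                if a < xn < b:
                    x = xn               # accept the Newton point as test point
        use_newton = not use_newton
        fx = f(x)
        if fx == 0:
            return x, x
        if (fa > 0) != (fx > 0):
            b = x                        # sign change now in [a, x]
        else:
            a, fa = x, fx                # sign change now in [x, b]
    return a, b

a, b = refine(lambda x: x * x - 3, lambda x: 2 * x, 1.0, 2.0)
```

Once the interval is small enough that Newton converges, the Newton rounds roughly double the number of correct digits per step, while the interleaved bisections keep the worst case no worse than plain bisection.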