
    Numerical computation of the roots of Mandelbrot polynomials: an experimental analysis

    This paper deals with the problem of numerically computing the roots of the polynomials $p_k(x)$, $k=1,2,\ldots$, of degree $n=2^k-1$, recursively defined by $p_1(x)=x+1$, $p_k(x)=x\,p_{k-1}(x)^2+1$. An algorithm based on the Ehrlich-Aberth simultaneous iterations, complemented by the Fast Multipole Method and by a fast near-neighbor search among a set of complex numbers, is provided. The algorithm has a cost of $O(n\log n)$ arithmetic operations per step. A Fortran 95 implementation is given and numerical experiments are carried out. Experimentally, it turns out that the number of iterations needed to reach numerical convergence is $O(\log n)$. This allows us to compute the roots of $p_k(x)$ up to degree $n=2^{24}-1$ in about 16 minutes on a laptop with 16 GB RAM, and up to degree $n=2^{28}-1$ in about one hour on a machine with 256 GB RAM. The case of degree $n=2^{30}-1$ would require more memory and higher precision to separate the roots. With a suitable adaptation of the FMM to the limit of 256 GB RAM, and by performing the computation in extended precision (i.e., with a 10-byte floating-point representation), we were able to compute all the roots for $n=2^{30}-1$ in about two weeks of CPU time. From the experimental analysis, explicit asymptotic expressions for the real roots of $p_k(x)$ and an explicit expression for $\min_{i\ne j}|\xi_i^{(k)}-\xi_j^{(k)}|$, where $\xi_i^{(k)}$ are the roots of $p_k(x)$, are deduced. The approach is extended to classes of polynomials defined by a doubling recurrence.
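The doubling recurrence above is easy to reproduce; a minimal sketch (not the paper's Fortran 95 code) that builds the coefficients of $p_k(x)$ and checks that its degree is $2^k-1$:

```python
# Sketch: the Mandelbrot polynomials p_1(x) = x + 1,
# p_k(x) = x * p_{k-1}(x)^2 + 1, as coefficient lists (lowest degree first).

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def mandelbrot_poly(k):
    """Coefficients of p_k(x), lowest degree first."""
    p = [1, 1]                      # p_1(x) = 1 + x
    for _ in range(2, k + 1):
        sq = poly_mul(p, p)         # p_{k-1}(x)^2
        p = [1] + sq                # x * p_{k-1}(x)^2 + 1: shift up, add constant 1
    return p

p4 = mandelbrot_poly(4)
assert len(p4) - 1 == 2**4 - 1      # degree n = 2^k - 1
```

The degree doubles (plus one) at every step, which is why the paper's $O(n\log n)$ per-iteration cost and $O(\log n)$ iteration count matter so much at degrees like $2^{30}-1$.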

    A family of simultaneous zero-finding methods

    Applying Hansen-Patrick's formula for solving the single equation $f(z)=0$ to a suitable function appearing in the classical Weierstrass method, two one-parameter families of iteration functions for the simultaneous approximation of all simple and multiple zeros of a polynomial are derived. It is shown that all the methods of these families have fourth order of convergence. Some computational aspects of the proposed methods and numerical examples are given.
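The classical Weierstrass method that these families start from corrects all approximations to the zeros simultaneously; a minimal Python sketch (the polynomial and starting points below are illustrative, not taken from the paper):

```python
# Sketch of the classical Weierstrass (Durand-Kerner) total-step iteration
# for a monic polynomial; coefficients are listed highest degree first.

def weierstrass_step(coeffs, z):
    """One simultaneous Weierstrass correction of all approximations."""
    def p(x):
        v = 0.0
        for c in coeffs:
            v = v * x + c           # Horner evaluation
        return v
    new = []
    for i, zi in enumerate(z):
        denom = 1.0
        for j, zj in enumerate(z):
            if j != i:
                denom *= (zi - zj)  # product over the other approximations
        new.append(zi - p(zi) / denom)
    return new

# p(x) = x^3 - 1: the three cube roots of unity
coeffs = [1, 0, 0, -1]
z = [complex(0.4, 0.9) ** (k + 1) for k in range(3)]   # distinct starting points
for _ in range(60):
    z = weierstrass_step(coeffs, z)
```

After the sweep each entry of `z` approximates one cube root of unity; the paper's fourth-order families replace this second-order correction with ones derived from Hansen-Patrick's formula.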

    A family of root-finding methods with accelerated convergence

    A parametric family of iterative methods for the simultaneous determination of simple complex zeros of a polynomial is considered. The convergence of the basic fourth-order method is accelerated using Newton's and Halley's corrections, thus generating total-step methods of orders five and six. Further improvements are obtained by applying the Gauss-Seidel approach. The accelerated convergence of all proposed methods is attained at the cost of a negligible number of additional operations. A detailed convergence analysis and two numerical examples are given.
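The Gauss-Seidel (single-step) acceleration mentioned above reuses approximations already updated within the current sweep, in contrast to the total-step variant, which updates all of them from the previous sweep. A hedged sketch of that idea, applied to the plain Weierstrass correction rather than the paper's fourth-order method:

```python
# Sketch: Gauss-Seidel (single-step) Weierstrass sweep. Each zero's update
# immediately replaces the old value, so later zeros in the same sweep
# already see the improved approximations.

def weierstrass_gs_step(coeffs, z):
    """One serial (Gauss-Seidel) Weierstrass sweep; coeffs highest degree first."""
    def p(x):
        v = 0.0
        for c in coeffs:
            v = v * x + c
        return v
    z = list(z)
    for i in range(len(z)):
        denom = 1.0
        for j in range(len(z)):
            if j != i:
                denom *= (z[i] - z[j])   # z[j] may already be updated this sweep
        z[i] = z[i] - p(z[i]) / denom
    return z

# Illustrative run on p(x) = x^3 - 1
coeffs = [1, 0, 0, -1]
z = [complex(0.4, 0.9) ** (k + 1) for k in range(3)]
for _ in range(60):
    z = weierstrass_gs_step(coeffs, z)
```

Reusing updated values costs nothing extra per sweep, which is why the paper obtains its acceleration "at the cost of a negligible number of additional operations".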

    New Acceleration of Nearly Optimal Univariate Polynomial Root-finders

    Univariate polynomial root-finding has been studied for four millennia and is still the subject of intensive research. Hundreds of efficient algorithms for this task have been proposed. Two of them are nearly optimal. The first one, proposed in 1995, relies on recursive factorization of a polynomial, is quite involved, and has never been implemented. The second one, proposed in 2016, relies on subdivision iterations, was implemented in 2018, and promises to be practically competitive, although users' current choice for univariate polynomial root-finding is the package MPSolve, proposed in 2000, revised in 2014, and based on Ehrlich's functional iterations. By proposing and incorporating some novel techniques, we significantly accelerate both subdivision and Ehrlich's iterations. Moreover, our acceleration of the known subdivision root-finders is dramatic in the case of sparse input polynomials. Our techniques can be of independent interest for the design and analysis of polynomial root-finders.
    Comment: 89 pages, 5 figures, 2 tables
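Ehrlich's functional iteration underlying MPSolve combines a Newton correction with a "repulsion" term built from the other current approximations; a minimal sketch (the polynomial and starting points are illustrative, and MPSolve itself adds multiprecision and many refinements):

```python
# Sketch of one Ehrlich-Aberth step: z_i <- z_i - N_i / (1 - N_i * S_i),
# where N_i is the Newton correction p(z_i)/p'(z_i) and
# S_i = sum_{j != i} 1/(z_i - z_j). Coefficients highest degree first.

def aberth_step(coeffs, z):
    """One simultaneous Ehrlich-Aberth update of all approximations."""
    def horner(x):
        v, d = 0.0, 0.0
        for c in coeffs:
            d = d * x + v           # derivative accumulates alongside the value
            v = v * x + c
        return v, d
    new = []
    for i, zi in enumerate(z):
        v, d = horner(zi)
        n = v / d                   # Newton correction N_i
        s = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
        new.append(zi - n / (1 - n * s))
    return new

# Illustrative run on p(x) = x^3 - 1
coeffs = [1, 0, 0, -1]
z = [complex(0.4, 0.9) ** (k + 1) for k in range(3)]
for _ in range(50):
    z = aberth_step(coeffs, z)
```

The repulsion term $S_i$ keeps approximations from collapsing onto the same root, which is what lets the method converge to all roots at once.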

    Geometry of Polynomials and Root-Finding via Path-Lifting

    Using the interplay between topological, combinatorial, and geometric properties of polynomials and analytic results (primarily the covering structure and distortion estimates), we analyze a path-lifting method for finding approximate zeros, similar to those studied by Smale, Shub, Kim, and others. Given any polynomial, this simple algorithm always converges to a root, except on a finite set of initial points lying on a circle of a given radius. Specifically, the algorithm we analyze consists of iterating $z - \frac{f(z)-t_k f(z_0)}{f'(z)}$, where the $t_k$ form a decreasing sequence of real numbers and $z_0$ is chosen on a circle containing all the roots. We show that the number of iterates required to locate an approximate zero of a polynomial $f$ depends only on $\log|f(z_0)/\rho_\zeta|$ (where $\rho_\zeta$ is the radius of convergence of the branch of $f^{-1}$ taking $0$ to a root $\zeta$) and the logarithm of the angle between $f(z_0)$ and certain critical values. Previous complexity results for related algorithms depend linearly on the reciprocals of these angles. Note that the complexity of the algorithm does not depend directly on the degree of $f$, but only on the geometry of the critical values. Furthermore, for any polynomial $f$ with distinct roots, the average number of steps required over all starting points taken on a circle containing all the roots is bounded by a constant times the average of $\log(1/\rho_\zeta)$. The average of $\log(1/\rho_\zeta)$ over all polynomials $f$ with $d$ roots in the unit disk is $\mathcal{O}(d)$. This algorithm readily generalizes to finding all roots of a polynomial (without deflation); doing so increases the complexity by a factor of at most $d$.
    Comment: 44 pages, 12 figures
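The iteration $z - \frac{f(z)-t_k f(z_0)}{f'(z)}$ can be sketched directly: each step takes one Newton-type correction toward the level set $f(z)=t_k f(z_0)$ as $t_k$ decreases to 0. The cubic, the step count, and the final Newton polish below are illustrative choices, not the paper's parameters:

```python
# Sketch of the path-lifting iteration: follow the path t -> t * f(z0)
# from t just below 1 down to t = 0, one correction step per t_k.

def path_lift(f, df, z0, steps=400):
    """Track a root of f by lifting the ray from f(z0) to 0."""
    z, f0 = z0, f(z0)
    for k in range(1, steps + 1):
        t = 1.0 - k / steps                 # decreasing sequence t_k
        z = z - (f(z) - t * f0) / df(z)
    for _ in range(5):                      # a few plain Newton steps to polish
        z = z - f(z) / df(z)
    return z

f  = lambda z: z**3 - 2*z + 2
df = lambda z: 3*z**2 - 2
root = path_lift(f, df, z0=3j)   # |z0| = 3 bounds all roots of this cubic
```

The starting point is chosen so that the ray from $f(z_0)$ to $0$ stays away from the critical values of $f$; as the abstract notes, it is exactly the angles to those critical values that govern the complexity.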

    Computing Real Roots of Real Polynomials

    Computing the roots of a univariate polynomial is a fundamental and long-studied problem of computational algebra with applications in mathematics, engineering, computer science, and the natural sciences. For isolating as well as for approximating all complex roots, the best algorithm known is based on an almost optimal method for approximate polynomial factorization, introduced by Pan in 2002. Pan's factorization algorithm goes back to the splitting circle method of Schoenhage from 1982. The main drawbacks of Pan's method are that it is quite involved and that all roots have to be computed at the same time. For the important special case where only the real roots have to be computed, much simpler methods are used in practice; however, they considerably lag behind Pan's method with respect to complexity. In this paper, we resolve this discrepancy by introducing a hybrid of the Descartes method and Newton iteration, denoted ANEWDSC, which is simpler than Pan's method but achieves a running time comparable to it. Our algorithm computes isolating intervals for the real roots of any real square-free polynomial, given by an oracle that provides arbitrarily good approximations of the polynomial's coefficients. ANEWDSC can also be used to isolate only the roots in a given interval and to refine the isolating intervals to an arbitrarily small size; it achieves near-optimal complexity for the latter task.
    Comment: to appear in the Journal of Symbolic Computation
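The Descartes method that ANEWDSC builds on rests on counting sign variations in a coefficient sequence; a minimal illustration of that test (not the ANEWDSC algorithm itself):

```python
# Descartes' rule of signs: the number of positive real roots of a
# polynomial is at most the number of sign variations in its coefficient
# sequence, and has the same parity.

def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence (zeros are skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) has exactly three positive roots
v = sign_variations([1, -6, 11, -6])
assert v == 3
```

Subdivision methods apply this test to transformed copies of the polynomial: a count of 0 certifies an interval root-free, a count of 1 certifies exactly one root, and anything larger triggers further subdivision; Newton iteration is what lets ANEWDSC skip through long cascades of such subdivisions.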

    Some applications of computer algebra and interval mathematics

    This thesis contains some applications of Computer Algebra to unconstrained optimization and some applications of Interval Mathematics to the problem of simultaneously bounding the simple zeros of polynomials. Chapter 1 contains a brief introduction to Computer Algebra and Interval Mathematics, and several of the fundamental results from Interval Mathematics which are used in Chapters 4 and 5. Chapter 2 contains a survey of those features of the symbol manipulation package ALgLIB [Shew-85] which it is necessary to understand in order to use ALgLIB as explained in Chapter 3. Chapter 3 contains a description of Sisser's method [Sis-82a] for unconstrained minimization and several modifications thereof, which are implemented using the pseudo-code of Dennis and Schnabel [DenS-83] and ALgLIB. Chapter 3 also contains numerical results corresponding to Sisser's method and its modifications for seven examples. Chapter 4 contains a new algorithm PRSS for the simultaneous estimation of polynomial zeros and the corresponding interval form IRSS for simultaneously bounding real polynomial zeros. Comparisons are made with some related existing algorithms, and numerical results of the comparisons are also given in this chapter. Chapter 5 contains an application of an idea due to Neumaier [Neu-85] to the problem of constructing interval versions of point iterative procedures for the estimation of simple zeros of analytic functions. In particular, interval versions of some point iterative procedures for the simultaneous estimation of simple (complex) polynomial zeros are described. Finally, numerical results are given to show the efficiency of the new algorithm.
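The interval-arithmetic idea behind the bounding methods of Chapters 4 and 5 can be illustrated with a toy model: evaluating a polynomial over an interval yields an enclosure of its range, and an enclosure excluding 0 certifies that the interval contains no real zero. This sketch ignores directed rounding, which a rigorous implementation would need:

```python
# Toy interval arithmetic: intervals are (lo, hi) pairs.

def iadd(a, b):
    """Interval addition."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval multiplication: extremes occur at endpoint products."""
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

def ieval(coeffs, x):
    """Horner evaluation of a polynomial over interval x; coeffs highest first."""
    v = (coeffs[0], coeffs[0])
    for c in coeffs[1:]:
        v = iadd(imul(v, x), (c, c))
    return v

# p(x) = x^2 - 2: the enclosure over [2, 3] excludes 0, so no zero lies there;
# the enclosure over [1.3, 1.5] contains 0, so a zero is not excluded.
lo, hi = ieval([1, 0, -2], (2.0, 3.0))
assert lo > 0
```

Interval versions of the simultaneous point iterations, as in the thesis, go further: they produce disks or boxes guaranteed to contain the zeros, not just exclusion tests.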

    Computing Real Roots of Real Polynomials -- An Efficient Method Based on Descartes' Rule of Signs and Newton Iteration

    Computing the real roots of a polynomial is a fundamental problem of computational algebra. We describe a variant of the Descartes method that isolates the real roots of any real square-free polynomial given through coefficient oracles. A coefficient oracle provides arbitrarily good approximations of the coefficients. The bit complexity of the algorithm matches that of the best algorithm known, while being simpler than that algorithm. The algorithm derives its speed from the combination of the Descartes method with Newton iteration. Our algorithm can also be used to further refine the isolating intervals to an arbitrarily small size. The complexity of root refinement is nearly optimal.
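The refinement task mentioned at the end can be sketched as plain bisection on an interval already known to isolate one simple real root; the paper's method reaches near-optimal complexity by interleaving such subdivision with Newton steps (the polynomial below is an illustrative classic, not from the paper):

```python
# Sketch: shrink an isolating interval of a simple real root by bisection,
# using the sign change at the endpoints as the invariant.

def refine(p, lo, hi, eps):
    """Shrink isolating interval [lo, hi] of a simple root to width <= eps."""
    assert p(lo) * p(hi) < 0            # the interval must isolate a root
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if p(lo) * p(mid) <= 0:         # root lies in the left half
            hi = mid
        else:                           # root lies in the right half
            lo = mid
    return lo, hi

p = lambda x: x**3 - 2*x - 5            # Wallis's example; root near 2.0946
lo, hi = refine(p, 2.0, 3.0, 1e-10)
```

Bisection gains one bit of accuracy per step; the Newton-based refinement of the paper roughly doubles the number of correct bits per step, which is the source of its near-optimal refinement complexity.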