
    Over-constrained Weierstrass iteration and the nearest consistent system

    We propose a generalization of the Weierstrass iteration for over-constrained systems of equations and we prove that the proposed method is the Gauss-Newton iteration to find the nearest system which has at least $k$ common roots and which is obtained via a perturbation of prescribed structure. In the univariate case we show the connection of our method to the optimization problem formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate case we generalize the expressions of Karmarkar and Lakshman, and give explicitly several iteration functions to compute the optimum. The arithmetic complexity of the iterations is detailed.
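The univariate Weierstrass iteration that this paper generalizes (also known as the Durand-Kerner method) updates all root approximations simultaneously, dividing the residual at each approximation by its distances to the others. A minimal NumPy sketch (function name and starting-point choice are ours):

```python
import numpy as np

def weierstrass(coeffs, iters=50):
    """Simultaneous (Weierstrass / Durand-Kerner) iteration for all roots of a
    polynomial given by its coefficients, highest degree first."""
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]                  # normalize to monic
    d = len(coeffs) - 1
    # distinct non-real starting points: powers of a fixed complex number
    z = (0.4 + 0.9j) ** np.arange(d)
    for _ in range(iters):
        p = np.polyval(coeffs, z)                # p(z_i) for all i at once
        diff = z[:, None] - z[None, :]           # z_i - z_j
        np.fill_diagonal(diff, 1.0)              # exclude j == i from product
        # Weierstrass correction: w_i = p(z_i) / prod_{j != i} (z_i - z_j)
        z = z - p / diff.prod(axis=1)
    return z

roots = weierstrass([1, -6, 11, -6])             # (x-1)(x-2)(x-3)
print(np.sort_complex(roots))
```

Each step costs one polynomial evaluation per approximation plus the pairwise-difference products; the method converges quadratically near simple roots.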

    Fast real and complex root-finding methods for well-conditioned polynomials

    Given a polynomial $p$ of degree $d$ and a bound $\kappa$ on a condition number of $p$, we present the first root-finding algorithms that return all its real and complex roots with a number of bit operations quasi-linear in $d \log^2(\kappa)$. More precisely, several condition numbers can be defined depending on the norm chosen on the coefficients of the polynomial. Let $p(x) = \sum_{k=0}^d a_k x^k = \sum_{k=0}^d \sqrt{\binom{d}{k}}\, b_k x^k$. We call the condition number associated with a perturbation of the $a_k$ the hyperbolic condition number $\kappa_h$, and the one associated with a perturbation of the $b_k$ the elliptic condition number $\kappa_e$. For each of these condition numbers, we present algorithms that find the real and the complex roots of $p$ in $O\left(d \log^2(d\kappa)\,\mathrm{polylog}(\log(d\kappa))\right)$ bit operations. Our algorithms are well suited for random polynomials, since $\kappa_h$ (resp. $\kappa_e$) is bounded by a polynomial in $d$ with high probability if the $a_k$ (resp. the $b_k$) are independent, centered Gaussian variables of variance $1$.
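The two coefficient bases behind $\kappa_h$ and $\kappa_e$ are related by $b_k = a_k / \sqrt{\binom{d}{k}}$, so the same polynomial can be perturbed in either parametrization. A small sketch verifying the change of basis (variable names ours):

```python
import math
import numpy as np

# p(x) = sum_k a_k x^k = sum_k sqrt(binom(d,k)) * b_k * x^k,
# hence b_k = a_k / sqrt(binom(d,k)). Perturbing the a_k defines the
# hyperbolic condition number, perturbing the b_k the elliptic one.
d = 4
rng = np.random.default_rng(0)
a = rng.standard_normal(d + 1)                       # monomial coefficients a_k
b = a / np.sqrt([math.comb(d, k) for k in range(d + 1)])

x = 0.7
p_a = sum(a[k] * x**k for k in range(d + 1))
p_b = sum(math.sqrt(math.comb(d, k)) * b[k] * x**k for k in range(d + 1))
assert abs(p_a - p_b) < 1e-12                        # same polynomial, two bases
```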

    TR-2013009: Algebraic Algorithms


    On Polynomial Roots Approximation Via Dominant Eigenspaces And Isolation Of Real Roots

    Finding the roots of a given polynomial is a very old and noble problem in mathematics and computational mathematics. For about 4,000 years, various approaches have been proposed to solve this problem (see [FC99]). In 1824, Niels Abel showed that there exist polynomials of degree five whose roots cannot be expressed using radicals and arithmetic operations on their coefficients. Here is an example of such a polynomial: $x^5 - 4x - 2$. Thus we must resort to iterative methods to approximate the roots of a polynomial given by its coefficients. There are many algorithms that approximate the roots of a polynomial (see [B40], [B68], [MN93], [MN97], [MN99], [MN02], [MN07]). As important examples we cite the Quadtree (Weyl's) construction and Newton's iteration (see [P00a]). Some of the algorithms have as their goal to output a single root, for example the absolutely largest root. Other algorithms aim to output a subset of all the roots of the given polynomial, for example all the roots within a fixed region of the complex plane. In many applications (e.g., algebraic geometric optimization), only the real roots are of interest, and they can be much less numerous than all the roots of the polynomial (see [MP13]). Nevertheless, the best numerical subroutines, such as MPSolve 2.0 [BF00], Eigensolve [F02], and MPSolve 3.0 [BR14], approximate all real roots about as fast (and as slow) as all complex roots. The purpose of this thesis is to find the real roots of a given polynomial effectively and quickly; this is accomplished by separating the real roots from the other roots of the given polynomial and by finding roots which are clustered and absolutely dominant. We use matrix functions throughout this thesis to achieve this goal. One of the approaches is to approximate the roots of a polynomial $p(x)$ by approximating the eigenvalues of its associated companion matrix $C_p$. This takes advantage of the well-known numerical matrix methods for the eigenvalues. This dissertation is organized as follows. Chapter 1 is devoted to a brief history and modern applications of polynomial root-finding, definitions, preliminary results, basic theorems, and randomized matrix computations. In Chapter 2, we present our basic algorithms and combine them with repeated squaring to approximate the absolutely largest roots as well as the roots closest to a selected complex point. We recall the matrix sign function and apply it to eigen-solving. We cover its computation and adjust it to real eigen-solving. In Chapter 3, we present a matrix-free algorithm to isolate and approximate the real roots of a given polynomial. We use a Cayley map followed by Dandelin's (Lobachevsky's, Gräffe's) iteration. This is in part based on the fact that we have at hand good and efficient algorithms to approximate the roots of a polynomial having only real roots (for instance the modified Laguerre algorithm of [DJLZ97]). The idea is to extract (approximately) from the image of the given polynomial (via compositions of rational functions) a factor whose roots are all real, which can be solved using the modified Laguerre algorithm, so that we can output good approximations of the real roots of the given polynomial. In Chapter 4, we present an algorithm based on a matrix version of the Cayley map used in Chapter 3. As our input, we consider the companion matrix of a given polynomial. The Cayley map and selected rational functions are treated as matrix functions. Via composition of matrix functions we generate and approximate the eigenspace associated with the real eigenvalues of the companion matrix, and then we readily approximate the real eigenvalues of the companion matrix of the given polynomial. To simplify the algorithm and to avoid numerical issues appearing in the computation of high powers of matrices, we use the factorization of $P^k - P^{-k}$ as the product $\prod_{i=0}^{k-1}(P - \omega_k^i P^{-1})$, where $\omega_k = \exp(2\pi\sqrt{-1}/k)$ is a primitive $k$-th root of unity.
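The companion-matrix approach mentioned above can be sketched in a few lines: build $C_p$, take its eigenvalues, and keep those with numerically zero imaginary part. This is the standard construction only, not the thesis's refined algorithms, and it illustrates the point that a general eigensolver computes all roots even when only the real ones are wanted:

```python
import numpy as np

def companion(coeffs):
    """Frobenius companion matrix of a polynomial given by its coefficients,
    highest degree first: p(x) = x^d + c_{d-1} x^{d-1} + ... + c_0."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                   # make monic
    d = len(c) - 1
    C = np.zeros((d, d))
    C[1:, :-1] = np.eye(d - 1)     # subdiagonal of ones (multiplication by x)
    C[:, -1] = -c[:0:-1]           # last column: -c_0, -c_1, ..., -c_{d-1}
    return C

# Abel-style example from the text: p(x) = x^5 - 4x - 2
coeffs = [1, 0, 0, 0, -4, -2]
eigs = np.linalg.eigvals(companion(coeffs))
real_roots = np.sort(eigs[np.abs(eigs.imag) < 1e-9].real)
print(real_roots)                  # three real roots; two roots are complex
```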

    Function approximation for option pricing and risk management: methods, theory and applications

    PhD thesis. This thesis investigates the application of function approximation techniques for computationally demanding problems in finance. We focus on the use of Chebyshev interpolation and its multivariate extensions. The main contribution of this thesis is the development of a new pricing method for path-dependent options. In each step of the dynamic programming time-stepping we approximate the value function with Chebyshev polynomials. A key advantage of this approach is that it allows us to shift all model-dependent computations into a pre-computation step. For each time step the method delivers a closed-form approximation of the price function along with the option's delta and gamma. We provide a theoretical error analysis and find conditions that imply explicit error bounds. Numerical experiments confirm the fast convergence of prices and sensitivities. We use the new method to calculate credit exposures of European and path-dependent options for pricing and risk management. The simple structure of the Chebyshev interpolation allows for a highly efficient evaluation of the exposures. We validate the accuracy of the computed exposure profiles numerically for different equity products and a Bermudan swaption. Benchmarking against the least-squares Monte Carlo approach shows that our method delivers a higher accuracy in a faster runtime. We extend the method to efficiently price early-exercise options depending on several risk factors. As an example, we consider the pricing of callable bonds in a hybrid two-factor model. We develop an efficient and stable calibration routine for the model based on our new pricing method. Moreover, we consider the pricing of early-exercise basket options in a multivariate Black-Scholes model. We propose a numerical smoothing in the dynamic programming time-stepping using the smoothing property of a Gaussian kernel. An extensive numerical convergence analysis confirms the efficiency.
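The core idea of pre-computing a Chebyshev approximation of a price function and then evaluating it cheaply can be sketched with NumPy's Chebyshev tools. The Black-Scholes pricer below is the standard closed form and the parameter values are illustrative; this is a one-dimensional toy, not the thesis's path-dependent method:

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as Cheb

def bs_call(S, K=100.0, r=0.02, sigma=0.2, T=1.0):
    """Black-Scholes price of a European call (standard closed form)."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

# Pre-computation step: evaluate the (potentially expensive) pricer once at
# the Chebyshev nodes on the spot range [80, 120], then fit an interpolant.
lo, hi = 80.0, 120.0
n = 20                                            # interpolation degree
nodes = Cheb.chebpts2(n + 1)                      # Chebyshev points in [-1, 1]
spots = 0.5 * (hi - lo) * (nodes + 1.0) + lo      # map nodes to [80, 120]
vals = [bs_call(s) for s in spots]
poly = Cheb.Chebyshev.fit(spots, vals, n, domain=[lo, hi])

# Online step: the interpolant is a closed-form polynomial, so evaluation
# (and differentiation for delta/gamma) is cheap.
err = max(abs(poly(s) - bs_call(s)) for s in np.linspace(lo, hi, 200))
print(f"max interpolation error: {err:.2e}")
```

Since the price function is smooth on the interior of the domain, the interpolation error decays rapidly in the degree, and `poly.deriv()` gives a polynomial approximation of the delta for free.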

    A Multivariate Weierstrass Iterative Rootfinder

    We propose an algorithm to compute simultaneously all the solutions of an algebraic system (of $n$ equations in $n$ variables) that defines a zero-dimensional variety. This new approach generalises the univariate Weierstrass method. We study the arithmetic complexity of this method, which has quadratic convergence in a neighbourhood of the solutions. Hereafter, we describe a method based on the iteration function of the multivariate Weierstrass method and on the continuation method for computing the roots of polynomial systems. Finally we describe some numerical experiments with those methods.
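For context, plain multivariate Newton iteration solves such a zero-dimensional system one starting point at a time, whereas the Weierstrass generalization couples all approximations in a single iteration. A minimal Newton sketch on a toy $2 \times 2$ system (the system, starting points, and names are ours, not from the paper):

```python
import numpy as np

# Toy zero-dimensional system: x^2 + y^2 = 4, x*y = 1 (four isolated solutions).
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],   # Jacobian of F
                     [y,       x      ]])

def newton(v, iters=50):
    """Newton's method: v <- v - J(v)^{-1} F(v), quadratic near a solution."""
    v = np.array(v, dtype=float)
    for _ in range(iters):
        v = v - np.linalg.solve(J(v), F(v))
    return v

# One run per starting point; each converges to the nearby solution.
starts = [(2.0, 0.5), (0.5, 2.0), (-2.0, -0.5), (-0.5, -2.0)]
sols = [newton(s) for s in starts]
for s in sols:
    print(s, np.linalg.norm(F(s)))
```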