Over-constrained Weierstrass iteration and the nearest consistent system
We propose a generalization of the Weierstrass iteration to over-constrained systems of equations, and we prove that the proposed method is the Gauss-Newton iteration for finding the nearest system that has at least a prescribed number of common roots and that is obtained via a perturbation of prescribed structure. In the univariate case we show the connection of our method to the optimization problem formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate case we generalize the expressions of Karmarkar and Lakshman, and we give explicitly several iteration functions to compute the optimum. The arithmetic complexity of the iterations is also detailed.
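For context, the classical univariate Weierstrass (Durand-Kerner) iteration that this work generalizes can be sketched as follows. This is a minimal NumPy implementation for a monic polynomial, not the authors' over-constrained variant; the function name, starting points, and tolerances are illustrative choices.

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=100):
    """Simultaneously approximate all roots of a monic polynomial.
    coeffs: coefficients, highest degree first."""
    n = len(coeffs) - 1
    # standard starting guesses: powers of a point off the real axis
    z = np.array([(0.4 + 0.9j) ** k for k in range(n)], dtype=complex)
    for _ in range(max_iter):
        p = np.polyval(coeffs, z)
        # Weierstrass denominator: product over the other approximations
        denom = np.array([np.prod(z[i] - np.delete(z, i)) for i in range(n)])
        dz = p / denom
        z = z - dz
        if np.max(np.abs(dz)) < tol:
            break
    return z
```

The correction term p(z_i) / prod_{j != i}(z_i - z_j) is what the paper reinterprets, for over-constrained systems, as a Gauss-Newton step.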
Fast real and complex root-finding methods for well-conditioned polynomials
Given a polynomial p of degree d and a bound κ on a condition number of p, we present the first root-finding algorithms that return all of its real and complex roots with a number of bit operations quasi-linear in d, up to polylogarithmic factors in d and κ. More precisely, several condition numbers can be defined, depending on the norm chosen on the coefficients of the polynomial. We call the condition number associated with a perturbation of the coefficients measured in one such norm the hyperbolic condition number κ_h, and the one associated with a perturbation measured in a suitably weighted norm the elliptic condition number κ_e. For each of these condition numbers, we present algorithms that find the real and the complex roots of p within the stated bit-operation bound. Our algorithms are well suited for random polynomials, since κ_h (resp. κ_e) is bounded by a polynomial in d with high probability if the corresponding coefficients are independent, centered Gaussian variables of appropriate variance.
On Polynomial Roots Approximation Via Dominant Eigenspaces And Isolation Of Real Roots
Finding the roots of a given polynomial is a very old and noble problem in mathematics and computational mathematics. For about 4,000 years, various approaches have been proposed to solve this problem (see [FC99]). In 1824, Niels Abel showed that there exist polynomials of degree five whose roots cannot be expressed using radicals and arithmetic operations in their coefficients.
Thus we must resort to iterative methods to approximate the roots of a polynomial given by its coefficients.
There are many algorithms that approximate the roots of a polynomial (see [B40], [B68], [MN93], [MN97], [MN99], [MN02], [MN07]). As important examples we cite the Quadtree (Weyl's) construction and Newton's iteration (see [P00a]). Some algorithms have as their goal to output a single root, for example the absolutely largest root. Other algorithms aim to output a subset of all the roots of the given polynomial, for example all the roots within a fixed region of the complex plane. In many applications (e.g., algebraic geometric optimization), only the real roots are of interest, and they can be much less numerous than all the roots of the polynomial (see [MP13]). Nevertheless, the best numerical subroutines, such as MPSolve 2.0 [BF00], Eigensolve [F02], and MPSolve 3.0 [BR14], approximate all real roots about as fast (and as slow) as all complex roots.
The purpose of this thesis is to find the real roots of a given polynomial effectively and quickly. This is accomplished by separating the real roots from the other roots of the given polynomial and by finding roots that are clustered and absolutely dominant. We use matrix functions throughout this thesis to achieve this goal.
One approach is to approximate the roots of a polynomial by approximating the eigenvalues of its associated companion matrix. This takes advantage of the well-known numerical matrix methods for the eigenvalue problem.
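As a small illustration of this companion-matrix approach (a sketch, not the dissertation's algorithms), one can build the Frobenius companion matrix of a polynomial and hand it to a standard eigensolver:

```python
import numpy as np

def roots_via_companion(coeffs):
    """Approximate all roots of a polynomial (coefficients highest degree
    first) as the eigenvalues of its Frobenius companion matrix."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                   # normalize to a monic polynomial
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)     # ones on the subdiagonal
    C[:, -1] = -c[:0:-1]           # last column: -(a_0, ..., a_{n-1})
    return np.linalg.eigvals(C)
```

The characteristic polynomial of C is exactly the monic input polynomial, so any eigenvalue routine for C doubles as a root-finder.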
This dissertation is organized as follows.
Chapter 1 is devoted to a brief history and modern applications of polynomial root-finding, definitions, preliminary results, basic theorems, and randomized matrix computations.
In Chapter 2, we present our Basic Algorithms and combine them with repeated squaring to approximate the absolutely largest roots as well as the roots closest to a selected complex point. We recall the matrix sign function and apply it to eigen-solving. We cover its computation and adjust it to real eigen-solving.
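The matrix sign function mentioned here is commonly computed by the Newton iteration S ← (S + S⁻¹)/2. The sketch below shows that basic iteration only, without the scaling and safeguards a production version (or the dissertation's adaptation to real eigen-solving) would use:

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=60):
    """Newton iteration for sign(A); requires that A have no purely
    imaginary (in particular, no zero) eigenvalues."""
    S = np.asarray(A, dtype=complex)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        # stop when the relative change is below the tolerance
        if np.linalg.norm(S_next - S, 1) < tol * np.linalg.norm(S_next, 1):
            return S_next
        S = S_next
    return S
```

sign(A) has eigenvalues ±1 according to the sign of the real part of each eigenvalue of A, which is what makes it useful for splitting a spectrum in eigen-solving.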
In Chapter 3, we present a matrix-free algorithm to isolate and approximate the real roots of a given polynomial. We use a Cayley map followed by Dandelin's (Lobachevsky's, Gräffe's) iteration. This is based in part on the fact that we have at hand good and efficient algorithms to approximate the roots of a polynomial having only real roots (for instance, the modified Laguerre algorithm of [DJLZ97]). The idea is to extract (approximately) from the image of the given polynomial (via compositions of rational functions) a factor whose roots are all real. That factor can be solved with the modified Laguerre algorithm, so we can output good approximations of the real roots of the given polynomial.
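One Dandelin-Gräffe root-squaring step maps a polynomial p to a polynomial q whose roots are the squares of the roots of p. A convolution-based sketch (illustrative; coefficients highest degree first):

```python
import numpy as np

def graeffe_step(coeffs):
    """One root-squaring step: returns q with q(x**2) = (-1)**n * p(x)*p(-x),
    so the roots of q are exactly the squares of the roots of p."""
    p = np.asarray(coeffs, dtype=float)
    n = len(p) - 1
    signs = (-1.0) ** (n - np.arange(n + 1))   # coefficients of p(-x)
    prod = np.convolve(p, p * signs)           # p(x)*p(-x): only even powers survive
    return prod[::2] * (-1.0) ** n             # read off q as a polynomial in x**2
```

After m steps the root magnitudes are separated as |r_i|^(2^m), which is what makes the subsequent extraction of a real-rooted factor tractable.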
In Chapter 4, we present an algorithm based on a matrix version of the Cayley map used in Chapter 3. As our input, we consider the companion matrix of a given polynomial. The Cayley map and selected rational functions are treated as matrix functions. Via composition of matrix functions we generate and approximate the eigenspace associated with the real eigenvalues of the companion matrix, and then we readily approximate the real eigenvalues of the companion matrix of the given polynomial.
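A minimal sketch of a matrix Cayley map of the kind described here (illustrative; it assumes the shift i keeps A + iI nonsingular): real eigenvalues of A are sent to the unit circle, so the real spectrum of the companion matrix becomes the unit-circle spectrum of the image.

```python
import numpy as np

def cayley(A):
    """Matrix Cayley map (A - iI)(A + iI)^{-1}; eigenvalues of A on the
    real axis are mapped onto the unit circle."""
    I = np.eye(A.shape[0])
    return (A - 1j * I) @ np.linalg.inv(A + 1j * I)
```

Composing this with further rational matrix functions is what lets one amplify, and then project onto, the eigenspace associated with the real eigenvalues.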
To simplify the algorithm and to avoid the numerical issues that appear in the computation of high powers of matrices, we use a factorization of the matrix power as a product whose factors involve a primitive root of unity.
Function approximation for option pricing and risk management: methods, theory and applications
PhD Thesis. This thesis investigates the application of function approximation techniques to computationally demanding problems in finance. We focus on the use of Chebyshev interpolation and its multivariate extensions. The main contribution of this thesis is the development of a new pricing method for path-dependent options. In each step of the dynamic programming time-stepping we approximate the value function with Chebyshev polynomials. A key advantage of this approach is that it allows us to shift all model-dependent computations into a pre-computation step. For each time step the method delivers a closed-form approximation of the price function along with the option's delta and gamma. We provide a theoretical error analysis and find conditions that imply explicit error bounds. Numerical experiments confirm the fast convergence of prices and sensitivities. We use the new method to calculate credit exposures of European and path-dependent options for pricing and risk management. The simple structure of the Chebyshev interpolation allows for a highly efficient evaluation of the exposures. We validate the accuracy of the computed exposure profiles numerically for different equity products and a Bermudan swaption. Benchmarking against the least-squares Monte Carlo approach shows that our method delivers a higher accuracy in a faster runtime. We extend the method to efficiently price early-exercise options depending on several risk factors. As an example, we consider the pricing of callable bonds in a hybrid two-factor model. We develop an efficient and stable calibration routine for the model based on our new pricing method. Moreover, we consider the pricing of early-exercise basket options in a multivariate Black-Scholes model. We propose a numerical smoothing in the dynamic programming time-stepping using the smoothing property of a Gaussian kernel. An extensive numerical convergence analysis confirms the efficiency.
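To illustrate the mechanics (not the thesis's pricing method itself): interpolating a function of the underlying at Chebyshev points yields a polynomial whose coefficients give a closed-form approximation, and term-wise differentiation of those coefficients gives sensitivities such as delta. A sketch with NumPy's Chebyshev module; the interval, degree, and the smooth stand-in for a value function are arbitrary illustrative choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

a, b, deg = 80.0, 120.0, 30                    # domain of the underlying, degree
t = np.cos(np.pi * np.arange(deg + 1) / deg)   # Chebyshev-Lobatto nodes on [-1, 1]
x = 0.5 * (b - a) * (t + 1.0) + a              # nodes mapped to [a, b]
v = np.exp(-((x - 100.0) / 20.0) ** 2)         # smooth stand-in for a value function
coef = Ch.chebfit(t, v, deg)                   # interpolating Chebyshev coefficients

def to_unit(s):                                # map [a, b] back to [-1, 1]
    return 2.0 * (s - a) / (b - a) - 1.0

def price(s):                                  # closed-form polynomial evaluation
    return Ch.chebval(to_unit(s), coef)

def delta(s):                                  # derivative via term-wise chebder
    return Ch.chebval(to_unit(s), Ch.chebder(coef)) * 2.0 / (b - a)
```

Because the stand-in function is analytic, the coefficients decay geometrically; for kinked payoffs this fast convergence is lost, which is exactly what the Gaussian-kernel smoothing in the thesis addresses.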
A Multivariate Weierstrass Iterative Rootfinder
We propose an algorithm to compute simultaneously all the solutions of an algebraic system (of n equations in n variables) that define a zero-dimensional variety. This new approach generalises the univariate Weierstrass method. We study the arithmetic complexity of this method, which has quadratic convergence in a neighbourhood of the solutions. Hereafter, we describe a method based on the iteration function of the multivariate Weierstrass method and on the continuation method for computing the roots of polynomial systems. Finally, we describe some numerical experiments with those methods.
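A toy illustration of the continuation idea in the univariate case (an illustrative sketch, not the authors' algorithm): deform an easy start system g(x) = x^n - 1 into the target f, tracking all roots with Newton corrections at each step. The constant gamma is a standard random-complex trick to keep solution paths from colliding; its value here is an arbitrary choice.

```python
import numpy as np

def continuation_roots(f, steps=200, corrections=5, gamma=0.6 + 0.8j):
    """Track the n roots of the homotopy H_t = (1-t)*gamma*g + t*f from the
    known roots of g(x) = x**n - 1 at t=0 to the roots of f at t=1."""
    f = np.asarray(f, dtype=complex)
    n = len(f) - 1
    g = np.zeros(n + 1, dtype=complex)
    g[0], g[-1] = 1.0, -1.0                    # start system x**n - 1
    z = np.exp(2j * np.pi * np.arange(n) / n)  # its roots: the n-th roots of unity
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        h = (1.0 - t) * gamma * g + t * f      # current polynomial on the path
        dh = np.polyder(h)
        for _ in range(corrections):           # Newton corrector at this t
            z = z - np.polyval(h, z) / np.polyval(dh, z)
    return z
```

At t = 1 the homotopy equals f exactly, so the final Newton corrections polish the tracked points into roots of the target polynomial.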