350 research outputs found

    A family of simultaneous zero-finding methods

    Applying Hansen-Patrick's formula for solving the single equation f(z) = 0 to a suitable function appearing in the classical Weierstrass method, two one-parameter families of iteration functions for the simultaneous approximation of all simple and multiple zeros of a polynomial are derived. It is shown that all methods of these families have fourth order of convergence. Some computational aspects of the proposed methods and numerical examples are given.
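The classical Weierstrass method that this family builds on updates all root approximations at once. A minimal sketch of that base iteration (the Durand-Kerner/Weierstrass step, not the fourth-order variants derived in the paper; the polynomial and starting points are illustrative assumptions):

```python
def weierstrass_step(zs, p):
    # one Weierstrass (Durand-Kerner) iteration for a monic polynomial p:
    # z_i <- z_i - p(z_i) / prod_{j != i} (z_i - z_j)
    new = []
    for i, zi in enumerate(zs):
        denom = 1
        for j, zj in enumerate(zs):
            if j != i:
                denom *= zi - zj
        new.append(zi - p(zi) / denom)
    return new

# simultaneously approximate all roots of z^3 - 1 (the cube roots of unity)
p = lambda z: z**3 - 1
zs = [0.4 + 0.9j, 0.9 + 0.5j, -0.1 - 0.8j]   # distinct complex starting points
for _ in range(30):
    zs = weierstrass_step(zs, p)
```

After a few iterations every element of `zs` is close to one of the three cube roots of unity; the families in the paper accelerate this second-order scheme to fourth order.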

    Improved estimation of Fourier coefficients for ill-posed inverse problems

    In this dissertation we present and solve an ill-posed inverse problem that involves reconstructing a function f(x), or its Fourier coefficients, from observed values of the function. The observations of f(x) are made at n equidistant points on the unit interval, with p observations at each point, and are affected by a random error with a known distribution. First, we present a very simple estimator for the Fourier coefficients of f(x). We then present an iterative algorithm for improving this estimator, and show that the improved estimator is a simplified and improved version of the maximum likelihood estimator. Second, we introduce the mean squared error (MSE) of the estimators, which is the main measure of estimator performance. We show that a singly iterated estimator has a smaller MSE than a non-iterated estimator, and that a multiply iterated estimator has a smaller MSE than a singly iterated one. We also prove that the errors in estimating the Fourier coefficients by the singly and multiply improved methods are normally distributed. Third, we prove a theorem showing that, as the sample size goes to infinity, the MSE of our estimator asymptotically approaches the theoretical minimum, so our results are theoretically the best possible. Fourth, we perform simulations that numerically approximate the MSE for a given choice of f, error distribution, and number of observation points. We approximate the MSE for the non-iterated coefficient estimator as well as for the singly and multiply iterated ones, and show that the MSE indeed decreases with each iteration. We also plot an error histogram in each case, showing that the errors are normally distributed. Finally, we look at some ways in which the problem can be extended, including working in multiple dimensions, taking measurements of f at random points, or both.
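A minimal sketch of the observation model and the "very simple" non-iterated estimator idea (averaging the p replicates at each grid point and taking discrete Fourier averages). The test function, noise level, and sample sizes are illustrative assumptions, not the dissertation's; the improved iterative estimator is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# true function f(x) = 1 + 2*cos(2*pi*x): Fourier coefficients c_0 = 1, c_1 = 1
n, p = 200, 50                       # n equidistant points, p observations each
x = np.arange(n) / n
noise = rng.normal(0.0, 0.5, (p, n)) # known error distribution (assumed normal)
y = (1 + 2 * np.cos(2 * np.pi * x))[None, :] + noise
ybar = y.mean(axis=0)                # average the p replicates at each point

def coeff(k):
    # simple (non-iterated) estimator of the k-th Fourier coefficient
    return np.mean(ybar * np.exp(-2j * np.pi * k * x))

c0_hat, c1_hat = coeff(0), coeff(1)
```

With p*n = 10000 noisy samples, `c0_hat` and `c1_hat` land close to the true values 1 and 1; iterating on this estimate is what reduces the MSE further in the dissertation.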

    TR-2011003: Partial Fraction Decomposition, Sylvester Matrices, Convolution and Newton's Iteration

    Both the Sylvester matrix and the convolution of two polynomials are defined by the pair of polynomials. If one of the polynomials has small degree, the associated Sylvester linear system can be solved fast by using the partial fraction decomposition (PFD) interpretation of the convolution equation. This immediately simplifies the refinement of an approximate convolution by means of Newton's iteration, into which we also incorporate PFD refinement techniques or, alternatively, a least-squares solution of a linear system associated with the convolution. The process extends naturally to polynomial factorization and root-finding.
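The link between convolution and a structured linear system can be made concrete: multiplying polynomials convolves their coefficient vectors, and recovering one factor from the product amounts to solving a banded (Toeplitz-structured) linear system, here by least squares as one of the options the abstract mentions. The polynomials are illustrative assumptions; this sketch does not reproduce the report's PFD-based fast solver:

```python
import numpy as np

# coefficient vectors, lowest degree first: c = a * b (product = convolution)
a = np.array([1.0, 2.0, 3.0])        # 3x^2 + 2x + 1
b = np.array([4.0, 5.0])             # 5x + 4
c = np.convolve(a, b)                # coefficients of the product polynomial

# recover b from a and c: build the convolution matrix of a (each column is
# a shifted copy of a) and solve the linear system by least squares
m = len(c) - len(a) + 1              # number of unknown coefficients of b
A = np.zeros((len(c), m))
for j in range(m):
    A[j:j + len(a), j] = a
b_hat, *_ = np.linalg.lstsq(A, c, rcond=None)
```

Here `b_hat` matches `b` up to rounding; when `c` is only an approximate product, the same least-squares solve gives the correction step that Newton's iteration can then refine.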
    • ā€¦
    corecore