52 research outputs found

    Singular and Ill-Conditioned Linear Systems

    Newton’s Iteration at Nonisolated Solutions

    Newton’s iteration is arguably the most important and fundamental method for solving systems of nonlinear equations, with a long and fascinating history. Its local quadratic convergence to isolated nonsingular solutions is well documented in the literature. It is also well known that the iteration loses its quadratic rate of convergence at nonisolated solutions, if it applies and converges at all, and that the attainable accuracy in numerical computation is dismal. Even though solving systems of nonlinear equations is a standard topic in numerical analysis textbooks, the exposition has always been limited to isolated solutions, yet models with nonisolated solutions frequently arise in applications. To avoid the difficulties at nonisolated solutions, scientific computing practitioners routinely go to great lengths to isolate the solutions by introducing arbitrary auxiliary equations and variables. In this project, we formulate a notion of regularity for nonisolated solutions, establish a novel extension of Newton’s iteration for such solutions, and prove its local quadratic convergence on exact equations along with local linear convergence on perturbed equations with empirical data. Furthermore, we provide a geometric interpretation of the convergence tendency, elaborate on the modeling and applications involving nonisolated solutions, and demonstrate a software implementation with computing examples. Nonisolated solutions of a nonlinear system of equations present a challenge in numerical computation, and they can be highly sensitive to data perturbations. When the system is perturbed, represented with empirical data or solved using floating point arithmetic, a nonisolated solution can be substantially altered or even disappear altogether. However, the proposed extension of Newton’s iteration still converges to a stationary point that approximates an exact solution of the underlying system with accuracy of the same order as the data error.
    In other words, the new version of Newton’s iteration also serves as a regularization mechanism for such an ill-posed zero-finding problem. Newton’s iteration has evolved throughout history, tracing back to the Babylonians, with contributions from François Viète, Joseph Raphson, Thomas Simpson and many others. Replacing the inverse of the Jacobian matrix with certain types of generalized inverses started with Gauss for overdetermined systems. Calling the method “Newton’s iteration” is considered an “enduring myth” by historians. We also elaborate on the historical development and extensions of Newton’s iteration.
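As the abstract notes, the key modification replaces the Jacobian inverse with a generalized inverse. A minimal sketch of such a step, assuming the Moore-Penrose pseudoinverse as the generalized inverse and using the unit circle x^2 + y^2 = 1 as an illustrative system of nonisolated zeros (the system and starting point are my choices, not taken from the project):

```python
import numpy as np

# Illustrative system (my choice): one equation in two unknowns whose zero
# set, the unit circle, consists entirely of nonisolated solutions.
def F(u):
    x, y = u
    return np.array([x * x + y * y - 1.0])

def J(u):
    x, y = u
    return np.array([[2 * x, 2 * y]])   # 1x2 Jacobian, never invertible

u = np.array([1.3, 0.4])                # start near the circle
for _ in range(6):
    # Newton step with the pseudoinverse in place of the inverse
    u = u - np.linalg.pinv(J(u)) @ F(u)

print(u, abs(np.hypot(u[0], u[1]) - 1.0))
```

For this particular system each step is a scalar multiple of the current point, so the iterates stay on the ray through the starting point and the radius follows the Babylonian square-root recurrence; the limit is the nearest point on the circle, consistent with the geometric interpretation described above.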

    Geometric modeling and regularization of algebraic problems

    Discontinuity with respect to data perturbations is common in algebraic computation, where solutions are often highly sensitive. Such problems can be modeled as solving systems of equations at given data parameters. By appending auxiliary equations, the models can be formulated to satisfy four easily verifiable conditions so that the data form complex analytic manifolds on which the solutions maintain their structures and their Lipschitz continuity. When such a problem is given with empirical data, solving the system becomes a least squares problem whose solution uniquely exists and enjoys Lipschitz continuity as long as the data point is in a tubular neighborhood of the manifold. As a result, the singular problem is regularized as a well-posed computational problem.
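A hedged sketch of the least squares formulation, using the unit circle as a stand-in manifold and a Gauss-Newton solver (both are illustrative choices of mine; the paper's four conditions and auxiliary equations are not modeled here):

```python
import numpy as np

# Illustrative setup (my choice): the manifold is the unit circle
# x^2 + y^2 = 1 and the empirical data point (0.8, 0.7) lies near it.
# Enforcing the manifold equation together with data-matching equations
# yields an overdetermined least squares problem min ||F(u)||.
a, b = 0.8, 0.7

def F(u):
    x, y = u
    return np.array([x * x + y * y - 1.0,   # manifold equation
                     x - a,                 # match the empirical data
                     y - b])

def J(u):
    x, y = u
    return np.array([[2 * x, 2 * y],
                     [1.0, 0.0],
                     [0.0, 1.0]])

u = np.array([a, b])                       # start at the data point
for _ in range(20):
    u = u - np.linalg.pinv(J(u)) @ F(u)    # Gauss-Newton step

print(u, np.linalg.norm(J(u).T @ F(u)))   # stationary: J^T F is ~ 0
```

Because the data point sits in a neighborhood of the manifold, the least squares solution exists, is unique, and depends continuously on (a, b), which is the regularization effect the abstract describes.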

    The closedness subspace method for computing the multiplicity structure of a polynomial system

    The multiplicity structure of a polynomial system at an isolated zero is identified with the dual space consisting of differential functionals vanishing on the entire polynomial ideal. Algorithms have been developed for computing dual spaces as the kernels of Macaulay matrices. These previous algorithms face a formidable obstacle in handling Macaulay matrices, whose dimensions grow rapidly as the problem size and the order of the differential functionals increase. This paper presents a new algorithm based on the closedness subspace strategy that substantially reduces the matrix sizes in computing the dual spaces, enabling the computation of multiplicity structures for large problems. Comparisons of timings and memory requirements demonstrate a substantial improvement in computational efficiency.
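For orientation, the baseline Macaulay-matrix computation that the closedness subspace strategy improves on can be sketched in a few lines. The bivariate system below is an illustrative example of my own; the sketch computes the kernel dimensions of Macaulay matrices of growing order, which stabilize at the multiplicity:

```python
import numpy as np

# Illustrative system (my choice): f1 = x^2, f2 = x*y + y^2, with an
# isolated zero at the origin. A polynomial is a coefficient dict
# {(i, j): c} meaning c * x^i * y^j.
F = [{(2, 0): 1.0}, {(1, 1): 1.0, (0, 2): 1.0}]

def dual_space_dim(alpha):
    """Nullity of the order-alpha Macaulay matrix at the origin."""
    # Columns index normalized functionals d_(i,j) with i + j <= alpha;
    # at the origin, d_(i,j)(p) is just the coefficient of x^i y^j in p.
    cols = [(i, j) for i in range(alpha + 1) for j in range(alpha + 1 - i)]
    # Rows index the multiplied polynomials x^a y^b * f with a + b <= alpha - 1.
    rows = [(a, b, f) for f in F
            for a in range(alpha) for b in range(alpha - a)]
    M = np.zeros((len(rows), len(cols)))
    for r, (a, b, f) in enumerate(rows):
        for c, (i, j) in enumerate(cols):
            # coefficient of x^i y^j in x^a y^b * f
            M[r, c] = f.get((i - a, j - b), 0.0)
    return len(cols) - np.linalg.matrix_rank(M)

dims = [dual_space_dim(a) for a in range(1, 5)]
print(dims)   # kernel dimensions stabilize at the multiplicity
```

For this system the dimensions come out as [3, 4, 4, 4], so the multiplicity at the origin is 4. Even in two variables the order-alpha Macaulay matrix already has (alpha+1)(alpha+2)/2 columns, and it is this rapid growth that the closedness subspace method is designed to curb.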

    A Newton's Iteration Converges Quadratically to Nonisolated Solutions Too

    The textbook Newton's iteration is practically inapplicable at nonisolated solutions of unregularized nonlinear systems. With a simple modification, a version of Newton's iteration regains its local quadratic convergence to nonisolated zeros of smooth mappings, assuming the solutions are semiregular as properly defined, regardless of whether the system is square, underdetermined or overdetermined. Furthermore, the iteration serves as a de facto regularization mechanism for computing singular zeros from empirical data. Even if the given system is perturbed so that the nonisolated solution disappears, the iteration still locally converges to a stationary point that approximates a solution of the underlying system with an error bound of the same order as the data accuracy. Geometrically, the iteration approximately converges to the nearest point on the solution manifold. This extension simplifies nonlinear system modeling by eliminating the zero isolation process and enables a wide range of applications in algebraic computation.
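The regularization claim can be reproduced on a toy overdetermined system (my construction, not the paper's): take two copies of the circle equation and perturb one by delta = 1e-6, so the pair has no common zero; a pseudoinverse Newton iteration still settles on a stationary point within O(delta) of the underlying circle.

```python
import numpy as np

delta = 1e-6   # the data error; with it, the two equations are inconsistent

def F(u):
    x, y = u
    e = x * x + y * y - 1.0
    return np.array([e + delta, e])       # perturbed copy and exact copy

def J(u):
    x, y = u
    return np.array([[2 * x, 2 * y],
                     [2 * x, 2 * y]])     # rank 1: the zero set was a curve

u = np.array([1.3, 0.4])
for _ in range(8):
    u = u - np.linalg.pinv(J(u)) @ F(u)   # rank-revealing pseudoinverse step

dist = abs(np.hypot(u[0], u[1]) - 1.0)
print(u, dist)   # stationary point within O(delta) of the exact circle
```

The iteration cannot reach a zero, because none exists, but the stationary point it finds lies about delta/4 away from the exact solution set of the unperturbed system, matching the error bound stated in the abstract.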

    Sensitivity and computation of a defective eigenvalue

    A defective eigenvalue is well documented to be hypersensitive to data perturbations and round-off errors, making it a formidable challenge in numerical computation, particularly when the matrix is known only through approximate data. This paper establishes a finitely bounded sensitivity of a defective eigenvalue with respect to perturbations that preserve the geometric multiplicity and the smallest Jordan block size. Based on this perturbation theory, numerical computation of a defective eigenvalue is regularized as a well-posed least squares problem, so that it can be accurately carried out using floating point arithmetic even if the matrix is perturbed.
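The hypersensitivity being regularized is easy to reproduce. A standard illustration (not code from the paper): perturb an n-by-n Jordan block by epsilon and the defective eigenvalue scatters by roughly epsilon^(1/n).

```python
import numpy as np

n = 4
J = np.diag(np.ones(n - 1), 1)   # n x n Jordan block: eigenvalue 0 with a
                                 # single Jordan chain, i.e. fully defective
E = np.zeros((n, n))
E[-1, 0] = 1e-8                  # one entry perturbed by 1e-8

shift = np.max(np.abs(np.linalg.eigvals(J + E)))
print(shift)                     # about (1e-8)**(1/n) = 1e-2, not 1e-8
```

A perturbation of size 1e-8 moves the eigenvalue of a 4x4 block by about 1e-2, six orders of magnitude larger, which is why a regularized least squares formulation is needed when computing defective eigenvalues from approximate data.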

    A method computing multiple roots of inexact polynomials
