
    Ultimate Polynomial Time

    The class $\mathcal{UP}$ of `ultimate polynomial time' problems over $\mathbb{C}$ is introduced; it contains the class $\mathcal{P}$ of polynomial time problems over $\mathbb{C}$. The $\tau$-Conjecture for polynomials implies that $\mathcal{UP}$ does not contain the class of non-deterministic polynomial time problems definable without constants over $\mathbb{C}$. This latter statement implies that $\mathcal{P} \ne \mathcal{NP}$ over $\mathbb{C}$. A notion of `ultimate complexity' of a problem is suggested; it provides lower bounds for the complexity of structured problems.
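    The chain of implications described in the abstract can be summarized as follows (a schematic restatement; $\mathcal{NP}^{0}_{\mathbb{C}}$ is used here as shorthand for the constant-free non-deterministic class, and this notation is ours, not necessarily the paper's):

        % Schematic summary of the implications stated in the abstract.
        \[
          \text{$\tau$-Conjecture}
          \;\Longrightarrow\;
          \mathcal{NP}^{0}_{\mathbb{C}} \not\subseteq \mathcal{UP}
          \;\Longrightarrow\;
          \mathcal{P} \ne \mathcal{NP} \ \text{over } \mathbb{C},
        \]
        % where the second implication uses the inclusion
        % $\mathcal{P} \subseteq \mathcal{UP}$ stated in the abstract.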

    Computing Multi-Homogeneous Bezout Numbers is Hard

    The multi-homogeneous Bezout number is a bound for the number of solutions of a system of multi-homogeneous polynomial equations, in a suitable product of projective spaces. Given an arbitrary, not necessarily multi-homogeneous, system, one can ask for the optimal multi-homogenization that would minimize the Bezout number. In this paper, it is proved that the problem of computing, or even estimating, the optimal multi-homogeneous Bezout number is NP-hard. In terms of the approximation theory of combinatorial optimization, the problem of computing the best multi-homogeneous structure does not belong to APX, unless P = NP. Moreover, polynomial time algorithms for estimating the minimal multi-homogeneous Bezout number up to a fixed factor cannot exist, even in a randomized setting, unless BPP contains NP.
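    As a point of reference for the quantity whose minimization the paper proves hard: fix a partition of the $n$ affine variables into groups of sizes $k_1, \dots, k_m$, and let $d_{ij}$ be the degree of the $i$-th equation in the $j$-th group. The multi-homogeneous Bezout number for this grouping is the coefficient of $z_1^{k_1} \cdots z_m^{k_m}$ in $\prod_{i=1}^{n} \bigl(\sum_{j=1}^{m} d_{ij} z_j\bigr)$. The Python sketch below (names are illustrative, not from the paper) computes this coefficient by brute-force expansion, which is feasible only for small examples; the hardness result concerns the search over all groupings.

        from collections import defaultdict

        def multihomogeneous_bezout(degrees, group_sizes):
            """Multi-homogeneous Bezout number for one fixed grouping of variables.

            degrees[i][j]  -- degree of equation i in variable group j
            group_sizes[j] -- number of affine variables in group j
            (the number of equations should equal sum(group_sizes))

            The bound is the coefficient of z_1**k_1 * ... * z_m**k_m in
            prod_i (sum_j degrees[i][j] * z_j), computed by brute-force expansion.
            """
            m = len(group_sizes)
            poly = {(0,) * m: 1}                    # exponent tuple -> coefficient
            for row in degrees:
                new_poly = defaultdict(int)
                for expo, coeff in poly.items():
                    for j, d in enumerate(row):
                        if d == 0:
                            continue
                        e = list(expo)
                        e[j] += 1
                        if e[j] > group_sizes[j]:   # can never reach the target exponent
                            continue
                        new_poly[tuple(e)] += coeff * d
                poly = dict(new_poly)
            return poly.get(tuple(group_sizes), 0)

        # Two bilinear equations, one variable in each of two groups:
        # the multi-homogeneous bound is 2, versus the classical Bezout bound 2 * 2 = 4.
        print(multihomogeneous_bezout([[1, 1], [1, 1]], [1, 1]))   # -> 2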

    Tangent Graeffe Iteration

    Graeffe iteration was the algorithm of choice for solving univariate polynomials in the nineteenth and early twentieth centuries. In this paper, a new variation of Graeffe iteration is given, suited to the IEEE floating-point arithmetic of modern digital computers. We prove that under a certain generic assumption the proposed algorithm converges. We also estimate the error after N iterations and the running cost. The main ideas from which this algorithm is built are: classical Graeffe iteration and Newton diagrams, changes of scale (renormalization), and the replacement of a difference technique by a differentiation one. The algorithm was implemented successfully, and a number of numerical experiments are displayed.
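    For context, here is a minimal sketch of the classical Graeffe (root-squaring) step that the paper starts from, not of the tangent, renormalized variant it introduces; the code and names are illustrative only. One step maps $p$ to a polynomial $q$ with $q(x^2) = (-1)^n p(x)\,p(-x)$, so the roots of $q$ are the squares of the roots of $p$.

        import numpy as np

        def graeffe_step(coeffs):
            """One classical Graeffe (Dandelin-Graeffe) root-squaring step.

            coeffs: coefficients of p in ascending order, p(x) = sum coeffs[i] * x**i.
            Returns the ascending coefficients of q with q(x**2) = (-1)**n * p(x) * p(-x),
            so the roots of q are the squares of the roots of p.
            """
            c = np.asarray(coeffs, dtype=float)
            n = len(c) - 1                              # degree of p
            c_neg = c * (-1.0) ** np.arange(n + 1)      # coefficients of p(-x)
            prod = np.convolve(c, c_neg)                # p(x) * p(-x): only even powers survive
            return (-1.0) ** n * prod[::2]              # coefficients of x**0, x**2, ...

        # Example: p(x) = (x - 1)(x - 2) = 2 - 3x + x**2.
        # After one step the roots should be 1 and 4.
        q = graeffe_step([2.0, -3.0, 1.0])
        print(np.roots(q[::-1]))                        # np.roots wants descending order; ~[4., 1.]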

    On the Curvature of the Central Path of Linear Programming Theory

    We prove a linear bound on the average total curvature of the central path of linear programming theory, in terms of the number of independent variables of the primal problem and independent of the number of constraints. Comment: 24 pages. This is a fully revised version; the last section of the paper was rewritten for clarity.
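    For reference, the objects in the statement admit the following standard description (textbook definitions, not quoted from the paper): for a primal problem $\min\{c^T x : Ax = b,\ x \ge 0\}$, the central path is the curve $\mu \mapsto x(\mu)$ of minimizers of the log-barrier family, and the total curvature is the arc length of its unit-tangent (Gauss) image.

        % Standard definitions (not quoted from the paper).
        \[
          x(\mu) \;=\; \operatorname*{arg\,min}_{Ax = b,\ x > 0}
          \Bigl( c^{T} x \;-\; \mu \sum_{i=1}^{n} \log x_i \Bigr),
          \qquad \mu > 0,
        \]
        \[
          \kappa \;=\; \int \Bigl\| \frac{dT}{ds} \Bigr\|\, ds,
          \qquad T \;=\; \frac{\dot{x}}{\|\dot{x}\|},
        \]
        % where s is the arc-length parameter; the paper bounds the average of
        % this quantity linearly in the number of independent variables.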

    Adaptive Step Size Selection for Homotopy Methods to Solve Polynomial Equations

    Given a $C^1$ path of systems of homogeneous polynomial equations $f_t$, $t \in [a,b]$, and an approximation $x_a$ to a zero $\zeta_a$ of the initial system $f_a$, we show how to adaptively choose the step size for a Newton-based homotopy method so that we approximate the lifted path $(f_t, \zeta_t)$ in the space of (problem, solution) pairs. The total number of Newton iterations is bounded in terms of the length of the lifted path in the condition metric.
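    A toy illustration of Newton-based path following with adaptive steps, for a single scalar homotopy rather than a system of homogeneous equations: the step-size rule below is a crude accept/reject heuristic standing in for the condition-metric criterion of the paper, and all names are illustrative.

        def follow_path(f, dfdx, dfdt, x0, t0=0.0, t1=1.0, h0=0.1, tol=1e-10):
            """Toy Newton-based homotopy continuation for a scalar family f(x, t) = 0.

            f, dfdx, dfdt: the homotopy and its partial derivatives.
            x0: approximate zero of f(., t0).
            Step size: halve on a failed Newton correction, grow on success
            (a stand-in for the condition-metric rule of the paper).
            """
            t, x, h = t0, x0, h0
            while t < t1:
                h = min(h, t1 - t)
                # Euler predictor along the solution curve: dx/dt = -f_t / f_x.
                x_pred = x - h * dfdt(x, t) / dfdx(x, t)
                t_new = t + h
                # Newton corrector at the new parameter value.
                x_new, ok = x_pred, False
                for _ in range(5):
                    step = f(x_new, t_new) / dfdx(x_new, t_new)
                    x_new -= step
                    if abs(step) < tol:
                        ok = True
                        break
                if ok:
                    t, x = t_new, x_new
                    h *= 1.5          # accept and cautiously enlarge the step
                else:
                    h *= 0.5          # reject and retry with a smaller step
                    if h < 1e-14:
                        raise RuntimeError("step size underflow")
            return x

        # Example: deform x**2 - 1 (root x = 1) into x**2 - 4 (root x = 2).
        f    = lambda x, t: x**2 - (1.0 + 3.0 * t)
        dfdx = lambda x, t: 2.0 * x
        dfdt = lambda x, t: -3.0
        print(follow_path(f, dfdx, dfdt, x0=1.0))   # ~2.0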

    Newton Method on Riemannian Manifolds: Covariant Alpha-Theory

    In this paper we study quantitative aspects of Newton's method for finding zeros of mappings $f: M_n \to \mathbb{R}^n$ and vector fields $X: M_n \to TM_n$ …
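    As a small, concrete instance of the setting (Newton's method for a vector field on a manifold), the sketch below applies projected Newton steps to the vector field $X(x) = P_x(Ax)$ on the unit sphere, whose zeros are the eigenvectors of a symmetric matrix $A$; this is a standard textbook example using the metric projection as retraction, not the covariant alpha-theory of the paper, and the names are illustrative.

        import numpy as np

        def riemannian_newton_sphere(A, x, iters=10):
            """Newton's method on the unit sphere for the vector field
            X(x) = P_x(A x), with P_x = I - x x^T the projection onto the
            tangent space at x.  Zeros of X are eigenvectors of the
            symmetric matrix A.  A toy instance of Newton for vector
            fields on manifolds."""
            n = len(x)
            x = x / np.linalg.norm(x)
            for _ in range(iters):
                P = np.eye(n) - np.outer(x, x)      # tangent-space projector
                lam = x @ A @ x                     # Rayleigh quotient
                X = P @ (A @ x)                     # the vector field at x
                # Newton equation on the tangent space: P (A - lam I) P delta = -X,
                # solved in the least-squares sense (the operator is singular along x).
                H = P @ (A - lam * np.eye(n)) @ P
                delta, *_ = np.linalg.lstsq(H, -X, rcond=None)
                delta = P @ delta                   # keep the update tangent
                x = x + delta
                x = x / np.linalg.norm(x)           # retraction back to the sphere
            return x

        # Example: converges to an eigenvector of A from a reasonable start.
        A = np.array([[2.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        v = riemannian_newton_sphere(A, np.array([1.0, 0.2, -0.1]))
        print(np.linalg.norm(A @ v - (v @ A @ v) * v))   # residual ~ 0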