A primal-dual formulation for certifiable computations in Schubert calculus
Formulating a Schubert problem as the solutions to a system of equations in
either Pl\"ucker space or in the local coordinates of a Schubert cell typically
involves more equations than variables. We present a novel primal-dual
formulation of any Schubert problem on a Grassmannian or flag manifold as a
system of bilinear equations with the same number of equations as variables.
This formulation enables numerical computations in the Schubert calculus to be
certified using algorithms based on Smale's \alpha-theory.
Comment: 21 pages
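As a concrete illustration of the \alpha-theory test this abstract refers to, the sketch below computes Smale's \alpha = \beta\gamma for a univariate polynomial; when \alpha falls below the constant (13 - 3\sqrt{17})/4 \approx 0.1577, Newton's method started at the point is guaranteed to converge quadratically to a true zero. This is a generic univariate illustration under our own naming, not the paper's primal-dual formulation.

```python
import math

ALPHA0 = (13 - 3 * math.sqrt(17)) / 4  # Smale's constant, ~0.157671

def taylor_coeffs(coeffs, z):
    """Taylor coefficients c_k = f^(k)(z)/k! of the polynomial with
    ascending coefficients `coeffs`, via repeated synthetic division."""
    a = list(coeffs)
    c = []
    while a:
        h = 0.0
        horner = []
        for aj in reversed(a):          # Horner evaluation at z; the
            h = h * z + aj              # partial values form the quotient
            horner.append(h)
        c.append(h)                     # remainder = current poly at z
        a = horner[:-1][::-1]           # quotient, ascending order
    return c

def smale_alpha(coeffs, z):
    """alpha(f, z) = beta * gamma; alpha < ALPHA0 certifies that Newton
    iteration from z converges quadratically to a zero of f."""
    c = taylor_coeffs(coeffs, z)
    if c[1] == 0:
        return math.inf                 # f'(z) = 0: no certificate
    beta = abs(c[0] / c[1])             # length of one Newton step
    gamma = max((abs(ck / c[1]) ** (1.0 / (k - 1))
                 for k, ck in enumerate(c) if k >= 2), default=0.0)
    return beta * gamma
```

For f(x) = x^2 - 2 at z = 1.5 this gives \alpha = 0.25/9 \approx 0.028 < \alpha_0, certifying convergence of Newton's method from 1.5 to \sqrt{2}.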
The complexity and geometry of numerically solving polynomial systems
These pages contain a short overview on the state of the art of efficient
numerical analysis methods that solve systems of multivariate polynomial
equations. We focus on the work of Steve Smale, who initiated this research
framework, and on his collaboration with Michael Shub, which set the
foundations of this approach to polynomial system solving, culminating in the
more recent advances of Carlos Beltran, Luis Miguel Pardo, Peter Buergisser,
and Felipe Cucker.
A stable, polynomial-time algorithm for the eigenpair problem
This publication is freely accessible with the permission of the rights owner, under an Alliance licence and a national licence funded by the DFG (German Research Foundation).
We describe algorithms for computing eigenpairs (eigenvalue-eigenvector pairs) of a complex n×n matrix A. These algorithms are numerically stable, strongly accurate, and theoretically efficient (i.e., polynomial-time). We do not believe they outperform in practice the algorithms currently used for this computational problem. The merit of our paper is to give a positive answer to a long-standing open problem in numerical linear algebra.
DFG grant BU 1371/2-2, Smoothed Analysis of Condition Numbers
Complexity of Sparse Polynomial Solving 2: Renormalization
Renormalized homotopy continuation on toric varieties is introduced as a tool
for solving sparse systems of polynomial equations, or sparse systems of
exponential sums. The cost of continuation depends on a renormalized condition
length, defined as a line integral of the condition number along all the lifted
renormalized paths.
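Stripped of the renormalization and toric machinery, plain homotopy continuation can be sketched in a few lines: deform a start system g into the target f along H(z, t) = (1 - t) g(z) + t f(z) and correct with Newton at each step, the previous solution seeding the next. The sketch below does this for univariate polynomials; it is a generic illustration under our own naming, not the paper's renormalized algorithm.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def track_root(g, f, z0, steps=200, newton_iters=3):
    """Track one root of the straight-line homotopy
    H(z, t) = (1 - t) g(z) + t f(z) from t = 0 to t = 1.
    `g`, `f` are ascending coefficient arrays; z0 is a root of g.
    No explicit predictor: the previous solution seeds Newton."""
    g = np.asarray(g, dtype=complex)
    f = np.asarray(f, dtype=complex)
    n = max(len(g), len(f))
    g = np.pad(g, (0, n - len(g)))       # pad to a common degree
    f = np.pad(f, (0, n - len(f)))
    z = complex(z0)
    for i in range(1, steps + 1):
        t = i / steps
        h = (1 - t) * g + t * f          # coefficients of H(., t)
        dh = P.polyder(h)
        for _ in range(newton_iters):    # corrector: Newton on H(., t)
            z = z - P.polyval(z, h) / P.polyval(z, dh)
    return z
```

Tracking the root z0 = 1 of g(z) = z^2 - 1 into f(z) = z^2 - 4 follows the path z(t) = \sqrt{1 + 3t} and ends near 2; the cost of such tracking, measured along the path through the condition number, is what a condition length controls.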
The theory developed in this paper leads to a continuation algorithm tracking
all the solutions between two generic systems with the same structure. The
algorithm is randomized, in the sense that it follows a random path between the
two systems. The probability of success is one. In order to produce an expected
cost bound, several invariants depending solely on the supports of the
equations are introduced. For instance, the mixed area is a quermassintegral
that generalizes surface area in the same way that mixed volume generalizes
ordinary volume. The facet gap measures, for each direction in the 0-fan, how
close the supporting hyperplane is to the nearest vertex. Once the supports are
fixed, the expected cost depends on the input coefficients solely through two
invariants: the renormalized toric condition number and the imbalance of the
absolute values of the coefficients. This leads to a non-uniform complexity
bound for polynomial solving in terms of those two invariants. Up to
logarithms, the expected cost is quadratic in the first invariant and linear in
the second.
Comment: 90 pages. Major revision of the previous version
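For intuition on the mixed volume mentioned above (the mixed area of the paper is a further quermassintegral), the toy sketch below computes the planar mixed volume of two axis-aligned boxes from the polarization identity vol(P + Q) = vol(P) + MV(P, Q) + vol(Q); by Bernstein's theorem, MV counts the solutions of a generic sparse system with those Newton polytopes. The setup and names here are ours, not the paper's.

```python
def mixed_volume_boxes(a, b, c, d):
    """Planar mixed volume MV(P, Q) of the axis-aligned boxes
    P = [0, a] x [0, b] and Q = [0, c] x [0, d], via
    MV(P, Q) = area(P + Q) - area(P) - area(Q)."""
    area_sum = (a + c) * (b + d)   # Minkowski sum of boxes is a box
    return area_sum - a * b - c * d
```

For two unit squares this gives MV = 2: a generic system of two polynomials of degree (1, 1) in each variable has two solutions, whereas the Bezout bound for total degree 2 would predict four.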