Computing GCRDs of Approximate Differential Polynomials
Differential (Ore) type polynomials with approximate polynomial coefficients
are introduced. These provide a useful representation of approximate
differential operators with a strong algebraic structure, which has been used
successfully in the exact, symbolic, setting. We then present an algorithm for
the approximate Greatest Common Right Divisor (GCRD) of two approximate
differential polynomials, which intuitively is the differential operator whose
solutions are those common to the two input operators. More formally, given two
approximate differential polynomials, we show how to find "nearby"
polynomials which have a non-trivial GCRD.
Here "nearby" is under a suitably defined norm. The algorithm is a
generalization of the SVD-based method of Corless et al. (1995) for the
approximate GCD of regular polynomials. We work on an appropriately
"linearized" differential Sylvester matrix, to which we apply a block SVD. The
algorithm has been implemented in Maple and a demonstration of its robustness
is presented.
Comment: To appear, Workshop on Symbolic-Numeric Computing (SNC'14), July 2014
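The SVD-based method of Corless et al. that this algorithm generalizes can be sketched in a few lines for ordinary polynomials: form the Sylvester matrix of the two inputs and inspect its smallest singular value, which (nearly) vanishes exactly when the polynomials have a (nearly) common root. A minimal illustration, not the paper's Maple implementation; the helper name `sylvester` and the example polynomials are ours:

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f and g (coefficient lists, highest degree first)."""
    n, m = len(f) - 1, len(g) - 1
    S = np.zeros((n + m, n + m))
    for i in range(m):              # m shifted copies of f
        S[i, i:i + n + 1] = f
    for i in range(n):              # n shifted copies of g
        S[m + i, i:i + m + 1] = g
    return S

f = [1.0, -3.0, 2.0]   # (x - 1)(x - 2)
g = [1.0, 2.0, -3.0]   # (x - 1)(x + 3)
sv = np.linalg.svd(sylvester(f, g), compute_uv=False)
print(sv[-1])          # near zero: f and g share the root x = 1
```

A cluster of small singular values likewise signals a higher-degree approximate GCD; the differential case replaces this matrix by a "linearized" differential Sylvester matrix and a block SVD.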
Nearest common root of a set of polynomials: A structured singular value approach
The paper considers the problem of calculating the nearest common root of a polynomial set under perturbations in its coefficients. In particular, we seek the minimum-magnitude perturbation in the coefficients of the polynomial set such that the perturbed polynomials have a common root. It is shown that the problem is equivalent to the solution of a structured singular value (μ) problem arising in robust control, for which numerous techniques are available. It is also shown that the method can be extended to the calculation of an “approximate GCD” of fixed degree by introducing the notion of the generalized structured singular value of a matrix. The work generalizes previous results by the authors involving the calculation of the “approximate GCD” of two polynomials, although the general case considered here is considerably harder and relies on a matrix-dilation approach and several preliminary transformations.
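As a rough illustration of the underlying distance problem (not the paper's μ-based method): the minimal coefficient perturbation making a fixed point z a root of one polynomial is |p(z)| divided by the norm of the power vector (1, z, …, zⁿ), so the nearest common root of a pair can be located by minimizing the combined cost over z. A sketch under our own choices of example polynomials and a naive grid search:

```python
import numpy as np

def perturbation_cost(p, q, z):
    """Minimal norm of independent coefficient perturbations of p and q
    that make z a common root: per polynomial, |p(z)| / ||(1, z, ..., z^n)||."""
    vp = np.abs(z) ** np.arange(len(p))
    vq = np.abs(z) ** np.arange(len(q))
    return np.hypot(np.abs(np.polyval(p, z)) / np.linalg.norm(vp),
                    np.abs(np.polyval(q, z)) / np.linalg.norm(vq))

p = [1.0, -3.0, 2.0]   # roots 1, 2
q = [1.0, 2.0, -3.0]   # roots 1, -3
zs = np.linspace(-4, 4, 8001)
costs = [perturbation_cost(p, q, z) for z in zs]
z_star = zs[int(np.argmin(costs))]
print(z_star)          # close to the shared root z = 1
```

The structured singular value machinery in the paper handles structured perturbations and avoids this brute-force search.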
Computing Dynamic Output Feedback Laws
The pole placement problem asks for feedback laws that feed the output of a
plant governed by a linear system of differential equations back to the
plant's input so that the resulting closed-loop system has a desired set of
eigenvalues. Converting this problem into a question of enumerative geometry,
efficient numerical homotopy algorithms to solve this problem for general
Multi-Input-Multi-Output (MIMO) systems have been proposed recently. While
dynamic feedback laws offer a wider range of use, the realization of the output
of the numerical homotopies as a machine to control the plant in the time
domain has not been addressed before. In this paper we present symbolic-numeric
algorithms to turn the solution to the question of enumerative geometry into a
useful control feedback machine. We report on numerical experiments with our
publicly available software and illustrate its application on various control
problems from the literature.
Comment: 20 pages, 3 figures; the software described in this paper is publicly available via http://www.math.uic.edu/~jan/download.htm
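To fix ideas on the closed-loop setup (a textbook illustration, not the paper's homotopy machinery): under static output feedback u = Ky, the plant x' = Ax + Bu, y = Cx has closed-loop matrix A + BKC. The toy example below, of our own choosing, also hints at why dynamic feedback laws offer a wider range of use:

```python
import numpy as np

# Double integrator: x1' = x2, x2' = u, with measured output y = x1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def closed_loop_eigs(K):
    """Eigenvalues of A + B K C under static output feedback u = K y."""
    return np.linalg.eigvals(A + B @ K @ C)

# For this plant, every static gain leaves the eigenvalue sum (the trace)
# at zero, so no static output feedback can place both poles in the open
# left half-plane; dynamic feedback adds the missing degrees of freedom.
print(closed_loop_eigs(np.array([[-1.0]])))   # +/- 1j
```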
Over-constrained Weierstrass iteration and the nearest consistent system
We propose a generalization of the Weierstrass iteration for over-constrained
systems of equations and we prove that the proposed method is the Gauss-Newton
iteration to find the nearest system which has at least a prescribed number of
common roots and which is obtained via a perturbation of prescribed structure.
In the univariate
case we show the connection of our method to the optimization problem
formulated by Karmarkar and Lakshman for the nearest GCD. In the multivariate
case we generalize the expressions of Karmarkar and Lakshman, and give
explicitly several iteration functions to compute the optimum.
The arithmetic complexity of the iterations is detailed.
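For reference, the classical (square, univariate) Weierstrass iteration that this work generalizes refines all roots of a monic polynomial simultaneously. A minimal sketch, with starting points and an example polynomial of our own choosing:

```python
import numpy as np

def weierstrass(p, iters=100):
    """Weierstrass (Durand-Kerner) iteration for all roots of a monic
    polynomial p, coefficients listed highest degree first."""
    n = len(p) - 1
    z = (0.4 + 0.9j) ** np.arange(n)   # standard distinct complex start points
    for _ in range(iters):
        for k in range(n):
            # Weierstrass correction: p(z_k) / prod_{j != k} (z_k - z_j)
            w = np.polyval(p, z[k]) / np.prod(z[k] - np.delete(z, k))
            z[k] = z[k] - w
    return z

roots = weierstrass([1.0, -6.0, 11.0, -6.0])   # (x-1)(x-2)(x-3)
print(np.sort(roots.real))                     # approximately [1, 2, 3]
```

The over-constrained generalization in the paper replaces this square-system fixed point with a Gauss-Newton step toward the nearest consistent system.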
Approximate zero polynomials of polynomial matrices and linear systems
This paper introduces the notions of the approximate and the optimal approximate zero polynomial of a polynomial matrix by deploying recent results on the approximate GCD of a set of polynomials (Karcanias et al., 2006) and the exterior algebra representation of polynomial matrices (Karcanias and Giannakopoulos, 1984). The results provide a new definition for the "approximate", or "almost", zeros of polynomial matrices and provide the means for computing the distance from non-coprimeness of a polynomial matrix. The computational framework is expressed as a distance problem in a projective space. The general framework defined for polynomial matrices provides a new characterization of approximate zeros and decoupling zeros (Karcanias et al., 1983; Karcanias and Giannakopoulos, 1984) of linear systems and a process leading to the computation of their optimal versions. The use of restriction pencils provides the means for defining the distance of state feedback (output injection) orbits from uncontrollable (unobservable) families of systems, as well as the invariant versions of the "approximate decoupling polynomials". The overall framework provides the means for introducing measures for the distance of a system from different families of uncontrollable or unobservable systems, which may be feedback dependent or feedback invariant, as well as the notion of "approximate decoupling polynomials".
GPGCD: An iterative method for calculating approximate GCD of univariate polynomials
We present an iterative algorithm for calculating the approximate greatest
common divisor (GCD) of univariate polynomials with real or complex
coefficients. For a given pair of polynomials and a degree, our algorithm finds
a pair of polynomials which has a GCD of the given degree and whose
coefficients are perturbed from those of the original inputs, making the
perturbations as small as possible, along with the GCD. The problem of
approximate GCD is transferred to a constrained minimization problem, which is
then solved with the so-called modified Newton method, a generalization of
the gradient-projection method, by searching for the solution iteratively. We
demonstrate that, in some test cases, our algorithm calculates an approximate
GCD with perturbations as small as those calculated by a method based on the
structured total least norm (STLN) method and by the UVGCD method, while
running faster than their implementations by up to approximately 30 and 10
times, respectively. We also show that our algorithm properly handles some
ill-conditioned polynomials which have a GCD with a small or large leading
coefficient.
Comment: Preliminary versions have been presented as doi:10.1145/1576702.1576750 and arXiv:1007.183
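The constrained-minimization formulation can be sketched with a generic least-squares solver standing in for the paper's modified Newton method: write the perturbed pair as u·h and v·h, where h is the sought GCD of the prescribed degree, and minimize the coefficient perturbation. A rough sketch; the example polynomials, the monic normalization of h, the initial guess, and the choice of `scipy.optimize.least_squares` are all ours:

```python
import numpy as np
from scipy.optimize import least_squares

f = np.array([1.0, -3.0, 2.0])   # (x - 1)(x - 2)
g = np.array([1.0, 2.0, -3.0])   # (x - 1)(x + 3)

def residual(x):
    """Perturbation f - u*h and g - v*h, with h = x + h0 kept monic
    to fix the scaling ambiguity between the cofactors and the GCD."""
    u, v, h = x[0:2], x[2:4], np.array([1.0, x[4]])
    return np.concatenate([np.polymul(u, h) - f,
                           np.polymul(v, h) - g])

x0 = np.array([1.0, -2.1, 1.0, 2.9, -0.9])   # rough guess near the factors
sol = least_squares(residual, x0)
print(sol.x[4])   # h0 near -1, i.e. common factor x - 1
```

Here the residual can be driven to zero because f and g share an exact factor; for noisy inputs the minimizer returns the nearest pair with a GCD of the chosen degree.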
An alternating projection algorithm for the “approximate” GCD calculation
In the paper an approach is proposed for calculating the “best” approximate GCD of a set of coprime polynomials. The algorithm is motivated by the factorisation of the Sylvester resultant matrix of polynomial sets with nontrivial GCD. In the (generic) case of coprime polynomial sets considered here, the aim is to minimise the norm of the residual error matrix of the inexact factorisation in order to compute the “optimal” approximate GCD. A least-squares alternating projection algorithm is proposed as an alternative to the solution of the corresponding optimisation problem via nonlinear programming techniques. The special structure of the problem in this case, however, means that the algorithm can be reduced to a sequence of standard subspace projections, and hence no need arises to compute gradient vectors, Hessian matrices or optimal step-lengths. An estimate of the asymptotic convergence rate of the algorithm is finally established via the inclination of two subspaces.
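The flavour of alternating projections, with a convergence rate governed by the inclination (angle) between two subspaces, can be seen in a toy example with two lines in the plane; this is our own construction, not the paper's algorithm:

```python
import numpy as np

def project(x, d):
    """Orthogonal projection of x onto the line spanned by unit vector d."""
    return np.dot(x, d) * d

theta = 0.4                                  # angle between the two lines
u = np.array([1.0, 0.0])
v = np.array([np.cos(theta), np.sin(theta)])

x = np.array([3.0, 4.0])
norms = []
for _ in range(25):
    x = project(project(x, u), v)            # one alternating cycle
    norms.append(np.linalg.norm(x))

# The iterates shrink toward the intersection {0}; each full cycle
# contracts the norm by cos(theta)**2, the classical rate set by the
# inclination of the two subspaces.
print(norms[-1] / norms[-2])                 # ~ cos(theta)**2
```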