
    Minimizing Rational Functions by Exact Jacobian SDP Relaxation Applicable to Finite Singularities

    This paper considers the optimization problem of minimizing a rational function. We reformulate this problem as a polynomial optimization problem via the technique of homogenization. The two problems are shown to be equivalent under some generic conditions. The exact Jacobian SDP relaxation method proposed by Nie is used to solve the resulting polynomial optimization problem. We also prove that the nonsingularity assumption in Nie's method can be weakened to the finiteness of the singularities. Some numerical examples are given to illustrate the efficiency of our method. (Comment: 23 pages.)
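The homogenization step can be sketched in a few lines: a polynomial p(x) of degree at most d becomes the homogeneous polynomial x0^d · p(x/x0), and setting x0 = 1 recovers the original. The polynomials below are illustrative examples, not taken from the paper.

```python
def homogenize(coeffs, D):
    """Homogenize a univariate polynomial sum(c[k] * x**k) to total degree D.
    Returns a function (x, x0) -> value of the homogenized polynomial."""
    assert D >= len(coeffs) - 1
    def p_h(x, x0):
        return sum(c * x ** k * x0 ** (D - k) for k, c in enumerate(coeffs))
    return p_h

# f(x) = p(x)/q(x) with p = x^2 + 1, q = x^4 + x^2 + 1 (hypothetical example)
p = [1, 0, 1]            # x^2 + 1, ascending coefficients
q = [1, 0, 1, 0, 1]      # x^4 + x^2 + 1
D = max(len(p), len(q)) - 1   # common homogenization degree

p_h = homogenize(p, D)
q_h = homogenize(q, D)

# Setting x0 = 1 recovers the original values, so minimizing the homogeneous
# ratio on the slice x0 = 1 matches the original rational problem.
x = 0.7
assert abs(p_h(x, 1.0) - (x ** 2 + 1)) < 1e-12
assert abs(q_h(x, 1.0) - (x ** 4 + x ** 2 + 1)) < 1e-12

# The ratio is invariant under scaling (x, x0) -> (t*x, t*x0) because both
# parts are homogeneous of the same degree D:
t = 3.0
assert abs(p_h(t * x, t) / q_h(t * x, t) - p_h(x, 1.0) / q_h(x, 1.0)) < 1e-12
```

Because the ratio is scale-invariant, the rational problem can be restated over a compact slice such as the unit sphere, which is what makes a polynomial SDP relaxation applicable.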

    A Subdivision Method for Computing Nearest Gcd with Certification

    A new subdivision method for computing the nearest univariate gcd is described and analyzed. It is based on an exclusion test and an inclusion test. The exclusion test in a cell exploits the Taylor expansion of the polynomial at the center of the cell. The inclusion test uses Smale's α-theorems to certify the existence and uniqueness of a solution in a cell. Under the condition of simple roots for the distance minimization problem, we analyze the complexity of the algorithm in terms of a condition number, which is the inverse of the distance to the set of degenerate systems. We report on experiments with representative examples to illustrate the behavior of the algorithm.
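The exclusion half of such a scheme is easy to sketch for one variable: a cell centered at c with radius r contains no root of p if |p(c)| exceeds the Taylor tail bound Σ_{k≥1} |p^(k)(c)| r^k / k!. The sketch below isolates the real roots of a sample polynomial; the inclusion test via Smale's α-theorem is omitted, and all names are illustrative.

```python
from math import factorial

def poly_eval(coeffs, x):
    """Evaluate p(x) for ascending coefficients [c0, c1, ...] by Horner's rule."""
    v = 0.0
    for c in reversed(coeffs):
        v = v * x + c
    return v

def poly_deriv(coeffs):
    """Coefficients of p'(x)."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def excludes(coeffs, center, radius):
    """Taylor-based exclusion test: the cell [center-r, center+r] has no root
    of p if |p(center)| > sum_{k>=1} |p^(k)(center)| * r^k / k!."""
    d, bound, k = list(coeffs), 0.0, 0
    while len(d) > 1:
        d = poly_deriv(d)
        k += 1
        bound += abs(poly_eval(d, center)) * radius ** k / factorial(k)
    return abs(poly_eval(coeffs, center)) > bound

def subdivide(coeffs, a, b, tol=1e-6):
    """Return the small cells of [a, b] that the exclusion test cannot discard;
    every real root of p in [a, b] lies inside their union."""
    cells, kept = [(a, b)], []
    while cells:
        lo, hi = cells.pop()
        c, r = (lo + hi) / 2, (hi - lo) / 2
        if excludes(coeffs, c, r):
            continue
        if hi - lo < tol:
            kept.append((lo, hi))
        else:
            cells += [(lo, c), (c, hi)]
    return kept

# p(x) = x^2 - 2 on [0, 2]: the surviving cells cluster around sqrt(2)
cells = subdivide([-2.0, 0.0, 1.0], 0.0, 2.0)
assert any(lo <= 2 ** 0.5 <= hi for lo, hi in cells)
assert all(abs((lo + hi) / 2 - 2 ** 0.5) < 1e-3 for lo, hi in cells)
```

A cell containing a root can never be excluded (the Taylor bound dominates |p(c)| there), so no root is lost by subdivision.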

    Computing Nearest Gcd with Certification

    A bisection method, based on exclusion and inclusion tests, is used to address the nearest univariate gcd problem, formulated as a bivariate real minimization problem for a rational fraction. The paper presents an algorithm, a first implementation, and a complexity analysis relying on Smale's α-theory. We report its behavior on an illustrative example.


    Rational Optimization using Sum-of-Squares Techniques

    Motivated by many control applications, this paper deals with global solutions of unconstrained optimization problems. First, a simple SOS method is presented to find the infimum of a polynomial; it can be handled efficiently using the relevant software tools. The main idea of this method is to introduce a perturbation variable which, as it approaches zero, yields a solution of arbitrary precision. The proposed technique is then extended to the case of rational functions. The primary advantages of this approach over existing ones are its simplicity and its ability to treat problems for which the existing methods are not efficient, as demonstrated in three numerical examples.
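The perturbation idea can be illustrated numerically on a rational function whose infimum is not attained: f(x) = 1/(1+x²) has infimum 0 only as x → ∞. Adding a coercive term ε·x² makes each perturbed problem attain its minimum, and those minima approach the true infimum as ε → 0. This toy sketch uses a plain ternary search rather than the paper's SOS/SDP machinery; all names are illustrative.

```python
def f(x):
    # Rational objective with infimum 0 that is not attained (illustration)
    return 1.0 / (1.0 + x * x)

def minimize_ternary(g, lo, hi, iters=200):
    # Ternary search for the minimum of a unimodal function on [lo, hi]
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

vals = []
for eps in (1e-2, 1e-4, 1e-6):
    g = lambda x, e=eps: f(x) + e * x * x   # coercive perturbation
    xs = minimize_ternary(g, 0.0, 1.0 / eps ** 0.5)
    vals.append(g(xs))

# The perturbed minimum values decrease toward the true infimum 0
assert vals[0] > vals[1] > vals[2] > 0
assert vals[2] < 1e-2
```

For this f the perturbed minimum can be computed in closed form as 2√ε − ε, which is exactly the "any arbitrary precision as the perturbation goes to zero" behavior the abstract describes.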

    Computing Approximate GCRDs of Differential Polynomials

    We generalize the approximate greatest common divisor problem to the non-commutative approximate Greatest Common Right Divisor (GCRD) problem for differential polynomials. Algorithms for performing arithmetic on approximate differential polynomials are presented, along with certification results and the corresponding flop counts. Under reasonable assumptions the approximate GCRD problem is well-posed. In particular, we show that an approximate GCRD exists under these assumptions and provide counterexamples when they are not satisfied. We introduce algorithms for computing nearby differential polynomials with a GCRD. These differential polynomials are improved through a post-refinement Newton iteration. It is shown that the Newton iteration converges to a unique, optimal solution when the residual is sufficiently small. Furthermore, if our computed solution is not optimal, it is shown to be reasonably close to the optimal solution.
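Arithmetic on differential polynomials is non-commutative because the derivation rule D·a(x) = a(x)·D + a′(x) applies whenever a coefficient moves past D; more generally D^i·b = Σ_k C(i,k)·b^(k)·D^(i−k). A minimal multiplication sketch over polynomial coefficients follows; the list representation and helper names are assumptions, not the paper's implementation.

```python
from math import comb

def pderiv(p):
    """Derivative of a polynomial given as an ascending coefficient list."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

def pmul(p, q):
    """Product of two coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def ptrim(p):
    """Drop trailing zero coefficients so equal polynomials compare equal."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def opmul(A, B):
    """Multiply differential operators A = sum A[i] D^i and B = sum B[j] D^j
    (coefficients are polynomials in x), using D^i b = sum_k C(i,k) b^(k) D^(i-k)."""
    out = [[0] for _ in range(len(A) + len(B) - 1)]
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            bk = b                       # k-th derivative of b
            for k in range(i + 1):
                term = pmul(pmul(a, bk), [comb(i, k)])
                out[i - k + j] = padd(out[i - k + j], term)
                bk = pderiv(bk)
    return [ptrim(c) for c in out]

# Commutator check: D*x = 1 + x*D, so D*x - x*D = 1
assert opmul([[0], [1]], [[0, 1]]) == [[1], [0, 1]]   # D * x
assert opmul([[0, 1]], [[0], [1]]) == [[0], [0, 1]]   # x * D
```

The commutator check at the end confirms the derivation rule is applied, which is exactly what distinguishes this GCRD setting from the commutative approximate GCD problem.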

    Matrix Polynomials and their Lower Rank Approximations

    This thesis is a wide-ranging work on computing a “lower-rank” approximation of a matrix polynomial using second-order non-linear optimization techniques. Two notions of rank are investigated. The first is the classical rank, the number of linearly independent rows or columns. The other is the lowest rank of a matrix polynomial when evaluated at a complex number, known as the McCoy rank. Together, these two notions of rank allow one to compute a nearby matrix polynomial where the structure of both the left and right kernels is prescribed, along with the structure of both the infinite and finite eigenvalues. The computational theory of the calculus of matrix-polynomial-valued functions is developed and used in optimization algorithms based on second-order approximations. The determinant and adjoint of matrix polynomials are studied in detail, with a full error analysis. The unstructured and structured variants of matrix polynomials are studied in a very general setting in the context of an equality-constrained optimization problem. Approximating global solutions to the most general instances of these optimization problems is NP-hard. In most instances we are able to prove that solutions to our optimization problems exist (possibly at infinity) and discuss techniques, in conjunction with an implementation, for computing local minimizers. Most of the analysis of these problems is local and is done through the Karush-Kuhn-Tucker optimality conditions for constrained optimization problems. We show that most formulations of the problems studied satisfy regularity conditions and admit Lagrange multipliers. Furthermore, we show that under some formulations the second-order sufficient condition holds for instances of interest. When Lagrange multipliers do not exist, we discuss why, and, if it is reasonable to do so, how to regularize the problem.
In several instances closed-form expressions for the derivatives of matrix-polynomial-valued functions are derived to assist in the analysis of the optimality conditions around a solution. From this analysis it is shown that variants of Newton’s method have a local quadratic rate of convergence given a suitable initial guess for many problems. The implementations are demonstrated on examples from the literature, and several examples are cross-validated against different optimization formulations of the same mathematical problem. We conclude with a special application of the theory developed in this thesis: computing a nearby pair of differential polynomials with a non-trivial greatest common divisor, a non-commutative symbolic-numeric computation problem. We formulate this problem as finding a nearby structured matrix polynomial that is rank-deficient in the classical sense.
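The two notions of rank can be illustrated directly: the classical rank of P(λ) at a generic λ, and the McCoy rank, the smallest rank attained when P is evaluated at some complex number. A pure-Python sketch with a hypothetical 2×2 example (not from the thesis):

```python
def matpoly_eval(P, lam):
    """Evaluate a matrix polynomial P(lam) = sum P[k] * lam**k,
    where each P[k] is a nested-list matrix."""
    n, m = len(P[0]), len(P[0][0])
    A = [[0j] * m for _ in range(n)]
    for k, Pk in enumerate(P):
        for i in range(n):
            for j in range(m):
                A[i][j] += Pk[i][j] * lam ** k
    return A

def rank(A, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    n, m = len(A), len(A[0])
    r = 0
    for col in range(m):
        piv = max(range(r, n), key=lambda i: abs(A[i][col]), default=None)
        if piv is None or abs(A[piv][col]) < tol:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, n):
            f = A[i][col] / A[r][col]
            for j in range(col, m):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

# P(lam) = diag(1, lam): generic rank 2, but rank drops to 1 at lam = 0,
# so the McCoy rank of this example is 1.
P = [[[1, 0], [0, 0]],   # constant coefficient
     [[0, 0], [0, 1]]]   # coefficient of lam
assert rank(matpoly_eval(P, 1.0)) == 2
assert rank(matpoly_eval(P, 0.0)) == 1
```

Searching over λ for the minimal rank of P(λ) is what prescribing the finite eigenvalue structure of a nearby matrix polynomial amounts to in the simplest case.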

    An EM Approach for Time-Variant Poisson-Gaussian Model Parameter Estimation

    The problem of estimating the parameters of a Poisson-Gaussian model from experimental data has recently raised much interest in various applications, for instance in confocal fluorescence microscopy. In this context, a field of independent random variables is observed, varying both in time and space. Each variable is a sum of two components, one following a Poisson and the other a Gaussian distribution. In this paper, a general formulation is considered where the associated Poisson process is nonstationary in space and also exhibits an exponential decay in time, whereas the Gaussian component corresponds to a stationary white noise with arbitrary mean. To solve the considered parametric estimation problem, we follow an iterative Expectation-Maximization (EM) approach. The parameter update equations involve deriving finite approximations of infinite sums. Expressions for the maximum error incurred in the process are also given. Since the problem is non-convex, we pay particular attention to the EM initialization, using a moment-based method where recent optimization tools come into play. We carry out a performance analysis by computing the Cramér-Rao bounds on the estimated variables. The practical performance of the proposed estimation procedure is illustrated on both synthetic data and real fluorescence macroscopy image sequences. The algorithm is shown to provide reliable estimates of the mean and variance of the Gaussian noise and of the scale parameter of the Poisson component, as well as of its exponential decay rate. In particular, the mean estimate of the Poisson component can be interpreted as a good-quality denoised version of the data.
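A moment-based initialization can be sketched in the static special case y = α·P + G with P ~ Poisson(λ) and G ~ N(c, σ²): the Gaussian part contributes nothing to cumulants beyond order two, so κ₃ = α³λ and κ₄ = α⁴λ, giving α ≈ κ₄/κ₃. The simulation below is a toy illustration of this idea only, not the paper's time-variant model or its EM updates.

```python
import math
import random

rng = random.Random(0)

def sample_poisson(lam, rng):
    """Knuth's multiplicative method for Poisson sampling."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

# Ground-truth parameters (hypothetical): y = alpha * Poisson(lam) + N(c, sigma^2)
alpha, lam, c, sigma = 2.0, 5.0, 3.0, 1.0
ys = [alpha * sample_poisson(lam, rng) + rng.gauss(c, sigma)
      for _ in range(200000)]

n = len(ys)
mean = sum(ys) / n
m2 = sum((y - mean) ** 2 for y in ys) / n               # variance
m3 = sum((y - mean) ** 3 for y in ys) / n               # kappa_3
k4 = sum((y - mean) ** 4 for y in ys) / n - 3 * m2 * m2 # kappa_4

# Only the scaled Poisson part is skewed: kappa_3 = alpha^3 * lam and
# kappa_4 = alpha^4 * lam, so their ratio recovers the scale alpha.
alpha_hat = k4 / m3
lam_hat = m3 / alpha_hat ** 3
c_hat = mean - alpha_hat * lam_hat   # Gaussian mean from the first moment

assert abs(alpha_hat - alpha) < 0.8
assert lam_hat > 0
```

Higher-order sample cumulants are noisy, so such moment estimates are only good enough to initialize an iterative refinement like EM, which matches the role they play in the paper.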