
    Regression with Slowly Varying Regressors

    Slowly varying regressors are asymptotically collinear in linear regression. The usual regression formulae for asymptotic standard errors remain valid, but rates of convergence are affected and the limit distribution of the regression coefficients is shown to be one-dimensional. Some asymptotic representations of partial sums of slowly varying functions and central limit theorems with slowly varying weights are given that assist in the development of a regression theory. Multivariate regression and polynomial regression with slowly varying functions are considered and shown to be equivalent, up to standardization, to regression on a polynomial in a logarithmic trend. The theory involves second, third and higher order forms of slow variation. Some applications to trend regression are discussed.
    Keywords: asymptotic expansion, collinearity, Karamata representation, slow variation, smooth variation, trend regression
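As a rough illustration of the asymptotic collinearity described above (a minimal numpy sketch, not the paper's theory or code; the sample size and coefficient values are arbitrary), regressing a simulated trend on the slowly varying regressors log(t) and log(log(t)) shows that ordinary least squares still runs but the design matrix is badly conditioned:

```python
import numpy as np

# Rough illustration (not from the paper): log(t) and log(log(t)) are
# slowly varying and become nearly collinear as the sample grows.
rng = np.random.default_rng(0)
n = 5000
t = np.arange(2, n + 2)                       # start at 2 so log(log(t)) is defined
X = np.column_stack([np.ones(n), np.log(t), np.log(np.log(t))])
beta = np.array([1.0, 0.5, -2.0])             # hypothetical trend coefficients
y = X @ beta + rng.standard_normal(n)

# OLS is still computable, but the design matrix is severely ill-conditioned,
# reflecting the asymptotic collinearity discussed in the abstract.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimates:        ", beta_hat)
print("condition number: ", np.linalg.cond(X))
```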

    Decoupling Multivariate Polynomials Using First-Order Information

    We present a method to decompose a set of multivariate real polynomials into linear combinations of univariate polynomials in linear forms of the input variables. The method proceeds by collecting the first-order information of the polynomials at a set of operating points, which is captured by the Jacobian matrix evaluated at those points. The canonical polyadic decomposition of the three-way tensor of Jacobian matrices directly returns the unknown linear relations, as well as the information needed to reconstruct the univariate polynomials. The conditions under which this decoupling procedure works are discussed, and the method is illustrated on several numerical examples.
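To make the construction concrete, here is a minimal sketch (not the authors' code) of the Jacobian tensor and its CP decomposition, assuming the tensorly library is available; the matrices W, V and the internal polynomials g1, g2 are made-up examples of a decoupled structure f(u) = W g(V^T u):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Made-up decoupled structure f(u) = W g(V^T u) with univariate internals
#   g1(x) = x**3 and g2(x) = x**2.
W = np.array([[1.0, 2.0], [3.0, -1.0]])
V = np.array([[1.0, 1.0], [1.0, -1.0]])        # columns are the linear forms v_i

def jacobian(u):
    x = V.T @ u                                 # internal variables v_i^T u
    gprime = np.array([3.0 * x[0] ** 2, 2.0 * x[1]])
    return W @ np.diag(gprime) @ V.T            # chain rule: J = W diag(g') V^T

# Stack the Jacobians at random operating points into a 2 x 2 x N tensor.
rng = np.random.default_rng(1)
points = rng.standard_normal((10, 2))
T = tl.tensor(np.stack([jacobian(u) for u in points], axis=2))

# A rank-2 CP decomposition recovers W and V up to scaling and permutation;
# the third factor holds the sampled derivative values g_i'(v_i^T u_k).
cp = parafac(T, rank=2)
W_est, V_est, D_est = cp.factors
print(W_est)
print(V_est)
```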

    Dvoretzky type theorems for multivariate polynomials and sections of convex bodies

    In this paper we prove the Gromov--Milman conjecture (the Dvoretzky type theorem) for homogeneous polynomials on $\mathbb{R}^n$, and improve bounds on the number $n(d,k)$ in the analogous conjecture for odd degrees $d$ (this case is known as the Birch theorem) and complex polynomials. We also consider a stronger conjecture on the homogeneous polynomial fields in the canonical bundle over real and complex Grassmannians. This conjecture is much stronger and false in general, but it is proved in the cases of $d=2$ (for $k$'s of certain type), odd $d$, and the complex Grassmannian (for odd and even $d$ and any $k$). Corollaries for the John ellipsoid of projections or sections of a convex body are deduced from the case $d=2$ of the polynomial field conjecture.

    Symbolic-Numeric Tools for Analytic Combinatorics in Several Variables

    Analytic combinatorics studies the asymptotic behaviour of sequences through the analytic properties of their generating functions. This article provides effective algorithms required for the study of analytic combinatorics in several variables, together with their complexity analyses. Given a multivariate rational function, we show how to compute its smooth isolated critical points, with respect to a polynomial map encoding asymptotic behaviour, in complexity singly exponential in the degree of its denominator. We introduce a numerical Kronecker representation for solutions of polynomial systems with rational coefficients and show that it can be used to decide several properties (zero coordinates, equal coordinates, sign conditions for real solutions, and vanishing of a polynomial) in good bit complexity. Among the critical points, those that are minimal---a property governed by inequalities on the moduli of the coordinates---typically determine the dominant asymptotics of the diagonal coefficient sequence. When the Taylor expansion at the origin has all non-negative coefficients (known as the 'combinatorial case') and under regularity conditions, we use this Kronecker representation to determine probabilistically the minimal critical points in complexity singly exponential in the degree of the denominator, with good control over the exponent in the bit complexity estimate. Generically in the combinatorial case, this allows one to automatically and rigorously determine asymptotics for the diagonal coefficient sequence. Examples obtained with a preliminary implementation show the wide applicability of this approach.
    Comment: As accepted to proceedings of ISSAC 201
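For readers unfamiliar with the setup, here is a tiny sympy sketch (not the Kronecker-based algorithm of the paper) of the smooth critical-point equations in the classic combinatorial example F = 1/(1 - x - y), whose diagonal coefficients are the central binomial coefficients:

```python
import sympy as sp

# Smooth critical points for the main-diagonal asymptotics of F = G/H
# with H = 1 - x - y:  H = 0  and  x*dH/dx = y*dH/dy.
x, y = sp.symbols('x y')
H = 1 - x - y
eqs = [H, x * sp.diff(H, x) - y * sp.diff(H, y)]
crit = sp.solve(eqs, [x, y], dict=True)
print(crit)                     # [{x: 1/2, y: 1/2}]

# This critical point is minimal, and the exponential growth rate of the
# diagonal is (x*y)^(-1) evaluated there:
pt = crit[0]
print(1 / (pt[x] * pt[y]))      # 4, matching binomial(2n, n) ~ 4**n / sqrt(pi*n)
```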

    The wavelet-NARMAX representation: a hybrid model structure combining polynomial models with multiresolution wavelet decompositions

    A new hybrid model structure combining polynomial models with multiresolution wavelet decompositions is introduced for nonlinear system identification. Polynomial models play an important role in approximation theory and have been extensively used in linear and nonlinear system identification. Wavelet decompositions, in which the basis functions have the property of localization in both time and frequency, outperform many other approximation schemes and offer a flexible solution for approximating arbitrary functions. Although wavelet representations can approximate even severe nonlinearities in a given signal very well, the advantage of these representations can be lost when wavelets are used to capture linear or low-order nonlinear behaviour in a signal. In order to exploit the global property of polynomials and the local property of wavelet representations simultaneously, in this study polynomial models and wavelet decompositions are combined in a parallel structure to represent nonlinear input-output systems. As a special form of the NARMAX model, this hybrid model structure will be referred to as the WAvelet-NARMAX model, or simply WANARMAX. Generally, such a WANARMAX representation for an input-output system might involve a large number of basis functions and therefore a great number of model terms. Experience reveals that only a small number of these model terms are significant to the system output. A new fast orthogonal least squares algorithm, called the matching pursuit orthogonal least squares (MPOLS) algorithm, is also introduced in this study to determine which terms should be included in the final model.
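To illustrate the kind of term selection involved (a generic greedy orthogonal-least-squares sketch, not the MPOLS algorithm itself), the following numpy function ranks candidate regressor columns by their error reduction ratio and orthogonalises the remaining candidates after each selection; the candidate dictionary and signal are made up for the example:

```python
import numpy as np

def greedy_ols_selection(P, y, n_terms):
    """Greedy forward selection of columns of the regression matrix P by
    error reduction ratio (a simplified orthogonal-least-squares sketch,
    not the paper's MPOLS algorithm)."""
    P = P.astype(float).copy()
    selected = []
    for _ in range(n_terms):
        num = (P.T @ y) ** 2
        den = np.einsum('ij,ij->j', P, P) * (y @ y)
        err = np.divide(num, den, out=np.zeros_like(num), where=den > 1e-12)
        best = int(np.argmax(err))           # term with the largest error reduction
        selected.append(best)
        w = P[:, best].copy()
        P -= np.outer(w, (w @ P) / (w @ w))  # orthogonalise remaining candidates
    return selected

# Made-up example: y depends on only two of five candidate terms.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
P = np.column_stack([u, u**2, u**3, np.sin(u), np.cos(u)])
y = 2.0 * u**2 - 0.5 * np.sin(u) + 0.01 * rng.standard_normal(200)
print(greedy_ols_selection(P, y, n_terms=2))  # expected: [1, 3]
```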

    Certification of inequalities involving transcendental functions: combining SDP and max-plus approximation

    We consider the problem of certifying an inequality of the form $f(x)\geq 0$ for all $x\in K$, where $f$ is a multivariate transcendental function and $K$ is a compact semialgebraic set. We introduce a certification method combining semialgebraic optimization and max-plus approximation. We assume that $f$ is given by a syntax tree, the constituents of which involve semialgebraic operations as well as some transcendental functions like $\cos$, $\sin$, $\exp$, etc. We bound some of these constituents by suprema or infima of quadratic forms (the max-plus approximation method, initially introduced in optimal control), leading to semialgebraic optimization problems which we solve by semidefinite relaxations. The max-plus approximation is iteratively refined and combined with branch and bound techniques to reduce the relaxation gap. Illustrative examples of application of this algorithm are provided, explaining how we solved tight inequalities arising from the Flyspeck project (one of the main purposes of which is to certify numerical inequalities used in the proof of the Kepler conjecture by Thomas Hales).
    Comment: 7 pages, 3 figures, 3 tables, Appears in the Proceedings of the European Control Conference ECC'13, July 17-19, 2013, Zurich, pp. 2244--2250, copyright EUCA 201
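As a small illustration of the max-plus idea (a generic numpy sketch, not the paper's certification pipeline), the snippet below builds a lower estimator of a transcendental function as a supremum of concave quadratics tangent at a few sample points, which is valid whenever the second derivative is bounded on the interval:

```python
import numpy as np

# By Taylor's theorem, if |f''| <= M on the interval, then for every anchor a:
#   f(x) >= f(a) + f'(a)*(x - a) - (M/2)*(x - a)**2,
# so the pointwise maximum of these quadratics is a valid under-estimator.
f, fprime, M = np.sin, np.cos, 1.0            # here |sin''| = |sin| <= 1

def maxplus_lower_bound(x, anchors):
    quads = [f(a) + fprime(a) * (x - a) - 0.5 * M * (x - a) ** 2 for a in anchors]
    return np.max(quads, axis=0)              # supremum of the quadratic minorants

x = np.linspace(0.0, np.pi, 1000)
lb = maxplus_lower_bound(x, anchors=np.linspace(0.0, np.pi, 5))
print(float(np.min(np.sin(x) - lb)))          # >= 0: the bound is never violated
print(float(np.max(np.sin(x) - lb)))          # gap shrinks as anchors are refined
```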

    Enhanced LFR-toolbox for MATLAB and LFT-based gain scheduling

    We describe recent developments and enhancements of the LFR-Toolbox for MATLAB for building LFT-based uncertainty models and for LFT-based gain scheduling. A major development is the new LFT-object definition, which supports a large class of uncertainty descriptions: continuous- and discrete-time uncertain models, regular and singular parametric expressions, and more general uncertainty blocks (nonlinear, time-varying, etc.). By associating names with uncertainty blocks, the reusability of generated LFT-models and the ease of manipulating LFR-descriptions have been greatly increased. Significant improvements in computational efficiency and numerical accuracy have been achieved by employing efficient and numerically robust Fortran implementations of order reduction tools via mex-function interfaces. These enhancements, in conjunction with improved symbolic preprocessing, generally lead to faster generation of LFT-models with significantly lower orders. Scheduled gains can be viewed as LFT-objects, and two techniques for designing such gains are presented. Analysis tools are also considered.
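For context on the LFT machinery (a generic numpy sketch, not the LFR-Toolbox API), the snippet below evaluates an upper linear fractional transformation F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12 for a made-up 1x1 realization of the uncertain gain 2/(1 + 2d):

```python
import numpy as np

def upper_lft(M11, M12, M21, M22, Delta):
    """Close the uncertainty channel: F_u = M22 + M21 Delta (I - M11 Delta)^{-1} M12."""
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.solve(np.eye(n) - M11 @ Delta, M12)

# Made-up 1x1 realization of the uncertain gain 2 / (1 + 2*d),
# with d the normalized uncertain parameter in the Delta block.
M11, M12, M21, M22 = np.array([[-2.0]]), np.array([[1.0]]), np.array([[-4.0]]), np.array([[2.0]])
for d in (-0.1, 0.0, 0.1):
    print(d, upper_lft(M11, M12, M21, M22, np.array([[d]]))[0, 0])
```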