    Structured Low Rank Approximation of a Bezout Matrix

    Determining Critical Points of Handwritten Mathematical Symbols Represented as Parametric Curves

    We consider the problem of computing critical points of plane curves represented in a finite orthogonal polynomial basis. This is motivated by an approach to the recognition of handwritten mathematical symbols in which the initial data is in such an orthogonal basis and it is desired to avoid ill-conditioned basis conversions. Our main contribution is to assemble the relevant mathematical tools to perform all the necessary operations in the orthogonal polynomial basis. These include implicitization, differentiation, root finding, and resultant computation.
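
    As an illustration of working entirely in an orthogonal basis, the sketch below differentiates a parametric curve and locates points with horizontal or vertical tangents using NumPy's Chebyshev routines, with no conversion to the monomial basis. The coefficients are invented for the example, and the paper does not prescribe this particular basis or library.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Hypothetical stroke: x(t), y(t) given by Chebyshev coefficients on [-1, 1].
    # (Illustrative data only, not taken from the paper.)
    x_coef = np.array([0.1, 1.0, 0.0, -0.3])
    y_coef = np.array([0.0, 0.2, 0.8, 0.1])

    # Differentiate directly in the Chebyshev basis.
    dx_coef = C.chebder(x_coef)
    dy_coef = C.chebder(y_coef)

    def real_roots_in_interval(coef, lo=-1.0, hi=1.0, tol=1e-9):
        """Real roots of a Chebyshev-basis polynomial inside [lo, hi]."""
        roots = np.atleast_1d(C.chebroots(coef))
        roots = roots[np.abs(roots.imag) < tol].real
        return roots[(roots >= lo) & (roots <= hi)]

    # Horizontal tangents where y'(t) = 0, vertical tangents where x'(t) = 0.
    for t in real_roots_in_interval(dy_coef):
        print("horizontal tangent at", (C.chebval(t, x_coef), C.chebval(t, y_coef)))
    for t in real_roots_in_interval(dx_coef):
        print("vertical tangent at", (C.chebval(t, x_coef), C.chebval(t, y_coef)))
    ```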

    Sample Complexity of the Robust LQG Regulator with Coprime Factors Uncertainty

    This paper addresses the end-to-end sample complexity bound for learning the H2-optimal controller (the Linear Quadratic Gaussian (LQG) problem) with unknown dynamics, for potentially unstable Linear Time-Invariant (LTI) systems. The robust LQG synthesis procedure is performed by considering bounded additive model uncertainty on the coprime factors of the plant. The closed-loop identification of the nominal model of the true plant is performed by constructing a Hankel-like matrix from a single time series of noisy, finite-length input-output data, using the ordinary least squares algorithm from Sarkar et al. (2020). Next, an ℋ∞ bound on the estimated model error is provided and the robust controller is designed via convex optimization, much in the spirit of Boczar et al. (2018) and Zheng et al. (2020a), while allowing for bounded additive uncertainty on the coprime factors of the model. Our conclusions are consistent with previous results on learning the LQG and LQR controllers. Comment: Minor edits on closed-loop identification; 30 pages, 2 figures, 3 algorithms.
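
    To make the identification step concrete, the sketch below estimates the first few Markov parameters of a plant by ordinary least squares on a matrix of lagged inputs built from a single noisy input-output trajectory. The plant, noise level, and window length are invented for the example; this is a simplified illustration, not the authors' (or Sarkar et al.'s) exact algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stable SISO plant, used only to generate data for the sketch.
    def simulate(u, noise_std=0.01):
        g_true = np.array([0.0, 1.0, 0.5, 0.25, 0.125])  # true Markov parameters
        y = np.convolve(u, g_true)[: len(u)]
        return y + noise_std * rng.standard_normal(len(u))

    T, p = 2000, 5                      # trajectory length, number of parameters
    u = rng.standard_normal(T)          # exciting input signal
    y = simulate(u)

    # Hankel/Toeplitz-like regressor: row t holds u[t], u[t-1], ..., u[t-p+1].
    Phi = np.column_stack(
        [np.concatenate([np.zeros(k), u[: T - k]]) for k in range(p)]
    )

    # Ordinary least squares estimate of the Markov parameters.
    g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print("estimated Markov parameters:", np.round(g_hat, 3))
    ```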

    Structured Sparsity Models for Multiparty Speech Recovery from Reverberant Recordings

    We tackle the multi-party speech recovery problem through modeling the acoustics of the reverberant chambers. Our approach exploits structured sparsity models to perform room modeling and speech recovery. We propose a scheme for characterizing the room acoustics from the unknown competing speech sources, relying on localization of the early images of the speakers by sparse approximation of the spatial spectra of the virtual sources in a free-space model. The images are then clustered by exploiting the low-rank structure of the spectro-temporal components belonging to each source. This enables us to identify the early support of the room impulse response function and its unique map to the room geometry. To further tackle the ambiguity of the reflection ratios, we propose a novel formulation of the reverberation model and estimate the absorption coefficients through a convex optimization exploiting a joint sparsity model formulated upon the spatio-spectral sparsity of concurrent speech representation. The acoustic parameters are then incorporated for separating individual speech signals through either structured sparse recovery or inverse filtering of the acoustic channels. The experiments conducted on real data recordings demonstrate the effectiveness of the proposed approach for multi-party speech recovery and recognition. Comment: 31 pages.
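
    The localization step amounts to a sparse approximation of the spatial spectrum over a free-space dictionary. The toy sketch below solves an l1-regularised least-squares problem with plain ISTA on an invented grid and a random stand-in dictionary; the actual method uses the true array geometry and structured/joint sparsity models rather than this simple prior.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy dictionary: column j is the (stand-in) array response to a source at
    # grid location j. A real free-space model would use the array geometry.
    n_mics, n_grid = 16, 60
    A = rng.standard_normal((n_mics, n_grid))
    A /= np.linalg.norm(A, axis=0)

    # Ground truth: two active image sources on the grid.
    x_true = np.zeros(n_grid)
    x_true[[12, 41]] = [1.0, 0.7]
    b = A @ x_true + 0.01 * rng.standard_normal(n_mics)

    # ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (sparse spatial spectrum).
    lam = 0.05
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n_grid)
    for _ in range(500):
        z = x - step * (A.T @ (A @ x - b))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    print("grid cells with significant energy:", np.nonzero(np.abs(x) > 0.1)[0])
    ```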

    Robustness of feedback stabilization: a topological approach

    Linear Control Theory with an ℋ∞ Optimality Criterion

    This expository paper sets out the principal results in ℋ∞ control theory in the context of continuous-time linear systems. The focus is on the mathematical theory rather than computational methods.
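
    To make the ℋ∞ criterion concrete, the sketch below approximates the ℋ∞ norm (the peak frequency-response gain) of a hypothetical stable transfer function by sweeping a frequency grid with SciPy. This only illustrates the quantity being optimized; it is not a synthesis method from the paper, and a finite sweep only approximates the supremum.

    ```python
    import numpy as np
    from scipy import signal

    # Hypothetical stable second-order plant G(s) = 1 / (s^2 + 0.2 s + 1).
    G = signal.TransferFunction([1.0], [1.0, 0.2, 1.0])

    # For a stable SISO system, ||G||_inf = sup_w |G(jw)|; approximate it by
    # the peak gain over a fine frequency grid.
    w = np.logspace(-2, 2, 20000)
    _, mag_db, _ = signal.bode(G, w)
    hinf_norm = 10.0 ** (mag_db.max() / 20.0)
    print("approximate H-infinity norm:", round(hinf_norm, 3))
    ```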

    Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix

    We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n × n matrix of degree d over a field K, we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n, d) = Õ(n^ω d) operations, with ω the exponent of matrix multiplication over K, then the algorithm uses Õ(MM(n, d)) operations in K. The Õ notation indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small-degree vectors in the nullspace seen as a K[x]-module. Comment: Research Report LIP RR2005-03, January 2005.
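
    The objects being computed are easy to illustrate with SymPy's generic routines, which are far slower than the Õ(MM(n, d)) algorithm described here and use neither lifting nor fraction reconstruction. The example matrix is invented so that its third column equals x times the first plus the second, giving rank 2 and a one-dimensional nullspace over K(x).

    ```python
    import sympy as sp

    x = sp.symbols("x")

    # Hypothetical 3 x 3 matrix over Q[x]: column 3 = x*column 1 + column 2.
    M = sp.Matrix([
        [1,    x,      2 * x],
        [x,    1,      x**2 + 1],
        [x**2, x + 1,  x**3 + x + 1],
    ])

    print("rank over Q(x):", M.rank())       # expected: 2
    for v in M.nullspace():                  # expected: a multiple of (x, 1, -1)^T
        print("nullspace vector:", list(v))
    ```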