    Interpolatory methods for H\mathcal{H}_\infty model reduction of multi-input/multi-output systems

    We develop here a computationally effective approach for producing high-quality $\mathcal{H}_\infty$-approximations to large-scale linear dynamical systems having multiple inputs and multiple outputs (MIMO). We extend an approach for $\mathcal{H}_\infty$ model reduction introduced by Flagg, Beattie, and Gugercin for the single-input/single-output (SISO) setting, which combined ideas originating in interpolatory $\mathcal{H}_2$-optimal model reduction with complex Chebyshev approximation. Retaining this framework, our approach to the MIMO problem has its principal computational cost dominated by (sparse) linear solves, and so it remains an effective strategy in many large-scale settings. We avoid the computationally demanding $\mathcal{H}_\infty$ norm calculations that are normally required to monitor progress within each optimization cycle by using "data-driven" rational approximations built upon previously computed function samples. Numerical examples are included that illustrate our approach. We produce high-fidelity reduced models having consistently better $\mathcal{H}_\infty$ performance than models produced via balanced truncation; these models are often as good as (and occasionally better than) models produced using optimal Hankel norm approximation. In all cases considered, the method described here produces reduced models at far lower cost than is possible with either balanced truncation or optimal Hankel norm approximation.
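    The paper's $\mathcal{H}_\infty$ algorithm itself is not reproduced here, but the interpolatory projection framework it retains can be illustrated with a minimal sketch: each column of the projection bases comes from one (sparse) linear solve at an interpolation shift, which is why linear solves dominate the cost. The function name, shifts, and tangent directions below are illustrative assumptions, not the paper's selection strategy.

```python
import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import spsolve

def interpolatory_rom(A, B, C, sigmas, b_dirs, c_dirs):
    """Sketch of tangential interpolatory model reduction for a sparse
    state matrix A (n x n), inputs B (n x m), outputs C (p x n).
    Shifts `sigmas` and tangent directions `b_dirs`, `c_dirs` are assumed
    to be supplied; choosing them well is the hard part and is what
    H-infinity / H2-optimal methods address."""
    n = A.shape[0]
    I = identity(n, format="csc")
    # One sparse solve per reduced dimension for each basis.
    V = np.column_stack([spsolve(s * I - A, B @ b)
                         for s, b in zip(sigmas, b_dirs)])
    W = np.column_stack([spsolve((s * I - A).conj().T, C.conj().T @ c)
                         for s, c in zip(sigmas, c_dirs)])
    # Petrov-Galerkin projection; the reduced transfer function then
    # interpolates the full one tangentially at each shift.
    WtV = W.conj().T @ V
    Ar = np.linalg.solve(WtV, W.conj().T @ (A @ V))
    Br = np.linalg.solve(WtV, W.conj().T @ B)
    Cr = C @ V
    return Ar, Br, Cr
```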

    Consistent Dynamic Mode Decomposition

    We propose a new method for computing Dynamic Mode Decomposition (DMD) evolution matrices, which we use to analyze dynamical systems. Unlike the majority of existing methods, our approach is based on a variational formulation consisting of data alignment penalty terms and constitutive orthogonality constraints. Our method makes no assumptions on the structure of the data or their size, and thus it is applicable to a wide range of problems, including non-linear scenarios and extremely small observation sets. In addition, our technique is robust to noise that is independent of the dynamics, and it does not require input data to be sequential. Our key idea is to introduce a regularization term for the forward and backward dynamics. The resulting minimization problem is solved efficiently using the Alternating Direction Method of Multipliers (ADMM), which requires two Sylvester equation solves per iteration. Our numerical scheme converges empirically and is similar to a provably convergent ADMM scheme. We compare our approach to various state-of-the-art methods on several benchmark dynamical systems.
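    For context, a minimal sketch of the baseline (exact) DMD computation that this kind of method builds on is given below; the variational forward-backward regularization and the ADMM/Sylvester solver described in the abstract are not reproduced, and the function below is an illustrative assumption rather than the paper's code.

```python
import numpy as np

def exact_dmd(X, Y, rank):
    """Baseline (exact) DMD: fit a linear evolution matrix with Y ~ A X,
    where columns of X are snapshots and columns of Y are their successors.
    This is the standard least-squares construction, not the regularized
    forward-backward formulation of Consistent DMD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Project the evolution operator onto the leading POD modes.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)              # DMD eigenvalues
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes
```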

    TR-2012001: Algebraic Algorithms

    TR-2013009: Algebraic Algorithms

    Computing the Kreiss Constant of a Matrix

    A low-rank in time approach to PDE-constrained optimization
