141 research outputs found

    Operator compression with deep neural networks


    Annual Research Report 2020


    Model Reduction of Synchronized Lur'e Networks

    In this talk, we investigate a model order reduction scheme that reduces the complexity of uncertain dynamical networks consisting of diffusively interconnected nonlinear Lur'e subsystems. We aim to reduce the dimension of each subsystem while preserving the synchronization property of the overall network. Using the upper bound of the Laplacian spectral radius, we first characterize the robust synchronization of the Lur'e network by a linear matrix inequality (LMI), whose solutions can be treated as generalized Gramians of each subsystem, so that balanced truncation can be performed on the linear component of each Lur'e subsystem. As a result, the dimension of each subsystem is reduced and the dynamics of the network are simplified. It is verified that, with the same communication topology, the resulting reduced network system is still robustly synchronized, and an a priori bound on the approximation error is guaranteed, comparing the behaviors of the full-order and reduced-order Lur'e subsystems.
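    The balanced-truncation step mentioned in the abstract can be sketched for a single stable linear subsystem. This is a minimal, generic square-root illustration using standard Lyapunov Gramians; the paper instead uses generalized Gramians obtained from an LMI, and all matrices and names below are our own illustrative assumptions:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    # Illustrative stable linear subsystem (A, B, C); values are random placeholders.
    rng = np.random.default_rng(0)
    n, m, p, r = 6, 2, 2, 2                      # full order n, inputs m, outputs p, reduced order r
    A = rng.standard_normal((n, n))
    A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to make A Hurwitz
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((p, n))

    # Gramians from the Lyapunov equations A P + P A^T + B B^T = 0, A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    P, Q = (P + P.T) / 2, (Q + Q.T) / 2          # symmetrize against round-off

    # Square-root balanced truncation: SVD of the product of Cholesky factors
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                    # s = Hankel singular values
    S1 = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S1                       # right projection matrix
    W = Lq @ U[:, :r] @ S1                       # left projection matrix (W^T T = I)

    Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T     # reduced r-dimensional subsystem
    ```

    Truncating at the largest Hankel singular values is what yields the a priori error bound referred to in the abstract.
    
    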

    Randomized quasi-optimal local approximation spaces in time

    We target time-dependent partial differential equations (PDEs) with heterogeneous coefficients in space and time. To tackle these problems, we construct reduced basis/multiscale ansatz functions defined in space that can be combined with time stepping schemes within model order reduction or multiscale methods. To that end, we propose to perform several simulations of the PDE for a few time steps in parallel, starting at different, randomly drawn start points and prescribing random initial conditions; applying a singular value decomposition to a subset of the snapshots so obtained yields the reduced basis/multiscale ansatz functions. This facilitates constructing the reduced basis/multiscale ansatz functions in an embarrassingly parallel manner. In detail, we suggest using a data-dependent probability distribution based on the data functions of the PDE to select the start points. Each local-in-time simulation of the PDE with random initial conditions approximates a local approximation space at one point in time that is optimal in the sense of Kolmogorov. The derivation of these optimal local approximation spaces, which are spanned by the left singular vectors of a compact transfer operator that maps arbitrary initial conditions to the solution of the PDE at a later point in time, is another main contribution of this paper. By solving the PDE locally in time with random initial conditions, we construct local ansatz spaces in time that converge provably at a quasi-optimal rate and allow for local error control. Numerical experiments demonstrate that the proposed method can outperform existing methods such as the proper orthogonal decomposition even in a sequential setting and is well capable of approximating advection-dominated problems.
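    The randomized construction described in the abstract can be sketched for a toy problem: evolve many random initial conditions for a few time steps, then take an SVD of the snapshots. The 1D heat equation, implicit Euler stepping, and all parameter names here are our illustrative assumptions, not the paper's setup (which also uses a data-dependent sampling distribution):

    ```python
    import numpy as np

    n, n_draws, n_steps, dt, r = 100, 20, 5, 1e-3, 4
    h = 1.0 / (n + 1)

    # Finite-difference Laplacian and implicit Euler step matrix for u_t = u_xx
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    M = np.linalg.inv(np.eye(n) - dt * A)

    rng = np.random.default_rng(1)
    snapshots = []
    for _ in range(n_draws):                # embarrassingly parallel in practice
        u = rng.standard_normal(n)          # random initial condition
        for _ in range(n_steps):            # a few time steps of the PDE
            u = M @ u
        snapshots.append(u)

    # SVD of the snapshot matrix yields the reduced basis / multiscale ansatz functions
    U, s, _ = np.linalg.svd(np.array(snapshots).T, full_matrices=False)
    basis = U[:, :r]                        # left singular vectors span the local space
    ```

    Because each draw is independent, the snapshot loop distributes trivially across workers; only the final SVD couples the data.
    
    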

    Annual Research Report 2021
