    Interpolation Based Parametric Model Order Reduction

    In this thesis, we consider model order reduction of parameter-dependent large-scale dynamical systems. The objective is to develop a methodology that reduces the order of the model while preserving its dependence on the parameters. We use the balanced truncation method together with spline interpolation to solve the problem. The core of this method is to interpolate the reduced transfer function, based on transfer functions pre-computed at sample points in the parameter domain. Both linear and cubic splines are employed; the latter, as expected, reduces the error of the method. The combination is proven to inherit the advantages of balanced truncation, such as stability preservation and, based on a novel bound for the infinity norm of a matrix inverse, the derivation of error bounds. Model order reduction can be formulated in the projection framework. In the case of a parameter-dependent system, the projection subspace also depends on the parameters. This parameter-dependent subspace cannot be computed directly; it has to be approximated by interpolation from a set of pre-computed subspaces. This is precisely the problem of interpolation on Grassmann manifolds. The interpolation is performed on tangent spaces to the underlying manifold, which requires the exponential and logarithmic mappings and hence several singular value decompositions. The whole procedure is divided into an offline and an online stage, and the computation time of the online stage is crucial. By investigating the formulation of the exponential and logarithmic mappings and analyzing the structure of sums of singular value decompositions, we succeed in reducing the computational complexity of the online stage, thereby enabling the use of this algorithm in real time.
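
    The interpolation of pre-computed projection subspaces described above can be sketched with the standard Grassmann exponential and logarithm maps. The snippet below is a minimal numpy illustration, not the thesis's algorithm: the choice of reference subspace, the entrywise piecewise-linear interpolation (the thesis also uses cubic splines), and all function names are assumptions made for this example.

```python
import numpy as np

def grassmann_log(V0, Vi):
    """Map the subspace spanned by Vi into the tangent space at span(V0).

    V0, Vi: (n, r) matrices with orthonormal columns; V0.T @ Vi is assumed invertible.
    """
    M = Vi @ np.linalg.inv(V0.T @ Vi) - V0
    U, s, Wt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Wt

def grassmann_exp(V0, T):
    """Map a tangent vector T at span(V0) back to an orthonormal subspace basis."""
    U, s, Wt = np.linalg.svd(T, full_matrices=False)
    return V0 @ Wt.T @ np.diag(np.cos(s)) + U @ np.diag(np.sin(s))

def interpolate_subspace(params, bases, p_new, ref=0):
    """Interpolate pre-computed bases (offline data) at a new parameter (online step).

    params: increasing 1D array of sample parameters; bases: list of (n, r) orthonormal bases.
    """
    V0 = bases[ref]
    tangents = np.array([grassmann_log(V0, V) for V in bases])
    T_new = np.empty_like(V0)
    for i in range(V0.shape[0]):
        for j in range(V0.shape[1]):
            # piecewise-linear interpolation of each tangent-space entry
            T_new[i, j] = np.interp(p_new, params, tangents[:, i, j])
    return grassmann_exp(V0, T_new)
```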

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (Comment: 232 pages)
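
    The tensor train format emphasized above can be computed from a full tensor by the classical sequence of truncated SVDs (TT-SVD). The following numpy sketch is for illustration only; the fixed per-bond `max_rank`, the helper names, and the dense input tensor are assumptions, whereas the monograph treats rank-adaptive, Hierarchical Tucker, and large-scale variants.

```python
import numpy as np

def tt_svd(X, max_rank):
    """Decompose a full tensor X into tensor-train (TT) cores by sequential
    truncated SVDs. Returns cores G_k of shape (r_{k-1}, n_k, r_k), r_0 = r_d = 1."""
    dims = X.shape
    d = len(dims)
    cores = []
    r_prev = 1
    C = X.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # carry the remaining factor to the next unfolding
        C = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor (useful to check the error)."""
    X = cores[0]
    for G in cores[1:]:
        X = np.tensordot(X, G, axes=([-1], [0]))
    return X.squeeze(axis=(0, -1))
```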

    Flow control and sensing using data-driven reduced-order modeling

    Transfer operators, such as the Koopman operator, are driving a paradigm shift in how we perform data-driven modeling of fluid flows. Approximations of the Koopman operator provide linear representations even for strongly nonlinear flows, which enables the application of standard linear methods for estimation and control under realistic flow conditions. In the past decade, we have witnessed several breakthroughs in obtaining low-dimensional approximations of the Koopman operator, providing tractable reduced-order models for complex fluid flows from numerical simulations or experiments. In this thesis, we leverage these recent developments in operator-theoretic modeling of fluid flows and provide data-driven solutions to the flow control and sensing problems. The contributions of this thesis can be divided into three parts. In the first part, we introduce a novel method, low-rank Dynamic Mode Decomposition (lrDMD), for data-driven reduced-order modeling of fluid flows. While existing data-driven modeling methods fit an endomorphic linear map on a low-dimensional subspace, lrDMD approximates the flow dynamics using a linear map between different subspaces. We show that this approach leads to better reduced-order feedback controllers. We formulate a rank-constrained matrix optimization problem and propose two complementary methods to solve it. lrDMD outperforms existing methods in feedback control and optimal actuator placement. In the second part, we present a completely data-driven framework for sensor placement in fluid flows. This framework can be applied in conjunction with any reduced-order modeling technique that constructs a linear model of the flow dynamics. We formulate an optimization problem that minimizes the trace of a data-driven approximation of the estimation error covariance matrix, where the estimates are provided by a Kalman filter, and we propose an efficient adjoint-based gradient descent method to solve it. We show that sensors placed using our method lead to better performance in important applications, such as flow estimation and control, than existing data-driven sensor placement methods. In the third and final part, we propose a new method for interface tracking and reconstruction in multiphase flows using shadowgraph or back-lit imaging data. First, we show that while traditional modeling methods provide valuable information about the spatio-temporal structure of flow instabilities, they are unable to resolve spatial or temporal discontinuities, such as the liquid-gas interface, in the data. To remedy this, we propose a two-step approach that uses data-driven modeling techniques in conjunction with optical flow methods, preserving sharp interfaces and providing reliable reconstruction and short-time prediction of the flow. We apply our method to an experimental liquid jet with a co-axial air-blast atomizer using back-lit imaging. Our method accurately reconstructs and predicts the flow while preserving the sharpness of the liquid-gas interface.
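
    As background to the lrDMD method summarized above, the snippet below sketches standard exact Dynamic Mode Decomposition, which fits a single rank-limited linear map on a POD subspace. It is not the low-rank map between different subspaces proposed in the thesis; the function name and the simple rank truncation are assumptions made for this example.

```python
import numpy as np

def dmd(X, Y, rank):
    """Exact Dynamic Mode Decomposition: fit a rank-limited linear map Y ~ A X
    from snapshot pairs and return its leading eigenvalues and modes.

    X, Y: (n_states, n_snapshots) arrays with Y[:, k] the successor of X[:, k].
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vt[:rank].T
    # projection of the full linear map onto the leading POD subspace
    A_tilde = Ur.T @ Y @ Vr @ np.linalg.inv(Sr)
    eigvals, W = np.linalg.eig(A_tilde)
    # DMD modes lifted back to the full state space
    modes = Y @ Vr @ np.linalg.inv(Sr) @ W
    return eigvals, modes, A_tilde
```

    With a snapshot matrix `data` from simulation or experiment, one would call `dmd(data[:, :-1], data[:, 1:], rank)` to obtain discrete-time eigenvalues (growth rates and frequencies) and the corresponding spatial modes.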

    Machine Learning, Low-Rank Approximations and Reduced Order Modeling in Computational Mechanics

    The use of machine learning in mechanics is booming. Algorithms inspired by developments in the field of artificial intelligence today cover increasingly varied fields of application. This book illustrates recent results on coupling machine learning with computational mechanics, particularly for the construction of surrogate models or reduced order models. The articles in this compilation were presented at the EUROMECH Colloquium 597, "Reduced Order Modeling in Mechanics of Materials", held in Bad Herrenalb, Germany, from August 28th to August 31st, 2018. In this book, Artificial Neural Networks are coupled to physics-based models. The tensor format of simulation data is exploited in surrogate models or for data pruning. Various reduced order models are derived via machine learning strategies applied to simulation data. Since reduced order models introduce specific approximation errors, error estimators are also proposed. The numerical examples presented are close to real engineering problems. The reader will find this book a useful reference for tracking progress in machine learning and reduced order modeling for computational mechanics.
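
    As a simple example of the kind of data-driven surrogate modeling discussed in the book, the sketch below compresses simulation snapshots with POD and regresses the reduced coordinates against a scalar parameter. The plain polynomial regression and all names are assumptions made for illustration; the book's contributions use neural networks, tensor formats, and dedicated error estimators instead.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Proper orthogonal decomposition of a snapshot matrix (n_states x n_samples)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def fit_surrogate(params, snapshots, n_modes, degree=3):
    """Fit a surrogate: POD compression of the simulation data followed by a
    polynomial regression from a scalar parameter to the reduced coordinates."""
    V = pod_basis(snapshots, n_modes)
    coords = V.T @ snapshots                      # (n_modes, n_samples)
    coeffs = [np.polyfit(params, c, degree) for c in coords]
    return V, coeffs

def evaluate_surrogate(V, coeffs, p_new):
    """Reconstruct the full field predicted by the surrogate at parameter p_new."""
    coords_new = np.array([np.polyval(c, p_new) for c in coeffs])
    return V @ coords_new
```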

    Numerical and variational aspects of mesh parameterization and editing

    A surface parameterization is a smooth one-to-one mapping between the surface and a parametric domain. Typically, surfaces with disk topology are mapped onto the plane and genus-zero surfaces onto the sphere. As any attempt to flatten a non-trivial surface onto the plane inevitably induces a certain amount of distortion, the main concern of research on this topic is to minimize parametric distortion. This thesis presents a balanced blend of mathematical rigor and engineering intuition to address the challenges raised by the mesh parameterization problem. We study the numerical aspects of mesh parameterization in light of parallel developments in both mathematics and engineering. Furthermore, we introduce the concept of quasi-harmonic maps for reducing distortion in the fixed-boundary case and extend it to both the free-boundary and the spherical case. Thinking of parameterization more generally as the construction of one or several scalar fields on a surface, we explore the potential of this construction for mesh deformation and surface matching. We propose an 'on-surface parameterization' for guiding the deformation process and performing surface matching. A direct harmonic interpolation in the quaternion domain is also shown to give promising results for deformation transfer.
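
    The fixed-boundary setting mentioned above can be illustrated with the classical convex-combination (Tutte) parameterization, which pins the boundary and solves a discrete Laplace equation for the interior vertices; harmonic and the quasi-harmonic maps introduced in the thesis follow the same linear-system pattern with different edge weights. The dense matrices, the circular boundary, and the function name below are assumptions made for this small sketch.

```python
import numpy as np

def tutte_parameterization(n_vertices, edges, boundary):
    """Fixed-boundary planar parameterization with uniform (Tutte) weights.

    edges: iterable of (i, j) vertex index pairs of the mesh edges.
    boundary: ordered list of boundary vertex indices (a single loop).
    """
    # pin the boundary loop to the unit circle
    t = np.linspace(0.0, 2.0 * np.pi, len(boundary), endpoint=False)
    uv = np.zeros((n_vertices, 2))
    uv[boundary] = np.column_stack([np.cos(t), np.sin(t)])

    # graph Laplacian with uniform weights (cotangent or quasi-harmonic
    # weights would be substituted here for lower distortion)
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0

    interior = np.setdiff1d(np.arange(n_vertices), boundary)
    # solve L_II uv_I = -L_IB uv_B for both planar coordinates at once
    A = L[np.ix_(interior, interior)]
    b = -L[np.ix_(interior, boundary)] @ uv[boundary]
    uv[interior] = np.linalg.solve(A, b)
    return uv
```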