
    Neural networks in feedback for flow analysis, sensor placement and control

    This work presents a novel methodology for the analysis and control of nonlinear fluid systems using neural networks. The approach is demonstrated on four study cases: the Lorenz system, a modified version of the Kuramoto-Sivashinsky equation, a streamwise-periodic 2D channel flow, and a confined cylinder flow. Neural networks are trained as models to capture the complex system dynamics and to estimate equilibrium points through a Newton method enabled by backpropagation. These neural network surrogate models (NNSMs) are leveraged to train a second neural network, which is designed to act as a stabilizing closed-loop controller. The training process employs a recurrent approach, whereby the NNSM and the neural network controller (NNC) are chained in closed loop over a finite time horizon. By cycling through phases of combined random open-loop actuation and closed-loop control, an iterative training process is introduced to overcome the lack of data near equilibrium points. This approach improves the accuracy of the models in the region most critical for achieving stabilization. Through the use of L1 regularization within the loss functions, the NNSMs can also guide optimal sensor placement, reducing the number of sensors from an initial candidate set. The datasets produced during the iterative training process are also leveraged to conduct a linear stability analysis through a modified dynamic mode decomposition approach. The results demonstrate the effectiveness of computationally inexpensive neural networks in modeling, controlling, and enabling stability analysis of nonlinear systems, providing insight into system behaviour and offering potential for the stabilization of complex fluid systems. Comment: 30 pages, 22 figures, under consideration for publication.
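
    A minimal sketch of the equilibrium-finding step described above, assuming the surrogate maps a state vector to its time derivative; the function name, network sizes, and tolerances are illustrative assumptions rather than the authors' implementation:

    # Hedged sketch: Newton iteration on a trained neural surrogate model (NNSM),
    # with the Jacobian obtained by backpropagation. Interface and sizes are assumed.
    import torch

    def newton_equilibrium(nnsm, x0, n_iter=50, tol=1e-8):
        """Solve nnsm(x) = 0 starting from x0, where nnsm approximates dx/dt."""
        x = x0.clone()
        for _ in range(n_iter):
            f = nnsm(x)                                         # residual f(x)
            if f.norm() < tol:
                break
            J = torch.autograd.functional.jacobian(nnsm, x)     # Jacobian via backprop
            dx = torch.linalg.solve(J, -f)                      # Newton step
            x = (x + dx).detach()
        return x

    # Toy usage with an untrained stand-in surrogate (purely illustrative)
    nnsm = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(), torch.nn.Linear(64, 3))
    x_eq = newton_equilibrium(nnsm, torch.zeros(3))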

    Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data

    High-dimensional spatio-temporal dynamics can often be encoded in a low-dimensional subspace. Engineering applications for modeling, characterization, design, and control of such large-scale systems often rely on dimensionality reduction to make solutions computationally tractable in real time. Common existing paradigms for dimensionality reduction include linear methods, such as the singular value decomposition (SVD), and nonlinear methods, such as variants of convolutional autoencoders (CAEs). However, these encoding techniques lack the ability to efficiently represent the complexity associated with spatio-temporal data, which often requires variable geometry, non-uniform grid resolution, adaptive meshing, and/or parametric dependencies. To resolve these practical engineering challenges, we propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data. NIF consists of two modified multilayer perceptrons (MLPs): (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input complexity, including parametric dependencies, time, and sensor measurements. We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-query tasks, and improved generalization performance for sparse reconstruction. Comment: 56 pages.
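
    A minimal sketch of the two-network split described above, assuming ParameterNet emits only the weights of ShapeNet's final layer; layer widths, activations, and tensor shapes are illustrative assumptions:

    # Hedged sketch of the NIF idea: ParameterNet consumes time/parameters and
    # produces the output-layer weights of ShapeNet, which is queried mesh-free
    # at arbitrary spatial coordinates.
    import torch
    import torch.nn as nn

    class NIFSketch(nn.Module):
        def __init__(self, n_space=2, n_param=1, width=64, rank=16):
            super().__init__()
            # ShapeNet body: spatial coordinates -> latent spatial features
            self.shape_body = nn.Sequential(
                nn.Linear(n_space, width), nn.SiLU(),
                nn.Linear(width, rank), nn.SiLU(),
            )
            # ParameterNet: time/parameters -> weights and bias of ShapeNet's last layer
            self.param_net = nn.Sequential(
                nn.Linear(n_param, width), nn.SiLU(),
                nn.Linear(width, rank + 1),
            )

        def forward(self, x, t):
            feats = self.shape_body(x)          # (n_points, rank)
            wb = self.param_net(t)              # (rank + 1,)
            w, b = wb[:-1], wb[-1]
            return feats @ w + b                # scalar field at the queried points

    model = NIFSketch()
    u = model(torch.rand(100, 2), torch.tensor([0.3]))  # field sampled at 100 mesh-free points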

    Reconstructing flow from thermal wall imprint

    This thesis develops data-driven flow reconstruction methods to reconstruct the velocity of plane Couette flow from wall temperature. We performed a Direct Numerical Simulation (DNS) of a heated plane Couette flow with an imposed-flux boundary condition at the bottom wall to create a dataset. Due to the imposed flux, the temperature at the bottom wall is free and wall temperature patterns can develop. The focus of this thesis is the investigation of the strong correlation between the flow velocity and the wall temperature. We analyse their joint probability density function and cross-variance spectrum to develop a spectral linear regression model. This model successfully reconstructs wall shear stress from wall temperature, except possibly at peaks. To reconstruct flow velocity from wall temperature, we apply flow decomposition modes such as Proper Orthogonal Decomposition (POD) modes [Holmes2012ProperDecomposition]. We design test problems to develop a framework for reconstructing gappy fields with missing information. In this framework, we prescribe suitable regularisation for the under-determined gappy fields. We also develop a decomposition method, the subdomain POD method, which divides a physical domain into a number of subdomains and then applies the POD method in each subdomain individually. The subdomain POD modes are locally optimised and inherit properties of the POD modes. In both cases, namely the POD and the subdomain POD method, the reconstructions are found to be in good agreement with the flow velocity obtained from the DNS. To develop data-driven methods with imposed physical constraints, we propose a linear dynamical model based on the Orr-Sommerfeld-Squire system [Kim2007, Murray2006] and the scalar transport equation. This model successfully reconstructs some of the key flow structures at z^+ = 35.
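
    A minimal sketch of a regularised gappy reconstruction in the spirit of the framework above, assuming a ridge (Tikhonov) penalty on the POD coefficients; the variable names and penalty weight are illustrative assumptions, not the thesis implementation:

    # Hedged sketch: reconstruct a full field from sparse measurements by solving a
    # ridge-regularised least-squares problem for the POD mode coefficients.
    import numpy as np

    def gappy_pod_reconstruct(phi, observed_idx, y_obs, alpha=1e-3):
        """phi: (n, r) POD modes; observed_idx: indices of measured points; y_obs: measurements."""
        phi_obs = phi[observed_idx, :]                          # modes restricted to observed points
        A = phi_obs.T @ phi_obs + alpha * np.eye(phi.shape[1])  # regularised normal equations
        a = np.linalg.solve(A, phi_obs.T @ y_obs)               # mode coefficients
        return phi @ a                                          # reconstructed full field

    # Toy usage: 500-point field, 10 POD modes, 60 observed locations
    rng = np.random.default_rng(0)
    phi = np.linalg.qr(rng.standard_normal((500, 10)))[0]
    idx = rng.choice(500, size=60, replace=False)
    truth = phi @ rng.standard_normal(10)
    rec = gappy_pod_reconstruct(phi, idx, truth[idx])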

    Unified Long-Term Time-Series Forecasting Benchmark

    To support the advancement of machine learning methods for predicting time-series data, we present a comprehensive dataset designed explicitly for long-term time-series forecasting. We incorporate a collection of datasets obtained from diverse, dynamic systems and real-life records. Each dataset is standardized by dividing it into training and test trajectories with predetermined lookback lengths. We include trajectories of length up to 2000 to ensure a reliable evaluation of long-term forecasting capabilities. To determine the most effective model in diverse scenarios, we conduct an extensive benchmarking analysis using classical and state-of-the-art models, namely LSTM, DeepAR, NLinear, N-Hits, PatchTST, and LatentODE. Our findings reveal intriguing performance comparisons among these models, highlighting the dataset-dependent nature of model effectiveness. Notably, we introduce a custom latent NLinear model and enhance DeepAR with a curriculum learning phase. Both consistently outperform their vanilla counterparts.
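
    A minimal sketch of the standard NLinear baseline referenced above (the paper's custom latent variant is not reproduced here); the lookback length, horizon, and channel count are illustrative assumptions:

    # Hedged sketch of NLinear: subtract each series' last lookback value, apply a
    # single linear map over the time axis, then add the last value back.
    import torch
    import torch.nn as nn

    class NLinear(nn.Module):
        def __init__(self, lookback=336, horizon=96):
            super().__init__()
            self.proj = nn.Linear(lookback, horizon)

        def forward(self, x):                     # x: (batch, lookback, n_channels)
            last = x[:, -1:, :]                   # per-series normalisation value
            y = self.proj((x - last).transpose(1, 2)).transpose(1, 2)
            return y + last                       # (batch, horizon, n_channels)

    model = NLinear()
    forecast = model(torch.randn(8, 336, 3))      # batch of 8 series, 3 channels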