    Sub-grid modelling for two-dimensional turbulence using neural networks

    In this investigation, a data-driven turbulence closure framework is introduced and deployed for the sub-grid modelling of Kraichnan turbulence. The novelty of the proposed method lies in using snapshots from high-fidelity numerical data to inform artificial neural networks that predict the turbulence source term from localized grid-resolved information. In particular, our proposed methodology establishes a map from inputs given by stencils of the vorticity and the streamfunction, together with information from two well-known eddy-viscosity kernels, to the sub-grid vorticity forcing, which we predict in a temporally and spatially dynamic fashion. Our study is both a-priori and a-posteriori in nature. In the former, we present an extensive hyper-parameter optimization analysis in addition to learning quantification through probability density function-based validation of sub-grid predictions. In the latter, we analyse the performance of our framework for flow evolution in a classical decaying two-dimensional turbulence test case in the presence of errors related to temporal and spatial discretization. Statistical assessments in the form of angle-averaged kinetic energy spectra demonstrate the promise of the proposed methodology for sub-grid quantity inference. It is also observed that some measure of a-posteriori error must be considered during optimal model selection for greater accuracy. The results in this article thus represent a promising development in the formalization of a framework for generating heuristic-free turbulence closures from data.
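
    The abstract gives no code; the sketch below is a minimal numpy illustration of the kind of pointwise mapping described: a small fully connected network taking 3x3 stencils of vorticity and streamfunction plus two eddy-viscosity kernel values as inputs and returning a sub-grid forcing value. The layer sizes, the 20-feature layout, and the names init_mlp/forward are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch (not the paper's code): an MLP that maps localized
# grid-resolved inputs -- 3x3 stencils of vorticity and streamfunction plus
# two eddy-viscosity kernel values -- to a pointwise sub-grid vorticity forcing.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """He-initialized weights for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU hidden layers, linear output (the predicted forcing)."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

# Input: 9 vorticity + 9 streamfunction stencil values + 2 eddy-viscosity
# kernel values = 20 features per grid point; output: 1 forcing value.
params = init_mlp([20, 50, 50, 1])
stencil_batch = rng.standard_normal((128, 20))  # stand-in for DNS snapshots
forcing = forward(params, stencil_batch)        # shape (128, 1)
```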

    Multiscale Computations on Neural Networks: From the Individual Neuron Interactions to the Macroscopic-Level Analysis

    We show how the Equation-Free approach for multiscale computations can be exploited to systematically study the dynamics of neural interactions on a random regular connected graph under a pairwise representation perspective. Using an individual-based microscopic simulator as a black-box coarse-grained timestepper, and with the aid of simulated annealing, we compute the coarse-grained equilibrium bifurcation diagram and analyze the stability of the stationary states, sidestepping the necessity of obtaining explicit closures at the macroscopic level. We also exploit the scheme to perform a rare-events analysis by estimating an effective Fokker-Planck equation describing the evolving probability density function of the corresponding coarse-grained observables.
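
    As a rough illustration of the Equation-Free idea described above, the sketch below wraps a toy black-box microscopic simulator in a lift-evolve-restrict coarse timestepper and iterates it to a coarse equilibrium. The node dynamics, the rates, and the function names lift/micro_step/restrict are invented stand-ins; the paper's individual-based simulator, graph structure, and simulated-annealing machinery are not reproduced.

```python
# Illustrative Equation-Free loop: the microscopic simulator is treated as a
# black box; macroscopic information is obtained without an explicit closure.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # number of nodes in the toy microscopic model

def lift(rho):
    """Create a microscopic state consistent with the coarse observable rho."""
    return rng.random(N) < rho  # Boolean node states

def micro_step(state, p_on=0.02, p_off=0.05):
    """Toy black-box microscopic update (stand-in for the real simulator)."""
    turn_on = (~state) & (rng.random(N) < p_on)
    turn_off = state & (rng.random(N) < p_off)
    return (state | turn_on) & ~turn_off

def restrict(state):
    """Coarse observable: fraction of active nodes."""
    return state.mean()

def coarse_timestepper(rho, n_micro=50):
    state = lift(rho)
    for _ in range(n_micro):
        state = micro_step(state)
    return restrict(state)

# Coarse equilibria solve rho = Phi(rho); locate one by damped iteration.
rho = 0.5
for _ in range(200):
    rho += 0.5 * (coarse_timestepper(rho) - rho)
print(f"coarse equilibrium rho ~ {rho:.3f}")
```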

    Fast Neural Network Predictions from Constrained Aerodynamics Datasets

    Incorporating computational fluid dynamics in the design process of jets, spacecraft, or gas turbine engines is often challenged by the required computational resources and simulation time, which depend on the chosen physics-based computational models and grid resolutions. An ongoing problem in the field is how to simulate these systems faster but with sufficient accuracy. While many approaches involve simplified models of the underlying physics, others are model-free and make predictions based only on existing simulation data. We present a novel model-free approach in which we reformulate the simulation problem to effectively increase the size of constrained pre-computed datasets and introduce a neural network architecture (called a cluster network) with an inductive bias well suited to highly nonlinear computational fluid dynamics solutions. Compared to the state of the art in model-based approximations, we show that our approach is nearly as accurate, an order of magnitude faster, and easier to apply. Furthermore, we show that our method outperforms other model-free approaches.
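
    The abstract does not specify the cluster network's internals, so the sketch below shows only one plausible reading of the idea: partition the pre-computed solutions into clusters and fit a cheap local regressor per cluster, routing each query to its nearest cluster. The synthetic dataset, the KMeans/Ridge choices, and the predict helper are assumptions made for illustration, not the paper's architecture.

```python
# Hedged sketch of a cluster-then-local-model surrogate for CFD outputs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (500, 3))                  # flow/design parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - X[:, 2]  # stand-in for a CFD output

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
local = [Ridge(alpha=1e-3).fit(X[km.labels_ == i], y[km.labels_ == i])
         for i in range(k)]

def predict(x):
    """Route the query to its cluster's local model."""
    i = km.predict(x.reshape(1, -1))[0]
    return local[i].predict(x.reshape(1, -1))[0]

print(predict(np.array([0.1, -0.2, 0.3])))
```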

    RANS Equations with Explicit Data-Driven Reynolds Stress Closure Can Be Ill-Conditioned

    Reynolds-averaged Navier-Stokes (RANS) simulations with turbulence closure models continue to play important roles in industrial flow simulations. However, the commonly used linear eddy viscosity models are intrinsically unable to handle flows with non-equilibrium turbulence. Reynolds stress models, on the other hand, are plagued by their lack of robustness. Recent studies in plane channel flows found that even substituting Reynolds stresses from direct numerical simulation (DNS) databases, with errors below 0.5%, into the RANS equations leads to velocities with large errors (up to 35%). While such an observation may have only marginal relevance to traditional Reynolds stress models, it is disturbing for the recently emerging data-driven models that treat the Reynolds stress as an explicit source term in the RANS equations, as it suggests that the RANS equations with such models can be ill-conditioned. So far, a rigorous analysis of the conditioning of such models is still lacking. As such, in this work we propose a metric based on a local condition number function for a priori evaluation of the conditioning of the RANS equations. We further show that the ill-conditioning cannot be explained by the global matrix condition number of the discretized RANS equations. Comprehensive numerical tests are performed on turbulent channel flows at various Reynolds numbers and additionally on two complex flows, i.e., flow over periodic hills and flow in a square duct. Results suggest that the proposed metric can adequately explain observations in previous studies, i.e., deteriorated model conditioning with increasing Reynolds number and better conditioning of the implicit treatment of the Reynolds stress compared to the explicit treatment. This metric can play a critical role in the future development of data-driven turbulence models by enforcing conditioning as a requirement on these models.
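
    The amplification mechanism can be made concrete on a 1D toy problem: when the Reynolds-stress divergence nearly balances the mean forcing (the high-Reynolds-number regime), the velocity is a small residual of two large terms, and a sub-1% stress error is strongly amplified. The sketch below is an assumption-laden toy (the operator, the stress profile, and the 0.999 balance factor are invented), not the paper's local condition number metric.

```python
# Toy probe of RANS conditioning under an explicit Reynolds-stress closure:
# solve A u = d(tau)/dx - g and measure how a 0.5% stress error is amplified.
import numpy as np

n = 200
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h

# Discrete second-derivative operator with homogeneous Dirichlet conditions.
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

def solve(tau, g=1.0):
    """Velocity from an explicitly substituted 'Reynolds stress' tau."""
    return np.linalg.solve(A, np.gradient(tau, h) - g)

# High-Re-like regime: the stress divergence nearly balances the forcing g,
# so the velocity is a small residual of two large, cancelling terms.
tau = 0.999 * x
u = solve(tau)
u_pert = solve(1.005 * tau)  # 0.5% error, as in the DNS-substitution studies

rel_err = np.abs(u_pert - u).max() / np.abs(u).max()
print(f"0.5% stress error -> {100 * rel_err:.0f}% velocity error")
```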

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.