27 research outputs found

    Investigation of the generalization capability of a generative adversarial network for large eddy simulation of turbulent premixed reacting flows

    In the past decades, Deep Learning (DL) frameworks have demonstrated excellent performance in modeling nonlinear interactions and are a promising technique to move beyond physics-based models. In this context, super-resolution techniques may provide an accurate subfilter-scale (SFS) closure model for Large Eddy Simulations (LES) of premixed combustion. However, DL models need to perform accurately in a variety of physical regimes and generalize well beyond their training conditions. In this work, a super-resolution Generative Adversarial Network (GAN) is proposed as a closure model for the unresolved subfilter-stress and scalar-flux tensors of the filtered reactive Navier-Stokes equations solved in LES. The model, trained on a premixed methane/air jet flame, is evaluated a-priori on similar configurations at different Reynolds and Karlovitz numbers. The GAN generalizes well at both lower and higher Reynolds numbers and outperforms existing algebraic models when the ratio between the filter size and the Kolmogorov scale is preserved. Moreover, extrapolation to a higher Karlovitz number is investigated, indicating that the ratio between the filter size and the thermal flame thickness may not need to be conserved in order to achieve high correlation in the SFS field. Generalization studies on substantially different flame conditions show that the model retains predictive ability when the generalization criterion is satisfied. Finally, the reconstruction of a scalar quantity different from that used during training is evaluated, revealing that the model is able to reconstruct scalar fields with large gradients that were not explicitly seen in training. These a-priori investigations assess whether out-of-sample predictions are feasible in the first place, provide insight into the quantities that must be conserved for the model to perform well across regimes, and represent a crucial step toward future embedding into LES numerical solvers.
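
    A minimal sketch of the a-priori test described above, assuming a periodic DNS velocity field stored as NumPy arrays; the trained GAN is not reproduced here and is stood in for by any callable (named generator below) that maps a filtered field to a super-resolved one. The exact subfilter-scale stress is obtained by box-filtering the DNS field, and its correlation with the stress recomputed from the super-resolved field serves as the a-priori score.

        # A-priori SFS stress test (sketch). Assumes periodic DNS data; the
        # super-resolution model is any callable `generator` mapping a
        # filtered field to a super-resolved one (hypothetical stand-in).
        import numpy as np
        from scipy.ndimage import uniform_filter

        def box_filter(f, width):
            # Top-hat (box) filter of `width` cells with periodic boundaries.
            return uniform_filter(f, size=width, mode="wrap")

        def sfs_stress(u, v, width):
            # Exact SFS stress component: tau_uv = bar(u*v) - bar(u)*bar(v).
            return box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)

        def apriori_correlation(u_dns, v_dns, generator, width=8):
            # Pearson correlation between the exact SFS stress and the stress
            # recomputed from the generator's super-resolved fields.
            u_bar, v_bar = box_filter(u_dns, width), box_filter(v_dns, width)
            u_sr, v_sr = generator(u_bar), generator(v_bar)
            tau_exact = sfs_stress(u_dns, v_dns, width)
            tau_model = sfs_stress(u_sr, v_sr, width)
            return np.corrcoef(tau_exact.ravel(), tau_model.ravel())[0, 1]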

    Machine Learning and Its Application to Reacting Flows

    This open access book introduces and explains machine learning (ML) algorithms and techniques developed for statistical inference on complex processes or systems and their applications to simulations of chemically reacting turbulent flows. These two fields, ML and turbulent combustion, each have a large body of work and knowledge of their own, and this book brings them together and explains the complexities and challenges involved in applying ML techniques to simulate and study reacting flows. This matters for the world’s total primary energy supply (TPES): more than 90% of this supply comes from combustion technologies, and the environmental effects of combustion are non-negligible. Although alternative technologies based on renewable energies are emerging, their share of the TPES is currently less than 5%, and a complete paradigm shift would be needed to replace combustion sources. Whether this is practical or not is entirely a different question, and an answer to this question depends on the respondent. However, a pragmatic analysis suggests that the combustion share of TPES is likely to remain above 70% even by 2070. Hence, it is prudent to take advantage of ML techniques to improve combustion science and technology so that efficient and “greener” combustion systems that are friendlier to the environment can be designed. The book covers the current state of the art in these two topics and outlines the challenges involved, as well as the merits and drawbacks of using ML for turbulent combustion simulations, including avenues that can be explored to overcome the challenges. The required mathematical equations and background are discussed, with ample references for readers who wish to find further detail. This book is unique in its coverage of topics, ranging from big data analysis and machine learning algorithms to their applications in combustion science and system design for energy generation.

    Influence of adversarial training on super-resolution turbulence reconstruction

    Supervised super-resolution deep convolutional neural networks (CNNs) have gained significant attention for their potential in reconstructing velocity and scalar fields in turbulent flows. Despite their popularity, CNNs currently struggle to accurately reproduce high-frequency and small-scale features, and tests of their generalizability to out-of-sample flows are not widespread. Generative adversarial networks (GANs), which consist of two distinct neural networks (NNs), a generator and a discriminator, are a promising alternative, allowing for both semi-supervised and unsupervised training. The difference in the flow fields produced by these two NN architectures has not been thoroughly investigated, and a comprehensive understanding of the discriminator's role has yet to be developed. This study assesses the effectiveness of unsupervised adversarial training in GANs for turbulence reconstruction in forced homogeneous isotropic turbulence. GAN-based architectures are found to outperform supervised CNNs for turbulent flow reconstruction in in-sample cases. The reconstruction accuracy of both architectures diminishes for out-of-sample cases, though the GAN's discriminator network significantly improves the generator's out-of-sample robustness through an additional unsupervised training step with large eddy simulation input fields and a dynamic selection of the most suitable upsampling factor. These enhancements improve the generator's ability to reconstruct small-scale gradients, turbulence intermittency, and velocity-gradient probability density functions. The extrapolation capability of the GAN-based model is demonstrated for out-of-sample flows at higher Reynolds numbers. Based on these findings, incorporating discriminator-based training is recommended to enhance the reconstruction capability of super-resolution CNNs.
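
    The supervised-versus-adversarial distinction discussed above can be made concrete with a short PyTorch sketch. The Generator and Discriminator below are placeholder CNNs, not the architectures used in the study; the point is only that the GAN objective adds a discriminator-driven term to the pixel-wise loss a supervised CNN would minimize alone.

        # Supervised CNN loss vs. GAN loss (sketch, PyTorch). Placeholder
        # networks; only the structure of the losses mirrors the discussion.
        import torch
        import torch.nn as nn

        class Generator(nn.Module):
            # Upsamples a coarse (B, 3, H, W) field by a factor of 4.
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                    nn.Upsample(scale_factor=4, mode="nearest"),
                    nn.Conv2d(64, 3, 3, padding=1),
                )
            def forward(self, x):
                return self.net(x)

        class Discriminator(nn.Module):
            # Scores a field as DNS-like (real) or reconstructed (fake).
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
                )
            def forward(self, x):
                return self.net(x)

        G, D = Generator(), Discriminator()
        bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

        def generator_loss(lr, hr, adversarial_weight=1e-3):
            sr = G(lr)
            logits = D(sr)
            pixel = mse(sr, hr)                         # supervised CNN term
            adv = bce(logits, torch.ones_like(logits))  # discriminator-driven term
            return pixel + adversarial_weight * adv     # GAN = supervised + adversarial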

    PhySRNet: Physics informed super-resolution network for application in computational solid mechanics

    Traditional approaches based on finite element analyses have been successfully used to predict the macro-scale behavior of heterogeneous materials (composites, multicomponent alloys, and polycrystals) widely used in industrial applications. However, this requires the mesh size to be smaller than the characteristic length scale of the microstructural heterogeneities in the material, leading to computationally expensive and time-consuming calculations. Recent advances in deep learning based image super-resolution (SR) algorithms open up a promising avenue to tackle this computational challenge by enabling researchers to enhance the spatio-temporal resolution of data obtained from coarse mesh simulations. However, technical challenges remain in developing a high-fidelity SR model for application to computational solid mechanics, especially for materials undergoing large deformation. This work develops a physics-informed deep learning based super-resolution framework (PhySRNet) that enables reconstruction of high-resolution deformation fields (displacement and stress) from their low-resolution counterparts without requiring high-resolution labeled data. We design a synthetic case study to illustrate the effectiveness of the proposed framework and demonstrate that the super-resolved fields match the accuracy of an advanced numerical solver running at 400 times the coarse mesh resolution while simultaneously satisfying the (highly nonlinear) governing laws. The approach opens the door to applying machine learning and traditional numerical approaches in tandem to reduce computational complexity and accelerate scientific discovery and engineering design.
    Comment: 14 pages, 3 figures, arXiv admin note: text overlap with arXiv:2112.0867
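
    A hedged sketch of the label-free, physics-informed loss idea described above: the super-resolution network is trained against (i) consistency with the coarse input after downsampling and (ii) the residual of a governing equation evaluated on the super-resolved field, so no high-resolution labels are needed. A Laplace residual stands in here for the paper's large-deformation equilibrium and constitutive equations, and the model is any coarse-to-fine network.

        # Physics-informed SR loss without HR labels (sketch, PyTorch).
        import torch
        import torch.nn.functional as F

        def laplacian(u, h):
            # 5-point finite-difference Laplacian of a (B, 1, H, W) field.
            k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                             device=u.device, dtype=u.dtype).view(1, 1, 3, 3) / h ** 2
            return F.conv2d(u, k)

        def physics_informed_sr_loss(model, coarse, scale=4, h=1.0):
            fine = model(coarse)                          # (B, 1, s*H, s*W)
            down = F.avg_pool2d(fine, kernel_size=scale)  # project back to the coarse grid
            data_term = F.mse_loss(down, coarse)          # agree with the coarse input
            residual = laplacian(fine, h / scale)         # stand-in governing equation
            physics_term = residual.pow(2).mean()         # penalize its violation
            return data_term + physics_term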

    Redefining Super-Resolution: Fine-mesh PDE predictions without classical simulations

    In Computational Fluid Dynamics (CFD), coarse mesh simulations offer computational efficiency but often lack precision. Applying conventional super-resolution to these simulations poses a significant challenge due to the fundamental contrast between downsampling high-resolution images and authentically emulating low-resolution physics: the former conserves more of the underlying physics than real-world scenarios usually allow. We propose a novel definition of super-resolution tailored to PDE-based problems. Instead of simply downsampling from a high-resolution dataset, we use coarse-grid simulated data as input and predict fine-grid simulated outcomes. Employing a physics-infused UNet upscaling method, we demonstrate its efficacy across various 2D CFD problems such as discontinuity detection in Burgers' equation, methane combustion, and fouling in industrial heat exchangers. Our method enables the generation of fine-mesh solutions while bypassing traditional simulation, ensuring considerable computational savings and fidelity to the original ground-truth outcomes. By training with diverse boundary conditions, we further establish the robustness of our method, paving the way for its broad application in engineering and scientific CFD solvers.
    Comment: Accepted at Machine Learning and the Physical Sciences Workshop, NeurIPS 202
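
    The redefined data pairing can be sketched as follows: training inputs come from an actual coarse-grid solver run rather than from downsampling the fine-grid solution, and the fine-grid simulation of the same conditions is the target. The names coarse_solver, fine_solver, and pde_residual below are hypothetical placeholders, not the paper's implementation, and any UNet-style network can play the role of the model.

        # Coarse-to-fine training pairs and a training step (sketch, PyTorch).
        import torch
        import torch.nn.functional as F
        from torch.utils.data import Dataset

        class CoarseToFinePairs(Dataset):
            # Pairs a coarse simulation with the fine simulation of the same
            # initial/boundary conditions -- never downsample(fine) as input.
            def __init__(self, conditions, coarse_solver, fine_solver):
                self.pairs = [(coarse_solver(c), fine_solver(c)) for c in conditions]
            def __len__(self):
                return len(self.pairs)
            def __getitem__(self, i):
                coarse, fine = self.pairs[i]
                return torch.as_tensor(coarse).float(), torch.as_tensor(fine).float()

        def training_step(model, optimizer, coarse, fine, pde_residual=None):
            # One step: pixel loss on the fine-grid target, optionally plus a
            # physics-infused residual term on the prediction.
            optimizer.zero_grad()
            pred = model(coarse)
            loss = F.mse_loss(pred, fine)
            if pde_residual is not None:
                loss = loss + pde_residual(pred).pow(2).mean()
            loss.backward()
            optimizer.step()
            return loss.item()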