Prediction of Reynolds Stresses in High-Mach-Number Turbulent Boundary Layers using Physics-Informed Machine Learning
Modeled Reynolds stress is a major source of model-form uncertainties in
Reynolds-averaged Navier-Stokes (RANS) simulations. Recently, a
physics-informed machine-learning (PIML) approach has been proposed for
reconstructing the discrepancies in RANS-modeled Reynolds stresses. The merits
of the PIML framework have been demonstrated in several canonical incompressible
flows. However, its performance on high-Mach-number flows remains unclear.
In this work, we use the PIML approach to predict the discrepancies in
RANS-modeled Reynolds stresses in high-Mach-number flat-plate turbulent boundary
layers by using an existing DNS database. Specifically, the discrepancy
function is first constructed using a DNS training flow and then used to
correct RANS-predicted Reynolds stresses under flow conditions different from
the DNS. The machine-learning technique is shown to significantly improve
RANS-modeled turbulent normal stresses, the turbulent kinetic energy, and the
Reynolds-stress anisotropy. Improvements are consistently observed when
different training datasets are used. Moreover, a high-dimensional
visualization technique and distance metrics are used to provide a priori
assessment of prediction confidence based only on RANS simulations. This study
demonstrates that the PIML approach is a computationally affordable technique
for improving the accuracy of RANS-modeled Reynolds stresses for
high-Mach-number turbulent flows when there is a lack of experiments and
high-fidelity simulations.
Comment: 28 pages, 12 figures
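The discrepancy-function workflow described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature names and the synthetic "discrepancy" field are hypothetical, and a simple k-nearest-neighbor regressor stands in for the regression model (PIML studies typically use random forests) so the sketch needs only NumPy.

```python
import numpy as np

def train_discrepancy_model(features_train, discrepancy_train):
    """Store training pairs from the DNS training flow.
    A k-NN regressor stands in for the random-forest regressor
    typically used in PIML discrepancy studies."""
    return features_train, discrepancy_train

def predict_discrepancy(model, features_test, k=3):
    """Predict the Reynolds-stress discrepancy at new RANS states
    by averaging the k nearest training samples in feature space."""
    X, y = model
    preds = []
    for q in features_test:
        dist = np.linalg.norm(X - q, axis=1)
        idx = np.argsort(dist)[:k]
        preds.append(y[idx].mean(axis=0))
    return np.array(preds)

# Hypothetical mean-flow features (e.g. strain-rate magnitude, wall distance)
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 2))
# Synthetic scalar "discrepancy" standing in for DNS-minus-RANS stress
y_train = np.sin(3.0 * X_train[:, 0]) * X_train[:, 1]

model = train_discrepancy_model(X_train, y_train)
X_test = rng.uniform(size=(5, 2))
corrected = predict_discrepancy(model, X_test)  # discrepancy at test states
```

In the actual framework the predicted discrepancy is added back to the RANS-modeled Reynolds stress at flow conditions different from the training DNS.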
Assessment of Neural Network Augmented Reynolds Averaged Navier Stokes Turbulence Model in Extrapolation Modes
A machine-learned (ML) model is developed to enhance the accuracy of the
turbulence transport equations of a Reynolds-Averaged Navier-Stokes (RANS)
solver and applied to the periodic hill test case, which involves complex flow
regimes such as an attached boundary layer, a shear layer, and separation and
reattachment.
The accuracy of the model is investigated in extrapolation modes, i.e., the
test case has a much larger separation bubble and higher turbulence than the
training cases. A parametric study is also performed to understand the effect
of network hyperparameters on training and model accuracy and to quantify the
uncertainty in model accuracy due to the non-deterministic nature of the neural
network training. The study revealed that, for any network, a
smaller-than-optimal mini-batch size results in overfitting, while a
larger-than-optimal batch size
reduces accuracy. Data clustering is found to be an efficient approach to
prevent the machine-learned model from over-training on more prevalent flow
regimes, and results in a model with similar accuracy using almost one-third of
the training dataset. Feature importance analysis reveals that turbulence
production is correlated with shear strain in the free-shear region; with shear
strain and a Reynolds number based on wall distance and local velocity in the
boundary-layer regime; and with the streamwise velocity gradient in the
accelerating flow regime. The flow direction is found to be key in identifying
flow separation and reattachment regime. Machine-learned models perform poorly
in extrapolation mode, wherein the prediction shows less than 10% correlation
with Direct Numerical Simulation (DNS). A priori tests reveal that model
predictability improves significantly when the hill dataset is partially added
during training, i.e., in a partial-extrapolation mode: adding only 5% of the
hill data increases the correlation with DNS to 80%.
Comment: 50 pages, 18 figures
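The data-clustering idea above, preventing over-training on the most prevalent flow regime by balancing the training set across regimes, can be sketched in a few lines. This is a hedged illustration with synthetic data: the two Gaussian blobs standing in for a "boundary-layer" and a "separation" regime are hypothetical, and a minimal Lloyd's k-means replaces whatever clustering the study actually used.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: returns a cluster label per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def balanced_subsample(X, labels, per_cluster, seed=0):
    """Draw at most per_cluster samples from each cluster, so that a
    dominant flow regime cannot swamp the training set."""
    rng = np.random.default_rng(seed)
    idx = []
    for j in np.unique(labels):
        members = np.flatnonzero(labels == j)
        take = min(per_cluster, len(members))
        idx.extend(rng.choice(members, take, replace=False))
    return np.sort(np.array(idx))

rng = np.random.default_rng(1)
# 90% of samples from one hypothetical regime, 10% from another
X = np.vstack([rng.normal(0.0, 0.1, size=(900, 3)),
               rng.normal(1.0, 0.1, size=(100, 3))])
labels = kmeans(X, k=2)
subset = balanced_subsample(X, labels, per_cluster=100)
```

Training on `X[subset]` rather than `X` is what lets a model of similar accuracy be learned from roughly one-third of the original data, as the abstract reports.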
Evaluation of physics constrained data-driven methods for turbulence model uncertainty quantification
In order to achieve a virtual certification process and robust designs for
turbomachinery, the uncertainty bounds for Computational Fluid Dynamics have to
be known. The formulation of turbulence closure models constitutes a major
source of the overall uncertainty in Reynolds-averaged Navier-Stokes
simulations. We
discuss the common practice of applying a physics constrained eigenspace
perturbation of the Reynolds stress tensor in order to account for the model
form uncertainty of turbulence models. Since the basic methodology often leads
to overly generous uncertainty estimates, we extend a recent approach of adding
a machine learning strategy. The data-driven method is motivated by the need
to detect flow regions that are prone to poor turbulence-model prediction
accuracy. In this way, any user input for choosing the degree of uncertainty
should become obsolete. This work specifically investigates an approach that
provides an a priori estimate of prediction confidence when no accurate data
are available to judge the prediction. The flow around the NACA 4412
airfoil at near-stall conditions demonstrates the successful application of the
data-driven eigenspace perturbation framework. Furthermore, we highlight the
objectives and limitations of the underlying methodology.
RANS Turbulence Model Development using CFD-Driven Machine Learning
This paper presents a novel CFD-driven machine learning framework to develop
Reynolds-averaged Navier-Stokes (RANS) models. The CFD-driven training is an
extension of the gene expression programming method (Weatheritt and Sandberg,
2016), but crucially the fitness of candidate models is now evaluated by
running RANS calculations in an integrated way, rather than using an algebraic
function. Unlike other data-driven methods that fit the Reynolds stresses of
trained models to high-fidelity data, the cost function for the CFD-driven
training can be defined based on any flow feature from the CFD results. This
extends the applicability of the method especially when the training data is
limited. Furthermore, the resulting model, which is the one providing the most
accurate CFD results at the end of the training, inherently shows good
performance in RANS calculations. To demonstrate the potential of this new
method, the CFD-driven machine learning approach is applied to model
development for wake mixing in turbomachines. A new model is trained based on a
high-pressure turbine case and then tested for three additional cases, all
representative of modern turbine nozzles. Despite the geometric configurations
and operating conditions being different among the cases, the predicted wake
mixing profiles are significantly improved in all of these a posteriori tests.
Moreover, the model equation is given explicitly and is available for
analysis; it can thus be deduced that the enhanced wake prediction is
predominantly due to the extra diffusion introduced by the CFD-driven model.
Comment: Accepted by Journal of Computational Physics
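The CFD-driven training loop described above, evaluating each candidate model's fitness by running a flow calculation and comparing a chosen flow feature against reference data, can be sketched schematically. Everything here is a stand-in: `run_rans` is a cheap analytic surrogate playing the role of the RANS solver, the "wake profile" is synthetic, and a toy select-and-mutate loop replaces gene expression programming; only the structure (fitness from solver output, not from stresses fitted to high-fidelity data) mirrors the method.

```python
import numpy as np

def run_rans(coeffs, x):
    """Stand-in for a RANS solve with a candidate closure model.
    A cheap analytic surrogate plays the role of the CFD solver."""
    return coeffs[0] * np.exp(-coeffs[1] * x ** 2)

def fitness(coeffs, x, wake_ref):
    """Cost defined on a flow feature (the wake profile), rather than
    on Reynolds stresses fitted directly to high-fidelity data."""
    return np.mean((run_rans(coeffs, x) - wake_ref) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 64)
wake_ref = 0.8 * np.exp(-1.5 * x ** 2)  # synthetic reference wake profile

# Toy evolutionary loop standing in for gene expression programming
pop = rng.uniform(0.1, 2.0, size=(20, 2))
for _ in range(40):
    costs = np.array([fitness(c, x, wake_ref) for c in pop])
    parents = pop[np.argsort(costs)[:5]]  # keep the fittest candidates
    children = parents[rng.integers(0, 5, 15)] + rng.normal(0.0, 0.05, (15, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(c, x, wake_ref) for c in pop])]
```

Because each candidate is scored by the solver output it will ultimately produce, the surviving model inherently performs well in a posteriori RANS calculations, which is the key advantage the abstract claims over fitting stresses algebraically.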