Systematic Study of Accuracy of Wall-Modeled Large Eddy Simulation using Uncertainty Quantification Techniques
The predictive accuracy of wall-modeled large eddy simulation is studied by
systematic simulation campaigns of turbulent channel flow. The effect of wall
model, grid resolution and anisotropy, numerical convective scheme and
subgrid-scale modeling is investigated. All of these factors affect the
resulting accuracy, and their action is to a large extent intertwined. The wall
model is of the wall-stress type, and its sensitivity to location of velocity
sampling, as well as law of the wall's parameters is assessed. For efficient
exploration of the model parameter space (anisotropic grid resolution and wall
model parameter values), generalized polynomial chaos expansions are used to
construct metamodels for the responses which are taken to be measures of the
predictive error in quantities of interest (QoIs). The QoIs include the mean
wall shear stress and profiles of the mean velocity, the turbulent kinetic
energy, and the Reynolds shear stress. DNS data is used as reference. Within
the tested framework, a particular second-order accurate CFD code (OpenFOAM),
the results provide ample support for grid and method parameters
recommendations which are proposed in the present paper, and which provide good
results for the QoIs. Notably, good results are obtained with a grid with
isotropic (cubic) hexahedral cells, with a fixed number of cells per $\delta$, where
$\delta$ is the channel half-height (or thickness of the turbulent boundary
layer). The importance of providing enough numerical dissipation to obtain
accurate QoIs is demonstrated. A single main channel flow case is investigated,
but extension to a wide range of $Re$-numbers is
considered. Use of other numerical methods and software would likely modify
these recommendations, at least slightly, but the proposed framework is fully
applicable to investigate this as well.
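The gPCE metamodelling step described above can be sketched in a few lines: for a uniformly distributed input parameter, Legendre polynomials form the orthogonal gPC basis, and the expansion coefficients can be fitted to sampled responses by least squares. The error response below is a synthetic stand-in for the actual WMLES campaign data; the polynomial degree and sample count are illustrative choices.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical error response of a QoI vs. a normalized grid-resolution
# parameter xi in [-1, 1] (stands in for actual WMLES campaign data).
def error_response(xi):
    return 0.05 + 0.03 * xi + 0.02 * xi**2

# Training samples (in a real study, each sample is one CFD run).
xi_train = np.linspace(-1.0, 1.0, 9)
y_train = error_response(xi_train)

# Fit a degree-4 Legendre expansion (gPC basis for a uniform input)
# to the sampled responses by least squares.
coeffs = legendre.legfit(xi_train, y_train, deg=4)

# The metamodel can now be evaluated cheaply anywhere in the range.
xi_test = np.linspace(-1.0, 1.0, 101)
y_meta = legendre.legval(xi_test, coeffs)
max_err = np.max(np.abs(y_meta - error_response(xi_test)))
print(max_err)
```

Once fitted, the metamodel replaces the solver when exploring the parameter space, which is what makes the systematic campaigns affordable.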
A Library for Wall-Modelled Large-Eddy Simulation Based on OpenFOAM Technology
This work presents a feature-rich open-source library for wall-modelled
large-eddy simulation (WMLES), which is a turbulence modelling approach that
reduces the computational cost of traditional (wall-resolved) LES by
introducing special treatment of the inner region of turbulent boundary layers
(TBLs). The library is based on OpenFOAM and enhances the general-purpose LES
solvers provided by this software with state-of-the-art wall modelling
capability. In particular, the included wall models belong to the class of
wall-stress models that account for the under-resolved turbulent structures by
predicting and enforcing the correct local value of the wall shear stress. A
review of this approach is given, followed by a detailed description of the
library, discussing its functionality and extensible design. The included
wall-stress models are presented, based on both algebraic and ordinary
differential equations. To demonstrate the capabilities of the library, it was
used for WMLES of turbulent channel flow and the flow over a backward-facing
step (BFS). For each flow, a systematic simulation campaign was performed, in
order to find a combination of numerical schemes, grid resolution and wall
model type that would yield a good predictive accuracy for both the mean
velocity field in the outer layer of the TBLs and the mean wall shear stress.
The best result was achieved using a mildly dissipative second-order accurate
scheme for the convective fluxes applied on an isotropic grid with 27000 cells
per $\delta$-cube, where $\delta$ is the thickness of the TBL or the
half-height of the channel. An algebraic model based on Spalding's law of the
wall was found to perform well for both flows. On the other hand, the tested
more complicated models, which incorporate the pressure gradient in the wall
shear stress prediction, led to less accurate results.
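A minimal sketch of how an algebraic wall-stress model based on Spalding's law can be inverted for the friction velocity: the constants $\kappa = 0.4$, $B = 5.5$, the bisection bracket, and the manufactured test values below are illustrative choices, not necessarily those used in the library.

```python
import numpy as np

KAPPA, B = 0.4, 5.5  # typical values of Spalding's constants

def spalding_yplus(uplus):
    """Spalding's law of the wall: y+ as an explicit function of u+."""
    k_u = KAPPA * uplus
    return uplus + np.exp(-KAPPA * B) * (
        np.exp(k_u) - 1.0 - k_u - k_u**2 / 2.0 - k_u**3 / 6.0
    )

def solve_utau(u, y, nu, lo=1e-3, hi=1.0, iters=100):
    """Invert Spalding's law for the friction velocity by bisection.

    u: velocity sampled at wall distance y; nu: kinematic viscosity.
    f(u_tau) = y*u_tau/nu - y+(u/u_tau) increases monotonically with u_tau,
    so bisection on the bracket [lo, hi] is sufficient.
    """
    def f(utau):
        return y * utau / nu - spalding_yplus(u / utau)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Manufactured check: pick a friction velocity, generate a consistent
# (y, u) pair from the law itself, then recover u_tau.
nu, utau_true, uplus = 1e-5, 0.05, 15.0
y = spalding_yplus(uplus) * nu / utau_true
u = uplus * utau_true
utau = solve_utau(u, y, nu)
tau_wall = utau**2  # wall shear stress per unit density
print(utau)
```

In an actual WMLES, the sampled velocity comes from an off-wall cell of the LES, and the recovered wall shear stress is enforced as the wall boundary condition.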
Assessment of uncertainties in hot-wire anemometry and oil-film interferometry measurements for wall-bounded turbulent flows
In this study, the sources of uncertainty of hot-wire anemometry (HWA) and
oil-film interferometry (OFI) measurements are assessed. Both statistical and
classical methods are used for the forward and inverse problems, so that the
contributions to the overall uncertainty of the measured quantities can be
evaluated. The correlations between the parameters are taken into account
through the Bayesian inference with error-in-variable (EiV) model. In the
forward problem, very small differences were found when using Monte Carlo (MC),
Polynomial Chaos Expansion (PCE) and linear perturbation methods. In flow
velocity measurements with HWA, the results indicate that the estimated
uncertainty is lower when the correlations among parameters are considered,
than when they are not taken into account. Moreover, global sensitivity
analyses with Sobol indices showed that the HWA measurements are most sensitive
to the wire voltage, and in the case of OFI the most sensitive factor is the
calculation of fringe velocity. The relative errors in wall-shear stress,
friction velocity and viscous length are 0.44%, 0.23% and 0.22%, respectively.
These values are lower than those reported in other wall-bounded turbulence
studies. In most such studies, the correlations among parameters
are not considered, and the uncertainties from
the various parameters are directly added when determining the overall
uncertainty of the measured quantity. In the present analysis we account for
these correlations, which may lead to a lower overall uncertainty estimate due
to error cancellation. Furthermore, our results also indicate that the crucial
aspect when obtaining accurate inner-scaled velocity measurements is the
wind-tunnel flow quality, which is more critical than the accuracy in
wall-shear stress measurements.
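The forward-propagation comparison can be illustrated with a hypothetical hot-wire calibration based on King's law; the calibration coefficients and uncertainty level below are invented for illustration. For small input uncertainty, Monte Carlo and linear (first-order) perturbation give nearly identical estimates, consistent with the small differences reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical King's-law calibration: E^2 = A + B * u^n,
# inverted to obtain velocity from the measured wire voltage E.
A, B_coef, n = 1.2, 0.8, 0.45

def velocity(E):
    return ((E**2 - A) / B_coef) ** (1.0 / n)

E0, sigma_E = 2.0, 0.01  # nominal voltage and its standard uncertainty

# 1) Monte Carlo propagation of the voltage uncertainty.
E_samples = rng.normal(E0, sigma_E, 200_000)
u_mc_std = velocity(E_samples).std()

# 2) Linear perturbation (first-order Taylor): sigma_u = |du/dE| * sigma_E.
h = 1e-6
dudE = (velocity(E0 + h) - velocity(E0 - h)) / (2 * h)
u_lin_std = abs(dudE) * sigma_E

print(u_mc_std, u_lin_std)
```

With correlated calibration parameters, the Monte Carlo step would instead draw from the joint (posterior) distribution of all parameters, which is where the error-in-variable Bayesian inference enters.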
Effect of grid resolution on large eddy simulation of wall-bounded turbulence
The effect of grid resolution on large eddy simulation (LES) of wall-bounded
turbulent flow is investigated. A channel flow simulation campaign involving
systematic variation of the streamwise ($\Delta x$) and spanwise ($\Delta z$)
grid resolution is used for this purpose. The main friction-velocity based
Reynolds number investigated is 300. Near the walls, the grid cell size is
determined by the frictional scaling, $\Delta x^+$ and $\Delta z^+$, and
the cells are strongly anisotropic, with a small first wall-normal cell
height $\Delta y^+$, thus aiming for
wall-resolving LES. Results are compared to direct numerical simulations (DNS)
and several quality measures are investigated, including the error in the
predicted mean friction velocity and the error in cross-channel profiles of
flow statistics. To reduce the total number of channel flow simulations,
techniques from the framework of uncertainty quantification (UQ) are employed.
In particular, generalized polynomial chaos expansion (gPCE) is used to create
meta models for the errors over the allowed parameter ranges. The differing
behavior of the different quality measures is demonstrated and analyzed. It is
shown that the friction velocity, and the profiles of velocity and the Reynolds
stress tensor, are most sensitive to the spanwise resolution $\Delta z$, while
the error in the turbulent
kinetic energy is mostly influenced by the streamwise resolution $\Delta x$.
Recommendations for grid
resolution requirements are given, together with quantification of the
resulting predictive accuracy. The sensitivity of the results to subgrid-scale
(SGS) model and varying Reynolds number is also investigated. All simulations
are carried out with a second-order accurate finite-volume-based solver. The
choice of numerical methods and SGS model is expected to influence the
conclusions, but it is emphasized that the proposed methodology, involving
gPCE, can be applied to other modeling approaches as well.
Comment: 27 pages. The following article has been accepted by Physics of
Fluids. After it is published, it will be found at
https://aip.scitation.org/journal/phf. Copyright 2018 Saleh Rezaeiravesh and
Mattias Liefvendahl. This article is distributed under a Creative Commons
Attribution (CC BY-NC-ND 4.0) License.
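The kind of sensitivity statement made above can be illustrated with first-order Sobol indices, here estimated by plain Monte Carlo (a Saltelli-type estimator) on a toy error model in which the spanwise resolution dominates; the model and its coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy metamodel of a predictive-error measure as a function of normalized
# streamwise (dx) and spanwise (dz) resolutions in [0, 1]; the coefficients
# are chosen so that the spanwise direction dominates the variance.
def err(dx, dz):
    return 0.02 + 0.03 * dx + 0.10 * dz + 0.01 * dx * dz

N = 100_000
A = rng.random((N, 2))
B = rng.random((N, 2))
f_A = err(A[:, 0], A[:, 1])
f_B = err(B[:, 0], B[:, 1])
var = f_A.var()

S = []
for i in range(2):  # first-order index of input i (Saltelli-type estimator)
    AB = A.copy()
    AB[:, i] = B[:, i]  # replace only the i-th column
    f_AB = err(AB[:, 0], AB[:, 1])
    S.append(np.mean(f_B * (f_AB - f_A)) / var)

print(S)
```

In the gPCE setting, the same indices can be read off analytically from the expansion coefficients, which is one of the practical advantages of building the metamodel first.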
Direct numerical simulation of turbulent pipe flow up to $Re_\tau \approx 5200$
Well-resolved direct numerical simulations (DNSs) have been performed of the
flow in a smooth circular pipe of radius $R$ and fixed axial length, at
friction Reynolds numbers up to $Re_\tau \approx 5200$. Various turbulence statistics
are documented and compared with other DNS and experimental data in pipes as
well as channels. Small but distinct differences between the various datasets are
identified. The friction factor overshoots and undershoots
the Prandtl friction law at the low and high $Re$ ranges,
respectively. In addition, the friction factor in our results is slightly higher than
that in Pirozzoli et al. (J. Fluid. Mech., 926, A28, 2021), but matches well
with the experiments in Furuichi et al. (Phys. Fluids, 27, 095108, 2015). The
log-law indicator function, which is nearly indistinguishable between the pipe
and channel flows in the near-wall region, has not yet developed a plateau further away
from the wall in the pipes, even for the highest-$Re_\tau$ cases. The wall shear
stress fluctuations and the inner peak of the axial velocity intensity -- which
grow monotonically with $Re_\tau$ -- are lower in the pipe than in the channel,
but the difference decreases with increasing $Re_\tau$. While the wall values
of the pressure fluctuations are slightly lower in channel than in pipe flows
at the same $Re_\tau$, the inner
peaks of the pressure fluctuations show negligible differences between them.
The Reynolds number scaling of all these quantities agrees with both the
logarithmic and defect power laws if the coefficients are properly chosen. The
one-dimensional spectrum of the axial velocity fluctuation exhibits a $k^{-1}$
dependence at an intermediate distance from the wall -- as also seen in the
channel flow. In summary, these high-fidelity data enable us to provide better
insights into the flow physics in pipes and the similarities and differences
among different types of wall turbulence.
Comment: 22 pages, 15 figures.
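The Prandtl friction law referenced above, $1/\sqrt{\lambda} = 2\log_{10}(Re\sqrt{\lambda}) - 0.8$, is implicit in the friction factor $\lambda$ and can be solved by fixed-point iteration:

```python
import math

def prandtl_friction_factor(Re, tol=1e-12, max_iter=200):
    """Solve the implicit Prandtl friction law
    1/sqrt(lam) = 2*log10(Re*sqrt(lam)) - 0.8
    for the friction factor lam by fixed-point iteration."""
    lam = 0.02  # initial guess, a typical turbulent pipe-flow value
    for _ in range(max_iter):
        rhs = 2.0 * math.log10(Re * math.sqrt(lam)) - 0.8
        lam_new = 1.0 / rhs**2
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return lam

# Friction factor decreases with bulk Reynolds number.
for Re in (1e4, 1e5, 1e6):
    print(Re, prandtl_friction_factor(Re))
```

Comparing DNS-computed friction factors against this curve is how the over- and undershoots mentioned in the abstract are quantified.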
In-situ Estimation of Time-averaging Uncertainties in Turbulent Flow Simulations
The statistics obtained from turbulent flow simulations are generally
uncertain due to finite time averaging. The techniques available in the
literature to accurately estimate these uncertainties typically only work in an
offline mode, that is, they require access to all available samples of a time
series at once. Besides making it impossible to monitor uncertainties online
during the course of a simulation, such an offline approach can
lead to input/output (I/O) bottlenecks and large storage/memory requirements,
which can be problematic for large-scale simulations of turbulent flows. Here,
we designed, implemented and tested a framework for estimating time-averaging
uncertainties in turbulence statistics in an in-situ
(online/streaming/updating) manner. The proposed algorithm relies on a novel
low-memory update formula for computing the sample-estimated autocorrelation
functions (ACFs). Based on this, smooth modeled ACFs of turbulence quantities
can be generated to accurately estimate the time-averaging uncertainties in the
corresponding sample mean estimators. The resulting uncertainty estimates are
highly robust, accurate, and quantitatively the same as those obtained by
standard offline estimators. Moreover, the computational overhead added by the
in-situ algorithm is found to be negligible. The framework is completely
general: it can be used with any flow solver and integrated into
simulations over conformal and complex meshes, including those created by
adaptive mesh refinement techniques. The results of the study are encouraging for the further
development of the in-situ framework for other uncertainty quantification and
data-driven analyses relevant not only to large-scale turbulent flow
simulations, but also to the simulation of other dynamical systems leading to
time-varying quantities with autocorrelated samples.
Applying Bayesian Optimization with Gaussian Process Regression to Computational Fluid Dynamics Problems
Bayesian optimization (BO) based on Gaussian process regression (GPR) is
applied to different CFD (computational fluid dynamics) problems which can be
of practical relevance. The problems are i) shape optimization in a lid-driven
cavity to minimize or maximize the energy dissipation, ii) shape optimization
of the wall of a channel flow in order to obtain a desired pressure-gradient
distribution along the edge of the turbulent boundary layer formed on the other
wall, and finally, iii) optimization of the controlling parameters of a
spoiler-ice model to attain the aerodynamic characteristics of the airfoil with
an actual surface ice. The diversity of the optimization problems, independence
of the optimization approach from any adjoint information, the ease of
employing different CFD solvers in the optimization loop, and more importantly,
the relatively small number of the required flow simulations reveal the
flexibility, efficiency, and versatility of the BO-GPR approach in CFD
applications. It is shown that ensuring that the global optimum is found in
design spaces of dimension up to 8 requires fewer than 90 executions of the CFD
solvers. Furthermore, it is observed that the number of flow
simulations does not significantly increase with the number of design
parameters. The associated computational cost of these simulations can be
affordable for many optimization cases with practical relevance.
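A minimal sketch of the BO-GPR loop on a one-dimensional toy objective; the RBF kernel, its length scale, the expected-improvement acquisition, and the grid-based acquisition maximization are standard but illustrative choices, not the specific setup used for the CFD problems above.

```python
import numpy as np
from math import erf, pi

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """GP posterior mean and standard deviation at the query points Xs."""
    K = rbf(X, X) + jitter * np.eye(X.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.einsum('ij,ij->j', v, v), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI acquisition for minimization."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.array([erf(zi / np.sqrt(2.0)) for zi in z]))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * pi)
    return (best - mu) * Phi + sd * phi

def objective(x):
    """Toy stand-in for a CFD-derived cost (minimum at x = 0.3)."""
    return (x - 0.3) ** 2

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 0.5, 1.0])  # initial design points
y = objective(X)
for _ in range(10):  # BO iterations: fit GP, maximize EI, evaluate
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmin(y)]
print(best_x)
```

Note that nothing here requires gradients or adjoint information from the objective, which is what makes the approach easy to wrap around black-box CFD solvers.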
Quantification of time-averaging uncertainties in turbulence simulations
An automatic method is proposed for the removal of the initialization bias that is intrinsic to the output of any statistically stationary simulation. General techniques based on optimization approaches, such as that of Beyhaghi et al. [1] following the Marginal Standard Error Rule (MSER) method of White et al. [16], were observed to be highly sensitive to the fluctuations in a time series and resulted in frequent overprediction of the length of the initial truncation. As fluctuations are an innate part of turbulence data, these techniques performed poorly on turbulence quantities: a local minimum was often wrongly interpreted as the minimum variance in the time series, resulting in different transient-point predictions for any increment to the sample size. This limitation was overcome by considering the finite difference of the slope of the variance computed in the optimization algorithm. The start of the zero-slope region was taken as the initial transient truncation point. This modification to the standard approach eliminated the sensitivity of the scheme and led to consistent estimates of the transient truncation point, provided that the finite-difference time interval was chosen large enough to cover the fluctuations in the time series. However, the step size for the finite-difference slope had to be determined by visual inspection of the time series and trial and error. As a fully automatic and reliable alternative, we propose the Augmented Dickey-Fuller test to determine the truncation point, beyond which the time series is considered stationary and free of initialization bias.
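The plain MSER rule that the modified method builds on can be sketched as follows; the synthetic series with an exponential initial transient stands in for real turbulence data, and neither the slope-based modification nor the ADF test described above is implemented here.

```python
import numpy as np

def mser_truncation(x, max_frac=0.5):
    """Marginal Standard Error Rule: choose the truncation point d that
    minimizes the (squared) standard error of the mean of the retained
    samples x[d:]. This is plain MSER; the slope-based variant in the
    text builds on this quantity."""
    n = x.size
    best_d, best_se = 0, np.inf
    for d in range(int(max_frac * n)):
        tail = x[d:]
        se = tail.var() / tail.size  # squared marginal standard error
        if se < best_se:
            best_se, best_d = se, d
    return best_d

# Synthetic series with an initialization bias: an exponential transient
# decaying onto stationary noise.
rng = np.random.default_rng(3)
n = 2000
t = np.arange(n)
x = 5.0 * np.exp(-t / 100.0) + rng.normal(0.0, 0.3, n)

d = mser_truncation(x)
print(d)  # truncation point near the end of the transient
```

The sensitivity criticized in the abstract shows up exactly here: with noisier data, the location of the minimum of the standard-error curve can jump between nearby local minima as samples are added.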