Large Eddy Simulations of gaseous flames in gas turbine combustion chambers
Recent developments in numerical schemes and turbulent combustion models, together with the steady increase of computing power, allow Large Eddy Simulation (LES) to be applied to real industrial burners. In this paper, two types of LES in complex-geometry combustors of specific interest for aeronautical gas turbine burners are reviewed: (1) laboratory-scale combustors, without compressor or turbine, in which advanced measurements are possible, and (2) combustion chambers of existing engines operated in realistic operating conditions. Laboratory-scale burners are designed to assess modeling and fundamental flow aspects in controlled configurations. They are necessary to gauge LES strategies and identify potential limitations. In specific circumstances, they even offer near model-free or DNS-like LES computations. LES in real engines illustrate the potential of the approach in the context of industrial burners but are more difficult to validate due to the limited set of available measurements. Usual approaches for turbulence and combustion sub-grid models, including chemistry modeling, are first recalled. Limiting cases and the range of validity of the models are then detailed, before a discussion of the numerical breakthroughs that have allowed LES to be applied to these complex cases. Specific issues linked to real gas turbine chambers are discussed: multi-perforation, complex acoustic impedances at inlet and outlet, and annular chambers. Examples are provided for mean flow predictions (velocity, temperature and species) as well as unsteady mechanisms (quenching, ignition, combustion instabilities). Finally, potential perspectives are proposed to further improve the use of LES for real gas turbine combustor designs.
Revisiting the Local Scaling Hypothesis in Stably Stratified Atmospheric Boundary Layer Turbulence: an Integration of Field and Laboratory Measurements with Large-eddy Simulations
The `local scaling' hypothesis, first introduced by Nieuwstadt two decades
ago, describes the turbulence structure of stable boundary layers in a very
succinct way and is an integral part of numerous local closure-based numerical
weather prediction models. However, the validity of this hypothesis under very
stable conditions is a subject of ongoing debate. In this work, we attempt to
address this controversial issue by performing extensive analyses of turbulence
data from several field campaigns, wind-tunnel experiments and large-eddy
simulations. A wide range of stabilities, diverse field conditions and a
comprehensive set of turbulence statistics make this study distinct.
Sub-grid modelling for two-dimensional turbulence using neural networks
In this investigation, a data-driven turbulence closure framework is
introduced and deployed for the sub-grid modelling of Kraichnan turbulence. The
novelty of the proposed method lies in the fact that snapshots from
high-fidelity numerical data are used to inform artificial neural networks for
predicting the turbulence source term through localized grid-resolved
information. In particular, our proposed methodology successfully establishes a
map between inputs given by stencils of the vorticity and the streamfunction
along with information from two well-known eddy-viscosity kernels. Through this
map, we predict the sub-grid vorticity forcing in a temporally and spatially dynamic
fashion. Our study is both a-priori and a-posteriori in nature. In the former,
we present an extensive hyper-parameter optimization analysis in addition to
learning quantification through probability density function based validation
of sub-grid predictions. In the latter, we analyse the performance of our
framework for flow evolution in a classical decaying two-dimensional turbulence
test case in the presence of errors related to temporal and spatial
discretization. Statistical assessments in the form of angle-averaged kinetic
energy spectra demonstrate the promise of the proposed methodology for sub-grid
quantity inference. It is also observed that some measure of a-posteriori error
must be considered during optimal model selection for greater accuracy. The
results in this article thus represent a promising development in the
formalization of a framework for the generation of heuristic-free turbulence
closures from data.
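As an illustration, the localized map described above, from stencils of resolved vorticity and streamfunction to a pointwise sub-grid forcing, can be sketched as a small feed-forward network. The stencil size, network width and weights below are placeholders for the sketch, not the trained configuration of the paper:

```python
import numpy as np

def stencil_features(omega, psi, i, j):
    """Collect 3x3 stencils of vorticity and streamfunction around (i, j).

    Returns an 18-element vector of localized grid-resolved information;
    values of eddy-viscosity kernels could be appended in the same way.
    """
    w = omega[i - 1:i + 2, j - 1:j + 2].ravel()
    p = psi[i - 1:i + 2, j - 1:j + 2].ravel()
    return np.concatenate([w, p])

class TinyMLP:
    """One-hidden-layer network mapping stencil features to the sub-grid
    vorticity forcing at a point. Weights here are random placeholders;
    in practice they would be fit to filtered high-fidelity snapshots."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0

    def __call__(self, x):
        h = np.tanh(self.W1 @ x + self.b1)   # hidden activation
        return float(self.W2 @ h + self.b2)  # scalar sub-grid forcing

# Usage: evaluate the (untrained) closure at one interior grid point
rng = np.random.default_rng(1)
omega = rng.normal(size=(8, 8))
psi = rng.normal(size=(8, 8))
x = stencil_features(omega, psi, 4, 4)
model = TinyMLP(n_in=x.size, n_hidden=16)
forcing = model(x)
```

In an actual deployment the network would be evaluated at every resolved grid point, giving the temporally and spatially dynamic forcing the abstract refers to.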
Modeling of the subgrid-scale term of the filtered magnetic field transport equation
Accurate subgrid-scale turbulence models are needed to perform realistic
numerical magnetohydrodynamic (MHD) simulations of the subsurface flows of the
Sun. To perform large-eddy simulations (LES) of turbulent MHD flows, three
unknown terms have to be modeled. As a first step, this work proposes to use a
priori tests to measure the accuracy of various models proposed to predict the
SGS term appearing in the transport equation of the filtered magnetic field. It
is proposed to evaluate the SGS model accuracy in terms of "structural" and
"functional" performance, i.e. the model's capacity to locally approximate the
unknown term and to reproduce its energetic action, respectively. From our
tests, it appears that a mixed model based on the scale-similarity model has
better performance.
Comment: 10 pages, 5 figures; Center for Turbulence Research, Proceedings of the Summer Program 2010, Stanford University.
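The "structural" part of such an a priori test can be sketched for a generic filtered product: compute the exact SGS term from synthetic "DNS" fields, compute the scale-similarity approximation, and correlate the two pointwise. The 1D fields and filter width below are illustrative stand-ins, not the solar-MHD data of the study:

```python
import numpy as np

def box_filter(f, w):
    """Periodic top-hat filter of odd width w (simple moving average)."""
    offsets = np.arange(-(w // 2), w // 2 + 1)
    return np.mean([np.roll(f, k) for k in offsets], axis=0)

# Synthetic smooth fields standing in for velocity and magnetic field
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3.0 * x + 1.0)
b = np.cos(2.0 * x) + 0.3 * np.sin(5.0 * x)

w = 9
# Exact SGS term of the filtered product
tau_exact = box_filter(u * b, w) - box_filter(u, w) * box_filter(b, w)

# Scale-similarity model: apply the same filter once more to resolved fields
uf, bf = box_filter(u, w), box_filter(b, w)
tau_model = box_filter(uf * bf, w) - box_filter(uf, w) * box_filter(bf, w)

# "Structural" performance: pointwise correlation with the exact term
rho = np.corrcoef(tau_exact, tau_model)[0, 1]
```

A "functional" assessment would instead compare integrated quantities, such as the energy exchange induced by the exact and modeled terms.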
Stochastic turbulence modeling in RANS simulations via Multilevel Monte Carlo
A multilevel Monte Carlo (MLMC) method for quantifying model-form
uncertainties associated with the Reynolds-Averaged Navier-Stokes (RANS)
simulations is presented. Two high-dimensional stochastic extensions of the
RANS equations are considered to demonstrate the applicability of the MLMC
method. The first approach is based on global perturbation of the baseline eddy
viscosity field using a lognormal random field. The second, more general extension
is based on the work of [Xiao et al. (2017)], where the entire
Reynolds Stress Tensor (RST) is perturbed while maintaining realizability. For
two fundamental flows, we show that the MLMC method based on a hierarchy of
meshes is asymptotically faster than plain Monte Carlo. Additionally, we
demonstrate that for some flows an optimal multilevel estimator can be obtained
for which the cost scales with the same order as a single CFD solve on the
finest grid level.
Comment: 40 pages.
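The telescoping structure of an MLMC estimator can be sketched on a toy problem. The `solve` function below is a hypothetical stand-in for a CFD run whose quantity of interest is z² polluted by a discretization bias that halves per mesh level; it is not the RANS setup of the paper:

```python
import numpy as np

def solve(z, level):
    """Stand-in for a CFD solve with stochastic input z on a given mesh
    level: exact QoI z**2 times a hypothetical discretization error
    factor that shrinks as the mesh is refined."""
    return z**2 * (1.0 + 2.0 ** (-level))

def mlmc_estimate(samples_per_level, rng):
    """Telescoping MLMC estimator: a cheap coarse-level mean plus
    corrections E[Q_l - Q_{l-1}] computed with the SAME random inputs
    on both levels, so correction variances shrink with refinement."""
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        z = rng.standard_normal(n)
        if level == 0:
            y = solve(z, 0)
        else:
            y = solve(z, level) - solve(z, level - 1)  # coupled correction
        estimate += y.mean()
    return estimate

rng = np.random.default_rng(0)
# Many cheap coarse samples, few expensive fine ones; targets E[Q_2] = 1.25
q = mlmc_estimate([4000, 2000, 1000], rng)
```

Because the level corrections have small variance, most samples can be placed on the coarse level, which is the source of the cost savings over plain Monte Carlo reported in the abstract.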
Regularization modeling for large-eddy simulation of homogeneous isotropic decaying turbulence
Inviscid regularization modeling of turbulent flow is investigated. Homogeneous, isotropic, decaying turbulence is simulated at a range of filter widths. A coarse-graining of turbulent flow arises from the direct regularization of the convective nonlinearity in the Navier–Stokes equations. The regularization is translated into its corresponding sub-filter model to close the equations for large-eddy simulation (LES). The accuracy with which primary turbulent flow features are captured by this modeling is investigated for the Leray regularization, the Navier–Stokes-α formulation (NS-α), the simplified Bardina model and a modified Leray approach. On a PDE level, each regularization principle is known to possess a unique, strong solution with known regularity properties. When used as turbulence closures for numerical simulations, significant differences between these models are observed. Through a comparison with direct numerical simulation (DNS) results, a detailed assessment of these regularization principles is made. The regularization models retain much of the small-scale variability in the solution. The smaller resolved scales are dominated by the specific sub-filter model adopted. In terms of accuracy, we find that the Leray model is in general closest to the filtered DNS results, the modified Leray model is the least accurate, and the simplified Bardina and NS-α models lie in between. This rough ordering is based on the energy decay, the Taylor Reynolds number and the velocity skewness, as well as on detailed characteristics of the energy dynamics, including spectra of the energy, the energy transfer and the transfer power. At filter widths up to about 10% of the computational domain size, the Leray and NS-α predictions were found to correlate well with the filtered DNS data. Each of the regularization models underestimates the energy decay rate and overestimates the tail of the energy spectrum.
The correspondence with unfiltered DNS spectra was often observed to be closer than with filtered DNS for several of the regularization models.
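The Leray principle, replacing the advecting velocity by its filtered counterpart in the convective term, can be illustrated in one dimension. The grid, filter width and velocity field below are arbitrary choices for the sketch, not the 3D decaying-turbulence setup of the study:

```python
import numpy as np

def box_filter(f, w):
    """Periodic top-hat filter of odd width w."""
    offsets = np.arange(-(w // 2), w // 2 + 1)
    return np.mean([np.roll(f, k) for k in offsets], axis=0)

def ddx(f, dx):
    """Second-order central difference on a periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

n = 128
dx = 2.0 * np.pi / n
x = np.arange(n) * dx
u = np.sin(x) + 0.3 * np.sin(4.0 * x)  # synthetic resolved velocity

conv_nse = u * ddx(u, dx)                    # unregularized convective term
conv_leray = box_filter(u, 7) * ddx(u, dx)   # Leray: advect with filtered velocity
```

NS-α, simplified Bardina and modified Leray differ in how and where the filter enters the nonlinearity, which is why they behave differently as sub-filter closures even though all regularize the same term.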
Quantification of errors in large-eddy simulations of a spatially-evolving mixing layer
A stochastic approach based on generalized Polynomial Chaos (gPC) is used to
quantify the error in Large-Eddy Simulation (LES) of a spatially-evolving
mixing layer flow and its sensitivity to different simulation parameters, viz.
the grid stretching in the streamwise and lateral directions and the subgrid
scale model constant. The error is evaluated with respect to the
results of a highly resolved LES (HRLES) and for different quantities of
interest, namely the mean streamwise velocity, the momentum thickness and the
shear stress. A typical feature of the considered spatially evolving flow is
the progressive transition from a laminar regime, highly dependent on the inlet
conditions, to a fully-developed turbulent one. Therefore, the computational
domain is divided into two different zones (\textit{inlet-dependent} and
\textit{fully turbulent}) and the gPC error analysis is carried out for these
two zones separately. An optimization of the parameters is also carried out for
both these zones. For all the considered quantities, the results point out that
the error is mainly governed by the value of the constant. At the end of
the inlet-dependent zone, a strong coupling between the normal stretching ratio
and the constant value is observed. The error sensitivity to the parameter values
is significantly larger in the inlet-dependent upstream region; however, low
error values can be obtained in this region for all the considered physical
quantities by an ad-hoc tuning of the parameters. Conversely, in the turbulent
regime the error is globally lower and less sensitive to the parameter
variations, but it is more difficult to find a set of parameter values leading
to optimal results for all the analyzed physical quantities.
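A non-intrusive gPC surrogate of the kind used in such error analyses can be sketched for a single uniform parameter on [-1, 1]. The quadratic "response" below is a hypothetical stand-in for an error metric as a function of one normalized simulation parameter:

```python
import numpy as np
from numpy.polynomial import legendre

def gpc_coeffs(model, order):
    """Non-intrusive gPC for one uniform parameter on [-1, 1]: Legendre
    coefficients computed by Gauss-Legendre quadrature, with one model
    evaluation ("LES run") per quadrature node."""
    nodes, weights = legendre.leggauss(order + 1)
    q = model(nodes)
    coeffs = np.empty(order + 1)
    for k in range(order + 1):
        pk = legendre.Legendre.basis(k)(nodes)
        coeffs[k] = np.sum(weights * q * pk) / (2.0 / (2 * k + 1))
    return coeffs

def gpc_mean_var(coeffs):
    """Mean and variance of the surrogate under the uniform density 1/2;
    E[P_k^2] = 1 / (2k + 1) for Legendre polynomials."""
    mean = coeffs[0]
    var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
    return mean, var

# Hypothetical smooth response of an error metric to a normalized parameter
model = lambda xi: 2.0 + 3.0 * xi + xi**2
c = gpc_coeffs(model, order=2)
mean, var = gpc_mean_var(c)
```

The variance decomposition over coefficients is what supplies the sensitivity information: parameters whose higher-order coefficients dominate the variance are the ones the error responds to most, mirroring the constant-dominated sensitivity reported above.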