    Large Eddy Simulations in Astrophysics

    In this review, the methodology of large eddy simulations (LES) is introduced and applications in astrophysics are discussed. As the theoretical framework, the scale decomposition of the dynamical equations for neutral fluids by means of spatial filtering is explained. For cosmological applications, the filtered equations in comoving coordinates are also presented. To obtain a closed set of equations that can be evolved in LES, several subgrid scale models for the interactions between numerically resolved and unresolved scales are discussed, in particular the subgrid scale turbulence energy equation model. It is then shown how model coefficients can be calculated, either by dynamical procedures or, a priori, from high-resolution data. For astrophysical applications, adaptive mesh refinement (AMR) is often indispensable. It is shown that the subgrid scale turbulence energy model allows for a particularly elegant and physically well-motivated way of preserving momentum and energy conservation in AMR simulations. Moreover, the notion of shear-improved models for inhomogeneous and non-stationary turbulence is introduced. Finally, applications of LES to turbulent combustion in thermonuclear supernovae, star formation and feedback in galaxies, and cosmological structure formation are reviewed. (Comment: 64 pages, 23 figures, submitted to Living Reviews in Computational Astrophysics)
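    As a rough illustration of the framework sketched above, the following LaTeX fragment writes out the conventional Favre-filtered momentum equation and a schematic subgrid-scale (SGS) turbulence energy equation; the notation (overbar for filtering, tilde for density-weighted filtering) and the closure coefficients C_eps and kappa_sgs are the standard textbook ones, not necessarily those used in the review.

```latex
% Filtered momentum equation: the SGS stress tau_ij is the only new,
% unclosed term produced by the filtering.
\partial_t(\bar{\rho}\tilde{u}_i)
  + \partial_j(\bar{\rho}\tilde{u}_i\tilde{u}_j)
  = -\partial_i\bar{p} + \partial_j\bar{\sigma}_{ij} - \partial_j\tau_{ij},
\qquad
\tau_{ij} \equiv \bar{\rho}\,\big(\widetilde{u_i u_j} - \tilde{u}_i\tilde{u}_j\big).

% SGS turbulence energy K = tau_ii / 2: production by the resolved
% strain, dissipation on unresolved scales, and turbulent transport.
\partial_t K + \partial_j(\tilde{u}_j K)
  = \underbrace{-\,\tau_{ij}\,\partial_j\tilde{u}_i}_{\text{production}}
  - \underbrace{C_\epsilon\,\frac{K^{3/2}}{\bar{\rho}^{1/2}\,\Delta}}_{\text{dissipation}}
  + \underbrace{\partial_j\big(\kappa_{\rm sgs}\,\partial_j K\big)}_{\text{transport}}.
```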

    Modeling the Pollution of Pristine Gas in the Early Universe

    We conduct a comprehensive theoretical and numerical investigation of the pollution of pristine gas in turbulent flows, designed to provide new tools for modeling the evolution of the first generation of stars. The properties of such Population III (Pop III) stars are thought to be very different from those of later generations, because cooling is dramatically different in gas with a metallicity below a critical value Z_c, which lies between ~10^-6 and 10^-3 of the solar value. Z_c is much smaller than the typical average metallicity, ⟨Z⟩, and thus the mixing efficiency of the pristine gas in the interstellar medium plays a crucial role in the transition from Pop III to normal star formation. The small critical value, Z_c, corresponds to the far left tail of the probability distribution function (PDF) of the metallicity. Based on closure models for the PDF formulation of turbulent mixing, we derive equations for the fraction of gas, P, lying below Z_c, in compressible turbulence. Our simulation data show that the evolution of the fraction P can be well approximated by a generalized self-convolution model, which predicts dP/dt = -(n/tau_con) P (1 - P^(1/n)), where n is a measure of the locality of the PDF convolution and the timescale tau_con is determined by the rate at which turbulence stretches the pollutants. Using a suite of simulations with Mach numbers ranging from M = 0.9 to 6.2, we provide accurate fits to n and tau_con as a function of M, Z_c/⟨Z⟩, and the scale, L_p, at which pollutants are added to the flow. For P > 0.9, mixing occurs only in the regions surrounding the pollutants, such that n = 1. For smaller P, n is larger as mixing becomes more global. We show how the results can be used to construct one-zone models for the evolution of Pop III stars in a single high-redshift galaxy, as well as subgrid models for tracking the evolution of the first stars in large cosmological simulations. (Comment: 37 pages, accepted by ApJ)
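    The self-convolution model quoted above is a closed scalar ODE for the pristine fraction P, so it can be integrated directly once n and tau_con are known. The Python sketch below integrates it with a fourth-order Runge-Kutta step; the parameter values (n = 1, tau_con = 1, initial pollutant fraction 10^-3) are illustrative placeholders, whereas in practice n and tau_con would be taken from the Mach-number-dependent fits provided in the paper.

```python
import numpy as np

def dPdt(P, n, tau_con):
    """Generalized self-convolution model for the pristine fraction P:
    dP/dt = -(n / tau_con) * P * (1 - P**(1/n))."""
    return -(n / tau_con) * P * (1.0 - P ** (1.0 / n))

def evolve_pristine_fraction(P0=1.0 - 1e-3, n=1.0, tau_con=1.0,
                             t_end=10.0, dt=1e-3):
    """Integrate dP/dt with classical RK4; time is in units of tau_con."""
    P, history = P0, [P0]
    for _ in range(int(t_end / dt)):
        k1 = dPdt(P, n, tau_con)
        k2 = dPdt(P + 0.5 * dt * k1, n, tau_con)
        k3 = dPdt(P + 0.5 * dt * k2, n, tau_con)
        k4 = dPdt(P + dt * k3, n, tau_con)
        P += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        history.append(P)
    return np.array(history)

if __name__ == "__main__":
    P = evolve_pristine_fraction()
    print(f"pristine fraction after 10 tau_con: {P[-1]:.3e}")
```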

    Toward an equivalence criterion for Hybrid RANS/LES methods

    A criterion is established to assess the equivalence between hybrid RANS/LES methods, called H-equivalence, based on the modeled energy of the unresolved scales, which leads to similar low-order statistics of the resolved motion. Different equilibrium conditions are considered, and perturbation analyses about the equilibrium states are performed. The procedure is applied to demonstrate the equivalence between two particular hybrid methods, and leads to relationships between hybrid method parameters that control the partitioning of energy between the resolved and unresolved scales of motion. This equivalence is validated by numerical results obtained for the cases of plane and periodically constricted channel flows. The concept of H-equivalence makes it possible to view different hybrid methods as models for the same system of equations: as a consequence, detached-eddy simulation (DES), which is shown to be H-equivalent to the temporal partially integrated transport model (T-PITM) in inhomogeneous, stationary situations, can be interpreted as a model for the subfilter stress involved in the temporally filtered Navier–Stokes equations.
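    Schematically, and as a paraphrase rather than the paper's exact statement, the criterion can be summarized in a short LaTeX fragment: both methods split the turbulent kinetic energy into resolved and modeled parts, and matching the modeled part is what makes them interchangeable at the level of low-order resolved statistics.

```latex
k \;=\; k_{\mathrm{res}} + k_{\mathrm{mod}},
\qquad
k_{\mathrm{mod}}^{(\mathrm{DES})}
  \;\simeq\; k_{\mathrm{mod}}^{(\mathrm{T\text{-}PITM})}
\;\Longrightarrow\;
\text{similar low-order statistics of the resolved motion (H-equivalence)}.
```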

    Interacting errors in large-eddy simulation: a review of recent developments

    The accuracy of large-eddy simulations is limited by, among other factors, the quality of the subgrid parameterisation and the numerical contamination of the smaller retained flow structures. We review the effects of discretisation and modelling errors from two different perspectives. We first show that spatial discretisation induces its own filter and compare the dynamic importance of this numerical filter to the basic large-eddy filter. The spatial discretisation modifies the large-eddy closure problem, as is expressed by the difference between the discrete 'numerical stress tensor' and the continuous 'turbulent stress tensor'. This difference consists of a high-pass contribution associated with the specific numerical filter. Several central differencing methods are analysed and the importance of the subgrid resolution is established. Second, we review a database approach to assess the total simulation error and its numerical and modelling contributions. The interaction between the different sources of error is shown to lead to their partial cancellation. From this analysis one may identify an 'optimal refinement strategy' for a given subgrid model, discretisation method and flow conditions, leading to minimal total simulation error at a given computational cost. We provide full detail for homogeneous decaying turbulence in a 'Smagorinsky fluid'. The optimal refinement strategy is compared with the error reduction that arises from grid refinement of the dynamic eddy-viscosity model. The main trends of the optimal refinement strategy as a function of resolution and Reynolds number are found to be adequately followed by the dynamic model. This yields significant error reduction upon grid refinement, although at coarse resolutions significant error levels remain. To address this deficiency, a new successive inverse polynomial interpolation procedure is proposed with which the optimal Smagorinsky constant may be efficiently approximated at a given resolution. The computational overhead of this optimisation procedure is shown to be well justified in view of the achieved reduction of the error level relative to the 'no-model' and dynamic model predictions.
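    The successive inverse polynomial interpolation mentioned at the end can be sketched as a one-dimensional minimization of the total simulation error over the Smagorinsky constant. In the Python sketch below, the error functional total_error is a purely illustrative stand-in: in the database approach it would be evaluated by running an LES at a fixed constant and comparing its resolved statistics against filtered DNS data.

```python
def total_error(c_s):
    """Illustrative stand-in for the total-simulation-error functional
    E(C_s); a real evaluation would require an LES run per sample."""
    return 0.05 + 12.0 * (c_s - 0.17) ** 2

def sipi_minimize(f, a, b, c, tol=1e-5, max_iter=20):
    """Successive parabolic (inverse polynomial) interpolation: fit a
    parabola through the three best samples of f, jump to its vertex,
    and iterate until the update falls below tol."""
    pts = sorted([(x, f(x)) for x in (a, b, c)], key=lambda p: p[1])
    for _ in range(max_iter):
        (x0, f0), (x1, f1), (x2, f2) = pts
        num = (x0 - x1) ** 2 * (f0 - f2) - (x0 - x2) ** 2 * (f0 - f1)
        den = (x0 - x1) * (f0 - f2) - (x0 - x2) * (f0 - f1)
        if abs(den) < 1e-30:          # degenerate parabola; stop
            break
        x_new = x0 - 0.5 * num / den  # vertex of the fitted parabola
        if abs(x_new - x0) < tol:
            return x_new
        # keep the two best previous samples plus the new one
        pts = sorted(pts[:2] + [(x_new, f(x_new))], key=lambda p: p[1])
    return pts[0][0]

if __name__ == "__main__":
    c_opt = sipi_minimize(total_error, 0.05, 0.15, 0.25)
    print(f"near-optimal Smagorinsky constant: {c_opt:.4f}")
```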

    Stochastic representation of the Reynolds transport theorem: revisiting large-scale modeling

    We explore the potential of a formulation of the Navier-Stokes equations incorporating a random description of the small-scale velocity component. This model, established from a version of the Reynolds transport theorem adapted to a stochastic representation of the flow, gives rise to a large-scale description of the flow dynamics in which an anisotropic subgrid tensor emerges, reminiscent of the Reynolds stress tensor, together with a drift correction due to inhomogeneous turbulence. The corresponding subgrid model, which depends on the small-scale velocity variance, generalizes the Boussinesq eddy viscosity assumption. However, it is no longer obtained from an analogy with molecular dissipation but ensues rigorously from the random modeling of the flow. This principle allows us to propose several subgrid models defined directly on the resolved flow component. We assess and compare these models numerically on a standard Taylor-Green vortex flow at Reynolds number 1600. The numerical simulations, carried out with an accurate divergence-free scheme, outperform classical large-eddy formulations and provide a simple demonstration of the pertinence of the proposed large-scale modeling.
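    To make the contrast with the Boussinesq assumption concrete, the LaTeX fragment below juxtaposes the classical eddy-viscosity closure with the diffusion and drift terms that the stochastic formulation produces; the variance-tensor notation a_{jk} and the 1/2 factors follow the usual conventions of this modeling-under-location-uncertainty literature and are assumptions here, not quotations from the paper.

```latex
% Classical Boussinesq eddy-viscosity closure (deviatoric part):
\tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij} = -2\,\nu_t\,\bar{S}_{ij}.

% Stochastic formulation: anisotropic diffusion of the resolved velocity
% by the small-scale variance tensor a, plus a drift correction for
% inhomogeneous turbulence:
\tfrac{1}{2}\,\partial_{x_j}\!\big(a_{jk}\,\partial_{x_k}\bar{u}_i\big),
\qquad
\bar{u}^{\star} = \bar{u} - \tfrac{1}{2}\,\nabla\!\cdot a.
```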

    Stochastic climate theory and modeling

    Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as in reduced-order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need for stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced-order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced-order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. We then provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
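    A minimal Python sketch of the kind of stochastic subgrid-scale parameterization discussed here: a single damped large-scale mode whose unresolved tendencies are represented by an AR(1) (red-noise) process, the simplest combination of a deterministic component and a stochastic, memory-carrying one. All parameter values are illustrative; nothing here is taken from a specific model described in the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_step(e, phi, sigma):
    """One step of an AR(1) (red-noise) process, a common building block
    of stochastic subgrid-scale parameterizations."""
    return phi * e + sigma * np.sqrt(1.0 - phi ** 2) * rng.standard_normal()

def integrate_reduced_model(x0=1.0, dt=0.01, steps=5000,
                            damping=0.5, phi=0.98, sigma=0.3):
    """Toy reduced-order model: a damped large-scale mode x forced by an
    AR(1) tendency e standing in for the unresolved fast scales."""
    x, e = x0, 0.0
    traj = np.empty(steps)
    for i in range(steps):
        e = ar1_step(e, phi, sigma)
        x += dt * (-damping * x + e)  # resolved tendency + stochastic SSP
        traj[i] = x
    return traj

if __name__ == "__main__":
    traj = integrate_reduced_model()
    print(f"mean = {traj.mean():+.3f}, std = {traj.std():.3f}")
```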