118 research outputs found

    On the application of Large-Eddy simulations in engine-related problems

    In internal combustion engines, the combustion process and the formation of pollutants are strongly influenced by the fuel–air mixing process. The modeling of the mixing and the underlying turbulent flow field is classically tackled with the Reynolds-Averaged Navier–Stokes (RANS) method. With the increase in computational power and the development of sophisticated numerical methods, the Large-Eddy Simulation (LES) method has come within reach. In LES the turbulent flow is locally filtered in space, rather than fully averaged as in RANS. This thesis reports on a study in which the LES technique is applied to flow and combustion problems related to engines. Globally, three subjects are described: the turbulent flow in an engine-like geometry, the turbulent mixing of a gas jet system, and the application of flamelet-based methods to LES of two turbulent diffusion flames. Because of the goal of studying engine-related flow problems, two relatively practical flow solvers were selected for the simulations, motivated by their ability to cope with the complex geometries encountered in realistic, engine-like configurations.

    A series of simulations of the complex turbulent, swirling and tumbling flow in an engine cylinder, induced by the inlet manifold, has been performed with two different LES codes. Additionally, one Unsteady RANS (URANS) simulation has been performed. The flow field statistics from the Large-Eddy simulations deviated substantially from one case to the next; only global flow features could be captured appropriately. This is due to the impact of the under-resolved shear layer and the dissipative numerical scheme, whose effects have been examined in a square duct flow simulation. An additional sensitivity concerned the definition of the inflow conditions: any uncertainty in the mass flow rates at the two runners connected to the cylinder head greatly influences the remaining flow patterns. To circumvent this problem, a larger part of the upstream flow geometry was included in the computational domain. Nevertheless, the Large-Eddy simulations do give an indication of the unsteady, turbulent processes that take place in an engine, whereas in the URANS simulations all mean flow structures are very weak and the turbulence intensities are predicted to be relatively low in the complete domain.

    The turbulent mixing process in gaseous jets has been studied for three different fuel-to-air density ratios, mimicking the injection of (heavy) fuel into a pressurized chamber. It is shown that the three jets follow well the similarity theory developed for turbulent gas jets. A virtual Schlieren postprocessing method has been developed in order to analyze the results in the same way as can be done experimentally. By defining the penetration depth based on this method, problems typical of Schlieren experiments, related to the choice of the cutoff signal intensity, have been studied. Additionally, it was shown that gaseous jet models can be used to simulate liquid fuel jets, especially at larger penetration depths, because the penetration rate of liquid sprays is governed by the entrainment rate, which is similar to that of gaseous jets. However, it remains questionable whether gas jet models can replace fuel spray models in all cases: the cone angle of gas jets can deviate strongly from that observed in spray experiments, and only when corrected for this effect was the penetration behavior similar.
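    A virtual Schlieren operator of the kind described above can be sketched as a line-of-sight integration of the transverse density gradient. The following Python sketch is illustrative only (function and variable names are assumptions, not taken from the thesis), but it shows both the postprocessing step and the cutoff sensitivity the abstract discusses:

        import numpy as np

        def virtual_schlieren(rho, dx, los_axis=2):
            """Synthetic Schlieren image from a 3-D density field rho[x, y, z]:
            integrate the magnitude of the transverse density gradient along the
            viewing (line-of-sight) axis, mimicking the optical measurement."""
            grads = np.gradient(rho, dx)  # (drho/dx, drho/dy, drho/dz)
            transverse = [g for i, g in enumerate(grads) if i != los_axis]
            grad_mag = np.sqrt(sum(g**2 for g in transverse))
            return grad_mag.sum(axis=los_axis) * dx  # simple quadrature along the ray

        def penetration_depth(image, x, cutoff_frac=0.05):
            """Furthest axial station where the signal exceeds a chosen fraction
            of its maximum; this cutoff choice is exactly the sensitivity noted
            for experimental Schlieren data."""
            axial_signal = image.max(axis=1)  # peak signal per axial station
            mask = axial_signal > cutoff_frac * axial_signal.max()
            return x[mask].max() if mask.any() else 0.0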
    Two turbulent diffusion flames have been investigated with a focus on the modeling of finite-rate chemistry effects. For the first flame, the well-known Sandia flame D, two methods for modeling the main combustion products and heat release are compared: the classical flamelet method, in which the non-premixed chemistry is parameterized by a mixture fraction and the scalar dissipation rate, and a relatively new method in which a progress variable is used for non-premixed combustion problems. Within the progress variable method two different databases have been compared: one based on non-premixed flamelets and one based on premixed flamelets. It is found that the mixture fraction field in the Large-Eddy simulation of Sandia flame D is best predicted by both the classical flamelet method and the progress variable method based on premixed chemistry; in these cases the flame solution was mostly located close to its equilibrium value. However, when correcting for the prediction of the mixture fraction in the spatial coordinates, the progress variable method based on non-premixed chemistry compares better with experiments, and it is more appropriate especially at locations where a near-equilibrium flame solution is not adequate. Additionally, a sooting turbulent benzene diffusion flame has been investigated. For this purpose a steady laminar flamelet library has been applied, based on a very detailed reaction mechanism for premixed benzene flames. In the Large-Eddy simulations the total PAH/soot mass and mole fractions have been computed explicitly, while the source terms for these variables are based on a classical flamelet parameterization. The regions of PAH/soot formation have been identified, showing distributed parcels where PAH/soot formation takes place. The results show a growth of the PAH/soot volume fraction up to levels of about 4 ppm, and the average particle size increases steadily in this flame, up to about 30 nm.
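    All flamelet-type closures compared above reduce the chemistry to a low-dimensional lookup table. In generic notation (illustrative, not copied from the thesis), the classical flamelet model tabulates a thermochemical quantity phi as a function of mixture fraction Z and scalar dissipation rate chi, whereas the progress variable approach replaces chi with a reaction progress variable C and recovers filtered quantities through a presumed subfilter PDF:

        \tilde{\phi}(\mathbf{x}, t) = \int\!\!\int \phi(Z, C)\,
            \widetilde{P}\big(Z, C;\ \tilde{Z}, \widetilde{Z''^{2}}, \tilde{C}\big)\,
            \mathrm{d}Z\, \mathrm{d}C ,
        \qquad
        \phi = \phi(Z, \chi) \ \ \text{(classical flamelet)}

    The premixed and non-premixed databases compared above then differ only in the flamelet solutions used to populate the table phi(Z, C).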

    Forecasts and assimilation experiments of the Antarctic ozone hole 2008

    The 2008 Antarctic ozone hole was one of the largest and most long-lived in recent years. Predictions of the ozone hole were made in near-real time (NRT) and hindcast mode with the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). The forecasts were carried out both with and without assimilation of satellite observations from multiple instruments to provide more realistic initial conditions. Three different chemistry schemes were applied for the description of stratospheric ozone chemistry: (i) a linearization of the ozone chemistry, (ii) the stratospheric chemical mechanism of the Model of Ozone and Related Chemical Tracers, version 3 (MOZART-3), and (iii) relaxation to climatology as implemented in the Transport Model, version 5 (TM5). The IFS uses the latter two schemes by means of a two-way coupled system. Without assimilation, the forecasts showed model-specific shortcomings in predicting the start time, extent and duration of the ozone hole. The assimilation of satellite observations from the Microwave Limb Sounder (MLS), the Ozone Monitoring Instrument (OMI), the Solar Backscattering Ultraviolet radiometer (SBUV-2) and the SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY) led to a significant improvement of the forecasts when compared with total columns and vertical profiles from ozone sondes. The combined assimilation of observations from multiple instruments helped to overcome limitations of the ultraviolet (UV) sensors at low solar elevation over Antarctica. The assimilation of data from MLS was crucial to obtain good agreement with the observed ozone profiles in both the polar stratosphere and troposphere. The ozone analyses from the three model configurations were very similar despite the different underlying chemistry schemes. Using ozone analyses as initial conditions had a very beneficial but variable effect on the predictability of the ozone hole over 15 days. The initialized forecasts with the MOZART-3 chemistry produced the best predictions of the growing ozone hole, whereas the linear scheme showed the best results during the ozone hole closure.
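    As a hedged illustration of scheme (i): linear ozone schemes of the Cariolle type expand the photochemical tendency to first order around a climatological state, with coefficients precomputed from a detailed photochemical model (the exact form and coefficients used in the IFS may differ in detail):

        \frac{\partial \chi}{\partial t} = c_0
            + c_1\,(\chi - \bar{\chi})
            + c_2\,(T - \bar{T})
            + c_3\,(\Sigma - \bar{\Sigma})

    where chi is the ozone mixing ratio, T the temperature, Sigma the overhead ozone column, and overbars denote the climatology about which the scheme is linearized.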

    Global model simulations of air pollution during the 2003 European heat wave

    Three global Chemistry Transport Models – MOZART, MOCAGE, and TM5 – as well as MOZART coupled to the IFS meteorological model, including assimilation of ozone (O3) and carbon monoxide (CO) satellite column retrievals, have been compared to surface measurements and MOZAIC vertical profiles in the troposphere over Western/Central Europe for summer 2003. The models reproduce the meteorological features and the enhancement of pollution during the period 2–14 August, but not fully the ozone and CO mixing ratios measured during that episode. Modified normalised mean biases are around −25% (except ~5% for MOCAGE) for ozone and from −80% to −30% for CO in the boundary layer above Frankfurt. The coupling and assimilation of CO columns from MOPITT overcome some of the deficiencies in the treatment of transport, chemistry and emissions in MOZART, reducing the negative biases to around 20%. The high reactivity and small dry deposition velocities in MOCAGE seem to be responsible for the overestimation of O3 in this model. Results from sensitivity simulations indicate that an increase of the horizontal resolution to around 1° × 1° and potential uncertainties in European anthropogenic emissions or in long-range transport of pollution cannot completely account for the underestimation of CO and O3 found for most models. A process-oriented TM5 sensitivity simulation with reduced soil wetness results in a decrease in dry deposition fluxes and a subsequent ozone increase larger than the ozone changes due to the previous sensitivity runs. However, this simulation still underestimates ozone during the heat wave and overestimates it outside that period. Most probably, a combination of the factors mentioned above, together with underrepresented biogenic emissions in the models, uncertainties in the modelling of vertical/horizontal transport processes near the boundary layer, and limitations of the chemistry schemes, is responsible for the underestimation of ozone (overestimation in the case of MOCAGE) and CO found in the models during this extreme pollution event.
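    For reference, the modified normalised mean bias quoted above is a symmetric, bounded metric (values lie in [−2, 2]) comparing model values f_i with observations o_i:

        \mathrm{MNMB} = \frac{2}{N} \sum_{i=1}^{N} \frac{f_i - o_i}{f_i + o_i}

    so a value of −25% means the model is, on this symmetric scale, on average about a quarter lower than the observations.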

    Quantifying The Causes of Differences in Tropospheric OH within Global Models

    The hydroxyl radical (OH) is the primary daytime oxidant in the troposphere and provides the main loss mechanism for many pollutants and greenhouse gases, including methane (CH4). Global mean tropospheric OH differs by as much as 80% among various global models, for reasons that are not well understood. We use neural networks (NNs), trained using archived output from eight chemical transport models (CTMs) that participated in the POLARCAT Model Intercomparison Project (POLMIP), to quantify the factors responsible for differences in tropospheric OH and the resulting CH4 lifetime (τCH4) between these models. The annual average τCH4, for loss by OH only, ranges from 8.0 to 11.6 years across the eight POLMIP CTMs. The factors driving these differences were quantified by inputting 3-D chemical fields from one CTM into the trained NN of another CTM. Across all CTMs, the largest mean differences in τCH4 (ΔτCH4) result from variations in chemical mechanisms (ΔτCH4 = 0.46 years), the photolysis frequency (J) of O3→O(1D) (0.31 years), local O3 (0.30 years), and CO (0.23 years). The ΔτCH4 due to CTM differences in NOx (NO + NO2) is relatively low (0.17 years), though large regional variation in OH between the CTMs is attributed to NOx. Differences in isoprene and J(NO2) have a negligible overall effect on globally averaged tropospheric OH, though the extent of OH variations due to each factor depends on the model being examined. This study demonstrates that NNs can serve as a useful tool for quantifying why tropospheric OH varies between global models, provided essential chemical fields are archived.
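    A minimal sketch of the swap experiment described above, with illustrative names and scikit-learn standing in for the study's actual NN setup: train a surrogate of model A's OH on model A's archived chemical fields, then drive it with model B's fields, so that any change in the predicted OH (and hence in τCH4) reflects the input fields rather than model A's chemical mechanism.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def train_oh_emulator(X, oh):
            """NN surrogate mapping a CTM's archived 3-D fields (columns such as
            O3, CO, NOx, H2O, J(O1D), temperature) to that CTM's own OH."""
            nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                              random_state=0)
            nn.fit(X, np.log(oh))  # fit in log space: OH spans orders of magnitude
            return nn

        def cross_model_oh(nn_a, X_b):
            """Drive model A's emulator with model B's fields; the difference from
            model A's own OH isolates the contribution of the chemical inputs,
            while the residual is attributable to the chemical mechanism."""
            return np.exp(nn_a.predict(X_b))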

    Biomass burning influence on high-latitude tropospheric ozone and reactive nitrogen in summer 2008: a multi-model analysis based on POLMIP simulations

    We have evaluated tropospheric ozone enhancement in air dominated by biomass burning emissions at high latitudes (> 50° N) in July 2008, using 10 global chemical transport model simulations from the POLMIP multi-model comparison exercise. In model air masses dominated by fire emissions, ΔO3/ΔCO values ranged between 0.039 and 0.196 ppbv ppbv−1 (mean: 0.113 ppbv ppbv−1) in freshly fire-influenced air, and between 0.140 and 0.261 ppbv ppbv−1 (mean: 0.193 ppbv ppbv−1) in more aged fire-influenced air. These values are in broad agreement with the range of observational estimates from the literature. Model ΔPAN/ΔCO enhancement ratios show distinct groupings according to the meteorological data used to drive the models. ECMWF-forced models produce larger ΔPAN/ΔCO values (4.47 to 7.00 pptv ppbv−1) than GEOS5-forced models (1.87 to 3.28 pptv ppbv−1), which we show is likely linked to differences in the efficiency of vertical transport during poleward export from mid-latitude source regions. Simulations with a Lagrangian chemical transport model of a large plume of biomass burning and anthropogenic emissions exported towards the Arctic show that the 4-day net ozone change in the plume is sensitive to differences in plume chemical composition and plume vertical position among the POLMIP models. In particular, Arctic ozone evolution in the plume is highly sensitive to initial concentrations of PAN, as well as of oxygenated VOCs (acetone, acetaldehyde), due to their role in producing the peroxyacetyl radical, the precursor of PAN. Vertical displacement is also important through its effect on the stability of PAN, and the subsequent effect on NOx abundance. In plumes where net ozone production is limited, we find that the lifetime of ozone in the plume is sensitive to the hydrogen peroxide loading, due to the production of HOx from peroxide photolysis and the key role of HO2 + O3 in controlling ozone loss. Overall, our results suggest that emissions from biomass burning lead to large-scale photochemical enhancement in high-latitude tropospheric ozone during summer.
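    As a hedged sketch of the diagnostic itself (names and the fitting convention are assumptions): an enhancement ratio such as ΔO3/ΔCO is the slope of the species enhancement over background against the CO enhancement in plume-flagged air.

        import numpy as np

        def enhancement_ratio(o3_ppbv, co_ppbv, o3_background, co_background):
            """Least-squares slope of (O3 - background) against (CO - background),
            in ppbv ppbv-1, for samples flagged as fire-influenced; other studies
            use orthogonal-distance fits, which changes the numbers slightly."""
            d_o3 = o3_ppbv - o3_background
            d_co = co_ppbv - co_background
            slope, _intercept = np.polyfit(d_co, d_o3, 1)
            return slope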

    Cloud impacts on photochemistry: Building a climatology of photolysis rates from the Atmospheric Tomography mission

    Measurements from actinic flux spectroradiometers on board the NASA DC-8 during the Atmospheric Tomography (ATom) mission provide an extensive set of statistics on how clouds alter photolysis rates (J values) throughout the remote Pacific and Atlantic Ocean basins. J values control tropospheric ozone and methane abundances, and thus clouds have been included for more than three decades in tropospheric chemistry modeling. ATom made four profiling circumnavigations of the troposphere, capturing each of the seasons during 2016–2018. This work examines J values from the Pacific Ocean flights of the first deployment, but publishes the complete ATom-1 data set (29 July to 23 August 2016). We compare the observed J values (every 3 s along the flight track) with those calculated by nine global chemistry–climate/transport models (globally gridded, hourly, for a mid-August day). To compare these disparate data sets, we build a commensurate statistical picture of the impact of clouds on J values using the ratio of J-cloudy (standard, sometimes cloudy conditions) to J-clear (artificially cleared of clouds). The range of modeled cloud effects is inconsistently large, but the models fall into two distinct classes: (1) models with large cloud effects, showing mostly enhanced J values aloft and/or diminished values at the surface, and (2) models with small effects, having nearly clear-sky J values much of the time. The ATom-1 measurements generally favor large cloud effects but are not precise or robust enough to point out the best cloud-modeling approach. The models here have resolutions of 50–200 km and thus reduce the occurrence of clear sky when averaging over grid cells. In situ measurements also average scattered sunlight over a mixed cloud field, but only out to scales of tens of kilometers. A primary uncertainty remains in the role of clouds in chemistry: in particular, how models average over cloud fields, and how such averages can simulate measurements.
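    The commensurate statistic can be sketched in a few lines, assuming paired J values from the standard (cloudy) and artificially cleared runs; the names below are illustrative, not from the published data set.

        import numpy as np

        def cloud_impact_stats(j_cloudy, j_clear, bins=np.linspace(0.0, 2.5, 26)):
            """Distribution of the ratio J-cloudy / J-clear: values > 1 indicate
            cloud enhancement (typically aloft, above bright cloud), values < 1
            indicate shielding (typically below cloud)."""
            ratio = j_cloudy / j_clear
            hist, edges = np.histogram(ratio, bins=bins, density=True)
            return ratio.mean(), hist, edges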

    Quantifying uncertainties due to chemistry modelling – evaluation of tropospheric composition simulations in the CAMS model (cycle 43R1)

    We report on an evaluation of tropospheric ozone and its precursor gases in three atmospheric chemistry versions as implemented in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS), referred to as IFS(CB05BASCOE), IFS(MOZART) and IFS(MOCAGE). While the model versions were forced with the same overall meteorology, emissions, transport and deposition schemes, they differ widely in their parameterisations describing atmospheric chemistry, including the degradation of organic compounds, heterogeneous chemistry and photolysis, as well as the chemical solver. The model results from the three chemistry versions are compared against a range of aircraft field campaigns, surface observations, ozonesondes and satellite observations, which provides a quantification of the overall model uncertainty driven by the chemistry parameterisations. We find that the versions produce similar patterns and magnitudes for carbon monoxide (CO) and ozone (O3), as well as for a range of non-methane hydrocarbons (NMHCs), with averaged differences for O3 (CO) within 10% (20%) throughout the troposphere. Most of the divergence in the magnitude of CO and NMHCs can be explained by differences in OH concentrations, which can reach up to 50%, particularly at high latitudes. There are also comparatively large discrepancies between model versions for NO2, SO2 and HNO3, which are strongly influenced by secondary chemical production and loss. Other common biases in CO and NMHCs are mainly attributed to uncertainties in their emissions. This configuration of having various chemistry versions within the IFS provides a quantification of the uncertainties induced by chemistry modelling in the main CAMS global trace gas products, beyond those that are constrained by data assimilation.
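    One simple way to express the chemistry-induced uncertainty quantified here (a sketch under assumed inputs, not the paper's exact metric) is the relative spread of the three chemistry versions about their mean, per grid cell:

        import numpy as np

        def chemistry_spread(fields):
            """Relative inter-version spread for one diagnostic.
            fields: array of shape (n_versions, ...) holding the same field from,
            e.g., IFS(CB05BASCOE), IFS(MOZART) and IFS(MOCAGE); returns
            (max - min) / mean per grid cell, so 0.1 corresponds to a 10% envelope."""
            fields = np.asarray(fields)
            return (fields.max(axis=0) - fields.min(axis=0)) / fields.mean(axis=0)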

    Global variation in the cost of increasing ecosystem carbon

    Slowing the reduction, or increasing the accumulation, of organic carbon stored in biomass and soils has been suggested as a potentially rapid and cost-effective method to reduce the rate of atmospheric carbon increase [1]. The costs of mitigating climate change by increasing ecosystem carbon relative to a baseline or business-as-usual scenario have been quantified in numerous studies, but results have been contradictory, as both methodological issues and substantive differences cause variability [2]. Here we show, based on 77 standardized face-to-face interviews of local experts with the best possible knowledge of local land-use economics and sociopolitical context in ten landscapes around the globe, that the estimated cost of increasing ecosystem carbon varied vastly and was perceived to be 16–27 times cheaper in two Indonesian landscapes dominated by peatlands than on average in the eight other landscapes. Hence, if reducing emissions from deforestation and forest degradation (REDD+) and other land-use mitigation efforts are to be distributed evenly across forested countries, for example for the sake of international equity, their overall effectiveness would be dramatically lower than for a cost-minimizing distribution.