193 research outputs found

    An unstructured conservative level-set algorithm coupled with dynamic mesh adaptation for the computation of liquid-gas flows

    Accurate and efficient simulations of 3D liquid-gas flows are of prime importance in many industrial applications, such as fuel injection in aeronautical combustion chambers. In this context, handling complex geometries is mandatory. The use of unstructured grids for two-phase flow modeling fulfills this requirement and paves the way to isotropic adaptive mesh refinement. This work presents a narrow-band conservative level-set algorithm, implemented in the YALES2 incompressible flow solver and combined with dynamic mesh adaptation. This strategy enables resolving the small physical scales at the liquid-gas interface at moderate cost, and it is applied to predicting the outcome of a droplet collision with reflexive separation. In the accurate conservative level-set framework, the interface is represented by a hyperbolic tangent profile, which is advected by the fluid and then reshaped by a reinitialization equation. The classical signed-distance function is reconstructed at the nodes of a narrow band around the interface using a geometric projection/marker method (GPMM), which computes the smallest distance to the interface; the interface normal and curvature are then computed from this signed-distance function. Within a mesh cell, the interface is approximated by a segment (2D) or by one or several triangles (3D). The distance at the nodes is obtained by projection onto the closest surface elements. If a node is connected to n elements containing interface fragments, it carries an n-marker list (a marker stores the coordinates of the crossing points and the distance). To speed up the algorithm, the markers stored at each node are sorted by distance. Markers are then propagated from one band to the next: each node compares its markers to its neighbors' and keeps only the closest. The GPMM reconstruction of the level-set signed-distance function, used in conjunction with the reinitialization of Chiodi et al. (2017), leads to a significant improvement in interface quality and overall accuracy compared to the reinitialization of Desjardins et al. (2008) in calculations performed on unstructured grids. Since the accuracy of the interface normal and curvature depends directly on the signed-distance reconstruction, fewer spurious currents occur on the implicit surface. The improved level-set algorithm yields accurate predictions of the outcome of a droplet collision with reflexive separation, and is validated against the experimental results of Ashgriz et al. (1990).
    Introduction: Two-phase flows are ubiquitous in nature and in industrial systems. Understanding the various phenomena occurring in liquid-gas flows is crucial for aeronautical combustors, in which fuel is injected in liquid form, undergoes atomization, evaporation and mixing with air, and is eventually burnt. Understanding the atomization process and the resulting droplet distribution is of prime importance for aircraft engine performance and operability. Predicting atomization is complex owing to many non-linear phenomena, such as interface break-up, droplet convection and droplet collision. Atomization also involves a wide range of time and space scales, which leads to high computational costs. The use of dynamic mesh adaptation on unstructured meshes is therefore particularly helpful for simulating industrial liquid-gas flow problems, as it allows implicit interface dynamics to be computed in complex geometries at a reasonable cost [1]. To capture the interface, the conservative level-set method is used, which accurately predicts the interface dynamics while conserving liquid mass [2]. This article presents a method to compute the signed-distance function on unstructured grids, together with an implementation of the reinitialization of [3] adapted to unstructured meshes. Classic test cases are run to check the overall accuracy and robustness of the method, and a droplet collision case is simulated to validate the global algorithm, with a front-merging scenario, against the experimental results of [4].
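The band-to-band marker propagation described above can be illustrated compactly. The following Python snippet is a minimal, mesh-agnostic sketch (the adjacency-dict mesh representation, all function names, and the number of sweeps are illustrative assumptions, not the YALES2 implementation): each node pools the interface crossing points stored at itself and at its neighbors, recomputes its own distance to each candidate, and keeps only the closest marker, from which a signed distance is then formed.

```python
import math

def dist(p, q):
    # Euclidean distance between two points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def propagate_markers(coords, neighbors, markers, n_sweeps=3):
    """One sweep per band: each node pools the crossing points
    stored at itself and at its neighbors, recomputes its own
    distance to each candidate, and keeps only the closest."""
    for _ in range(n_sweeps):
        new_markers = {}
        for node, nbrs in neighbors.items():
            candidates = list(markers.get(node, []))
            for nb in nbrs:
                candidates.extend(markers.get(nb, []))
            if candidates:
                best = min(candidates, key=lambda p: dist(coords[node], p))
                new_markers[node] = [best]
        markers = new_markers
    return markers

def signed_distance(coords, markers, phi_sign):
    """Distance magnitude from the closest marker; the sign is
    taken from the conservative level-set field (here given)."""
    return {n: phi_sign[n] * dist(coords[n], ms[0])
            for n, ms in markers.items() if ms}
```

On a toy chain of three collinear nodes with a single interface crossing at x = 0.5, two sweeps are enough for the farthest node to inherit the marker and recover its distance of 1.5.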

    From droplets to particles: Transformation criteria

    Atomization of liquid fuel has a direct impact on pollutant emissions in propulsion devices. Because experimental investigation is challenging, there is growing motivation for numerical studies of liquid-gas interaction from the injection point down to the dispersed spray zone. Our purpose is to increase the accuracy of the treatment of droplets in an atomized jet, which are typically 100 times smaller than the characteristic injection length. As the characteristic length decreases downstream of the jet, tracking the droplet interfaces accurately becomes increasingly challenging. To address this multiscale issue, a coupled Eulerian-Lagrangian tracking method exists [1]: small droplets are transformed into Lagrangian droplets that are transported with drag models. In addition to the size-based transformation criterion, geometric parameters can be considered to decide whether a droplet should be transformed. The geometric criteria serve two purposes: first, droplets may break up if they are not spherical; second, the drag models are based on the assumption that the droplet is spherical. In this paper we review the geometric criteria used in the literature. New geometric criteria are also proposed. These criteria are validated and then discussed on academic cases and on a 3D airblast atomizer simulation. Following the analysis of the results, the authors recommend using the deformation criterion combined with the surface criterion as the geometric transformation criterion.
    Introduction: Atomization is encountered in many applications, such as sprays in cosmetic engineering or jet propulsion in aerospace engineering [2]. In the combustion chamber, the total surface of the interface separating the two phases is a key parameter. Primary and secondary breakup have been extensively investigated in the literature. However, to fully describe the complete process, one has to capture droplets in the dispersed zone that are 100 times smaller than the jet diameter. Atomization is thus a multiphase, multiscale flow phenomenon that is still far from being understood. Owing to this wide range of scales, the Direct Numerical Simulation (DNS) of such a process requires robust and efficient codes. DNS is an important tool to analyse experimental results and deepen the understanding of atomization. In the past few years, the numerical schemes of Interface Capturing Methods (ICM) have improved but still face numerical limitations. For instance, the treatment of small droplets is the most challenging part when the entire process is treated by DNS. When dealing with unresolved structures, different problems arise, such as dilution or the creation of numerical instabilities. To avoid them, one strategy is to remove small structures during the simulation, see Shinjo et al. [3]. However, such methods do not collect information on the smallest droplets in atomization applications. Introducing Adaptive Mesh Refinement (AMR) into DNS is a first answer to this issue: it refines under-resolved areas, focusing on the interface between the two phases instead of refining the entire domain. In dense sprays, however, AMR tends to refine the entire zone and becomes as expensive as refining the full domain. A solution is to transform the smallest droplets into point particles and to deactivate AMR in these areas. This strategy, called Eulerian-Lagrangian coupling [1], assumes that small droplets will no longer break up during the simulation and that the Lagrangian models correctly reproduce droplet transport. These physical assumptions are made to address numerical issues and to reduce the computational cost. This Eulerian-Lagrangian coupling is based on transformation criteria that define when an ICM structure has to be transformed into a Lagrangian particle, and when a Lagrangian particle has to be transformed back into an ICM structure. The main purpose of the present communication is to provide a detailed analysis of the ICM-to-Lagrangian transformation criteria. The geometri
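A combined size-and-shape transformation criterion of the kind discussed here can be sketched in a few lines of Python. This is a hypothetical illustration, not the criteria proposed in the paper: the sphericity measure, the threshold values, and the function names are all assumptions.

```python
import math

def equivalent_diameter(volume):
    # diameter of the sphere having the same volume
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

def sphericity(volume, surface):
    # surface of the volume-equivalent sphere over the actual
    # surface; equals 1.0 for a perfect sphere, < 1.0 otherwise
    d_eq = equivalent_diameter(volume)
    return math.pi * d_eq ** 2 / surface

def should_transform(volume, surface, d_max, sphericity_min=0.95):
    """Transform a resolved (ICM) droplet into a Lagrangian
    particle only if it is both small enough and close enough
    to spherical, so that it is unlikely to break up and the
    spherical drag models remain valid."""
    return (equivalent_diameter(volume) <= d_max
            and sphericity(volume, surface) >= sphericity_min)
```

For a unit sphere (volume 4π/3, surface 4π) the sphericity is exactly 1 and the equivalent diameter is 2, so such a droplet would be transformed whenever d_max ≄ 2.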

    A consistent mass and momentum flux computation method using a Rudman-type technique with a CLSVOF solver

    In this paper, a computational method is presented that addresses multiphase flows characterized by a significant density ratio between phases and strong shearing. The Coupled Level-Set Volume-of-Fluid (CLSVOF) technique is used for interface tracking, while momentum transfer is coupled to that of mass by means of momentum fluxes computed on a sub-grid. This is an extended adaptation of Rudman's volume tracking techniqu

    Estimating mean and variance of populations abundance in ecology with small-sized samples

    No full text
    In ecology, as in many other fields, count samples often comprise many zeros and a few high abundances. Their distribution is particularly overdispersed and skewed. The most classical inference methods are often ill-suited to these distributions unless the sample size is very large. It is thus necessary to question the validity of inference methods and to quantify estimation errors for such data. This thesis work was motivated by a fish abundance dataset obtained by point sampling with electrofishing. The dataset comprises more than 2000 samples; each sample corresponds to the point abundances (considered independent and identically distributed) of one species for one fishing campaign. The samples are small (generally 20 ≀ n ≀ 50) and comprise many zeros (overall, 80% of counts are zeros). The fits of several classical distribution models for count data were compared on these samples, and the negative binomial distribution was selected. We therefore focused on estimating the two parameters of this distribution: the mean parameter m and the dispersion parameter q. First, we studied estimation problems for the dispersion. The estimation error grows as the number of observed individuals decreases, and for a given population one can quantify the gain in precision resulting from excluding samples with very few individuals. We then compared several methods for computing confidence intervals for the mean. Confidence intervals based on the negative binomial likelihood are by far preferable to more classical methods such as Student's. Moreover, both studies showed that some estimation problems are predictable from simple sample statistics, such as the total number of individuals or the number of non-zero counts. Accordingly, we compared fixed-size sampling to a sequential method in which sampling continues until a minimum number of individuals or of non-zero counts has been observed. We showed that sequential sampling improves the estimation of the dispersion parameter but biases the estimation of the mean; nevertheless, it improves the estimated confidence intervals for the mean. This work thus quantifies the errors in estimating the mean and the dispersion of overdispersed count data, compares several estimation methods, and leads to practical recommendations for sampling and estimation.
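The likelihood-based interval favoured above can be sketched numerically. The following Python code is an illustrative grid-search sketch, not the estimator used in the thesis: it writes the negative binomial log-likelihood in the parameterization used here (mean m, dispersion q, variance m + mÂČ/q), fits both parameters by maximizing it, and keeps every mean value whose profile log-likelihood stays within the chi-square cutoff.

```python
import math

def nb_loglik(data, m, q):
    """Negative binomial log-likelihood with mean m > 0 and
    dispersion q > 0 (variance m + m**2 / q)."""
    return sum(math.lgamma(k + q) - math.lgamma(q) - math.lgamma(k + 1)
               + q * math.log(q / (q + m)) + k * math.log(m / (q + m))
               for k in data)

def fit_nb(data, m_grid, q_grid):
    # crude maximum-likelihood fit by exhaustive grid search
    return max((nb_loglik(data, m, q), m, q)
               for m in m_grid for q in q_grid)

def profile_ci_mean(data, m_grid, q_grid, crit=3.84):
    """Profile-likelihood CI for the mean: keep every m whose
    profile log-likelihood (maximized over q) lies within
    chi2(1, 95%) / 2 of the overall maximum."""
    ll_max, _, _ = fit_nb(data, m_grid, q_grid)
    kept = [m for m in m_grid
            if 2 * (ll_max - max(nb_loglik(data, m, q) for q in q_grid))
            <= crit]
    return min(kept), max(kept)
```

On a toy zero-inflated sample the fitted mean equals the sample mean up to the grid resolution (as it must for the negative binomial), and the interval brackets it.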

    Estimating the mean and variance of population abundance in ecology from small-sized samples

    No full text
    In ecology, as in many other fields, count samples often comprise many zeros and a few high abundances. Their distribution is particularly overdispersed and skewed. The most classical inference methods are often ill-suited to these distributions unless the sample size is very large. It is thus necessary to question the validity of inference methods and to quantify estimation errors for such data. This thesis work was motivated by a fish abundance dataset obtained by point sampling with electrofishing. The dataset comprises more than 2000 samples; each sample corresponds to the point abundances (considered independent and identically distributed) of one species for one fishing campaign. The samples are small (generally 20 ≀ n ≀ 50) and comprise many zeros (overall, 80% of counts are zeros). The fits of several classical distribution models for count data were compared on these samples, and the negative binomial distribution was selected. We therefore focused on estimating the two parameters of this distribution: the mean parameter m and the dispersion parameter q. First, we studied estimation problems for the dispersion. The estimation error grows as the number of observed individuals decreases, and for a given population one can quantify the gain in precision resulting from excluding samples with very few individuals. We then compared several methods for computing confidence intervals for the mean. Confidence intervals based on the negative binomial likelihood are by far preferable to more classical methods such as Student's. Moreover, both studies showed that some estimation problems are predictable from simple sample statistics, such as the total number of individuals or the number of non-zero counts. Accordingly, we compared fixed-size sampling to a sequential method in which sampling continues until a minimum number of individuals or of non-zero counts has been observed. We showed that sequential sampling improves the estimation of the dispersion parameter but biases the estimation of the mean; nevertheless, it improves the estimated confidence intervals for the mean. This work thus quantifies the errors in estimating the mean and the dispersion of overdispersed count data, compares several estimation methods, and leads to practical recommendations for sampling and estimation.

    Statistics and fluvial geomorphology

    No full text
    Second edition. This chapter reviews statistical tools, illustrates their use to answer geomorphological questions, and outlines their advantages and limits. Applying statistical tools in fluvial geomorphology has the advantages of reducing subjectivity, eliminating assumptions, facilitating comparison between different spatial and temporal datasets of large size, and refining data collection. Bivariate statistics, and regressions in particular, have been among the most popular statistical tools in geomorphology; they focus on the relationship, or correlation, between two variables. Probabilities are useful for building models in which the variables of interest are categorical, such as indicator variables of events (e.g. the occurrence of peak flows); among the most basic such tools are logistic and multinomial models. Developing more realistic descriptions of fluvial morphological systems, process-response systems, time and space trends, and size effects requires the collection of sufficient data and more thought about their relevance.
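The logistic model mentioned here for event occurrence can be written and fitted in a few lines. The sketch below is stdlib-only Python with made-up data; the gradient-ascent fit and all names are illustrative assumptions, not taken from the chapter. It models the probability of an event, such as the occurrence of a peak flow, as a logistic function of a single predictor.

```python
import math

def logistic(x, b0, b1):
    # P(event | x) under a simple logistic model
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def fit_logistic(xs, ys, lr=0.1, n_iter=5000):
    """Plain gradient ascent on the Bernoulli log-likelihood;
    ys are 0/1 indicator variables of the event."""
    b0 = b1 = 0.0
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = logistic(x, b0, b1)
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1
```

With observations in which the event becomes more frequent at higher x, the fitted slope is positive and the predicted probability increases with x.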

    Réseaux de villes et processus de recomposition des niveaux : le cas des villes baltiques

    The numerous studies on global cities published over the last thirty years have overlooked a large category of cities affected both by globalization and by the wide-scale competition between urban projects it engenders. Baltic cities, often small urban areas, do not always have sufficient funds and leverage to integrate directly into global and European processes. However, if they want to attract tourists, investment, and inhabitants, they have to highlight their comparative advantages through active international policies. The Baltic scale may thus provide a stepping stone toward larger scales. The intense cooperation among municipalities in the Baltic Sea Region since the 1990s and the fall of the Iron Curtain is part of this dynamic. It has led to the emergence of inequalities between large metropolitan areas, which have achieved a very high level of internationalization, and medium-sized cities, whose influence is currently limited to the Baltic region alone.

    What remains today of pre‐industrial Alpine rivers? Census of historical and current channel patterns in the Alps

    To date, no survey of the diverse channel patterns existing prior to the major phase of river regulation in the mid-19th to early 20th century has been carried out at the scale of the whole European Alps. The present paper fills this knowledge gap. The historical channel forms of the 143 largest Alpine rivers with catchments larger than 500 km² (total length 11,870 km) were reconstructed from maps dating from the 1750s to 1900. In the early 19th century, one-third of the large Alpine rivers were multi-channel rivers. Single-bed channels oscillating between close valley sides were also frequent in the Alps (28%). Sinuous, and even more so meandering, channels were much rarer. Historical river patterns generally followed an upstream-downstream gradient according to slope conditions, floodplain width and distance from the sources. The local occurrence of certain channel patterns, however, primarily reflected tectonic and orographic conditions. Multi-channel reaches were widespread across the whole Alpine area, alternating with confined and oscillating reaches, which demonstrates that most areas were mainly transport-limited rather than supply-limited. Sinuous and meandering reaches were more frequent in the north-eastern Alps and were characterized by lower denudation rates and less sediment delivery. Channel straightening caused the loss of about 510 km of river course, equivalent to 4.3% of the historical extent. Multi-channel stretches currently amount to a mere 15% of their historical length, and 45% of the larger Alpine rivers are intensively channelized or have been transformed into reservoirs. Channelization measures differed from one country to another. Human pressures directly affected both local channel geometry and the upstream controls (i.e., sediment supply). Accordingly, some multi-channel reaches evolved into single-thread channels even without local human intervention.

    Assessment of dam impact on longitudinal sequences on in-stream habitats

    No full text
    • 

    corecore