11 research outputs found
Homogenized lattice Boltzmann methods for fluid flow through porous media -- part I: kinetic model derivation
In this series of studies, we establish homogenized lattice Boltzmann methods
(HLBM) for simulating fluid flow through porous media. Our contributions in
part I are twofold. First, we assemble the targeted partial differential
equation system by formally unifying the governing equations for nonstationary
fluid flow in porous media. A matrix of regularly arranged, equally sized
obstacles is placed into the domain to model fluid flow through porous
structures governed by the incompressible nonstationary Navier--Stokes
equations (NSE). Depending on the ratio of geometric parameters in the matrix
arrangement, several homogenized equations are obtained. We review existing
methods for homogenizing the nonstationary NSE for specific porosities and
discuss the applicability of the resulting model equations. Consequently, the
homogenized NSE are expressed as targeted partial differential equations that
jointly incorporate the derived aspects. Second, we propose a kinetic model,
the homogenized Bhatnagar--Gross--Krook Boltzmann equation, which approximates
the homogenized nonstationary NSE. We formally prove that the zeroth and first
order moments of the kinetic model provide solutions to the mass and momentum
balance variables of the macroscopic model up to specific orders in the scaling
parameter. Based on the present contributions, in the sequel (part II), the
homogenized NSE are consistently approximated by deriving a limit-consistent
HLBM discretization of the homogenized Bhatnagar--Gross--Krook Boltzmann
equation.
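The BGK relaxation at the heart of such kinetic models can be illustrated with a generic (non-homogenized) D2Q9 collision step; the lattice velocities, weights, and relaxation time below are standard textbook choices, not the paper's specific homogenized scheme, which is derived in part II.

```python
import numpy as np

# Standard D2Q9 lattice velocities and weights (illustrative, not the
# paper's homogenized discretization).
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order BGK equilibrium f_i^eq for density rho and velocity u."""
    cu = c @ u                    # c_i . u for each lattice direction
    usq = u @ u
    return rho * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_collision(f, tau):
    """One BGK relaxation step: f <- f - (f - f^eq)/tau.
    The zeroth and first moments of f give mass and momentum, mirroring
    the moment-matching result stated in the abstract."""
    rho = f.sum()                                # zeroth moment: mass
    u = (f[:, None] * c).sum(axis=0) / rho       # first moment / mass
    return f - (f - equilibrium(rho, u)) / tau
```

By construction, an equilibrium population is a fixed point of the collision, and mass is conserved exactly.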
A study on shape-dependent settling of single particles with equal volume using surface resolved simulations
A detailed knowledge of the influence of a particle’s shape on its settling behavior is useful for the prediction and design of separation processes. Models in the available literature usually fit a given function to experimental data. In this work, a constructive and data-driven approach is presented to obtain new drag correlations. To date, the only shape parameters considered are derivatives of the axis lengths and the sphericity. This does not cover all relevant effects, since the process of settling for arbitrarily shaped particles is highly complex. This work extends the list of considered parameters by, e.g., convexity and roundness, and evaluates the relevance of each. The aim is to find models describing the drag coefficient and settling velocity based on this extended set of shape parameters. The data for the investigations are obtained by surface-resolved simulations of superellipsoids, applying the homogenized lattice Boltzmann method. To closely study the influence of shape, the particles considered are equal in volume and therefore cover a limited range of Reynolds numbers, [9.64, 22.86]. Logistic and polynomial regressions are performed, and the quality of the models is investigated with further statistical methods. In addition to the usually studied relation between drag coefficient and Reynolds number, the dependency of the terminal settling velocity on the shape parameters is also investigated. With adjusted coefficients of determination of 0.96 and 0.86, respectively, the resulting models are in good agreement with the data, yielding a mean deviation below 5.5% on the training and test datasets.
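The polynomial-regression step described above can be sketched on synthetic data; the shape descriptors (sphericity, convexity, roundness), the quadratic form, and all coefficients below are illustrative assumptions, not the paper's fitted correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shape descriptors (sphericity, convexity, roundness) for a
# synthetic particle set; the real data come from surface-resolved simulations.
X = rng.uniform(0.5, 1.0, size=(50, 3))
# Synthetic "drag coefficient" with a known quadratic dependence plus noise.
cd = 2.0 - 1.2*X[:, 0] + 0.5*X[:, 1]**2 + 0.1*rng.normal(size=50)

# Degree-2 polynomial design matrix: intercept, linear and squared terms.
A = np.column_stack([np.ones(len(X)), X, X**2])
coef, *_ = np.linalg.lstsq(A, cd, rcond=None)

# Adjusted coefficient of determination, the quality measure cited above.
pred = A @ coef
ss_res = ((cd - pred)**2).sum()
ss_tot = ((cd - cd.mean())**2).sum()
n, p = A.shape
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p)
```

The adjusted R² penalizes the extra shape parameters, which is why it is a more honest fit measure than plain R² when the feature set is extended.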
Rigorous derivation of the anelastic approximation to the Oberbeck-Boussinesq equations
No abstract available.
Singular limit of the equations of magnetohydrodynamics in the presence of strong stratification
No abstract available.
Gated Domain Units for Multi-source Domain Generalization
The phenomenon of distribution shift (DS) occurs when a dataset at test time differs from the dataset at training time, which can significantly impair the performance of a machine learning model in practical settings due to a lack of knowledge about the data's distribution at test time. To address this problem, we postulate that real-world distributions are composed of latent Invariant Elementary Distributions (I.E.D) across different domains. This assumption implies an invariant structure in the solution space that enables knowledge transfer to unseen domains. To exploit this property for domain generalization, we introduce a modular neural network layer consisting of Gated Domain Units (GDUs) that learn a representation for each latent elementary distribution. During inference, a weighted ensemble of learning machines can be created by comparing new observations with the representations of each elementary distribution. Our flexible framework also accommodates scenarios where explicit domain information is not present. Extensive experiments on image, text, and graph data show consistent performance improvement on out-of-training target domains. These findings support the practicality of the I.E.D assumption and the effectiveness of GDUs for domain generalization.
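The gating idea behind the weighted ensemble can be sketched minimally; the dot-product similarity, the `domain_keys`/`experts` names, and the temperature parameter are illustrative assumptions, not the paper's exact GDU parametrization.

```python
import numpy as np

def gdu_ensemble(x, domain_keys, experts, temperature=1.0):
    """Weighted ensemble over per-domain learners, gated by the similarity
    of input x to each learned elementary-distribution representation.

    domain_keys : (K, d) array, one learned key per latent distribution
    experts     : list of K callables, one learning machine per unit
    (Both are illustrative stand-ins for the paper's learned components.)
    """
    sims = domain_keys @ x / temperature        # similarity to each I.E.D key
    gates = np.exp(sims - sims.max())
    gates /= gates.sum()                        # softmax gating weights
    preds = np.array([f(x) for f in experts])   # one prediction per unit
    return gates @ preds                        # convex combination
```

Because the gates are a softmax, the output is always a convex combination of the experts' predictions, and inputs closer to a given key lean toward that key's expert.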