CFD Modelling of the Mixture Preparation in a Modern Gasoline Direct Injection Engine and Correlations with Experimental PN Emissions
A detailed 3D CFD analysis of a modern gasoline direct injection (GDI) engine is carried
out to reveal the connections between pre-combustion mixture indicators and PN emissions.
Firstly, a novel calibration methodology is introduced to accurately predict the widely
used characteristics of the high-pressure fuel spray. The methodology utilised the Siemens
STAR-CD 3D CFD software environment and employed a combination of statistical and
optimization methods supported by experimental data. The calibration process identified dominant
factors influencing spray properties and established their optimal levels. The two most
used models for fuel atomisation were investigated. The Kelvin–Helmholtz/Rayleigh–Taylor
(KH–RT) and Reitz–Diwakar (RD) break-up models were calibrated in conjunction with
the Rosin–Rammler (RR) mono-modal droplet size distribution. RD outperformed KH–RT
in terms of prediction when comparing numerical spray tip penetration and droplet size
characteristics to the experimental counterparts. Then, the modelling protocol incorporated
droplet-wall interaction models and a multi-component surrogate fuel blend model. The
comprehensive digital model was validated using published data and applied to a modern
small-capacity GDI engine. The study explored various engine operating conditions and
highlighted the contribution of fuel mal-distribution and liquid film retention at spark timing
to Particle Number (PN) emissions. Finally, a novel surrogate model was developed to
predict the engine-out PN. An extensive CFD analysis was conducted considering part-load
operating conditions and variations of engine control variables. The PN surrogate model
was developed using an Elastic Net (EN) regression technique, establishing relationships
between experimental PN emission levels and modelled, pre-combustion, air-fuel mixture
quality indicators. The approach enabled the reliable prediction of engine sooting tendencies
without relying on complex measurements of combustion characteristics. These research
efforts aim to enhance engine efficiency, reduce emissions, and contribute to the development
of a reliable and cost-effective digital toolset for engine development and diagnostics.
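The Elastic Net step described above can be sketched in a few lines. The snippet below is an illustrative stand-in, not the study's model: the three "indicators" are random synthetic features (the real ones would be modelled quantities such as mixture mal-distribution or film mass at spark timing), the penalty weights are arbitrary, and the fit uses proximal gradient descent on the elastic-net objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical pre-combustion mixture-quality indicators (synthetic data).
X = rng.normal(size=(n, 3))
# Synthetic stand-in for (log) PN levels: only the first two features matter.
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

def elastic_net(X, y, l1=0.05, l2=0.05, lr=0.01, steps=5000):
    """Proximal gradient descent on the elastic-net objective:
    (1/2n)||Xw - y||^2 + (l2/2)||w||^2 + l1*||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + l2 * w   # smooth part
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # L1 soft threshold
    return w

w = elastic_net(X, y)
```

The L1 part drives weights of uninformative indicators toward zero, which is why the technique suits screening many candidate mixture indicators at once.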
Bayesian inference for challenging scientific models
Advances in technology and computation have led to ever more complicated
scientific models of phenomena across a wide variety of fields. Many of these
models present challenges for Bayesian inference, as a result of computationally
intensive likelihoods, high-dimensional parameter spaces or large dataset sizes.
In this thesis we show how we can apply developments in probabilistic machine
learning and statistics to do inference with examples of these types of models.
As a demonstration of an applied inference problem involving a non-trivial
likelihood computation, we show how a combination of optimisation and
MCMC methods along with careful consideration of priors can be used to infer
the parameters of an ODE model of the cardiac action potential.
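As a toy illustration of the optimise-then-sample recipe (not the cardiac model itself), the sketch below infers the growth rate of a logistic ODE: an optimiser first locates the posterior mode, then random-walk Metropolis samples around it. The likelihood, prior, and all scales are invented for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
t_obs = np.linspace(0.0, 10.0, 20)
r_true, sigma = 0.8, 0.02

def solve(r):
    """Solve dy/dt = r*y*(1 - y) and return y at the observation times."""
    sol = solve_ivp(lambda t, y: r * y * (1 - y), (0.0, 10.0), [0.05],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

y_obs = solve(r_true) + rng.normal(scale=sigma, size=t_obs.size)

def neg_log_post(r):
    if r <= 0:
        return np.inf
    resid = y_obs - solve(r)
    # Gaussian likelihood plus a weakly informative log-normal prior on r.
    return 0.5 * np.sum(resid**2) / sigma**2 + 0.5 * np.log(r)**2

# Stage 1: optimisation finds the posterior mode (a good starting point).
r_map = minimize_scalar(neg_log_post, bounds=(0.1, 3.0), method="bounded").x

# Stage 2: random-walk Metropolis refines it into posterior samples.
samples, r = [], r_map
lp = -neg_log_post(r)
for _ in range(500):
    r_prop = r + 0.02 * rng.normal()
    lp_prop = -neg_log_post(r_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = r_prop, lp_prop
    samples.append(r)
```

Starting the chain at the mode avoids wasting samples on burn-in, which matters when each likelihood evaluation requires an ODE solve.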
We then consider the problem of pileup, a phenomenon that occurs in
astronomy when using CCD detectors to observe bright sources. It complicates
the fitting of even simple spectral models by introducing an observation model
with a large number of continuous and discrete latent variables that scales with
the size of the dataset. We develop an MCMC-based method that can work in
the presence of pileup by explicitly marginalising out discrete variables and
using adaptive HMC on the remaining continuous variables. We show with
synthetic experiments that it allows us to fit spectral models in the presence
of pileup without biasing the results. We also compare it to neural Simulation-
Based Inference approaches, and find that they perform comparably to the
MCMC-based approach whilst being able to scale to larger datasets.
As an example of a problem where we wish to do inference with extremely
large datasets, we consider the Extreme Deconvolution method. Originally
intended for astronomical datasets, the method fits a probability density to
a dataset where each observation has Gaussian noise added with a known
sample-specific covariance. The existing fitting method is batch EM,
which would not normally be applied to large datasets such as the Gaia catalog
containing noisy observations of a billion stars. In this thesis we propose two
minibatch variants of extreme deconvolution, based on an online variation of
the EM algorithm, and direct gradient-based optimisation of the log-likelihood,
both of which can run on GPUs. We demonstrate that these methods provide
faster fitting, whilst being able to scale to much larger models for use with
larger datasets.
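A one-dimensional sketch of the gradient-based variant: fit a single Gaussian N(mu, s^2) to data where observation i carries a known noise variance v_i, so the likelihood of x_i uses the convolved variance s^2 + v_i. The thesis applies this to mixtures on GPUs; here plain NumPy with analytic gradients and made-up data illustrates the minibatch update.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
mu_true, s_true = 1.0, 0.5
v = rng.uniform(0.1, 1.0, size=n)            # known per-sample noise variances
x = rng.normal(mu_true, np.sqrt(s_true**2 + v))

mu, log_s = 0.0, 0.0                         # optimise log s to keep s > 0
lr = 0.05
for _ in range(2000):
    idx = rng.integers(0, n, size=256)       # minibatch
    xb, vb = x[idx], v[idx]
    s2 = np.exp(2.0 * log_s)
    total = s2 + vb                          # convolved variance per sample
    r = xb - mu
    # analytic gradients of the mean log-likelihood of the minibatch
    g_mu = np.mean(r / total)
    g_log_s = np.mean((r**2 / total - 1.0) / total) * s2
    mu += lr * g_mu
    log_s += lr * g_log_s

s_hat = np.exp(log_s)
```

Each update touches only a minibatch, so memory and compute per step are independent of the dataset size, which is what makes billion-star catalogues feasible.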
We then extend the extreme deconvolution approach to work with non-
Gaussian noise, and to use more flexible density estimators such as normalizing
flows. Since both adjustments lead to an intractable likelihood, we resort to
amortized variational inference in order to fit them. We show that for some
datasets flows can outperform Gaussian mixtures for extreme deconvolution,
and that fitting with non-Gaussian noise is now possible.
Natural and Technological Hazards in Urban Areas
Natural hazard events and technological accidents are distinct causes of environmental impacts. Natural hazards are physical phenomena that have been active over geological time, whereas technological hazards result from actions or facilities created by humans. In recent times, combined natural and man-made hazards have also been induced. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Modeling Daily Fantasy Basketball
Daily fantasy basketball presents interesting problems to researchers due to the extensive amounts of data that need to be explored when trying to predict player performance. Much of this data can be noisy due to the variance within the sport of basketball. Because of this, a high degree of skill is required to consistently win in daily fantasy basketball contests. On any given day, users are challenged to predict how players will perform and to create a lineup of the eight best players under fixed salary and positional requirements. In this thesis, we present a tool to assist daily fantasy basketball players with these tasks. We explore the use of several machine learning techniques to predict player performance and develop multiple approaches to approximate optimal lineups. We then compare each heuristic and lineup-creation combination, and show that our best combinations perform much better than random lineups. Although creating provably optimal lineups is computationally infeasible, by focusing on players in the Pareto front between performance and cost we can reduce the search space and compute near-optimal lineups. Additionally, our greedy and evolutionary lineup search methods offer similar performance at a much smaller computational cost. Our analysis indicates that, due to how player salaries are structured, it is generally preferable to construct a lineup of a few stars, filling out the rest of the roster with average to mediocre players, rather than one in which all players are expected to perform about the same. Through these findings we hope that our research can serve as a future baseline towards developing an automated or semi-automated tool to optimize daily fantasy basketball.
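The two search-space ideas above, pruning players to the Pareto front of projected points versus salary and then filling a lineup greedily, can be sketched as follows. Player data, the salary cap, and the scoring are invented, and positional requirements are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical player pool: (name, projected points, salary).
players = [(f"p{i}", float(pts), int(sal))
           for i, (pts, sal) in enumerate(zip(rng.uniform(10, 60, 50),
                                              rng.integers(3000, 12000, 50)))]

def pareto_front(pool):
    """Keep players not dominated by anyone with >= points and <= salary."""
    return [(name, pts, sal) for name, pts, sal in pool
            if not any(p2 >= pts and s2 <= sal and (p2, s2) != (pts, sal)
                       for _, p2, s2 in pool)]

def greedy_lineup(pool, cap=50_000, size=8):
    """Fill the lineup in descending points-per-dollar order under the cap."""
    lineup, spent = [], 0
    for name, pts, sal in sorted(pool, key=lambda p: p[1] / p[2], reverse=True):
        if len(lineup) < size and spent + sal <= cap:
            lineup.append(name)
            spent += sal
    return lineup, spent

front = pareto_front(players)
lineup, spent = greedy_lineup(front)
```

Restricting search to the Pareto front is safe in spirit because a dominated player can always be swapped for his dominator without hurting points or budget; the greedy pass then trades optimality for speed.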
AI-based design methodologies for hot form quench (HFQ®)
This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for stamping complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, these simulations are not suitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early and late-stage design. With these methodologies, the aim is to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits.
To achieve the objectives of this project, two main efforts were undertaken. Firstly, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution. New equation sets were proposed to model trends and design maps were created to guide component design at early stages. Secondly, a platform was developed to optimise 3D geometries for stamping, using deep learning technologies to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed for free morphing of complex geometries, leading to significant improvements in component quality.
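A toy analogue of the optimisation loop just described: a generator maps a latent input to a geometry parameter (here a single corner radius), a surrogate predicts peak thinning, and gradient descent updates the latent until a thinning constraint is met. Both functions are invented stand-ins for the thesis's neural networks, which operate on SDF geometries and image-based surrogates.

```python
import numpy as np

def generator(z):
    """Latent input -> corner radius (softplus keeps it positive)."""
    return np.log1p(np.exp(z)) + 1.0

def surrogate(radius):
    """Stand-in manufacturability model: sharper corners thin more."""
    return 0.45 / radius

def loss(z, target=0.15):
    """Penalise predicted thinning above the allowed target."""
    return max(surrogate(generator(z)) - target, 0.0) ** 2

z, lr, eps = 0.0, 20.0, 1e-5
for _ in range(500):
    # finite-difference gradient through generator + surrogate
    g = (loss(z + eps) - loss(z - eps)) / (2 * eps)
    z -= lr * g

radius = generator(z)
thinning = surrogate(radius)
```

The essential point is that the geometry is never edited directly: only the generator's input moves, so every candidate stays on the learned manifold of valid shapes.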
The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can significantly contribute to the continued advancement of the manufacturing sector.
Practical Inherently Safer Design Approaches During Early Process Design Stages Aiming for Sustainability
In traditional industrial process design approaches, techno-economic criteria have been the primary objectives in the early process design stages. Safety is often considered only at a later design stage (e.g., the detailed engineering stage). Under such a traditional approach, most of the design degrees of freedom, including technology and configuration choices, have already been fixed by the time safety is considered, and modifying a process at later stages is costly or unreliable. To solve this issue, safety engineers and researchers have made numerous attempts to consider process safety during the early design stages. In particular, special attention has been paid to adopting inherently safer design (ISD) because ISD is deemed the most cost-effective risk reduction strategy at early design stages. However, adopting ISD at the early design stages remains challenging for process engineers because of the lack of guidance and insufficient information on upcoming process facilities.
To address this challenge, this dissertation consists of three peer-reviewed journal papers [Articles #1 - #3]. Reviewing the progress of inherently safer design over the last three decades (in particular during the early design stage), Article #1 selects 73 inherent safety assessment tools that can be utilized during the early design stages and categorizes them into three groups: 22 hazard-based inherent safety assessment tools (H-ISATs), 33 risk-based inherent safety assessment tools (R-ISATs), and 18 cost-optimal inherent safety assessment tools (CO-ISATs). The goal of this article is to enable process engineers to use all the available design degrees of freedom to mitigate risk early enough in the design process.
Article #2 analyzes 94 chemical process incidents investigated in reports by the U.S. Chemical Safety and Hazard Investigation Board (CSB). To analyze them systematically, the article proposes 17 incident cause factors, 6 scenario factors, and 6 consequence factors to determine whether ISD would have helped to prevent these incidents.
Article #3 proposes hands-on predictive models of the flash point, the heat of combustion, the lower flammability limit (LFL), and the upper flammability limit (UFL). By incorporating nonlinear terms and variable transformations alongside linear terms, the article constructs practical, reliable regression models from readily available variables: the number of atoms, the molecular weight, and the boiling point. The purpose is to enable a process engineer to quickly estimate the hazardous properties of intended process materials.
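The style of model described, a least-squares fit of a hazard property on readily available descriptors with a transformation to capture nonlinearity, can be sketched as below. The data are synthetic with a known built-in relation; the descriptors and coefficients are illustrative, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
mw = rng.uniform(30, 300, n)          # molecular weight (synthetic)
tb = rng.uniform(250, 550, n)         # boiling point, K (synthetic)
# Synthetic "flash point" that is linear in tb and in log(mw) plus noise.
fp = 20.0 + 0.6 * tb + 15.0 * np.log(mw) + rng.normal(scale=3.0, size=n)

# Design matrix: intercept, a linear term, and a transformed variable.
X = np.column_stack([np.ones(n), tb, np.log(mw)])
beta, *_ = np.linalg.lstsq(X, fp, rcond=None)
```

The transformation keeps the model linear in its coefficients, so ordinary least squares still applies while the fitted curve is nonlinear in the raw descriptor.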
Proceedings of SIRM 2023 - The 15th European Conference on Rotordynamics
It was our great honor and pleasure to host the SIRM Conference, after 2003 and 2011, for the third time in Darmstadt. Rotordynamics covers a huge variety of applications and challenges, all of which are in the scope of this conference. The conference was opened with a keynote lecture given by Rainer Nordmann, one of the three founders of SIRM "Schwingungen in rotierenden Maschinen". In total, 53 papers passed our strict review process and were presented. This impressively shows that rotordynamics is as relevant as ever. These contributions cover a very wide spectrum of session topics: fluid bearings and seals; air foil bearings; magnetic bearings; rotor blade interaction; rotor fluid interactions; unbalance and balancing; vibrations in turbomachines; vibration control; instability; electrical machines; monitoring, identification and diagnosis; advanced numerical tools and nonlinearities; as well as general rotordynamics. The international character of the conference has been significantly enhanced by the Scientific Board since the 14th SIRM, resulting on the one hand in an expanded Scientific Committee, which meanwhile consists of 31 members from 13 different European countries, and on the other hand in the new name "European Conference on Rotordynamics". This new international profile has also been
emphasized by the participants of the 15th SIRM, who came from 17 different countries across three continents. We experienced a vital discussion and dialogue between industry and academia at the conference, where roughly one third of the papers were presented by industry and two thirds by academia, an excellent basis for the bidirectional transfer between industry and academia that we at Technical University of Darmstadt call xchange. At this point we also want to give our special thanks to the eleven industry sponsors for their great support of the conference. On behalf of the Darmstadt Local Committee, I welcome you to read the papers of the 15th SIRM, which give further insight into the topics and presentations.
Analytical validation of innovative magneto-inertial outcomes: a controlled environment study.
- …