DeepFlame: A deep learning empowered open-source platform for reacting flow simulations
In this work, we introduce DeepFlame, an open-source C++ platform with the
capabilities of utilising machine learning algorithms and pre-trained models to
solve for reactive flows. We combine the individual strengths of the
computational fluid dynamics library OpenFOAM, machine learning framework
Torch, and chemical kinetics program Cantera. The complexity of cross-library
function and data interfacing (the core of DeepFlame) is minimised to achieve a
simple and clear workflow for code maintenance, extension and upgrading. As a
demonstration, we apply our recent work on deep learning for predicting
chemical kinetics (Zhang et al. Combust. Flame vol. 245 pp. 112319, 2022) to
highlight the potential of machine learning in accelerating reacting flow
simulation. A thorough code validation is conducted via a broad range of
canonical cases to assess its accuracy and efficiency. The results demonstrate
that the convection-diffusion-reaction algorithms implemented in DeepFlame are
robust and accurate for both steady-state and transient processes. In addition,
a number of methods aiming to further improve the computational efficiency,
e.g. dynamic load balancing and adaptive mesh refinement, are explored. Their
performances are also evaluated and reported. With the deep learning method
implemented in this work, a speed-up of two orders of magnitude is achieved in
a simple hydrogen ignition case when performed on a medium-end graphics
processing unit (GPU). Further gain in computational efficiency is expected for
hydrocarbon and other complex fuels. A similar level of acceleration is
obtained on an AI-specific chip, the deep computing unit (DCU), highlighting the
potential of DeepFlame in leveraging next-generation computing architectures
and hardware.
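The acceleration mechanism described above, replacing many stiff chemistry sub-steps with a single evaluation of a pre-trained model, can be illustrated with a toy single-species sketch. All names and numbers below are illustrative assumptions, not DeepFlame's actual Torch interface; the exact exponential solution stands in for the trained network.

```python
import math

def integrate_chemistry(y0, k, dt, substeps=1000):
    """Reference path: many small explicit-Euler sub-steps of the stiff
    source term dy/dt = -k*y within one CFD time step."""
    y, h = y0, dt / substeps
    for _ in range(substeps):
        y += h * (-k * y)
    return y

def surrogate_chemistry(y0, k, dt):
    """Stand-in for a pre-trained model: one evaluation replaces the whole
    sub-stepping loop (here the exact solution plays the surrogate)."""
    return y0 * math.exp(-k * dt)

y0, k, dt = 1.0, 50.0, 0.1   # initial mass fraction, rate, CFD step (toy values)
y_ref = integrate_chemistry(y0, k, dt)
y_sur = surrogate_chemistry(y0, k, dt)
```

The two results agree closely, but the surrogate path performs one function call instead of a thousand sub-steps, which is the source of the reported speed-up.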
Deep Learning for Abstraction, Control and Monitoring of Complex Cyber-Physical Systems
Cyber-Physical Systems (CPS) consist of digital devices that interact with some physical components. Their popularity and complexity are growing exponentially, giving birth to new, previously unexplored, safety-critical application domains. As CPS permeate our daily lives, it becomes imperative
to reason about their reliability. Formal methods provide rigorous techniques for the verification, control and synthesis of safe and reliable CPS. However, these methods do not scale with the complexity of the system, so their applicability to real-world problems is limited. A promising strategy is to leverage deep learning techniques to tackle the scalability issue of formal methods, transforming infeasible problems into approximately solvable ones. The approximate models are trained on observations that are solutions of the formal problem. In this thesis, we focus on the following computationally challenging tasks: the modeling and simulation of a complex stochastic model, the design of a safe and robust control policy for a system acting in a highly uncertain environment, and the runtime verification problem under full or partial observability. Our approaches, based on deep learning, are applicable to real-world complex and safety-critical systems acting under strict real-time constraints and in the presence of a significant amount of uncertainty.
Simulation Intelligence: Towards a New Generation of Scientific Methods
The original "Seven Motifs" set forth a roadmap of essential methods for the
field of scientific computing, where a motif is an algorithmic method that
captures a pattern of computation and data movement. We present the "Nine
Motifs of Simulation Intelligence", a roadmap for the development and
integration of the essential algorithms necessary for a merger of scientific
computing, scientific simulation, and artificial intelligence. We call this
merger simulation intelligence (SI), for short. We argue the motifs of
simulation intelligence are interconnected and interdependent, much like the
components within the layers of an operating system. Using this metaphor, we
explore the nature of each layer of the simulation intelligence operating
system stack (SI-stack) and the motifs therein: (1) Multi-physics and
multi-scale modeling; (2) Surrogate modeling and emulation; (3)
Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based
modeling; (6) Probabilistic programming; (7) Differentiable programming; (8)
Open-ended optimization; (9) Machine programming. We believe coordinated
efforts between motifs offer immense opportunities to accelerate scientific
discovery, from solving inverse problems in synthetic biology and climate
science, to directing nuclear energy experiments and predicting emergent
behavior in socioeconomic settings. We elaborate on each layer of the SI-stack,
detailing the state-of-art methods, presenting examples to highlight challenges
and opportunities, and advocating for specific ways to advance the motifs and
the synergies from their combinations. Advancing and integrating these
technologies can enable a robust and efficient hypothesis-simulation-analysis
type of scientific method, which we introduce with several use-cases for
human-machine teaming and automated science
A Behavioral Approach to Robust Machine Learning
Machine learning is revolutionizing almost all fields of science and technology and has been proposed as a pathway to solving many previously intractable problems, such as autonomous driving and other complex robotics tasks. While the field has demonstrated impressive results on certain problems, many of these results have not translated to applications in physical systems, partly due to the cost of system failure and partly due to the difficulty of ensuring reliable and robust model behavior. Deep neural networks, for instance, have simultaneously demonstrated both incredible performance in game playing and image processing, and remarkable fragility. This combination of high average performance and catastrophically bad worst-case performance presents a serious danger as deep neural networks are currently being used in safety-critical tasks such as assisted driving.
In this thesis, we propose a new approach to training models that have built in robustness guarantees. Our approach to ensuring stability and robustness of the models trained is distinct from prior methods; where prior methods learn a model and then attempt to verify robustness/stability, we directly optimize over sets of
models where the necessary properties are known to hold.
Specifically, we apply methods from robust and nonlinear control to the analysis and synthesis of recurrent neural networks, equilibrium neural networks, and recurrent equilibrium neural networks. The techniques developed allow us to enforce properties such as incremental stability, incremental passivity, and incremental l2-gain bounds / Lipschitz bounds. A central consideration in the development of our model sets is the difficulty of fitting models. All of our model sets can be constructed as the image of a convex set, or even of R^N, allowing useful properties to be easily imposed during the training procedure via simple interior-point methods, penalty methods, or unconstrained optimization.
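The "optimize over sets where the property holds by construction" idea can be sketched in its simplest instance: linear maps with a prescribed Lipschitz (spectral-norm) bound. This is not the thesis's recurrent equilibrium network parametrization, only a minimal one-layer analogue; the function name and values are illustrative assumptions.

```python
import numpy as np

def lipschitz_linear(W_free, gamma):
    """Map an unconstrained matrix into the set {W : ||W||_2 <= gamma}, so the
    linear map x -> W @ x is gamma-Lipschitz by construction."""
    s_max = np.linalg.norm(W_free, ord=2)        # largest singular value
    return gamma * W_free / max(s_max, gamma)    # identity if already feasible

rng = np.random.default_rng(0)
# the raw matrix badly violates the bound; the image is rescaled onto it
W = lipschitz_linear(10.0 * rng.standard_normal((8, 8)), gamma=2.0)
```

Because every unconstrained `W_free` maps to a feasible matrix, plain gradient descent on `W_free` can never leave the constrained set, which is the key difference from learn-then-verify approaches.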
In the final chapter, we study the problem of learning networks of interacting models with guarantees that the resulting networked system is stable and/or monotone, i.e., that the order relations between states are preserved. While our approach to learning in this chapter is similar to that of the previous chapters, the model set that we propose has a separable structure that allows for the scalable and distributed identification of large-scale systems via the alternating direction method of multipliers (ADMM).
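The separable, distributed flavour of ADMM mentioned above can be sketched on a toy consensus least-squares problem: two hypothetical subsystems with private data agree on one shared parameter vector. The penalty rho, sizes, and iteration count are illustrative choices, not the thesis's identification setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# two hypothetical subsystems, each holding private data (A_i, b_i)
A = [rng.standard_normal((20, n)) for _ in range(2)]
b = [rng.standard_normal(20) for _ in range(2)]

rho = 1.0                              # ADMM penalty (illustrative choice)
z = np.zeros(n)                        # shared (consensus) parameter
u = [np.zeros(n) for _ in range(2)]    # scaled dual variables

for _ in range(300):
    # local least-squares updates: independent, hence parallelisable
    x = [np.linalg.solve(2.0 * A[i].T @ A[i] + rho * np.eye(n),
                         2.0 * A[i].T @ b[i] + rho * (z - u[i]))
         for i in range(2)]
    z = np.mean([x[i] + u[i] for i in range(2)], axis=0)  # consensus step
    u = [u[i] + x[i] - z for i in range(2)]               # dual update

# centralised solution of the stacked problem, for comparison
x_star, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
```

Only the small vectors `x_i + u_i` are exchanged per iteration, which is what makes the scheme attractive for large-scale networked identification.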
A semi-incremental model order reduction approach for fatigue damage computations
Nowadays, there is an increasing need for, and interest in, model order reduction (MOR) techniques that make it feasible to approximate complex high-fidelity models (HFM) in real-time and many-query scenarios using limited computational resources. This research investigates the development of model order reduction techniques suitable for structural problems with nonlinear material behaviour. A semi-incremental framework based on the large time increment (LATIN) approach is proposed to tackle fatigue damage computations subjected to variable amplitude and frequency loadings. Due to the nonlinear damage growth, the damage accumulation driven by variable loads should reflect the load sequence effect. Experiments have revealed that such an effect becomes more crucial as the difference in amplitudes increases (Lemaitre and Desmorat, 2005).
The proposed implementation approximates the structural response within a material-independent framework, i.e., different material models may be incorporated straightforwardly. Loads with variable amplitudes and frequencies are addressed in a semi-incremental manner, where full cycles are simulated consecutively, and convergence is ensured using a hybrid approach.
A low-rank approximation of the solution, in terms of a proper generalised decomposition (PGD), is sought directly in the online phase of the proposed scheme, where the optimality of the generated PGD basis and its growth are controlled using different orthogonalisation schemes. PGD bases can be interpreted as a set of linear subspaces adapted on the fly to the problem settings. Different orthonormalisation techniques were tested to ensure the optimality of the generated PGD modes. Following this assessment, a randomised singular value decomposition (SVD) approach that exploits the outer-product format of the PGD solution was selected. The SVD scheme resulted in considerable computational time savings by limiting the number of modes compared to a Gram-Schmidt procedure. The whole numerical scheme is realised in the online phase, and no offline phase is considered so far.
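The step that exploits the outer-product format of the PGD solution can be sketched as follows, using a plain (non-randomised) SVD of the small core for simplicity; the factor names and sizes are illustrative assumptions.

```python
import numpy as np

def recompress(A, B, tol=1e-10):
    """Recompress the low-rank field U = A @ B.T without ever forming U:
    QR on each factor, SVD of the small core, truncation by tolerance."""
    Qa, Ra = np.linalg.qr(A)
    Qb, Rb = np.linalg.qr(B)
    Uc, s, Vt = np.linalg.svd(Ra @ Rb.T)
    r = int(np.sum(s > tol * s[0]))            # numerical rank kept
    return Qa @ Uc[:, :r] * s[:r], Qb @ Vt[:r].T

rng = np.random.default_rng(2)
# 20 stored PGD-style modes that span only a rank-3 field (redundant basis)
X, Y = rng.standard_normal((100, 3)), rng.standard_normal((80, 3))
A = X @ rng.standard_normal((3, 20))
B = Y @ rng.standard_normal((3, 20))
A2, B2 = recompress(A, B)                      # down from 20 modes to 3
```

The cost is driven by the small core (number of modes squared) rather than the full field, which is why limiting the mode count pays off over a Gram-Schmidt procedure.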
Improvements to the introduced reduced order model (ROM) are further investigated by exploiting low-rank approximations in an arithmetic operation toolbox that allows for faster simulations with lower memory footprints. Then, a data assisted approach that combines machine learning techniques such as artificial neural networks (ANN) with MOR is examined to show the promising results of data recycling, i.e., reusing previously generated data.
The semi-incremental scheme and a displacement-formulated standard incremental finite element framework are implemented to illustrate their differences in terms of computational time and memory footprint. Numerical examples with variable loadings showing speedups on the order of 10-100 are discussed, and a typical implementation is provided as open-source code, available at https://gitlab.com/shadialameddin/romfem
Uncertainty Quantification in Machine Learning for Engineering Design and Health Prognostics: A Tutorial
On top of machine learning models, uncertainty quantification (UQ) functions
as an essential layer of safety assurance that could lead to more principled
decision making by enabling sound risk assessment and management. The safety
and reliability improvement of ML models empowered by UQ has the potential to
significantly facilitate the broad adoption of ML solutions in high-stakes
decision settings, such as healthcare, manufacturing, and aviation, to name a
few. In this tutorial, we aim to provide a holistic lens on emerging UQ methods
for ML models with a particular focus on neural networks and the applications
of these UQ methods in tackling engineering design as well as prognostics and
health management problems. Toward this goal, we start with a comprehensive
classification of uncertainty types, sources, and causes pertaining to UQ of ML
models. Next, we provide a tutorial-style description of several
state-of-the-art UQ methods: Gaussian process regression, Bayesian neural
network, neural network ensemble, and deterministic UQ methods focusing on
spectral-normalized neural Gaussian process. Established upon the mathematical
formulations, we subsequently examine the soundness of these UQ methods
quantitatively and qualitatively (via a toy regression example) to reveal their
strengths and shortcomings along different dimensions. Then, we review
quantitative metrics commonly used to assess the quality of predictive
uncertainty in classification and regression problems. Afterward, we discuss
the increasingly important role of UQ of ML models in solving challenging
problems in engineering design and health prognostics. Two case studies with
source codes available on GitHub are used to demonstrate these UQ methods and
compare their performance in early-stage life prediction of lithium-ion
batteries and remaining useful life prediction of turbofan engines.
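Of the UQ methods surveyed in the tutorial above, Gaussian process regression is the most compact to sketch. The toy below (kernel hyperparameters are illustrative, not the tutorial's case-study settings) shows the defining behaviour: predictive variance shrinks near the training data and grows far from it.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=0.2, sf=1.0, sn=0.1):
    """Gaussian process regression with an RBF kernel: posterior predictive
    mean and variance at query points Xs (1-D inputs for simplicity)."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
    K = k(X, X) + sn**2 * np.eye(len(X))     # noisy training covariance
    Ks = k(Xs, X)                            # query/training cross-covariance
    mean = Ks @ np.linalg.solve(K, y)
    var = sf**2 + sn**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

X = np.linspace(0.0, 1.0, 8)                 # training inputs
y = np.sin(2 * np.pi * X)                    # noiseless toy targets
mean, var = gp_predict(X, y, np.array([0.5, 3.0]))
# variance is small near the data (x = 0.5) and large far away (x = 3.0)
```

This in-between-the-data versus far-from-the-data contrast is exactly the quantity the tutorial's calibration metrics then assess for the neural-network-based UQ methods.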