Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of past history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Reviews of Fluid Mechanics, 202
A Reinforcement Learning Approach for Transient Control of Liquid Rocket Engines
Today, liquid rocket engines use closed-loop control mostly near steady
operating conditions, while the transient phases are traditionally controlled
in open loop because of the highly nonlinear system dynamics. This situation
is unsatisfactory, in particular for reusable engines: an open-loop control
system cannot deliver optimal engine performance under external disturbances
or the gradual degradation of engine components over time. In this paper, we
study a deep reinforcement learning approach for optimal control of the
continuous start-up phase of a generic gas-generator engine. It is shown that
the learned policy can reach different steady-state operating points and
convincingly adapt to changing system parameters. A quantitative comparison
with carefully tuned open-loop sequences and with PID controllers is included.
The deep reinforcement learning controller achieves the highest performance
and requires only minimal computational effort to compute the control action,
a significant advantage over approaches that require online optimization, such
as model predictive control.
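The PID baseline mentioned above can be illustrated with a minimal sketch. The
gains, time step, and first-order plant below are illustrative assumptions for
a toy setpoint-tracking problem, not values or models from the paper:

```python
# Minimal discrete PID controller of the kind used as a baseline for
# comparison with learned controllers. Plant and gains are hypothetical.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        # Classic PID law: u = kp*e + ki*integral(e) + kd*de/dt
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order plant x' = -x + u toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0, x)
    x += (-x + u) * 0.01  # explicit Euler integration of the plant
```

The integral term removes the steady-state error here; the appeal of a learned
policy over such a controller, per the abstract, is adapting to changing
system parameters without retuning the gains.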
A Benchmark Environment Motivated by Industrial Control Problems
In the research area of reinforcement learning (RL), novel and promising
methods are frequently developed and introduced to the RL community. However,
although many researchers are keen to apply their methods to real-world
problems, implementing such methods in real industry environments is often a
frustrating and tedious process. Generally, academic research groups have only
limited access to real industrial data and applications. For this reason, new
methods are usually developed, evaluated, and compared using artificial
software benchmarks. On the one hand, these benchmarks are designed to provide
interpretable RL training scenarios and detailed insight into the learning
process of the method at hand. On the other hand, they usually do not share
much similarity with real-world industrial applications. We therefore used our
industry experience to design a benchmark that bridges the gap between freely
available, documented, and motivated artificial benchmarks and the properties
of real industrial problems. The resulting industrial benchmark (IB) has been
made publicly available to the RL community by publishing its Java and Python
code, including an OpenAI Gym wrapper, on GitHub. In this paper we motivate
and describe in detail the IB's dynamics and identify prototypic experimental
settings that capture common situations in real-world industrial control
problems.
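An OpenAI Gym wrapper, as mentioned in the abstract, exposes the standard
reset/step interaction loop. The sketch below shows that loop against a toy
stand-in environment (`ToyEnv` is an assumption for illustration; with the
real IB one would construct its Gym-wrapped environment instead):

```python
# Sketch of the standard Gym-style interaction loop exposed by benchmark
# wrappers. ToyEnv is a hypothetical stand-in, not the IB itself.
import random

class ToyEnv:
    """Minimal Gym-like environment: reward is higher near state 0."""
    def reset(self):
        self.state = random.uniform(-1.0, 1.0)
        return self.state

    def step(self, action):
        self.state += action              # apply the control action
        reward = -abs(self.state)         # penalize distance from 0
        done = abs(self.state) > 5.0      # episode ends if state diverges
        return self.state, reward, done, {}

env = ToyEnv()
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = random.uniform(-0.1, 0.1)   # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()
```

Because the wrapper conforms to this interface, any RL agent written against
Gym can be evaluated on the IB without environment-specific glue code.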
Improving aircraft performance using machine learning: a review
This review covers the new developments in machine learning (ML) that are
impacting the multi-disciplinary area of aerospace engineering, including
fundamental fluid dynamics (experimental and numerical), aerodynamics,
acoustics, combustion and structural health monitoring. We review the state of
the art, gathering the advantages and challenges of ML methods across different
aerospace disciplines and provide our view on future opportunities. The basic
concepts and the most relevant strategies for ML are presented together with
the most relevant applications in aerospace engineering, revealing that ML is
improving aircraft performance and that these techniques will have a large
impact in the near future.
Machine Learning Methods for the Design and Operation of Liquid Rocket Engines -- Research Activities at the DLR Institute of Space Propulsion
The last years have witnessed an enormous interest in the use of artificial
intelligence methods, especially machine learning algorithms. This also has a
major impact on aerospace engineering in general, and the design and operation
of liquid rocket engines in particular, and research in this area is growing
rapidly. The paper describes current machine learning applications at the DLR
Institute of Space Propulsion. Not only applications in the field of modeling
are presented, but also convincing results that prove the capabilities of
machine learning methods for control and condition monitoring are described in
detail. Furthermore, the advantages and disadvantages of the presented methods
as well as current and future research directions are discussed.
Comment: Submitted as conference paper to the Space Propulsion 2020+1
Conference