Efficient collective swimming by harnessing vortices through deep reinforcement learning
Fish in schooling formations navigate complex flow-fields replete with
mechanical energy in the vortex wakes of their companions. Their schooling
behaviour has been associated with evolutionary advantages including collective
energy savings. How fish harvest energy from their complex fluid environment, and the underlying physical mechanisms governing energy extraction during collective swimming, remain unknown. Here we show that fish can improve their
sustained propulsive efficiency by actively following, and judiciously
intercepting, vortices in the wake of other swimmers. This swimming strategy
leads to collective energy savings and is revealed through the first-ever combination of deep reinforcement learning with high-fidelity flow simulations. We find that a 'smart swimmer' can adapt its position and body deformation to synchronise with the momentum of the oncoming vortices, improving its average swimming efficiency at no cost to the leader. The results show that fish may
harvest energy deposited in vortices produced by their peers, and support the
conjecture that swimming in formation is energetically advantageous. Moreover,
this study demonstrates that deep reinforcement learning can produce navigation
algorithms for complex flow-fields, with promising implications for energy
savings in autonomous robotic swarms.
Comment: 26 pages, 14 figures
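The learning loop just described can be caricatured in a few lines of tabular Q-learning: a follower adjusts its lateral offset relative to the leader's wake and is rewarded when it intercepts the vortex row. The grid, reward, and vortex position below are invented for illustration only; the paper itself couples deep reinforcement learning to high-fidelity Navier-Stokes simulations.

```python
import numpy as np

# Toy sketch of the strategy in the abstract: a follower picks lateral
# moves relative to the leader and is rewarded for intercepting vortices.
# All quantities here (grid, reward, vortex location) are hypothetical.

N_POS = 11            # discretized lateral offsets
ACTIONS = (-1, 0, 1)  # move down, stay, move up
VORTEX_ROW = 5        # offset where wake vortices pass (invented)

def reward(pos):
    # Highest efficiency gain when synchronised with the vortex row.
    return 1.0 if pos == VORTEX_ROW else -0.1

def train(episodes=500, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_POS, len(ACTIONS)))
    for _ in range(episodes):
        pos = int(rng.integers(N_POS))
        for _ in range(30):  # steps per episode
            a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[pos].argmax())
            nxt = int(np.clip(pos + ACTIONS[a], 0, N_POS - 1))
            r = reward(nxt)
            # Standard Q-learning temporal-difference update.
            Q[pos, a] += alpha * (r + gamma * Q[nxt].max() - Q[pos, a])
            pos = nxt
    return Q

Q = train()
policy = Q.argmax(axis=1)
```

The learned greedy policy steers the swimmer from any offset toward the vortex row and holds it there, the tabular analogue of the adaptive positioning the deep-RL swimmer discovers.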
Computing the force distribution on the surface of complex, deforming geometries using vortex methods and Brinkman penalization
The distribution of forces on the surface of complex, deforming geometries is
an invaluable output of flow simulations. One particular example of such
geometries involves self-propelled swimmers. Surface forces can provide
significant information about the flow field sensed by the swimmers, and are
difficult to obtain experimentally. At the same time, simulations of flow
around complex, deforming shapes can be computationally prohibitive when
body-fitted grids are used. Alternatively, such simulations may employ
penalization techniques. Penalization methods rely on simple Cartesian grids to
discretize the governing equations, which are enhanced by a penalty term to
account for the boundary conditions. They have been shown to provide a robust
estimation of mean quantities, such as drag and propulsion velocity, but the
computation of surface force distribution remains a challenge. We present a
method for determining flow-induced forces on the surface of both rigid and
deforming bodies, in simulations using re-meshed vortex methods and Brinkman
penalization. The pressure field is recovered from the velocity by solving a
Poisson's equation using the Green's function approach, augmented with a fast
multipole expansion and a tree-code algorithm. The viscous forces are
determined by evaluating the strain-rate tensor on the surface of deforming
bodies, and on a 'lifted' surface in simulations involving rigid objects. We
present results for benchmark flows demonstrating that we can obtain an
accurate distribution of flow-induced surface-forces. The capabilities of our
method are demonstrated using simulations of self-propelled swimmers, where we
obtain the pressure and shear distribution on their deforming surfaces.
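The pressure-recovery step lends itself to a compact illustration. For incompressible 2-D flow the pressure satisfies the Poisson equation lap(p) = 2*rho*(du/dx * dv/dy - du/dy * dv/dx); the method above solves it with a Green's-function and fast-multipole approach, but on a periodic Taylor-Green flow, where the exact pressure p = (cos 2x + cos 2y)/4 is known (rho = 1), a plain FFT solve is enough to sketch the idea:

```python
import numpy as np

# Recover pressure from a known velocity field by solving the pressure
# Poisson equation. Periodic Taylor-Green flow, spectral (FFT) inversion;
# the paper uses a Green's-function/FMM solver instead of the FFT used here.

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers on [0, 2*pi)
KX, KY = np.meshgrid(k, k, indexing="ij")

def ddx(f, K):
    # Spectral derivative along the axis selected by the wavenumber array K.
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

# Right-hand side of the pressure Poisson equation (rho = 1).
rhs = 2.0 * (ddx(u, KX) * ddx(v, KY) - ddx(u, KY) * ddx(v, KX))

# Invert the Laplacian in Fourier space, fixing the zero-mean gauge.
K2 = KX**2 + KY**2
K2[0, 0] = 1.0
p_hat = -np.fft.fft2(rhs) / K2
p_hat[0, 0] = 0.0
p = np.real(np.fft.ifft2(p_hat))

p_exact = 0.25 * (np.cos(2 * X) + np.cos(2 * Y))
err = np.max(np.abs(p - p_exact))
```

Here the flow is smooth and periodic, so the recovery is exact to machine precision; the Green's-function machinery in the paper plays the same role for domains where a periodic FFT solve does not apply.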
Flow Navigation by Smart Microswimmers via Reinforcement Learning
Smart active particles can acquire some limited knowledge of the fluid
environment from simple mechanical cues and exert control over their preferred
steering direction. Their goal is to learn the best way to navigate by
exploiting the underlying flow whenever possible. As an example, we focus our
attention on smart gravitactic swimmers. These are active particles whose task
is to reach the highest altitude within some time horizon, given the
constraints enforced by fluid mechanics. By means of numerical experiments, we
show that swimmers indeed learn nearly optimal strategies just by experience. A
reinforcement learning algorithm allows particles to learn effective strategies
even in difficult situations when, in the absence of control, they would end up
being trapped by flow structures. These strategies are highly nontrivial and
cannot be easily guessed in advance. This Letter illustrates the potential of
reinforcement learning algorithms to model adaptive behavior in complex flows
and paves the way towards the engineering of smart microswimmers that solve
difficult navigation problems.
Comment: Published in Physical Review Letters (April 12, 2017).
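A minimal kinematic sketch of the setup, assuming a standard two-dimensional gyrotactic model: the swimmer moves with fixed speed along its orientation, which relaxes toward a chosen preferred direction (the quantity the RL agent controls) while being rotated by the local vorticity. The flow and all parameter values are illustrative.

```python
import numpy as np

# Swimmer kinematics in a 2-D Taylor-Green-like cellular flow. The swimmer
# moves at speed V_S along p = (cos th, sin th); th relaxes toward the
# preferred angle phi on time scale B and is rotated by half the vorticity.
# V_S and B are assumed values, not taken from the Letter.

V_S = 0.3   # swimming speed (assumed)
B = 1.0     # reorientation time scale (assumed)

def flow(x, y):
    u = np.sin(x) * np.cos(y)
    v = -np.cos(x) * np.sin(y)
    omega = 2.0 * np.sin(x) * np.sin(y)  # vorticity dv/dx - du/dy
    return u, v, omega

def step(x, y, th, phi, dt=0.01):
    """One explicit-Euler step of the swimmer dynamics."""
    u, v, omega = flow(x, y)
    x += dt * (u + V_S * np.cos(th))
    y += dt * (v + V_S * np.sin(th))
    th += dt * (np.sin(phi - th) / (2.0 * B) + 0.5 * omega)
    return x, y, th

# A naive swimmer always prefers "up" (phi = pi/2).
x, y, th = 1.0, 1.0, np.pi / 2
for _ in range(1000):
    x, y, th = step(x, y, th, phi=np.pi / 2)
```

The Letter's point is that a fixed "always up" policy can leave the swimmer trapped by flow structures, whereas learning to pick phi from local mechanical cues avoids the traps.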
Smart Inertial Particles
We performed a numerical study to train smart inertial particles to target
specific flow regions with high vorticity through the use of reinforcement
learning algorithms. The particles are able to actively change their size to
modify their inertia and density. In short, using local measurements of the
flow vorticity, the smart particle explores the interplay between its choices
of size and its dynamical behaviour in the flow environment. This allows it to
accumulate experience and learn approximately optimal strategies of how to
modulate its size in order to reach the target high-vorticity regions. We
consider flows with different complexities: a two-dimensional stationary
Taylor-Green like configuration, a two-dimensional time-dependent flow, and
finally a three-dimensional flow given by the stationary
Arnold-Beltrami-Childress helical flow. We show that smart particles are able
to learn how to reach extremely intense vortical structures in all the tackled
cases.
Comment: Published in Physical Review Fluids (August 6, 2018).
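The controllable-inertia idea can be sketched as a Stokes-drag particle whose response time tau is set by its radius (tau scales as r^2 for a small heavy sphere), with the local vorticity as the agent's observation. The action set and all parameters below are hypothetical placeholders for the paper's actual choices:

```python
import numpy as np

# Inertial particle with controllable response time in a 2-D stationary
# Taylor-Green-like flow: dv/dt = (u(x) - v)/tau, dx/dt = v. The agent's
# action is the choice of tau (i.e. the particle's size); its observation
# is the local vorticity. TAUS is an invented action set.

TAUS = (0.1, 1.0)  # selectable Stokes times (hypothetical)

def flow(x, y):
    u = np.sin(x) * np.cos(y)
    v = -np.cos(x) * np.sin(y)
    omega = 2.0 * np.sin(x) * np.sin(y)  # vorticity dv/dx - du/dy
    return u, v, omega

def step(x, y, vx, vy, tau, dt=0.01):
    """Explicit-Euler step of position and velocity under Stokes drag."""
    u, v, _ = flow(x, y)
    vx += dt * (u - vx) / tau
    vy += dt * (v - vy) / tau
    x += dt * vx
    y += dt * vy
    return x, y, vx, vy

def observe(x, y):
    # The state fed to the learning algorithm: local vorticity.
    return flow(x, y)[2]
```

Heavy, slow-responding particles are expelled from vortex cores while tracer-like ones follow the flow; the paper's agents learn when to switch between such regimes to reach the high-vorticity targets.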
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Review of Fluid Mechanics, 202
Scientific multi-agent reinforcement learning for wall-models of turbulent flows
The predictive capabilities of turbulent flow simulations, critical for
aerodynamic design and weather prediction, hinge on the choice of turbulence
models. The abundance of data from experiments and simulations and the advent
of machine learning have provided a boost to these modeling efforts. However,
simulations of turbulent flows remain hindered by the inability of heuristics
and supervised learning to model the near-wall dynamics. We address this
challenge by introducing scientific multi-agent reinforcement learning
(SciMARL) for the discovery of wall models for large-eddy simulations (LES). In
SciMARL, discretization points also act as cooperating agents that learn to
supply the LES closure model. The agents self-learn using limited data and
generalize to extreme Reynolds numbers and previously unseen geometries. The
present simulations reduce the computational cost by several orders of magnitude relative to fully resolved simulations while reproducing key flow quantities. We
believe that SciMARL creates new capabilities for the simulation of turbulent
flows.