
    Efficient collective swimming by harnessing vortices through deep reinforcement learning

    Fish in schooling formations navigate complex flow fields replete with mechanical energy in the vortex wakes of their companions. Their schooling behaviour has been associated with evolutionary advantages, including collective energy savings. How fish harvest energy from their complex fluid environment, and which physical mechanisms govern energy extraction during collective swimming, remain unknown. Here we show that fish can improve their sustained propulsive efficiency by actively following, and judiciously intercepting, vortices in the wake of other swimmers. This swimming strategy leads to collective energy savings and is revealed through the first combination of deep reinforcement learning with high-fidelity flow simulations. We find that a 'smart swimmer' can adapt its position and body deformation to synchronise with the momentum of the oncoming vortices, improving its average swimming efficiency at no cost to the leader. The results show that fish may harvest energy deposited in vortices produced by their peers, and support the conjecture that swimming in formation is energetically advantageous. Moreover, this study demonstrates that deep reinforcement learning can produce navigation algorithms for complex flow fields, with promising implications for energy savings in autonomous robotic swarms.
    Comment: 26 pages, 14 figures
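
    As a concrete illustration of the learning loop sketched in this abstract, the following minimal Python example trains a tabular Q-learning agent on a toy phase-synchronisation task. The step dynamics, the phase discretization, and the efficiency reward are illustrative assumptions, not the paper's coupled deep-RL/Navier-Stokes setup.

```python
import numpy as np

# Toy stand-in for the flow environment: the follower senses the (discretized)
# phase of the oncoming vortex street and can lag, hold, or lead its motion.
# The reward peaks when the swimmer is synchronised with the vortex momentum.
# This abstraction and its reward are illustrative assumptions, not the
# paper's coupled RL/Navier-Stokes solver.
N_PHASE, N_ACT = 16, 3
rng = np.random.default_rng(0)

def step(phase, action):
    drift = int(rng.integers(-1, 2))                  # unresolved flow fluctuations
    phase = (phase + (action - 1) + drift) % N_PHASE
    efficiency = np.cos(2 * np.pi * phase / N_PHASE)  # synchronised phase -> +1
    return phase, efficiency

Q = np.zeros((N_PHASE, N_ACT))
alpha, gamma, eps = 0.1, 0.95, 0.1
phase = int(rng.integers(N_PHASE))
for _ in range(50_000):
    a = int(rng.integers(N_ACT)) if rng.random() < eps else int(Q[phase].argmax())
    nxt, r = step(phase, a)
    Q[phase, a] += alpha * (r + gamma * Q[nxt].max() - Q[phase, a])
    phase = nxt

print(Q.argmax(axis=1))   # greedy action per phase: steer back toward peak efficiency
```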

    Lattices of hydrodynamically interacting flapping swimmers

    Fish schools and bird flocks exhibit complex collective dynamics whose self-organization principles are largely unknown. The influence of hydrodynamics on such collectives has been relatively unexplored theoretically, in part due to the difficulty of modeling the temporally long-lived hydrodynamic interactions between many dynamic bodies. We address this through a novel discrete-time dynamical system (iterated map) that describes the hydrodynamic interactions between flapping swimmers arranged in one- and two-dimensional lattice formations. Our 1D results exhibit good agreement with previously published experimental data, in particular predicting the bistability of schooling states and new instabilities that can be probed in experimental settings. For 2D lattices, we determine the formations for which swimmers optimally benefit from hydrodynamic interactions. We thus obtain the following hierarchy: while a side-by-side single-row "phalanx" formation offers a small improvement over a solitary swimmer, 1D in-line and 2D rectangular lattice formations exhibit substantial improvements, with the 2D diamond lattice offering the largest hydrodynamic benefit. Generally, our self-consistent modeling framework may be broadly applicable to active systems in which the collective dynamics is primarily driven by a fluid-mediated memory.
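
    A minimal sketch of an iterated map in this spirit, assuming a sinusoidal wake coupling in which the gap between two in-line swimmers shifts each flapping period according to the wake phase the follower encounters. The function gap_map and its coefficient are hypothetical, not the published model; the quantized equilibria it produces are a toy analogue of the multistable schooling states.

```python
import numpy as np

# One-dimensional iterated map for the gap g (in wake wavelengths) between two
# in-line flapping swimmers: each flapping period the gap shifts by a thrust
# modulation that depends on the wake phase the follower meets. The sinusoidal
# coupling and its strength c are illustrative assumptions.
def gap_map(g, c=0.05):
    return g + c * np.cos(2 * np.pi * g)

for g0 in (0.6, 1.1, 1.9):
    g = g0
    for _ in range(500):
        g = gap_map(g)
    print(f"initial gap {g0:.1f} -> equilibrium gap {g:.3f}")
# Different initial gaps lock onto distinct, quantized equilibrium spacings
# (g = 1/4, 5/4, 9/4, ...): multiple stable schooling states coexist.
```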

    Smart Inertial Particles

    We performed a numerical study to train smart inertial particles to target specific flow regions with high vorticity through the use of reinforcement learning algorithms. The particles are able to actively change their size to modify their inertia and density. In short, using local measurements of the flow vorticity, the smart particle explores the interplay between its choice of size and its dynamical behaviour in the flow environment. This allows it to accumulate experience and learn approximately optimal strategies for modulating its size in order to reach the target high-vorticity regions. We consider flows of different complexities: a two-dimensional stationary Taylor-Green-like configuration, a two-dimensional time-dependent flow, and finally a three-dimensional flow given by the stationary Arnold-Beltrami-Childress helical flow. We show that smart particles are able to learn how to reach extremely intense vortical structures in all the cases considered.
    Comment: Published in Phys. Rev. Fluids (August 6, 2018)
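
    The setup lends itself to a compact sketch: a toy particle with Stokes drag in a steady 2D Taylor-Green cell, learning via tabular Q-learning which response time (a proxy for size/inertia) to select. The flow field, the state binning, and all parameters below are illustrative assumptions, not the study's configuration.

```python
import numpy as np

# Toy "smart particle": a heavy particle with Stokes drag moves in a steady 2D
# Taylor-Green cell and can switch its response time tau (a stand-in for
# changing its size). State = bin of the local vorticity; reward = |omega| at
# the particle position. Flow, bins, and parameters are illustrative only.
def flow(x, y):
    u, v = np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)
    omega = 2.0 * np.sin(x) * np.sin(y)    # vorticity of this velocity field
    return u, v, omega

def vort_bin(omega, n=8):
    return min(max(int((omega + 2.0) / 4.0 * n), 0), n - 1)

TAUS = np.array([0.1, 0.5, 2.0])           # selectable particle response times
rng = np.random.default_rng(1)
Q = np.zeros((8, len(TAUS)))
alpha, gamma, eps, dt = 0.1, 0.95, 0.1, 0.01

x, y = rng.uniform(0.0, np.pi, 2)
vx = vy = 0.0
state = vort_bin(flow(x, y)[2])
for _ in range(20_000):
    a = int(rng.integers(len(TAUS))) if rng.random() < eps else int(Q[state].argmax())
    tau = TAUS[a]
    for _ in range(10):                    # integrate between control decisions
        u, v, omega = flow(x, y)
        vx += dt * (u - vx) / tau          # Stokes drag toward fluid velocity
        vy += dt * (v - vy) / tau
        x, y = x + dt * vx, y + dt * vy
    nxt, r = vort_bin(omega), abs(omega)   # reward: sit in intense vorticity
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print(Q.argmax(axis=1))                    # learned tau index per vorticity bin
```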

    Implementation of deep reinforcement learning in OpenFOAM for active flow control

    Recent advancements in artificial intelligence and machine learning have enabled tackling high-dimensional control and decision-making problems. Deep Reinforcement Learning (DRL), a combination of deep learning and reinforcement learning, can perform immensely complicated cognitive tasks at a superhuman level. DRL can be utilized in fluid mechanics for different purposes, for instance training an autonomous glider [1], exploring swimming strategies of multiple fish [2, 3], controlling a fluid-directed rigid body [4], and performing shape optimization [5, 6]. DRL can also be utilized for Active Flow Control (AFC) [7], which is of crucial importance for mitigating damaging effects or enhancing favourable consequences of fluid flows. Optimizing an AFC strategy using classical optimization methods is usually a highly non-linear problem involving many design parameters, whereas DRL can learn sophisticated AFC strategies and fully exploit the abilities of the actuator. It is based on the reinforcement learning concept of exploring the state-action-reward sequence and offers a powerful tool for closed-loop feedback control.

    In the present work, a coupled DRL-CFD framework was developed within OpenFOAM, as opposed to previous attempts in the literature in which the CFD solver was treated as a black box. Here, the DRL agent is implemented as a boundary condition that is able to sense the environment state, perform an action, and record the corresponding reward. Figure 1 displays a simple flowchart of the developed DRL framework, in which a deep neural network (DNN) is used as the decision maker (i.e., the policy function).

    To test and verify the performance of the developed DRL-CFD software, the simple test case of vortex shedding behind a 2D cylinder is investigated. The actuator is a pair of synthetic jets on the top and bottom of the cylinder. The reward function is defined in terms of the reduction of drag and the absolute value of lift. Thereby, the DRL agent (here, a deep neural network) learns to minimize the drag and lift coefficients by applying the optimum jet flow at each time step. The agent was trained through a total of 1000 CFD simulations. Figure 2 presents the variation of the drag and lift coefficients of the cylinder for the uncontrolled and controlled cases. The control mechanism starts at t = 40 s, after which both forces decrease significantly. The contours of vorticity behind the cylinder for the uncontrolled (baseline) and controlled cases, after reaching a quasi-stationary condition (t = 200 s), are presented in Fig. 3. The vortex shedding is considerably reduced in the controlled case.
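
    The agent-side logic of this sense-act-reward cycle can be sketched as follows. In the actual framework the environment is an OpenFOAM boundary condition written in C++; the StubEnv class, the linear Gaussian policy, and all coefficients below are placeholders rather than the real solver interface or the paper's DNN.

```python
import numpy as np

# Agent-side view of one sense-act-reward cycle of the DRL-AFC loop. StubEnv
# is a synthetic placeholder for the OpenFOAM boundary-condition coupling; a
# linear Gaussian policy with a simple REINFORCE update stands in for the DNN.
class StubEnv:
    CD_BASELINE = 3.2                           # illustrative uncontrolled drag
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.probes = self.rng.normal(size=16)  # probe-sensor stand-ins
    def step(self, jet):
        # placeholder dynamics: stronger jets damp shedding, hence drag and lift
        cd = self.CD_BASELINE - 0.5 * np.tanh(abs(jet)) + 0.05 * self.rng.normal()
        cl = (1.0 - np.tanh(abs(jet))) * self.rng.normal()
        self.probes = 0.9 * self.probes + 0.1 * self.rng.normal(size=16)
        reward = (self.CD_BASELINE - cd) - 0.2 * abs(cl)  # drag down, |lift| penalised
        return self.probes, reward

env, rng = StubEnv(), np.random.default_rng(1)
w, lr, sigma = np.zeros(16), 1e-3, 0.1
obs = env.probes
for episode in range(1000):                    # cf. the ~1000 training simulations
    grads, rewards = [], []
    for _ in range(50):
        mean = float(w @ obs)
        jet = mean + sigma * rng.normal()      # stochastic jet mass-flow action
        grad = (jet - mean) / sigma**2 * obs   # score function at the acted-on state
        obs, r = env.step(jet)
        grads.append(grad)
        rewards.append(r)
    baseline = np.mean(rewards)                # variance-reduction baseline
    for g, r in zip(grads, rewards):
        w += lr * (r - baseline) * g
```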

    Drag Reduction in Flows Past 2D and 3D Circular Cylinders Through Deep Reinforcement Learning

    We investigate drag reduction mechanisms in flows past two- and three-dimensional cylinders controlled by surface actuators using deep reinforcement learning. We consider 2D and 3D flows at Reynolds numbers up to 8,000 and 4,000, respectively. The learning agents are trained in planar flows at various Reynolds numbers, with constraints on the available actuation energy. The discovered actuation policies exhibit intriguing generalization capabilities, enabling open-loop control even at Reynolds numbers beyond their training range. Remarkably, the discovered two-dimensional controls, which induce delayed separation, are transferable to three-dimensional cylinder flows. We examine the trade-offs between drag reduction and energy input while discussing the associated mechanisms. The present work paves the way for control of unsteady separated flows via interpretable control strategies discovered through deep reinforcement learning.
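
    A minimal sketch of the energy-constrained objective implied here: a per-step reward that trades drag reduction against actuation power, plus a clip that keeps the cumulative actuation energy within a fixed budget. The coefficient beta, the quadratic power model, and the budget value are illustrative assumptions, not the paper's constraint formulation.

```python
import numpy as np

# Reward trading drag reduction against actuation power, and a budget clip on
# the jet command. beta, the quadratic power model, and the budget are all
# assumed for illustration.
def constrained_reward(cd, cd_baseline, jet, beta=0.1):
    actuation_power = jet ** 2                 # assumed quadratic power model
    return (cd_baseline - cd) - beta * actuation_power

def clip_to_budget(jet, spent, budget):
    # shrink the command if the remaining energy budget cannot afford it
    remaining = max(budget - spent, 0.0)
    return float(np.clip(jet, -np.sqrt(remaining), np.sqrt(remaining)))

spent, budget = 0.0, 5.0
for raw_jet in np.linspace(0.0, 2.0, 9):       # e.g. commands from a trained policy
    jet = clip_to_budget(raw_jet, spent, budget)
    spent += jet ** 2
    print(f"raw {raw_jet:.2f} -> applied {jet:.2f}, energy spent {spent:.2f}")
```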