
    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives, namely embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing both a milestone for past research and an inspiration for future work. Comment: 23 pages, 1 figure, 1 table.
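
    As a toy illustration of the on-line distributed evolution scheme the review surveys, the sketch below lets each robot evolve its own controller while exchanging genomes with a few peers; the genome encoding, fitness function, selection rule and locality model are illustrative assumptions rather than the protocol of any particular paper.

```python
import random

# Toy sketch of an embodied-evolution loop: every robot evolves its own
# controller on-line and exchanges genomes only with a few nearby robots.
# Genome = list of floats; the fitness function, mutation model and
# neighbourhood model are illustrative assumptions.

GENOME_LEN = 4
MUTATION_STD = 0.1

def evaluate(genome):
    """Stand-in for time spent behaving in the environment."""
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

class Robot:
    def __init__(self):
        self.genome = [random.random() for _ in range(GENOME_LEN)]
        self.fitness = evaluate(self.genome)
        self.inbox = []          # genomes received from neighbours

    def broadcast(self, neighbours):
        for other in neighbours:
            other.inbox.append((self.fitness, self.genome))

    def step(self):
        # Select the best genome seen locally (own or received), then mutate it.
        candidates = self.inbox + [(self.fitness, self.genome)]
        _, best = max(candidates, key=lambda c: c[0])
        self.genome = mutate(best)
        self.fitness = evaluate(self.genome)
        self.inbox.clear()

swarm = [Robot() for _ in range(20)]
for generation in range(50):
    for robot in swarm:
        # Crude locality model: each robot is heard by a few random peers.
        robot.broadcast(random.sample([r for r in swarm if r is not robot], 3))
    for robot in swarm:
        robot.step()

print(max(r.fitness for r in swarm))
```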

    An adaptive meshfree method for phase-field models of biomembranes. Part II: A Lagrangian approach for membranes in viscous fluids

    We present a Lagrangian phase-field method to study the low Reynolds number dynamics of vesicles embedded in a viscous fluid. In contrast to previous approaches, where the field variables are the phase-field and the fluid velocity, here we exploit the fact that the phase-field tracks a material interface to reformulate the problem in terms of the Lagrangian motion of a background medium, containing both the biomembrane and the fluid. We discretize the equations in space with maximum-entropy approximants, carefully shown to perform well in phase-field models of biomembranes in a companion paper. The proposed formulation is variational, lending itself to implicit time-stepping algorithms based on minimization of a time-incremental energy, which are automatically nonlinearly stable. The proposed method deals with two of the major challenges in the numerical treatment of coupled fluid/phase-field models of biomembranes, namely the adaptivity of the grid to resolve the sharp features of the phase-field, and the stiffness of the equations, leading to very small time-steps. In our method, local refinement follows the features of the phase-field as both are advected by the Lagrangian motion, and large time-steps can be robustly chosen in the variational time-stepping algorithm, which also lends itself to time adaptivity. The method is presented in the axisymmetric setting, but it can be directly extended to 3D.
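
    The variational time-stepping idea mentioned above (choose the new state by minimizing a time-incremental energy) can be illustrated on a much simpler gradient flow; the double-well energy, metric and step size in the sketch below are assumptions for illustration, not the paper's membrane/fluid functional.

```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of variational (incremental-energy) implicit time stepping:
# each step solves  x_{n+1} = argmin_x  E(x) + ||x - x_n||^2 / (2*dt),
# which makes the energy non-increasing step to step for a gradient flow.
# The double-well energy E below is an assumption for illustration only.

def energy(x):
    return np.sum((x**2 - 1.0)**2)             # double-well potential

def incremental_energy(x, x_prev, dt):
    return energy(x) + np.sum((x - x_prev)**2) / (2.0 * dt)

x = np.linspace(-2.0, 2.0, 11)                 # initial state
dt = 0.5                                        # a large step remains stable
for step in range(20):
    res = minimize(incremental_energy, x, args=(x, dt), method="BFGS")
    x = res.x

print(np.round(x, 3))                           # entries settle near +/- 1
```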

    Spectral/hp element methods: recent developments, applications, and perspectives

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate C0-continuous expansions. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
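
    A minimal numerical illustration of the p-convergence property described above, under an assumed smooth test function: increasing the degree of a Legendre expansion on a single element reduces the approximation error roughly exponentially.

```python
import numpy as np
from numpy.polynomial import legendre

# Minimal illustration of spectral p-convergence on one element [-1, 1]:
# approximate a smooth function with Legendre polynomials of increasing
# degree p and watch the max error drop roughly exponentially.
# The target function and sampling are assumptions for illustration.

f = lambda x: np.exp(np.sin(np.pi * x))        # smooth test function
x = np.linspace(-1.0, 1.0, 2001)

for p in (2, 4, 8, 16):
    coeffs = legendre.legfit(x, f(x), p)        # least-squares Legendre fit
    err = np.max(np.abs(legendre.legval(x, coeffs) - f(x)))
    print(f"p = {p:2d}   max error = {err:.2e}")
```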

    N-body simulations of gravitational dynamics

    We describe the astrophysical and numerical basis of N-body simulations, both of collisional stellar systems (dense star clusters and galactic centres) and collisionless stellar dynamics (galaxies and large-scale structure). We explain and discuss the state-of-the-art algorithms used for these quite different regimes, attempt to give a fair critique, and point out possible directions of future improvement and development. We briefly touch upon the history of N-body simulations and their most important results. Comment: invited review (28 pages), to appear in European Physical Journal Plus.
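
    As a bare-bones example of the collisional regime, the sketch below integrates a small system by direct O(N^2) summation with a kick-drift-kick leapfrog; the softening, units and time step are illustrative choices, and production codes replace direct summation with tree or fast-multipole methods.

```python
import numpy as np

# Direct-summation N-body integration with a kick-drift-kick leapfrog.
# O(N^2) force evaluation is only viable for small N. Units with G = 1;
# Plummer softening avoids divergent forces in close encounters.
# All parameters are illustrative assumptions.

def accelerations(pos, mass, eps=0.05):
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                        # vectors from body i to all bodies
        r2 = np.sum(d * d, axis=1) + eps**2
        r2[i] = 1.0                             # placeholder to avoid divide-by-zero
        w = mass / (r2 * np.sqrt(r2))
        w[i] = 0.0                              # exclude self-interaction
        acc[i] = np.sum(d * w[:, None], axis=0)
    return acc

rng = np.random.default_rng(0)
n = 64
pos = rng.normal(size=(n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

dt = 0.01
acc = accelerations(pos, mass)
for step in range(1000):
    vel += 0.5 * dt * acc          # kick
    pos += dt * vel                # drift
    acc = accelerations(pos, mass)
    vel += 0.5 * dt * acc          # kick

print("centre of mass:", np.round(mass @ pos, 3))
```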

    Three dimensional thermal-solute phase field simulation of binary alloy solidification

    We employ adaptive mesh refinement, implicit time stepping, a nonlinear multigrid solver and parallel computation to solve a multi-scale, time-dependent, three-dimensional, nonlinear set of coupled partial differential equations for three scalar field variables. The mathematical model represents the non-isothermal solidification of a metal alloy into a melt substantially cooled below its freezing point at the microscale. Underlying physical molecular forces are captured at this scale by a specification of the energy field. The time rates of change of the temperature, the alloy concentration and an order parameter governing the state of the material (liquid or solid) are controlled by the diffusion parameters and variational derivatives of the energy functional. The physical problem is important to materials scientists for the development of solid metal alloys and, hitherto, this fully coupled thermal problem has not been simulated in three dimensions, due to its computationally demanding nature. By bringing together state-of-the-art numerical techniques, this problem is shown here to be tractable at appropriate resolution with relatively moderate computational resources.
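
    A much-simplified analogue of the coupled phase/temperature evolution is sketched below on a uniform 2D grid with explicit time stepping; the solver described above is implicit, adaptive, multigrid-based and fully three-dimensional, and the model terms and constants here are assumptions chosen only to show the structure of such a coupled system.

```python
import numpy as np

# Much-simplified, explicit, uniform-grid analogue of coupled phase-field /
# thermal evolution (the paper's solver is implicit, adaptive, multigrid and
# 3D; the model terms and constants below are illustrative assumptions).
# phi: order parameter (-1 liquid, +1 solid); T: dimensionless undercooling.

def laplacian(f, h):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

n, h, dt = 128, 0.4, 0.01
latent, coupling = 1.0, 1.0

phi = -np.ones((n, n))
phi[n//2-3:n//2+3, n//2-3:n//2+3] = 1.0        # small solid seed
T = -0.5 * np.ones((n, n))                     # undercooled melt

for step in range(2000):
    # Rates of change: double-well dynamics driven by the thermal field,
    # and heat diffusion with latent-heat release as the solid grows.
    dphi = laplacian(phi, h) + phi - phi**3 - coupling * T * (1.0 - phi**2)
    dT = laplacian(T, h) + 0.5 * latent * dphi
    phi += dt * dphi
    T += dt * dT

print("solid fraction:", float((phi > 0).mean()))
```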

    Emerging Linguistic Functions in Early Infancy

    This paper presents results from experimental studies on early language acquisition in infants and attempts to interpret the experimental results within the framework of the Ecological Theory of Language Acquisition (ETLA) recently proposed by Lacerda et al. (2004a). From this perspective, the infant’s first steps in the acquisition of the ambient language are seen as a consequence of the infant’s general capacity to represent sensory input and of its interaction with other actors in its immediate ecological environment. On the basis of available experimental evidence, it is argued that ETLA offers a productive alternative to traditional descriptive views of the language acquisition process by presenting an operative model of how early linguistic function may emerge through interaction.

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’, held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.

    Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jezequel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortiz, Francesco Rizzi, Ulrich Rude, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thonnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
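
    As an assumed minimal illustration of application-level checkpoint/rollback, one of the conventional resilience techniques weighed above, the sketch below periodically serializes the solver state and rolls back when a detected fault strikes; the in-memory stable store, failure probability and checkpoint interval are illustrative.

```python
import pickle, random

# Minimal sketch of application-level checkpoint/rollback (one of the
# conventional resilience techniques discussed above). The in-memory
# "stable store", failure probability and checkpoint interval are
# illustrative assumptions; real codes write checkpoints asynchronously
# to parallel file systems or neighbour memory to keep overheads tolerable.

CHECKPOINT_EVERY = 100
FAILURE_PROB = 0.002          # chance that a step is hit by a detected fault

def advance(state):
    state["step"] += 1
    state["value"] += 1.0 / state["step"]      # stand-in for real work
    return state

state = {"step": 0, "value": 0.0}
stable_store = pickle.dumps(state)             # initial checkpoint

while state["step"] < 10_000:
    if random.random() < FAILURE_PROB:
        state = pickle.loads(stable_store)     # detected fault: roll back
        continue
    state = advance(state)
    if state["step"] % CHECKPOINT_EVERY == 0:
        stable_store = pickle.dumps(state)     # commit a new checkpoint

print(state["step"], round(state["value"], 4))
```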
